
klausuR (version 0.12-14)

klausur: Evaluate multiple choice tests

Description

The function klausur expects an object of class klausuR.answ-class, containing some identification data on all subjects and their answers to the test items, a vector with the correct answers, and optionally a vector with marks assigned to the points achieved. It will compute global test results as well as some item analysis (including Cronbach's alpha, discriminatory power and Lienert's selection index of the test items), and anonymous feedback for the test subjects.

Usage

klausur(
  data,
  marks = NULL,
  mark.labels = NULL,
  items = NULL,
  wght = NULL,
  score = "solved",
  matn = NULL,
  na.rm = TRUE,
  cronbach = TRUE,
  item.analysis = TRUE,
  sort.by = "Name",
  maxp = NULL
)

Arguments

data

An object of class klausuR.answ-class.

marks

A vector assigning marks to points achieved (see details). Alternatively, set it to "suggest" to let klausur.gen.marks calculate suggestions under the assumption of a normal distribution. If NULL, this value must be set in the data object.

mark.labels

If marks="suggest", use these as the marks you want to give.

items

Indices of a subset of variables in data to be taken as items.

wght

A vector with weights for each item (also named according to the Item### scheme). If NULL, the value from the data object will be used.

score

Specify the scoring policy, must be one of "solved" (default), "partial", "liberal", "pick-n", "NR", "ET", "NRET", or "NRET+".

matn

A matriculation number of a subject, to receive detailed results for that subject.

na.rm

Logical, whether cases with NAs should be ignored in data. Defaults to TRUE.

cronbach

Logical. If TRUE, Cronbach's alpha will be calculated.

item.analysis

Logical. If TRUE, some usual item statistics like difficulty, discriminatory power and distractor analysis will be calculated. If cronbach is TRUE as well, it will also include the alpha values with each item deleted.

sort.by

A character string naming the variable to sort the results by. Set to c() to skip any re-ordering.

maxp

Optional numeric value; if set, it will be forced as the maximum number of points achievable. This should normally not be needed if your test is free of errors. But if, for example, it later turns out you need to adjust one item because it has two correct answers instead of one, this option can come in handy in combination with "partial" scoring and item weights.
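
For illustration, a hedged sketch of how maxp could be combined in such a case (the score value and maxp=30 are made up for this example; data.obj is the data object from the examples below):

# hypothetical: one item turned out to have two correct answers, which would
# inflate the attainable points; force the originally intended maximum instead
klsr.obj <- klausur(data.obj, score="partial", maxp=30)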

Value

An object of class klausuR-class with the following slots.

results

A data.frame with global results

answ

A data.frame with all given answers

corr

A vector with the correct answers

wght

A vector with the weights of items

points

A data.frame with resulting points given for the answers

marks

A vector with assignments of marks to achieved score

marks.sum

A more convenient matrix with summary information on the defined marks

trfls

A data.frame of TRUE/FALSE values, whether a subject was able to solve an item or not

anon

A data.frame for anonymous feedback

mean

A table with mean, median and quartiles of the test results

sd

Standard deviation of the test results

cronbach

Internal consistency, a list of three elements: "alpha", "ci" (the 95% confidence interval) and "deleted" (alpha if the respective item was removed)

item.analysis

A data.frame with information on difficulty, discriminatory power, discriminant factor and Lienert's selection index of all items.

distractor.analysis

A list with information on the selected answer alternatives for each individual item (only calculated if item.analysis=TRUE). It also lists the discriminatory power of each alternative, i.e., its point-biserial (a.k.a. Pearson) correlation with the global outcome.

misc

Anything that was stored in the misc slot of the input data.

Not all slots are shown by default (refer to show and plot).
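
Since the returned object is of the S4 class klausuR-class, individual slots can also be inspected directly; a brief sketch, assuming the slot names listed above and the object klsr.obj from the examples below:

# inspect selected slots of a klausuR object directly
klsr.obj@results           # global results per subject
klsr.obj@cronbach$alpha    # Cronbach's alpha
klsr.obj@item.analysis     # item statistics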

Details

For details on the expected data structure refer to klausur.data.

Scoring functions

In combination with multiple (correct) answers for certain items you can specify one of eight scoring policies via the score parameter. If you set it to something other than "solved", you allow, as the names may suggest, partially given answers, under the condition that the test subject didn't check more alternatives than there are correct ones (that is, if four alternatives were checked where only three correct ones were possible, no points are given for that item):

  • "solved" Multiple Choice: Check all correct alternatives. This is the default, which means that one will only get any points for an item if the answer was 100% correct (that is, all or nothing).

  • "partial" Multiple Choice: Check all correct alternatives, allow partially given answers, but none of the distractors must be checked.

  • "liberal" Multiple Choice: Check all correct alternatives, allow partially given answers, even distractors can be checked.

  • "pick-n" Multiple Choice: Check all correct alternatives, allow partially given answers, even distractors can be checked. Difference to "liberal" is that you will also get points for unchecked distractors.

  • "ET" Elimination Testing: In contrast to the usual MC procedure, eliminate/strike all wrong alternatives.

  • "NRET" Number Right Elimination Testing: Like ET+MC, eliminate/strike all wrong alternatives and check the correct one.

  • "NRET+" Number Right Elimination Testing, more strict: Like NRET, but if more alternatives are checked right than there are right anwers, it will automatically yield to 0 points for that item.

  • "NR" Number Right: The usual MC scoring, but works with ET/NRET data. It was implemented for completeness, e.g. to compare results of different scoring techniques.

An example for "solved", "partial" and "liberal": If an item has five answer alternatives, the correct answer is "134" and a subject checked "15", "solved" will give no point (because "15" is not equal to "134"), as will "partial" (because "5" is wrong), but "liberal" will give 1/3 (because "1" is correct), and "pick-n" will give 2/5 (because "1" was correctly checked and "2" correctly unchecked).

(Number Right) Elimination Testing

Note that "ET", "NRET"/"NRET+" and "NR" will disable wght as of now, and need the data in different format than the other scoring functions (see klausur.data for details). klausur will evaluate each answer individually and sum up the points for each item. The alternative-wise evaluations will be documented in the trfls slot of the results. Therefore, in these cases that matrix is not boolean, but more complex. For each item and each subject, a character string represents the evaluated answer alternatives, with the following elements:

  • P True positive: Alternative was checked as right and is right. Points: +1 (+ constant)

  • p False positive: Alternative was checked as right but is wrong. Points: 0 (+ constant)

  • N True negative: Alternative was checked as wrong and is wrong. Points: +1 (+ constant)

  • n False negative: Alternative was checked as wrong but is right. Points: -(alternatives-1) (+ constant)

  • 0 Missing: Alternative wasn't checked at all. Points: 0 (+ constant)

  • * Error: Alternative was checked both wrong and right. Points NRET+: 0 (+ constant); NR scores 1 point if this was the correct alternative, ET 1 point if it hit a wrong one, and NRET sums up the points for both the positive and negative answer (all + constant)

An example: If we have an item with four alternatives, and the third one is right (i.e., "--+-"), and a test subject considered the first alternative to be correct and eliminated all others (i.e., "+---"), it would be evaluated as "pNnN", that is 0+1-3+1=-1 point, not considering the constant. As you can see, it would be possible to end up with a negative sum of points. If you consider how in the end a mark will be assigned to the achieved points, this would be a problem, because a vector cannot have negative indices. To circumvent this issue, klausuR automatically adds a constant to all results, so that the worst possible result is not negative but 0. This constant is simply (alternatives-1), i.e. 3 for the example. In other words, if our test had 10 such items, the results minus 30 would be equivalent to scoring without that constant. You can use nret.rescale to remove the constant from the results afterwards.
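
The point arithmetic of the "pNnN" example can be written out as a small illustrative sketch in plain R (the actual evaluation is done internally by klausur):

# NRET points per evaluated alternative, for an item with four alternatives
n.alt  <- 4
points <- c(P = 1, p = 0, N = 1, n = -(n.alt - 1), "0" = 0)
answer <- c("p", "N", "n", "N")        # the evaluation "pNnN" from above
sum(points[answer])                    # -1, without the constant
sum(points[answer]) + (n.alt - 1)      #  2, with the constant added by klausuR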

Marks

The assigned marks are expected to be in a certain format as well (see klausur.data for details), as long as you don't want klausur to suggest them itself. If you want to let klausuR make a suggestion, set marks="suggest", and klausur.gen.marks kicks in and either takes the mark.labels you have defined here or will ask you step by step. See the documentation of that function for details. To see the suggested result in detail, have a look at the slot marks of the returned object.
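
A brief sketch of letting klausuR suggest marks and then inspecting them (mark.labels=11 is borrowed from the NRET example below; the slot access assumes the S4 structure described under Value):

# let klausur.gen.marks suggest a mark scheme with 11 mark labels
klsr.sugg <- klausur(data.obj, marks="suggest", mark.labels=11)
# inspect the suggested assignment of marks to achieved points
klsr.sugg@marks
klsr.sugg@marks.sum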

To calculate Cronbach's alpha and item analysis, methods from the package psych are used. Lienert's selection index ("Selektionskennwert") aims to consider both discriminatory power (the correlation of an item with the test results) and difficulty to determine the quality of an item. It is defined as $$S = \frac{r_{it}}{2 \sqrt{Difficulty \times (1 - Difficulty)}}$$ Item analysis also includes item discrimination.
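
For instance, the selection index of a single item can be computed by hand from its difficulty and its discriminatory power; a minimal sketch with made-up values:

# made-up values for one item
difficulty <- 0.7    # proportion of subjects who solved the item
r.it       <- 0.45   # discriminatory power (item-total correlation)
r.it / (2 * sqrt(difficulty * (1 - difficulty)))    # approx. 0.49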

See Also

klausur.data, klausur.report, compare, klausur.gen, klausur.gen.marks, klausur.gen.corr, plot

Examples

# NOT RUN {
data(antworten)

# vector with correct answers:
richtig <- c(Item01=3, Item02=2, Item03=2, Item04=2, Item05=4,
 Item06=3, Item07=4, Item08=1, Item09=2, Item10=2, Item11=4,
 Item12=4, Item13=2, Item14=3, Item15=2, Item16=3, Item17=4,
 Item18=4, Item19=3, Item20=5, Item21=3, Item22=3, Item23=1,
 Item24=3, Item25=1, Item26=3, Item27=5, Item28=3, Item29=4,
 Item30=4, Item31=13, Item32=234)

# vector with assignment of marks:
notenschluessel <- c()
# scheme of assignments: marks[points_from:to] <- mark
notenschluessel[0:12]  <- 5.0
notenschluessel[13:15] <- 4.0
notenschluessel[16:18] <- 3.7
notenschluessel[19:20] <- 3.3
notenschluessel[21]    <- 3.0
notenschluessel[22]    <- 2.7
notenschluessel[23]    <- 2.3
notenschluessel[24]    <- 2.0
notenschluessel[25:26] <- 1.7
notenschluessel[27:29] <- 1.3
notenschluessel[30:32] <- 1.0

# now combine all test data into one object of class klausuR.answ
data.obj <- klausur.data(answ=antworten, corr=richtig, marks=notenschluessel)

# if that went well, get the test results
klsr.obj <- klausur(data.obj)
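
# optionally inspect the outcome: printing uses the show method, and
# plot() should visualize the results (see the plot method under "See Also")
klsr.obj
plot(klsr.obj)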

# to try pick-n scoring, we must also define all distractors
falsch <- c(Item01=1245, Item02=1345, Item03=1345, Item04=1345, Item05=1235,
 Item06=1245, Item07=1235, Item08=2345, Item09=1345, Item10=1345, Item11=1235,
 Item12=1235, Item13=1345, Item14=1245, Item15=1345, Item16=1245, Item17=1235,
 Item18=1235, Item19=1245, Item20=1234, Item21=1245, Item22=1245, Item23=2345,
 Item24=1245, Item25=2345, Item26=1245, Item27=1234, Item28=1245, Item29=1235,
 Item30=1235, Item31=245, Item32=15)

data.obj <- klausur.data(answ=antworten, corr=richtig, wrong=falsch,
      marks=notenschluessel)
klsr.obj <- klausur(data.obj, score="pick-n")

############################
 # example for an NRET test
############################
# load sample data in SPSS format
data(spss.data)
# define correct answers
spss.corr <- c(
   item01=2, item02=3, item03=3, item04=3, item05=2,
   item06=2, item07=3, item08=1, item09=1, item10=2)

# convert into klausuR type coding
klausuR.data <- nret.translator(spss.data, spss="in")
klausuR.corr <- nret.translator(spss.corr, spss="in", corr=TRUE,
  num.alt=3, spss.prefix=c(corr="item"))
# now create the data object; the variable "Nickname" must be renamed to "Pseudonym"
data.obj <- klausur.data(answ=klausuR.data, corr=klausuR.corr,
  rename=c(Pseudonym="Nickname"))

# finally, the test can be evaluated, using the scoring functions available
NRET.results <- klausur(data.obj, marks="suggest", mark.labels=11, score="NRET")
NRETplus.results <- klausur(data.obj, marks="suggest", mark.labels=11, score="NRET+")
NR.results <- klausur(data.obj, marks="suggest", mark.labels=11, score="NR")
ET.results <- klausur(data.obj, marks="suggest", mark.labels=11, score="ET")
# }
