
languageR (version 1.5.0)

selfPacedReadingHeid: Self-paced reading latencies for Dutch neologisms

Description

Self-paced reading latencies for Dutch neologisms ending in the suffix -heid.

Usage

data(selfPacedReadingHeid)

Format

A data frame with 1280 observations on the following 18 variables.

Subject

a factor with subjects as levels.

Word

a factor with words as levels.

RT

a numeric vector with logarithmically transformed reading latencies.

RootFrequency

a numeric vector for the logarithmically transformed frequency of the lowest-level base of the neologism (e.g., lob in [[[lob]+ig]+heid]).

Condition

a factor for the priming conditions, with levels baseheid (the neologism is preceded 40 trials back by its base word) and heidheid (the neologism is preceded 40 trials back by the neologism itself).

Rating

a numeric vector for the word's subjective frequency estimate.

Frequency

a numeric vector for the neologism's frequency (all zeros, as the words are neologisms).

BaseFrequency

a numeric vector for the frequency of the base adjective underlying the neologism (e.g., lobbig in [[[lob]+ig]+heid]).

LengthInLetters

a numeric vector coding word length in letters.

FamilySize

a numeric vector for the logarithmically transformed count of a word's morphological family members.

NumberOfSynsets

a numeric vector for the count of synonym sets in WordNet in which the word is listed.

RT4WordsBack

a numeric vector for the log-transformed reading latencies four trials back.

RT3WordsBack

a numeric vector for the log-transformed reading latencies three trials back.

RT2WordsBack

a numeric vector for the log-transformed reading latencies two trials back.

RT1WordBack

a numeric vector for the log-transformed reading latencies one trial back.

RT1WordLater

a numeric vector for the log-transformed reading latencies one trial later.

RT2WordsLater

a numeric vector for the log-transformed reading latencies two trials later.

RTtoPrime

a numeric vector for the log-transformed reading latency for the prime.
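
A quick way to inspect this structure after loading the data (a minimal sketch using only the variables documented above):

data(selfPacedReadingHeid)
str(selfPacedReadingHeid)               # the 18 variables listed above
table(selfPacedReadingHeid$Condition)   # trials per priming condition
summary(selfPacedReadingHeid$RT)        # log-transformed reading latencies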

References

De Vaan, L., Schreuder, R. and Baayen, R. H. (2007). Regular morphologically complex neologisms leave detectable traces in the mental lexicon. The Mental Lexicon, 2, in press.

Examples

data(selfPacedReadingHeid)

# data validation
plot(sort(selfPacedReadingHeid$RT))   
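# trim extreme log RTs: values below 5 (about exp(5) = 148 ms) or above
# 7.2 (about exp(7.2) = 1339 ms) are removed before modeling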
selfPacedReadingHeid = selfPacedReadingHeid[selfPacedReadingHeid$RT > 5 & 
  selfPacedReadingHeid$RT < 7.2,]

# fitting a mixed-effects model

require(lme4)
require(lmerTest)
require(optimx)
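# columns 12:15 hold the reading latencies one to four trials back
# (RT4WordsBack ... RT1WordBack), assuming the column order matches the
# Format listing above; their first principal component serves as a
# summary of the participant's recent reading speed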
x = selfPacedReadingHeid[,12:15]
x.pr = prcomp(x, center = TRUE, scale. = TRUE)
selfPacedReadingHeid$PC1 = x.pr$x[,1]

selfPacedReadingHeid.lmer = lmer(RT ~ RTtoPrime + LengthInLetters + 
  PC1 * Condition + (1|Subject) + (1|Word), 
  control=lmerControl(optimizer="optimx",optCtrl=list(method="nlminb")),
  data = selfPacedReadingHeid)  
summary(selfPacedReadingHeid.lmer)

# model criticism

selfPacedReadingHeid.lmerA = lmer(RT ~ RTtoPrime + LengthInLetters + 
  PC1 * Condition + (1|Subject) + (1|Word), 
  control=lmerControl(optimizer="optimx",optCtrl=list(method="nlminb")),
  data = selfPacedReadingHeid[abs(scale(resid(selfPacedReadingHeid.lmer))) < 2.5, ])

qqnorm(resid(selfPacedReadingHeid.lmerA))
summary(selfPacedReadingHeid.lmerA)
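
# A possible follow-up (not part of the original example): compare the
# fixed-effect estimates of the full and trimmed fits to see how much the
# removed outliers influenced the model.
round(cbind(full = fixef(selfPacedReadingHeid.lmer),
            trimmed = fixef(selfPacedReadingHeid.lmerA)), 4)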
