How to obtain the p-value (check significance) of an effect in a lme4 mixed model?

  • I use lme4 in R to fit the mixed model

    lmer(value ~ status + (1|experiment))
    

    where value is continuous, status and experiment are factors, and I get

    Linear mixed model fit by REML 
    Formula: value ~ status + (1 | experiment) 
      AIC   BIC logLik deviance REMLdev
     29.1 46.98 -9.548    5.911    19.1
    Random effects:
     Groups     Name        Variance Std.Dev.
     experiment (Intercept) 0.065526 0.25598 
     Residual               0.053029 0.23028 
    Number of obs: 264, groups: experiment, 10
    
    Fixed effects:
                Estimate Std. Error t value
    (Intercept)  2.78004    0.08448   32.91
    statusD      0.20493    0.03389    6.05
    statusR      0.88690    0.03583   24.76
    
    Correlation of Fixed Effects:
            (Intr) statsD
    statusD -0.204       
    statusR -0.193  0.476
    

    How can I know that the effect of status is significant? R reports only $t$-values and not $p$-values.

    Going by the answers provided to this question, one wonders what the OP is actually interested in here: testing individual coefficients against a null (the vanilla $t$-test one does in regular linear regression against $H_0:\beta=\beta_{\text{null}}$), or an omnibus test of the factor as a whole (the $F$-test we get from the many types of ANOVA). Those two aim at different things. An enlightening answer, while not about mixed-effects models, is found here.
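    The two questions correspond to two different calls on a fitted nlme model. A minimal sketch, assuming a hypothetical data frame `mydata` with the columns used in the question:

    ```r
    library(nlme)

    # Random-intercept model as in the question
    # ('mydata' is a hypothetical data frame with value, status, experiment)
    m1 <- lme(value ~ status, random = ~ 1 | experiment, data = mydata)

    # Coefficient-level t-tests against H0: beta = 0
    summary(m1)$tTable

    # Omnibus F-test for the status factor as a whole
    anova(m1)
    ```

    The first reports one test per coefficient (here, one per non-reference level of status); the second tests whether status as a whole matters.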

  • Ben Bolker (correct answer, 9 years ago)
    There is a lot of information on this topic at the GLMM FAQ. However, in your particular case, I would suggest using

    library(nlme)
    m1 <- lme(value~status,random=~1|experiment,data=mydata)
    anova(m1)
    

    because you don't need any of the extras that lmer offers (higher speed, handling of crossed random effects, GLMMs, ...). lme should give you exactly the same coefficient and variance estimates, but it will also compute denominator df and p-values for you (which do make sense in a "classical" design such as the one you appear to have). You may also want to consider the random term ~status|experiment (allowing the status effects to vary across experiments, or equivalently including a status-by-experiment interaction). Posters above are also correct that your t statistics are so large that your p-values will certainly be <0.05, but I can imagine you would like "real" p-values.
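    The random-slope variant suggested above could be fit and compared against the random-intercept model like this (a sketch, again assuming a hypothetical `mydata` data frame):

    ```r
    library(nlme)

    # Random-slope model: status effects allowed to vary across experiments
    m2 <- lme(value ~ status, random = ~ status | experiment, data = mydata)
    anova(m2)   # F-tests for the fixed effects

    # Likelihood-ratio comparison against the random-intercept model;
    # refit both with ML so the likelihoods are comparable
    m1 <- lme(value ~ status, random = ~ 1 | experiment, data = mydata)
    anova(update(m1, method = "ML"), update(m2, method = "ML"))
    ```

    Note that the LRT for variance components is conservative here, since the null value lies on the boundary of the parameter space.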

    I don't know about this answer. `lmer` could just as easily report the same kinds of p-values, but it doesn't, for valid reasons. It's the suggestion that there are any "real" p-values here that bugs me. You could argue that you can find one possible cutoff, and that any reasonable cutoff is passed. But you can't argue there's a real p-value.

    For a classical design (balanced, nested, etc.) I think I can indeed argue that there's a real p-value, i.e. a probability of getting an estimate of beta of the observed magnitude or greater if the null hypothesis (beta=0) were true ... lme4 doesn't provide these denominator df, I believe, because it's hard in general to detect, from an lme4 model structure, whether the specified model is one where some heuristic for computing a classical denominator df would work ...
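    For readers who want df and p-values while staying with an lmer-style fit, one commonly used add-on (not part of lme4 itself) is the lmerTest package, which overloads `lmer()` and adds Satterthwaite denominator df. A sketch, assuming lmerTest is installed and `mydata` is a hypothetical data frame as above:

    ```r
    library(lmerTest)  # masks lme4::lmer with a version that computes df

    m <- lmer(value ~ status + (1 | experiment), data = mydata)
    summary(m)   # fixed-effects table now includes df and Pr(>|t|)
    anova(m)     # F-tests with Satterthwaite denominator df
    ```

    Whether these approximate df are meaningful is exactly the judgment call discussed in the comments above; for a balanced nested design they should agree with the classical answer.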

    try `summary(m1)` instead (I use this with the nlme package)

License under CC-BY-SA with attribution

