Sunday, April 27, 2014

Happy (almost!) End of Year!

Hello everyone! The semester is winding down, and so are we.  We wanted to take this opportunity to thank everyone for an INCREDIBLE semester!  We had an unprecedented number of clients and found ourselves challenged with all manner of new and exciting statistical techniques!

As the semester comes to a close, we are finishing up with our current clients and, we are sad to say, will soon stop taking new ones. Please send us any last-minute questions you may have, for soon we will close our doors for the summer!

Thank you again for a GREAT semester, and good luck with all of your statistical summer adventures!

Thursday, April 3, 2014

Statistical mistakes are all too common in any scientific discipline. Like any tool, statistics can be misused, both deliberately and by accident. The important lesson is to understand how and why statistical mistakes happen, and to educate yourself to recognize and prevent them from happening to you.

Statistics Done Wrong is a website run by Alex Reinhart, a doctoral student in statistics at Carnegie Mellon University, that addresses common statistical mistakes across scientific disciplines. Don't be a statistic; check it out today:

http://www.statisticsdonewrong.com/index.html


Tuesday, March 25, 2014

Are you interested in studying social networks? 

Are you curious about how to model the way information - or gossip - spreads?

 
If you answered yes to either of these questions, 

please come to our presentation entitled

“Social Network Analysis: Or ‘How I Learned to Stop Individuating and Love the Web’” 

March 28, 2014

1:00 PM 

BPS 1142


We look forward to seeing you there!

Saturday, March 22, 2014

Statistical Cognition

As consultants, members of DaSAL are frequently exposed to new statistical techniques that drive our continual learning. Unsurprisingly, we often find learning new techniques and approaches to be a challenging process. Perhaps more surprising, however, are findings that even trained researchers find it challenging to understand the fundamental statistical techniques that are used in almost all psychological research, and may be overconfident in their true level of understanding.

Shedding light on this phenomenon is research on statistical cognition, the study of how people understand statistical concepts and the presentation of statistical analyses. Some statistical cognition research has examined how people interpret findings from null hypothesis significance testing (NHST) and from estimation methods such as confidence intervals. While researchers often interpret data more accurately using confidence intervals than significance testing (Coulson et al., 2010), their understanding of confidence intervals is far from perfect. In fact, several studies have shown that researchers misunderstand confidence intervals (Belia et al., 2005; Hoekstra et al., 2014). Reading about common misconceptions of the statistics fundamental to our field highlights the need to review our own understanding of these “basic” concepts even as we develop our knowledge of increasingly complex statistical procedures. Below are some interesting articles that provide insight into common shortcomings of our statistical cognition.

Belia, S., Fidler, F., Williams, J., & Cumming, G. (2005). Researchers misunderstand confidence intervals and standard error bars. Psychological Methods, 10, 389-396.

Beyth-Marom, R., Fidler, F., & Cumming, G. (2008). Statistical cognition: Towards evidence-based practice in statistics and statistics education. Statistics Education Research Journal, 7, 20-39.

Coulson, M., Healey, M., Fidler, F., & Cumming, G. (2010). Confidence intervals permit, but do not guarantee, better inference than statistical significance testing. Frontiers in Psychology, 1, 26. doi:10.3389/fpsyg.2010.00026

Hoekstra, R., Morey, R. D., Rouder, J. N., & Wagenmakers, E.-J. (2014). Robust misinterpretation of confidence intervals. Psychonomic Bulletin & Review, 1-8.

Tuesday, March 18, 2014

Linear Mixed Models in R


The Linear Model
To understand mixed-effects models (or simply mixed models), it is helpful to first revisit the normal linear model. The basic linear model can be expressed with the following equation:

y_i = β_0 + β_1 x_1i + ε_i

In this basic model, the outcome y is a linear function of the model’s intercept (β_0), a predictor weighted by its regression coefficient (β_1 x_1i), and error (ε_i). The only random effect in this model is the error term; the predictors represent fixed effects. The terms “random” and “fixed” are often used to mean different things (see Andrew Gelman’s blog citing five different definitions), but we will define fixed effects as effects that (1) do not vary across individuals or units and (2) whose values of interest are fully represented in the data.
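
For comparison, a model with only fixed effects and a residual error term can be fit with R’s built-in lm() function. A minimal sketch, assuming a hypothetical data frame mydata with an outcome y and a predictor x1:

> # "mydata", "y", and "x1" are hypothetical placeholders
> linearmodel <- lm(y ~ x1, data = mydata)
> summary(linearmodel)   # every coefficient here is a fixed effect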

Linear Mixed Models
Mixed models extend the linear model by including additional random effects. This is common when data are non-independent, as when observations are clustered or longitudinal. A common form of mixed model includes a random intercept, which allows the model’s intercept to vary by subject or cluster, accounting for the fact that mean levels of the outcome differ across subjects or clusters. A random intercept model can be expressed with the following two equations:

y_ij = β_0j + β_1 x_ij + ε_ij
β_0j = γ_00 + u_0j

The equation for the intercept is composed of a grand mean (γ_00) and some variation around this grand mean (u_0j).
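
To make these two equations concrete, here is a minimal simulation sketch (values chosen purely for illustration): each cluster j receives its own intercept deviation u_0j around the grand mean γ_00, and individual observations then vary around their cluster’s intercept.

> # hypothetical simulation of a random-intercept data structure
> set.seed(1)
> J <- 10; n <- 20                        # 10 clusters of 20 observations each
> u0 <- rnorm(J, mean = 0, sd = 2)        # cluster deviations u_0j
> cluster <- rep(1:J, each = n)           # cluster indicator for each row
> x <- rnorm(J * n)                       # a predictor
> y <- 5 + u0[cluster] + 0.5 * x + rnorm(J * n)   # γ_00 = 5, β_1 = 0.5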

A Brief Example in R
The lmer() function in the lme4 package can be used to fit mixed models in R and is fairly straightforward to use. To illustrate this, we will use an example in which we wanted to test the overall effect of a relationship across multiple studies. As such, we modeled a random intercept, allowing the intercept to vary across studies.

Below is a sample of how data can be set up for linear mixed modeling, with the inclusion of a cluster variable, “Study.”
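
As a rough sketch, a data frame with this structure, using the variable names from the analysis below, might be built as follows; the values are purely illustrative and not the actual study data:

> # illustrative rows only; not the actual data
> MergedData <- data.frame(
+   MOL           = c(3.2, 4.1, 2.8, 3.9),   # outcome
+   NFC           = c(2.5, 3.0, 4.2, 3.8),   # predictor 1
+   glorification = c(1.8, 2.2, 3.1, 2.7),   # predictor 2
+   Study         = factor(c(1, 1, 2, 2))    # cluster variable
+ )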


Once your data are set up and imported into R, a simple command runs a linear mixed model analysis using the lmer() function. We name our model (mixedmodel) and define it with a call to lmer(). The dependent variable (MOL) is predicted (~) by two fixed effects (NFC, glorification), and the model includes a random effect of Study. The (1|Study) term specifies that a random intercept should be estimated for the Study effect. We also specify the appropriate data frame (MergedData) and how to handle missing data (na.action = na.omit). After defining our model, we can review our results.

Commands
> library(lme4)   # lmer() is provided by the lme4 package
> mixedmodel <- lmer(MOL ~ NFC + glorification + (1|Study), data = MergedData, na.action = na.omit)
> summary(mixedmodel)
Output

Our summary gives us information about the fixed effects (regression coefficients and statistical tests for NFC and glorification) as well as the random effects, or variance components, estimated as part of the model. This model serves as a simple illustration of how to use lmer() to estimate linear mixed models in R, but many other variations are possible and are covered in the lme4 documentation.
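
Beyond summary(), lme4 also provides accessor functions for pulling out specific pieces of a fitted model. A brief sketch, continuing from the model above:

> fixef(mixedmodel)     # fixed-effect estimates for the intercept, NFC, and glorification
> ranef(mixedmodel)     # estimated intercept deviations (u_0j) for each study
> VarCorr(mixedmodel)   # random-effect and residual variance components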