Tuesday, September 24, 2013

Research and Ethics



Textbooks typically focus on the ethical treatment of participants when discussing ethical issues in research. While participant treatment is a central issue in research ethics, other ethical concerns -- often overlooked in textbooks -- are equally important to address. Let's assume research involves at least four distinct stages:
  • Literature review
  • Generating new hypotheses
  • Gathering data
  • Testing and reporting hypotheses
Each of these stages involves a number of ethical considerations. First, when conducting a literature review, two common ethical issues are plagiarism and failing to check primary citations. Plagiarism in any form is obviously unethical. Researchers should strive to ensure that they do not misrepresent others’ ideas as their own, and that they give appropriate credit for ideas and prior findings to their originators. Similarly, when generating new hypotheses, researchers should not “steal” others’ research ideas.

A second ethical consideration related to the literature review process is checking primary sources. Often, researchers will find a citation in one of the sources from which they are drawing information and will simply cite that original work as the author cited it, without reading the original source. This oversight has led to the false propagation of the idea that a Cronbach’s alpha of 0.7 (a measure of scale reliability) is acceptable, "according to" Nunnally and Bernstein (1994). However, a careful reading of the original source reveals that Nunnally and Bernstein (1994) actually stated:

“In the early stages of predictive or construct validation research, time and energy can be saved using instruments that have only modest reliability, e.g., .70. . . . In contrast to standards used to compare groups, a reliability of .80 may not be nearly high enough in making decisions about individuals. Group research is often concerned with the size of correlations and with mean differences among experimental treatments, for which a reliability of .80 is adequate. . . . If important decisions are made with respect to specific test scores, a reliability of .90 is the bare minimum, and a reliability of .95 should be considered the desirable standard” (pp. 264-265).

At no point do Nunnally and Bernstein assert that a reliability of 0.7 is universally acceptable. As you might expect, it is not uncommon for researchers to put a “spin” on others’ findings and theories in order to better support their own research and logic. Proper investigation of original sources is necessary for ethical, rigorous research and the correct attribution of conclusions.
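
To make the reliability discussion concrete, here is a minimal R sketch of how Cronbach's alpha is computed from its definition. The function, variable names, and simulated data are ours, purely for illustration; the psych package's alpha() function is a full-featured alternative.

```r
# Cronbach's alpha from its definition:
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total score))
cronbach_alpha <- function(items) {
  items <- na.omit(items)            # drop incomplete responses
  k <- ncol(items)                   # number of items in the scale
  item_vars <- apply(items, 2, var)  # variance of each item
  total_var <- var(rowSums(items))   # variance of the summed scale score
  (k / (k - 1)) * (1 - sum(item_vars) / total_var)
}

# Simulated 5-item scale: each item is a noisy reflection of one true score
set.seed(42)
true_score <- rnorm(200)
items <- sapply(1:5, function(i) true_score + rnorm(200, sd = 1))
cronbach_alpha(items)  # compare against psych::alpha(items)$total$raw_alpha
```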

Third, when gathering data, researchers must strive to protect participants from psychological and physical harm. As you all hopefully already know, the Institutional Review Board (IRB) was created to protect the rights of human research participants. Since the IRB is covered in depth in most research textbooks, we will not spend a great deal of time discussing its function here.

Finally, when testing and reporting hypotheses, a number of ethical considerations are salient, including honest reporting of decisions made during research (including the elimination of “outliers”), the analyses conducted, the findings, and the limitations of the research. With respect to data cleaning, for example, a great many subjective decisions are made concerning subject selection and the removal of outliers. Any selective data analysis technique should be fully disclosed in research reports, especially as outliers are typically removed, and subjects selected for analysis, in ways that maximize the chance of supporting the researcher’s theory. Further, when analyzing data, researchers are often taught not to “snoop” in their data (i.e., mining data for significant results) after having tested their hypothesis of interest, as snooping inflates Type I error. In other words, “significant” effects found during data snooping are more likely to be due to chance than the set alpha level would suggest, especially without any alpha controls. However, since data collection involves a great deal of effort at the outset, some may argue that “snooping” is actually an ethical imperative -- that researchers should derive the most benefit from the efforts they invest in data collection. If you ever do yield to the temptation of data snooping, anything you find should be verified through replication of the research.
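
A quick simulation makes the inflation concrete. In the sketch below (the sample size and number of candidate predictors are arbitrary choices of ours), there are no true effects at all, yet "snooping" across ten predictors turns up at least one "significant" correlation far more often than the nominal 5%:

```r
# Simulation: with no true effects, testing many post hoc predictors
# inflates the chance of at least one "significant" result well above .05.
set.seed(123)
n_sims <- 2000; n <- 50; n_predictors <- 10
false_positive <- replicate(n_sims, {
  y <- rnorm(n)
  x <- matrix(rnorm(n * n_predictors), nrow = n)  # pure noise predictors
  p_values <- apply(x, 2, function(xi) cor.test(xi, y)$p.value)
  any(p_values < .05)  # did snooping turn up a "finding"?
})
mean(false_positive)   # roughly 1 - .95^10 = .40, not the nominal .05
```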

Other ethical issues arise in the reporting of research. One common ethical issue is the tendency to make misleading and inaccurate claims of causality, due to the importance ascribed to causal relationships in social science. Another ethical consideration in reporting is the extent to which researchers claim that their results are generalizable. Many times, samples represent a limited sub-population rather than the entire population of interest, especially given psychology's reliance on student samples. Researchers should be careful in generalizing beyond their sub-population, as the phenomena under investigation may be unique to a certain group of individuals.

Additionally, within the past several decades, a research technique known as meta-analysis has risen to prominence. Meta-analysis involves the quantitative synthesis and review of a body of literature on a given topic. According to some, meta-analysis is an ethical imperative for researchers, as it makes the best use of existing data and may prevent unnecessary data from being collected. Rosenthal (1994) urges any researcher considering resolving a “controversy” in the literature to first conduct a meta-analysis to determine whether or not there truly is a controversy to resolve. However, while meta-analysis is a powerful technique and possibly an ethical imperative, it is not the panacea that some scholars assume it to be. Notably, meta-analysis involves a number of subjective decision points (e.g., how the phenomenon is defined, which literature to include, whether to include unpublished studies, and which measures to accept). Consequently, two meta-analyses conducted on the same topic may reach different conclusions.
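
For readers curious what a meta-analysis looks like in practice, here is a minimal sketch using the metafor package's random-effects model (assuming metafor is installed; the effect sizes and sampling variances below are invented for illustration):

```r
# A minimal random-effects meta-analysis with the metafor package,
# using made-up effect sizes (yi) and sampling variances (vi).
library(metafor)
dat <- data.frame(
  study = paste("Study", 1:5),
  yi = c(0.30, 0.12, 0.45, 0.20, 0.05),  # hypothetical standardized effects
  vi = c(0.02, 0.01, 0.05, 0.03, 0.02)   # hypothetical sampling variances
)
res <- rma(yi, vi, data = dat)  # fit the random-effects model
summary(res)                    # pooled estimate plus heterogeneity statistics
forest(res)                     # forest plot of the synthesized literature
```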

Finally, with respect to publication, Rosenthal (1994) highlights two additional considerations. The first is the general ethical obligation of researchers to share findings, whether through publications, presentations, or citation of unpublished data. Rosenthal (1994) holds that censoring findings is unethical given the time participants and researchers invest in obtaining the data in the first place. The second ethical obligation is to ensure that authorship is assigned according to contribution, by some impartial ruling.

The above discussion is not intended to provide a comprehensive summary of all of the ethical issues involved in research -- of which there are a great many. Instead, it is intended only to provide an overview of some of the salient ethical issues that arise throughout the research process. For additional reading on science and ethics, we recommend Rosenthal (1994).

References
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York: McGraw-Hill.

Rosenthal, R. (1994). Science and ethics in conducting, analyzing, and reporting psychological research. Psychological Science, 5, 127-134.

Tuesday, September 17, 2013

Mediation Wars: SEM v. Baron & Kenny

Researchers testing mediation models are likely at least passingly familiar with the Baron & Kenny approach to mediation. The Baron & Kenny (1986) approach involves running several regressions; a test for mediation using Baron & Kenny's approach might involve establishing: a) a relationship between X -- the predictor -- and Y -- the outcome, b) a relationship between X and M -- the mediator, c) a relationship between M and Y, and d) a relationship between M and Y, controlling for X. Of course, not all analyses using the Baron & Kenny approach require all four of these conditions to be met (e.g., X and Y do not necessarily need to be directly correlated), and, to support full mediation, an additional step is required -- establishing that the relationship between X and Y disappears when controlling for M. Advances in computing power and statistical programs, however, have made another method of mediation testing popular: testing mediation using structural equation modeling (SEM). If you are testing -- or thinking of testing -- a mediation model, you'll definitely want to check out James, Mulaik, and Brett's "A Tale of Two Methods."
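
As a concrete illustration, the Baron & Kenny steps map onto a handful of lm() calls in R. The variable names and simulated data below are ours, not from the original paper:

```r
# The Baron & Kenny steps as a series of regressions.
# Simulated data in which x affects y partly through m:
set.seed(1)
n <- 200
x <- rnorm(n)
m <- 0.5 * x + rnorm(n)            # mediator driven by the predictor
y <- 0.4 * m + 0.3 * x + rnorm(n)  # outcome: partial mediation by construction
dat <- data.frame(x, m, y)

summary(lm(y ~ x, data = dat))      # (a) X predicts Y
summary(lm(m ~ x, data = dat))      # (b) X predicts M
summary(lm(y ~ m, data = dat))      # (c) M predicts Y
summary(lm(y ~ m + x, data = dat))  # (d) M predicts Y, controlling for X; for full
                                    #     mediation, x's coefficient should shrink toward 0
```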

James et al. (2006) compare the SEM approach to testing mediation with the Baron & Kenny approach. If you are testing a partially mediated model (e.g., X affects Y both directly and through M), the authors argue that these two approaches are essentially equivalent. However, if you are testing a fully mediated model (e.g., X affects Y only through M), these approaches test complete mediation differently. To guide researchers testing fully and partially mediated models, James et al. (2006) supply information on how to test for full and partial mediation using each technique.
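
For comparison, here is what the SEM version of the same test might look like using the lavaan package -- an illustrative sketch, not James et al.'s own code -- reusing the simulated data frame dat from the sketch above. The fully mediated model omits the direct x -> y path, so a likelihood-ratio test of the two nested models asks whether that path is needed:

```r
library(lavaan)

full_model    <- ' m ~ x
                   y ~ m '       # complete mediation: no direct x -> y path
partial_model <- ' m ~ x
                   y ~ m + x '   # partial mediation: direct path included

fit_full    <- sem(full_model,    data = dat)
fit_partial <- sem(partial_model, data = dat)
anova(fit_full, fit_partial)  # does freeing the direct path improve fit?
```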

Of course, the aspiring mediation researcher is primarily interested in the question: "So, which is the best method for me to use when testing mediation?" Fortunately, James et al. (2006) provide pretty clear guidance on this question.

How should researchers test mediation?
1. First, revisit your theory. Is your mediation model of interest full or partial mediation? If your theory is not well-developed enough to answer this question, James et al. (2006) recommend testing for full mediation to satisfy the principle of parsimony -- although they also confess that partial mediation, being common, may be the most practical model to test. Our recommendation? Don't use James et al.'s (2006) guidance as an excuse not to think about your theory. Before you default to testing full mediation, invest time thinking about your theory and about which of these models makes more sense.
2. If possible, test your model using SEM rather than Baron & Kenny. What would make this possible? Sample size is one deciding factor -- SEM is typically better suited to larger samples (perhaps 100 or more observations) than to smaller ones (for example, 20-40 observations).

In short, if you want to test a mediation model, James et al. (2006) is a must-read!

How have you tested for mediation in the past?


References
Baron, R. M. & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51, 1173-1182.
James, L. R., Mulaik, S. A., & Brett, J. M. (2006). A tale of two methods. Organizational Research Methods, 9, 233-244.

Thursday, September 12, 2013

R Webinar

Have you been yearning to learn more about working with R? If so, take advantage of this coming Monday's free webinar on R for data analysis and visualization! More information on this event is below:

HFES Webinar: R for Data Analysis and Visualization
Please join the upcoming webinar on Monday, September 16, by registering for free at https://www2.gotomeeting.com/register/266549378.
R for Data Analysis and Visualization
Monday, September 16, 2013
9:30-11:00 a.m. Pacific / 10:30 a.m.-12:00 noon Mountain / 11:30 a.m.-1:00 p.m. Central / 12:30-2:00 p.m. Eastern
This webinar will give attendees a glimpse of the usefulness and power of R, the statistical software package, for human factors/ergonomics researchers and practitioners. R can produce very simple graphics with only one line of code, but it also has the power to produce stunning graphics, in every color and style imaginable, with add-on packages and additional coding. There is a steep learning curve associated with R, as it encourages users to think of statistics in a way that they may not have considered. In this webinar -- which is intended for students, academics, and industry professionals with basic knowledge of statistics -- the presenters will demonstrate why this additional learning is truly worthwhile.
Read the full description at http://www.hfes.org/web/Webinars/sepleeboyle.html
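
If you are curious what the "one line of code" claim looks like in practice, here is a small, purely illustrative pair of examples (not taken from the webinar materials) using a built-in R dataset:

```r
# One line of base R produces a serviceable scatterplot...
plot(mpg ~ wt, data = mtcars, main = "Fuel economy vs. weight")

# ...while an add-on package such as ggplot2 supports far richer graphics:
library(ggplot2)
ggplot(mtcars, aes(wt, mpg, colour = factor(cyl))) +
  geom_point() +
  geom_smooth(method = "lm") +
  labs(x = "Weight (1000 lbs)", y = "Miles per gallon", colour = "Cylinders")
```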

Tuesday, September 10, 2013

MCMC/Bayes Workshop Announcement


Summary

What: MCMC/Bayesian Data Analysis Workshop
Where: BPS 1142
When: Fridays at 10 a.m. EST

Lengthier Description

By now, we've all heard that null-hypothesis significance testing is fundamentally flawed and that the good Reverend Bayes' star is ascendant. John Kruschke has a great paper comparing the t-test and its Bayesian alternative (and a book that expands on the idea a bit), but self-directed reading tends to fall by the wayside during the semester. Surely there's a course that teaches more about Bayesian data analysis.
While it's not local, Andrew Gelman will be teaching just such a course! Per Gelman's public suggestion, we're organizing a group at UMCP to follow along and figure out how to use R and JAGS (or, more probably, Stan) to implement Bayesian modeling. Each week we'll follow his public lectures on Google Hangouts and work through the posted homework, all accompanied by weekly meetings to ask each other questions and discuss the week's topics.
The official course text is Gelman et al.'s Bayesian Data Analysis, 3rd Ed. We have a month before the real class starts, though, so we're also planning to work through a few chapters of Robert & Casella's book on Monte Carlo methods, with the goal of knowing something about the Markov chain Monte Carlo sampling that underlies modern Bayesian data analysis.
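
For a small taste of what that workflow looks like, here is a minimal, purely illustrative rjags sketch for estimating a mean and precision (it assumes JAGS and the rjags package are installed; the data are simulated):

```r
library(rjags)

model_string <- "
model {
  for (i in 1:N) {
    y[i] ~ dnorm(mu, tau)      # likelihood (JAGS parameterizes by precision)
  }
  mu  ~ dnorm(0, 0.001)        # vague prior on the mean
  tau ~ dgamma(0.001, 0.001)   # vague prior on the precision
}"

y <- rnorm(30, mean = 5, sd = 2)  # fake data for illustration
jags <- jags.model(textConnection(model_string),
                   data = list(y = y, N = length(y)),
                   n.chains = 3)
update(jags, 1000)                                  # burn-in
samples <- coda.samples(jags, c("mu", "tau"), n.iter = 5000)
summary(samples)                                    # posterior summaries
```
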
Meetings are at 10am EST on Friday mornings in room BPS1142. We're organizing our local activities via a Google group that all are welcome to join. We hope to see you there. Contact jchrabaszcz at gmail dot com with any questions.

Friday, September 6, 2013

Quantitative Training Scholarship Announcement

Psychology graduate students at the University of Maryland, College Park: is there a statistics/methods training program or conference you have wanted to attend but could not due to cost? If so, please apply for the Aiken Scholarship for Quantitative Training, which aims to ensure that psychology graduate students have as many opportunities as possible to enhance their statistics and methods skills. Details on the scholarship are below.

Scholarship Announcement
Aiken Scholarship for Quantitative Training

This scholarship will assist graduate students interested in quantitative aspects of Psychology to attend statistically and methodologically oriented programs and conferences. Examples include, but are not limited to, training programs such as those sponsored by the Center for the Advancement of Research Methods and Analysis (CARMA) and conferences such as the Joint Statistical Meetings. There will be two awards of $500 each.

To apply for the scholarship, interested students should supply:

1) A 2-page, double-spaced proposal detailing which program or conference they would like to attend, how the program or conference will advance their research or academic pursuits in quantitative issues in Psychology, and their financial need. Applicants should be as specific as possible in identifying the program or conference they will attend. However, there will be some flexibility, with approval of the Department, if an opportunity more useful to the student is announced after the awarding of the scholarship.

2) A brief letter of support from the applicant’s faculty mentor to be sent separately to the committee.

3) A transcript for graduate courses completed (unofficial is acceptable).

Scholarship winners will be asked to provide a short report of their experience funded by the scholarship.
Application materials (hard copy or email) should be sent to Christina Garcia, 1141 Biology-Psychology Building, mcgarcia@umd.edu.

The deadline for application is October 1st.


For questions regarding the scholarship, contact Dr. David Yager, ddyager@umd.edu

Tuesday, September 3, 2013

Fall 2013 Welcome Reception

Welcome back from the Design and Statistical Analysis Laboratory (DaSAL)! Please feel free to come by DaSAL’s Fall Semester Welcome Reception. At this reception, we will give a 10-minute presentation on what DaSAL can do for you. The remainder of the reception will be an open house, where you will have the opportunity to ask any questions you might have. Come to whichever portion(s) of the reception interests you most.

Perhaps most importantly, light refreshments will be provided.

Reception details:
Date: Thursday, September 5th
Time: 3:30 pm - 4:30 pm
Location: BPS 1142