Sunday, September 30, 2012

Science Utopia (Continued): Methods Integrity Workshop

"Winter is coming." --Ned Stark/Greg Francis
On Friday afternoon I attended a seminar on methods integrity in research (here). The speakers were Hal Pashler of UC San Diego and Greg Francis of Purdue University. The speakers raised a number of interesting points that I think add to last week's PYM post about questionable research practices (here). I'll summarize the main points I took from the seminar:

Hal Pashler was the first speaker. He discussed the important distinction between direct replication and conceptual replication. For those who aren't familiar with the distinction: a direct replication tries to recreate an experiment exactly as it was originally conducted, whereas a conceptual replication tries to reproduce the original effect after changing many of the conditions of the research.

Pashler argues that the incentives in place to reward replication research strongly favor the publication of conceptual replications and the total neglect of direct replications. The logic of this argument is elegant: If a researcher runs a direct replication and it doesn't yield a significant finding, the researcher feels some combination of negative emotions and perhaps grumbles to his/her colleagues about the failure to replicate. If, however, a researcher runs a direct replication and it works, the researcher feels good about the phenomenon, but there is no real place to publish that replication because most journals favor novelty (although you could post it at Pashler's PsychFileDrawer website).

For a conceptual replication, the conditions are different. Running a conceptual replication that does not work is not all that disconcerting. After all, the researchers changed a bunch of things in their new design, so any one of those unsystematic changes could have caused the non-replication. These changes leave faith in the original research unchallenged. And if a researcher runs a successful conceptual replication, that research can be published in the best journals of our field. The result is that our field undervalues direct replication work and overvalues conceptual replication.

Greg Francis spoke next, and his first words were, "Winter is coming." Francis' main point is that real findings based on sound science and research integrity should not reach conventional levels of statistical significance in every study. In terms of probability, there is always some chance that even a well-designed, well-powered study will fail to replicate a real effect. Francis argues that a literature containing these non-replications reflects the natural condition of good science.
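To put a rough number on this (my own illustrative arithmetic, not figures from the talk): if every study in a literature has the conventionally "good" power of .80, each one still has a 20% chance of missing a real effect, so a run of ten significant studies in a row should happen only about 11% of the time. A minimal sketch in Python:

```python
# Illustrative numbers only (not from Francis' talk): how often should a
# real effect fail to reach significance in an honestly reported literature?
power = 0.8        # assumed probability that a single study detects the effect
n_studies = 10     # size of the hypothetical literature

p_all_significant = power ** n_studies       # chance that every study "works"
expected_misses = n_studies * (1 - power)    # non-significant studies we should expect

print(f"P(all {n_studies} studies significant) = {p_all_significant:.2f}")  # ~0.11
print(f"Expected non-significant studies: {expected_misses:.1f}")           # ~2.0
```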

The implication of this point is that when a literature reports a significant effect over and over without ever running into problems with replication, that literature likely suffers from researchers using questionable and unscientific research practices (e.g., hiding the studies that did not replicate the effect).

Francis' argument is very compelling: Basically, researchers tend to run under-powered experiments with less-than-ideal sample sizes and then report many of these studies in a single paper demonstrating an effect. Over time, we as a field have become accustomed to thinking this is sound research because, just look at how many times the researchers demonstrated their result! Francis' point is that, given the size of the effect and the under-powered nature of the research design, finding significant results in that many consecutive studies is so unlikely that cheating (questionable data analysis or hiding failed studies) had to be involved. In the end, Francis called for two changes that would help stop researchers from being questionable in their analyses: (1) running well-powered studies, and (2) relying less on dichotomous decision rules where we either reject the null hypothesis or not.
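To see why runs of significant results from under-powered studies look suspicious, here is a minimal sketch using a standard normal approximation to the power of a two-sample test. The effect size, sample size, and study count below are hypothetical numbers I chose for illustration, not values Francis reported:

```python
from math import sqrt
from scipy.stats import norm

def approx_power_two_sample(d, n_per_group, alpha=0.05):
    """Normal approximation to the power of a two-sided, two-sample test
    with equal group sizes, for a true standardized effect of size d."""
    z_crit = norm.ppf(1 - alpha / 2)
    delta = d * sqrt(n_per_group / 2)   # approximate noncentrality of the test statistic
    return (1 - norm.cdf(z_crit - delta)) + norm.cdf(-z_crit - delta)

d, n, k = 0.4, 20, 5   # hypothetical effect size, cell size, and number of studies
power = approx_power_two_sample(d, n)

print(f"Approximate power per study: {power:.2f}")          # ~0.24
print(f"P(all {k} studies significant): {power ** k:.4f}")  # ~0.0009
```

With numbers like these, five significant studies in a row would be expected well under 1% of the time if every study were reported, which is the sense in which a "too clean" literature suggests missing studies or flexible analyses.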

Brent Roberts of the University of Illinois led a discussion following these talks, and he pointed out the need for institutional change in psychology to help young researchers deal with these issues (see parts of his argument here). Roberts' point is that researchers aren't going to be able to wipe out these questionable research practices without mandates from the bodies that regulate behavior in psychology, like journals, universities, and granting agencies. Without such institutional change, there isn't much incentive for young researchers to do the right thing in their data practices. Some of the changes he suggested:

(1) Granting agencies should provide funds for direct replication.
(2) Hiring committees should put less emphasis on publication quantity and more emphasis on publication quality.
(3) Journals should publicly encourage replication.

A lot to think about after this thoughtful set of presentations, and I'd love to read your comments!



Francis, G. (2012). Evidence that publication bias contaminated studies relating social class and unethical behavior. Proceedings of the National Academy of Sciences of the United States of America, 109(25). PMID: 22615347
