Abstract
This article introduces a new approach for evaluating replication results. It combines effect-size estimation with hypothesis testing, assessing the extent to which the replication results are consistent with an effect size big enough to have been detectable in the original study. The approach is demonstrated by examining replications of three well-known findings. Its benefits include the following: (a) differentiating “unsuccessful” replication attempts (i.e., studies yielding p > .05) that are too noisy from those that actively indicate the effect is undetectably different from zero, (b) “protecting” true findings from underpowered replications, and (c) arriving at intuitively compelling inferences in general and for the revisited replications in particular.
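The detectability logic summarized above can be made concrete with a short sketch. The snippet below (Python, using scipy) is a minimal illustration, not the paper's reference implementation: it assumes the "detectable" benchmark is the effect size that would have given the original two-cell study roughly 33% power, assumes a standard two-sample t-test design, and uses a rough normal approximation for the replication's standard error. The 33% figure, the function names, and the example sample sizes are assumptions for illustration, not details stated in this abstract.

```python
# Minimal sketch of a "detectability" benchmark for evaluating a replication.
# Assumptions (not given in the abstract): the benchmark is the Cohen's d that
# would have given the original two-cell study ~33% power, and the replication
# reports an effect-size estimate with an approximately normal standard error.
import numpy as np
from scipy import stats, optimize

def power_two_sample(d, n_per_cell, alpha=0.05):
    """Power of a two-sided, two-sample t-test when the true effect size is d."""
    df = 2 * n_per_cell - 2
    ncp = d * np.sqrt(n_per_cell / 2)          # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

def small_effect_benchmark(n_orig_per_cell, target_power=0.33):
    """Effect size the original study had only about a 1-in-3 chance of detecting."""
    return optimize.brentq(
        lambda d: power_two_sample(d, n_orig_per_cell) - target_power, 1e-6, 5.0)

def replication_vs_benchmark(d_rep, n_rep_per_cell, n_orig_per_cell):
    """One-sided test of whether the replication estimate is reliably below the benchmark."""
    d_small = small_effect_benchmark(n_orig_per_cell)
    se_rep = np.sqrt(2 / n_rep_per_cell)        # rough SE of Cohen's d near zero
    p = stats.norm.cdf((d_rep - d_small) / se_rep)
    return d_small, p

# Hypothetical example: original study with 20 participants per cell,
# replication with 80 per cell estimating d = 0.05.
d_small, p = replication_vs_benchmark(d_rep=0.05, n_rep_per_cell=80, n_orig_per_cell=20)
print(f"benchmark d = {d_small:.2f}, p(replication < benchmark) = {p:.3f}")
```

Under these assumptions, a small one-sided p indicates the replication actively suggests an effect too small to have been detectable by the original study, whereas a large p with a wide interval marks the replication as merely too noisy to decide.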
Publication Info
- Year: 2015
- Type: Article
- Volume: 26
- Issue: 5
- Pages: 559-569
- Citations: 752
- Access: Closed
Identifiers
- DOI: 10.1177/0956797614567341