Abstract

When conducting a meta-analysis of results from experimental studies, differences in study design must be taken into account. A method for combining results across independent-groups and repeated measures designs is described, and the conditions under which such an analysis is appropriate are discussed. Combining results across designs requires that (a) all effect sizes be transformed into a common metric, (b) effect sizes from each design estimate the same treatment effect, and (c) meta-analysis procedures use design-specific estimates of sampling variance to reflect the precision of the effect size estimates.
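The three requirements above can be sketched as a small inverse-variance meta-analysis. This is a hedged illustration, not the article's exact estimators: the variance formulas below are standard large-sample approximations for the standardized mean difference d, with the repeated-measures variance expressed in the independent-groups metric via the pretest-posttest correlation r (an assumption of this sketch).

```python
from math import sqrt

def var_d_independent(d, n1, n2):
    # Approximate sampling variance of d from an independent-groups design.
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

def var_d_repeated(d, n, r):
    # Approximate sampling variance of d from a repeated-measures design,
    # converted to the independent-groups (raw-score) metric;
    # r is the pretest-posttest correlation.
    return (1 / n + d**2 / (2 * n)) * 2 * (1 - r)

def fixed_effect_combine(effects, variances):
    # Precision-weighted (inverse-variance) pooled estimate and its SE.
    weights = [1 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = sqrt(1 / sum(weights))
    return pooled, se

# Hypothetical example: one study of each design, both assumed to
# estimate the same treatment effect in the same metric.
d_ig = 0.40
v_ig = var_d_independent(d_ig, n1=30, n2=30)
d_rm = 0.50
v_rm = var_d_repeated(d_rm, n=25, r=0.7)
pooled, se = fixed_effect_combine([d_ig, d_rm], [v_ig, v_rm])
```

Note how the repeated-measures study receives more weight when r is high: its design-specific variance is smaller, so the pooled estimate is pulled toward its effect size, which is exactly the precision-weighting requirement (c) describes.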

Keywords

Meta-analysis; Statistics; Sample size determination; Metric (unit); Repeated measures design; Analysis of variance; Design of experiments; Variance (accounting); Mathematics; Econometrics; Sampling (signal processing); Statistical analysis; Main effect; Computer science; Engineering

MeSH Terms

Humans; Meta-Analysis as Topic; Models, Psychological

Publication Info

Year
2002
Type
article
Volume
7
Issue
1
Pages
105-125
Citations
2269
Access
Closed

Citation Metrics

OpenAlex: 2269
Influential: 176
CrossRef: 1728

Cite This

Scott B. Morris, Richard P. DeShon (2002). Combining effect size estimates in meta-analysis with repeated measures and independent-groups designs. Psychological Methods, 7(1), 105-125. https://doi.org/10.1037/1082-989x.7.1.105

Identifiers

DOI
10.1037/1082-989x.7.1.105
PMID
11928886
