Abstract

Experiments that find larger differences between groups than actually exist in the population are more likely to pass stringent tests of significance and be published than experiments that find smaller differences. Published measures of the magnitude of experimental effects will therefore tend to overestimate these effects. This bias was investigated as a function of sample size, actual population difference, and alpha level. The overestimation of experimental effects was found to be quite large with the commonly employed significance levels of 5 per cent and 1 per cent. Further, the recently recommended measure, ω², was found to depend much more heavily on the alpha level employed than on the true population ω² value. Hence, it was concluded that effect size estimation is impractical unless scientific journals drop the consideration of statistical significance as one of the criteria of publication.
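
The selection effect summarized in the abstract is easy to demonstrate by simulation. The sketch below is not the authors' code; the sample size, true effect, alpha level, and replication count are illustrative assumptions. It repeatedly simulates two-group experiments, estimates ω² from each, and compares the average estimate among significant ("publishable") results with the true population ω².

```python
# Minimal Monte Carlo sketch of publication bias in effect-size estimation.
# All numeric settings below are illustrative assumptions, not values from the paper.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 20            # assumed observations per group
true_d = 0.3      # assumed true standardized mean difference
alpha = 0.05      # conventional significance criterion
reps = 20_000     # number of simulated experiments

# Population omega-squared for two equal-sized groups separated by true_d:
# omega^2 = d^2 / (d^2 + 4)
true_omega2 = true_d**2 / (true_d**2 + 4)

published, all_results = [], []

for _ in range(reps):
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(true_d, 1.0, n)
    t, p = stats.ttest_ind(treatment, control)

    # A common sample estimator of omega^2 for a two-group design:
    # (t^2 - 1) / (t^2 + n1 + n2 - 1)
    est_omega2 = (t**2 - 1) / (t**2 + 2 * n - 1)

    all_results.append(est_omega2)
    if p < alpha:                 # "published" only if significant
        published.append(est_omega2)

print(f"true omega^2:                  {true_omega2:.3f}")
print(f"mean estimate, all runs:       {np.mean(all_results):.3f}")
print(f"mean estimate, significant:    {np.mean(published):.3f}")
print(f"proportion reaching p < alpha: {len(published) / reps:.2%}")
```

With these assumed settings, the average estimate among significant results should come out well above the true population ω², and tightening alpha shifts it further, which mirrors the pattern of overestimation the abstract reports.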

Keywords

Statistics, Sample size determination, Econometrics, Statistical significance, Mathematics, Population, Population size, Demography

Publication Info

Year: 1978
Type: Article
Volume: 31
Issue: 2
Pages: 107-112
Citations: 172
Access: Closed

Citation Metrics

OpenAlex: 172
Influential: 5
CrossRef: 110

Cite This

David M. Lane, William P. Dunlap (1978). Estimating effect size: Bias resulting from the significance criterion in editorial decisions. British Journal of Mathematical and Statistical Psychology, 31(2), 107-112. https://doi.org/10.1111/j.2044-8317.1978.tb00578.x

Identifiers

DOI
10.1111/j.2044-8317.1978.tb00578.x
