Abstract
The consequences of prejudice against accepting the null hypothesis were examined through (a) a mathematical model intended to simulate the research-publication process and (b) case studies of apparent erroneous rejections of the null hypothesis in published psychological research. The input parameters for the model characterize investigators' probabilities of selecting a problem for which the null hypothesis is true and of reporting, following up on, or abandoning research when data do or do not reject the null hypothesis; they also characterize editors' probabilities of publishing manuscripts that conclude in favor of or against the null hypothesis. With estimates of the input parameters based on a questionnaire survey of a sample of social psychologists, the model output indicates a dysfunctional research-publication system. In particular, the model indicates that there may be relatively few publications on problems for which the null hypothesis is (at least to a reasonable approximation) true, and that, of these, a high proportion will erroneously reject the null hypothesis. The case studies provide additional support for this conclusion. Accordingly, it is concluded that research traditions and customs of discrimination against accepting the null hypothesis may be very detrimental to research progress. Some procedures that can help eliminate this bias are prescribed.

In a standard college dictionary (Webster's New World, College Edition, 1960), null is defined as "invalid; amounting to nought; of no value, effect, or consequence; insignificant." In statistical hypothesis testing, the null hypothesis most often refers to the hypothesis of no difference between treatment effects or of no association between variables. Interestingly, in the behavioral sciences, researchers' null hypotheses frequently satisfy the nonstatistical definition of null, being "of …
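The process the abstract describes lends itself to a short Monte Carlo illustration. The sketch below is not the paper's model or its survey-based parameter estimates; every probability in it (problem selection, statistical power, submission, editorial acceptance) is an illustrative assumption, chosen only to show how selective reporting and publication can inflate the error rate among published true-null results.

```python
# A minimal Monte Carlo sketch of a research-publication system of the kind
# described in the abstract. All parameter values are illustrative assumptions,
# NOT the survey-based estimates used in the paper.
import random

def simulate(n_studies=100_000,
             p_null_true=0.5,      # assumed: chance a chosen problem has a true null
             alpha=0.05,           # Type I error rate of the significance test
             power=0.50,           # assumed: chance of rejecting a false null
             p_submit_reject=0.9,  # assumed: author submits after rejecting H0
             p_submit_accept=0.3,  # assumed: author submits after failing to reject
             p_publish_reject=0.8, # assumed: editor accepts a rejecting manuscript
             p_publish_accept=0.2, # assumed: editor accepts a null-accepting one
             seed=1):
    rng = random.Random(seed)
    pubs_true_null = pubs_false_null = errors_on_true_null = 0
    for _ in range(n_studies):
        null_true = rng.random() < p_null_true
        # Outcome of the significance test depends on whether the null is true.
        rejected = rng.random() < (alpha if null_true else power)
        # Author's reporting decision is biased toward rejections.
        if rng.random() >= (p_submit_reject if rejected else p_submit_accept):
            continue
        # Editor's acceptance decision is also biased toward rejections.
        if rng.random() >= (p_publish_reject if rejected else p_publish_accept):
            continue
        if null_true:
            pubs_true_null += 1
            if rejected:  # a published Type I error
                errors_on_true_null += 1
        else:
            pubs_false_null += 1
    total = pubs_true_null + pubs_false_null
    print(f"publications on true-null problems: {pubs_true_null}/{total}")
    if pubs_true_null:
        print(f"of those, erroneous rejections: "
              f"{errors_on_true_null / pubs_true_null:.1%}")

simulate()
```

With these assumed values, a true-null study is published as a rejection with probability 0.05 × 0.9 × 0.8 ≈ 3.6% but as an acceptance with probability only 0.95 × 0.3 × 0.2 ≈ 5.7%, so nearly 40% of published results on true-null problems are Type I errors, which echoes the abstract's conclusion.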
Related Publications
Likelihood Ratio Tests for Model Selection and Non-Nested Hypotheses
In this paper, we develop a classical approach to model selection. Using the Kullback-Leibler Information Criterion to measure the closeness of a model to the truth, we propose ...
Some Methods for Strengthening the Common χ² Tests
Since the χ² tests of goodness of fit and of association in contingency tables are presented in many courses on statistical methods for beginners in the subject, it is not surpr...
Model Misspecification and Probabilistic Tests of Topology: Evidence from Empirical Data Sets
Probabilistic tests of topology offer a powerful means of evaluating competing phylogenetic hypotheses. The performance of the nonparametric Shimodaira-Hasegawa (SH) test, the p...
Sifting the evidence---what's wrong with significance tests? Another comment on the role of statistical methods
The findings of medical research are often met with considerable scepticism, even when they have apparently come from studies with sound methodologies that have been subjected t...
Why Most Published Research Findings Are False
There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number...
Publication Info
- Year: 1975
- Type: article
- Volume: 82
- Issue: 1
- Pages: 1-20
- Citations: 1051
- Access: Closed
Identifiers
- DOI: 10.1037/h0076157