Abstract
A study of a sample provides only an estimate of the true (population) value of an outcome statistic. A report of the study therefore usually includes an inference about the true value. Traditionally, a researcher makes an inference by declaring the value of the statistic statistically significant or nonsignificant on the basis of a P value derived from a null-hypothesis test. This approach is confusing and can be misleading, depending on the magnitude of the statistic, error of measurement, and sample size. The authors use a more intuitive and practical approach based directly on uncertainty in the true value of the statistic. First they express the uncertainty as confidence limits, which define the likely range of the true value. They then deal with the real-world relevance of this uncertainty by taking into account values of the statistic that are substantial in some positive and negative sense, such as beneficial or harmful. If the likely range overlaps substantially positive and negative values, they infer that the outcome is unclear; otherwise, they infer that the true value has the magnitude of the observed value: substantially positive, trivial, or substantially negative. They refine this crude inference by stating qualitatively the likelihood that the true value will have the observed magnitude (eg, very likely beneficial). Quantitative or qualitative probabilities that the true value has the other 2 magnitudes or more finely graded magnitudes (such as trivial, small, moderate, and large) can also be estimated to guide a decision about the utility of the outcome.
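The sketch below is a minimal illustration of the kind of inference the abstract describes, not the authors' own spreadsheet or exact method. It assumes a normally distributed sampling error for the outcome statistic; the function name `magnitude_inference`, the 90% confidence level, and the numbers in the example are illustrative assumptions.

```python
from scipy.stats import norm

def magnitude_inference(effect, std_err, threshold, conf_level=0.90):
    """Illustrative magnitude-based inference for one outcome statistic.

    effect     : observed value of the statistic (e.g. a difference in means)
    std_err    : standard error of the estimate
    threshold  : smallest substantial magnitude, in the same units as effect;
                 above +threshold is substantially positive, below -threshold
                 substantially negative, otherwise trivial
    conf_level : coverage of the confidence interval (0.90 here, illustrative)
    """
    # Confidence limits define the likely range of the true value.
    z = norm.ppf(1 - (1 - conf_level) / 2)
    lower, upper = effect - z * std_err, effect + z * std_err

    # Probabilities that the true value is substantially positive, trivial,
    # or substantially negative, using a normal approximation to the
    # uncertainty in the true value.
    p_pos = 1 - norm.cdf(threshold, loc=effect, scale=std_err)
    p_neg = norm.cdf(-threshold, loc=effect, scale=std_err)
    p_triv = 1 - p_pos - p_neg

    # If the likely range overlaps both substantial regions, the outcome is
    # unclear; otherwise the inference takes the magnitude of the observed value.
    if lower < -threshold and upper > threshold:
        verdict = "unclear"
    elif effect > threshold:
        verdict = "substantially positive"
    elif effect < -threshold:
        verdict = "substantially negative"
    else:
        verdict = "trivial"

    return {"CI": (lower, upper), "p_positive": p_pos,
            "p_trivial": p_triv, "p_negative": p_neg, "verdict": verdict}

# Example (all values hypothetical): an observed benefit of 1.5 units with a
# standard error of 0.8 and a smallest substantial change of 0.5 units.
print(magnitude_inference(1.5, 0.8, 0.5))
```

In this example the 90% confidence interval (roughly 0.2 to 2.8) lies entirely above the negative threshold, so the outcome is not unclear, and the probability that the true effect is substantially positive is about 0.89, which the abstract's qualitative scheme would report as something like "likely beneficial".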
Publication Info
- Year: 2006
- Type: article
- Volume: 1
- Issue: 1
- Pages: 50-57
Identifiers
- DOI: 10.1123/ijspp.1.1.50