Abstract
OBJECTIVE: To test the feasibility of creating a valid and reliable checklist with the following features: appropriate for assessing both randomised and non-randomised studies; provision of both an overall score for study quality and a profile of scores not only for the quality of reporting, internal validity (bias and confounding) and power, but also for external validity.

DESIGN: A pilot version was first developed, based on epidemiological principles, reviews, and existing checklists for randomised studies. Face and content validity were assessed by three experienced reviewers, and reliability was determined using two raters assessing 10 randomised and 10 non-randomised studies. Using different raters, the checklist was revised and tested for internal consistency (Kuder-Richardson 20), test-retest and inter-rater reliability (Spearman correlation coefficient and sign rank test; kappa statistics), criterion validity, and respondent burden.

MAIN RESULTS: The performance of the checklist improved considerably after revision of the pilot version. The Quality Index had high internal consistency (KR-20: 0.89), as did the subscales apart from external validity (KR-20: 0.54). Test-retest (r = 0.88) and inter-rater (r = 0.75) reliability of the Quality Index were good. Reliability of the subscales varied from good (bias) to poor (external validity). The Quality Index correlated highly with an existing, established instrument for assessing randomised studies (r = 0.90). There was little difference between its performance with non-randomised and with randomised studies. Raters took about 20 minutes to assess each paper (range 10 to 45 minutes).

CONCLUSIONS: This study has shown that it is feasible to develop a checklist that can be used to assess the methodological quality not only of randomised controlled trials but also of non-randomised studies. It has also shown that it is possible to produce a checklist that provides a profile of the paper, alerting reviewers to its particular methodological strengths and weaknesses. Further work is required to improve the checklist and the training of raters in the assessment of external validity.
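As an illustrative sketch only (not code from the paper): the two headline statistics in the abstract, Kuder-Richardson 20 internal consistency and Spearman test-retest correlation, can be computed along the following lines. The function names, the array shapes, and the treatment of all checklist items as dichotomous (0/1) are assumptions made for the example; the published checklist also contains items scored 0-2, to which KR-20 does not strictly apply.

```python
import numpy as np
from scipy.stats import spearmanr

def kr20(items: np.ndarray) -> float:
    """Kuder-Richardson 20 internal consistency for dichotomous (0/1) items.

    `items` has shape (n_papers, n_items): one row per rated paper,
    one column per yes/no checklist item. Sample variance (ddof=1) is
    used here; some formulations use the population variance instead.
    """
    k = items.shape[1]                         # number of checklist items
    p = items.mean(axis=0)                     # proportion of papers scoring 1 on each item
    q = 1.0 - p
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total (Quality Index) scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

def test_retest(scores_t1: np.ndarray, scores_t2: np.ndarray) -> float:
    """Spearman correlation between a rater's first and second total scores."""
    rho, _pvalue = spearmanr(scores_t1, scores_t2)
    return rho
```

For example, `kr20(ratings)` on a 20 x 27 matrix of 0/1 item scores returns a single coefficient comparable in spirit to the 0.89 reported for the Quality Index, and `test_retest(first_pass, second_pass)` gives the rank correlation between two rating occasions.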
Related Publications
The Mixed Methods Appraisal Tool (MMAT) version 2018 for information professionals and researchers
Introduction: Appraising the quality of studies included in systematic reviews combining qualitative, quantitative and mixed methods studies is challenging. To address this chal...
Test-Retest Reliability of the Diagnostic Interview Schedule for Children (DISC 2.1)
Test-retest reliability findings were consistent with or superior to those reported in previous studies. Results support the usefulness of the DISC in further clinical and epide...
Quality of Life Instruments for Caregivers of Patients With Cancer
The quality of life of caregivers of patients with cancer is an important construct given the substantial impact of caring on the physical, psychological, social, and financial ...
Being Inconsistent About Consistency: When Coefficient Alpha Does and Doesn't Matter
One of the central tenets of classical test theory is that scales should have a high degree of internal consistency, as evidenced by Cronbach's α, the mean interitem correlation...
Observer Reliability and Agreement
The terms observer reliability and observer agreement represent different concepts. Reliability coefficients express the ability to differentiate between subjects. Agre...
Publication Info
- Year: 1998
- Type: article
- Volume: 52
- Issue: 6
- Pages: 377-384
- Citations: 7953
- Access: Closed
Identifiers
- DOI: 10.1136/jech.52.6.377