Abstract
Objective: To examine the effectiveness and appropriateness of peer-delivered health promotion for young people.

Design: Systematic review of experimental studies assessing impact on health outcomes and 'qualitative' studies evaluating intervention processes.

Methods: Studies were sought by searching electronic databases and hand searching. Those which met the review's inclusion criteria were assessed for methodological quality. Outcome evaluations were reviewed for four methodological qualities. Process evaluations were mapped according to criteria commonly used to establish the reliability and validity of 'qualitative' studies.

Results: Four hundred and thirty reports relevant to the topic area were identified. Two hundred and ten reported evaluations of peer-delivered interventions, with 64 (49 outcome evaluations and 15 process evaluations) meeting the inclusion criteria. Only 12 (24 per cent) of the outcome evaluations were judged methodologically sound. Of these, seven found the method to be effective for at least one behavioural outcome. However, five sound studies directly compared the effectiveness of peers to other providers and found contradictory results. The majority of process evaluations examined the implementation (n=9, 60 per cent) and acceptability of the method (n=10, 67 per cent), and their findings provided insights into possible reasons for success or failure. Common methodological problems within studies included unclear details of sample and methodology, suggesting that their conclusions may not be reliable.

Conclusion: The evidence for the effectiveness of peer-delivered health promotion for young people is not yet clear. Whilst the current evidence base is able to suggest possible reasons for success or failure of this method, more systematic research is needed into the conditions under which peer-delivered health promotion is effective in comparison to other methods of health promotion. Integrating the evidence from experimental studies and qualitative studies is complicated by the lack of standards for assessing reliability and validity in qualitative research.
Publication Info
- Year: 2001
- Type: review
- Volume: 60
- Issue: 4
- Pages: 339-353
- Citations: 152
- Access: Closed
Identifiers
- DOI: 10.1177/001789690106000406