Abstract
This paper analyses the methodological issues inherent in evaluating healthcare education and considers approaches for addressing them. Recent policies have exhorted practitioners to base their practice on evidence; however, in healthcare education the evidence base is not extensive. Whilst educational evaluation has advanced in recent decades, standardised designs and toolkits are not available. Each evaluation has different aims and occurs in a specific context, so the design has to fit the circumstances yet meet the challenge of scientific credibility. Indicators of educational processes and outcomes are not scientifically verified; no toolkit of standardised, 'off-the-shelf' valid, reliable and sensitive measures exists. The evidence base of educational practice is largely derived from small-scale, single case studies; the majority of measures are self-devised, unvalidated tools of unproven reliability, so meta-synthesis is not appropriate and results are not generalisable. Healthcare educational evaluators need valid and reliable assessments of both knowledge acquisition and its application to practice. Establishing and explaining attribution, i.e. the relationship between educational inputs and outcomes, is complex and requires experimental or quasi-experimental designs. In addition, educational evaluators face the pragmatic challenges of practice in healthcare contexts, where confounding variables are hard to control and resources are scarce. © 2006 Elsevier Ltd. All rights reserved.
| Original language | English |
| --- | --- |
| Pages (from-to) | 640-646 |
| Number of pages | 6 |
| Journal | Nurse Education Today |
| Volume | 26 |
| Issue number | 8 |
| DOIs | |
| Publication status | Published - Dec 2006 |
Keywords
- Education
- Evaluation
- Methodological issues