Evaluating the impact of healthcare interventions using routine data



Practice Essentials. BMJ 2019;365 doi: https://doi.org/10.1136/bmj.l2239 (Published 20 June 2019). Cite this as: BMJ 2019;365:l2239


Geraldine M Clarke, senior data analyst,1 Stefano Conti, senior statistician,2 Arne T Wolters, senior analytics manager,1 Adam Steventon, director of data analytics1

1 The Health Foundation, London, UK
2 NHS England and NHS Improvement, London, UK

Correspondence to: G Clarke Geraldine.clarke@health.org.uk

What you need to know

  • Assessing the impact of healthcare interventions is critical to inform future decisions

  • Compare observed outcomes with what you would have expected if the intervention had not been implemented

  • A wide range of routinely collected data is available for the evaluation of healthcare interventions

Interventions to transform the delivery of health and social care are being implemented widely, such as those linked to Accountable Care Organizations in the United States,1 or to integrated care systems in the UK.2 Assessing the impact of these health interventions enables healthcare teams to learn and to improve services, and can inform future policy.3 However, some healthcare interventions are implemented without high quality evaluation, in ways that require onerous data collection, or may not be evaluated at all.4

A range of routinely collected administrative and clinically generated healthcare data could be used to evaluate the impact of interventions to improve care. However, there is a lack of guidance as to where relevant routine data can be found or accessed and how they can be linked to other data. A diverse array of methodological literature can also make it hard to understand which methods to apply to analyse the data. This article provides an introduction to help clinicians, commissioners, and other healthcare professionals wishing to commission, interpret, or perform an impact evaluation of a health intervention. We highlight what to consider and discuss key concepts relating to design, analysis, implementation, and interpretation.

What are interventions, impacts, and impact evaluations?

A health intervention is a combination of activities or strategies designed to assess, improve, maintain, promote, or modify health among individuals or an entire population. Interventions can include educational or care programmes, policy changes, environmental improvements, or health promotion campaigns. Interventions that include multiple independent or interacting components are referred to as complex.5 The impact of any intervention is likely to be shaped as much by the context (eg, communities, workplaces, homes, schools, or hospitals) in which it is delivered as by the details of the intervention itself.6 7 8 9

An impact is a positive or negative, direct or indirect, intended or unintended change produced by an intervention. An impact evaluation is a systematic and empirical investigation of the effects of an intervention; it assesses to what extent the outcomes experienced by affected individuals were caused by the intervention in question, and what can be attributed to other factors such as other interventions, socioeconomic trends, and political or environmental conditions. Evaluations can be categorised as formative or summative (table 1).

Table 1. Impact evaluations

Approaches such as the Plan, Do, Study, Act cycle,11 part of the Model for Improvement, a commonly used tool to test and understand small changes in quality improvement work,12 may be used to undertake formative evaluation.

With either type of evaluation, it is important to be realistic about how long it will take to see the intended effects. Assessment that takes place too soon risks incorrectly concluding that there was no impact. This might lead stakeholders to question the value of the intervention, when later assessment might have shown a different picture. For example, in a small case study of cost savings from proactively managing high risk patients, the costs of healthcare for the eligible intervention population initially increased compared with the comparison population, but after six months were consistently lower.14

This article focuses on impact evaluation, but impact evaluation alone can only ever address a fraction of the relevant questions.15 Much more can be accomplished if it is supplemented with other qualitative and quantitative methods, including process evaluation. Process evaluation provides context, assesses how the intervention was implemented, identifies any emerging unintended pathways, and is important for understanding what happened in practice and for identifying areas for improvement.16 The economic evaluation of healthcare interventions is also important for healthcare decision making, especially given ongoing financial pressures on health services.17

What are the right evaluation questions?

An effective impact evaluation begins with the formulation of one or more clear questions driven by the purpose of the evaluation and what you and your stakeholders want to learn. For example, “What is the impact of case management on patients’ experience of care?”

Formulate your evaluation questions using your understanding of the idea behind your intervention, the implementation challenges, and your knowledge of what data are available to measure outcomes. Review your theory of change or logic model21 22 to understand what inputs and activities were planned, and what outcomes were expected and when. Once you have understood the intended causal pathway, consider the practical aspects of implementation, which include the barriers to change, unexpected changes by recipients or providers, and other influences not previously accounted for. Patient and public involvement (PPI) in setting the right question is strongly recommended for additional insights and meaningful results. For example, if evaluating the impact of case management, you could engage patients to understand what outcomes matter most to them. Healthcare leaders may emphasise metrics such as emergency admissions, but other aspects such as the experience of care might matter more to patients.5 23

What methods can be used to perform an impact evaluation?

Randomised controlled designs, in which individuals are randomly assigned to receive either an intervention or a control treatment, are often referred to as the “gold standard” of causal impact evaluation.24 In large enough samples, the process of randomisation ensures a balance in observed and unobserved characteristics between treatment and control groups. However, while often suitable for assessing, for example, the safety and efficacy of medicines, these designs may be impractical, unethical, or irrelevant when assessing the impact of complex changes to health service delivery.

Observational studies are an alternative approach to estimate causal effects. They use the natural, or unplanned, variation in a population in relation to the exposure to an intervention, or the factors that affect its outcomes, to remove the consequences of a non-randomised selection process.25 The idea is to mimic a randomised control design by ensuring treated and control groups are equivalent—at least in terms of observed characteristics. This can be achieved using a variety of well documented methods, including regression control and matching,26 eg, propensity scoring27 or genetic matching.28 If the matching is successful at producing such groups, and there are also no differences in unobserved characteristics, then it can be assumed that the control group outcomes are representative of those that the treated group would have experienced if nothing had changed, ie, the counterfactual. For example, an evaluation of alternative elective surgical interventions for primary total hip replacement on osteoarthritis patients in England and Wales used genetic matching to compare patients across three different prosthesis groups, and reported that the most prevalent type of hip replacement was the least cost effective.29
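To make the matching approach concrete, here is a minimal sketch of propensity score matching in Python on simulated data. The covariates, effect sizes, and the simple 1:1 nearest-neighbour matching with replacement are illustrative assumptions, not a recommended specification.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(42)
n = 2000

# Simulated routine data: age and comorbidity count influence both
# selection into the intervention and the outcome (confounding).
df = pd.DataFrame({
    "age": rng.normal(70, 10, n),
    "comorbidities": rng.poisson(2, n),
})
p_treat = 1 / (1 + np.exp(-(-6 + 0.06 * df["age"] + 0.3 * df["comorbidities"])))
df["treated"] = rng.binomial(1, p_treat)
df["admissions"] = rng.poisson(
    np.exp(-2 + 0.03 * df["age"] + 0.2 * df["comorbidities"] - 0.3 * df["treated"])
)

# 1. Estimate each patient's propensity score, P(treated | covariates).
ps = LogisticRegression().fit(df[["age", "comorbidities"]], df["treated"])
df["pscore"] = ps.predict_proba(df[["age", "comorbidities"]])[:, 1]

# 2. Match each treated patient to the nearest untreated patient on the
#    propensity score (1:1, with replacement, for simplicity).
treated = df[df["treated"] == 1]
control = df[df["treated"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched = control.iloc[idx.ravel()]

# 3. The treated-minus-matched difference in mean outcomes estimates the
#    effect, assuming no unobserved confounding.
print(treated["admissions"].mean() - matched["admissions"].mean())
```

Before trusting such an estimate, balance between the matched groups (eg, standardised mean differences on age and comorbidities) should be checked.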

Assessing similarity is only possible in relation to observed characteristics, and matching can result in biased estimates if the groups differ in relation to unobserved variables that are predictive of the outcome (confounders). It is rarely possible to eliminate this possibility of bias when conducting observational studies, meaning that the interpretation of the findings must always be sensitive to the possibility that the differences in outcomes were caused by a factor other than the intervention. Methods that can help when selection is on unobserved characteristics include difference-in-differences,30 regression discontinuity,31 instrumental variables,18 and synthetic controls.32 Table 2 gives a summary of selected observational study designs.
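To show how one of these methods works in practice, below is a minimal difference-in-differences sketch in Python on simulated data. The scenario, variable names, and effect sizes are invented; a real analysis would also need clustered standard errors and a check of the parallel-trends assumption.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000

# Simulated observations: half the units are exposed to the intervention
# ("treated"), and outcomes are observed before and after it starts ("post").
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "post": rng.integers(0, 2, n),
})
# A true effect of -2 applies only to treated units after implementation;
# treated units also start from a different baseline (selection).
df["outcome"] = (
    10 + 3 * df["treated"] + 1.5 * df["post"]
    - 2 * df["treated"] * df["post"]
    + rng.normal(0, 1, n)
)

# The coefficient on treated:post is the difference-in-differences
# estimate: it nets out the baseline gap between groups and the common
# time trend, under the parallel-trends assumption.
model = smf.ols("outcome ~ treated * post", data=df).fit()
print(model.params["treated:post"])  # close to the true effect of -2
```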

Table 2. Observational study designs for quantitative impact evaluation

Observational studies are often referred to as natural experiments (when the variation in exposure is natural or unplanned) or quasi-experiments (when the intervention is planned or intentional). The use of natural experiments to evaluate population health interventions is discussed in Medical Research Council guidance.41

What’s wrong with a simple before-and-after study?

Before-and-after studies compare changes in outcomes for the same group of patients, measured at a single time point before and a single time point after they receive an intervention, without reference to a control group. They differ from interrupted time series studies, which compare changes in outcomes for successive groups of patients before and after receiving an intervention (the interruption).
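A segmented regression is one common way to analyse an interrupted time series, and it makes the contrast with a simple before-and-after comparison concrete. The sketch below uses simulated monthly data; all names and numbers are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
months = np.arange(48)        # four years of monthly observations
interruption = 24             # month in which the intervention starts

df = pd.DataFrame({"time": months})
df["post"] = (df["time"] >= interruption).astype(int)
df["time_since"] = np.maximum(0, df["time"] - interruption)

# Simulated monthly admission rate: a pre-existing downward trend plus
# a level drop of 5 when the intervention begins.
df["rate"] = 100 - 0.5 * df["time"] - 5 * df["post"] + rng.normal(0, 1.5, len(df))

# "post" estimates the step change at the interruption and "time_since"
# any change in slope, each adjusted for the underlying secular trend,
# which a simple before-and-after comparison cannot adjust for.
model = smf.ols("rate ~ time + post + time_since", data=df).fit()
print(model.params[["post", "time_since"]])
```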

Before-and-after studies are useful when it is not possible to include an unexposed control group, or for hypothesis generation. However, they are inherently susceptible to bias, since the changes observed may simply reflect regression to the mean (the tendency of patients selected at an unusually high or low point to return towards their usual level even in the absence of the intervention), or secular trends unrelated to the intervention, eg, changes in the economic or political environment, or heightened public awareness of issues.

For example, a before-and-after study of the impact of a care coordination service for older people tracked the hospital utilisation of the same patients before and after they were accepted into the service, and reported that the service resulted in savings in hospital bed days and attendances at the emergency department.42 Reduced hospital utilisation here could have reflected regression to the mean rather than the effects of the intervention: a patient could have had a specific health crisis before being invited to join the service and then reverted to their previous state of health and hospital utilisation for reasons unconnected with the care coordination service.
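Regression to the mean is easy to demonstrate by simulation: patients selected because of a year of unusually high utilisation improve on average the following year even when nothing is done. All numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Each patient has a stable underlying admission rate; observed annual
# admissions fluctuate randomly around it.
underlying = rng.gamma(shape=2.0, scale=1.0, size=n)
year1 = rng.poisson(underlying)
year2 = rng.poisson(underlying)   # no intervention between the years

# "Enrol" the patients who had a crisis (high utilisation) in year 1.
enrolled = year1 >= 5
print(f"Year 1 mean admissions (enrolled): {year1[enrolled].mean():.2f}")
print(f"Year 2 mean admissions (enrolled): {year2[enrolled].mean():.2f}")
# Year 2 is markedly lower despite no intervention, so a naive
# before-and-after comparison would report a spurious benefit.
```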

Various tools are available to evaluate the risk of bias in non-randomised designs due to confounding and other potential biases.43 44

Where can I find suitable routine data?

Healthcare systems generate vast amounts of data as part of their routine operation. These datasets are often designed to support direct care and administrative purposes rather than research, and using routinely collected data to evaluate changes in health service delivery is not without pitfalls. For example, any variation observed between geographical regions, providers, and sometimes individual clinicians may reflect real and important variation in the quality of healthcare provided, but can also result from differences in measurement.45 Nonetheless, routine data can be a rich source of information on a large group of patients with different conditions across different geographical regions. Often, data have been collected for many years, enabling construction of individual patient histories describing healthcare utilisation, diagnoses, comorbidities, prescription of medication, and other treatments.

Some of these data are collected centrally, across a wider system, and routinely shared for research and evaluation purposes, eg, secondary care data in England (Hospital Episode Statistics), or Medicare Claims data in the United States. Other sources, such as primary care data, are often collected at a more local level, but can be accessed through, or on behalf of, healthcare commissioners, provided the right information governance arrangements are in place. Pseudonymised records, where any identifying information is removed or replaced by an artificial identifier, are often used to support evaluation while maintaining patient confidentiality. See table 3 for commonly used routine datasets available in England.

Table 3. Commonly used routine datasets available in the NHS in England

Healthcare records can often be linked across different sources because a single patient identifier is commonly used across a healthcare system, eg, the NHS number in the UK. Using a common pseudonym across different data sources can support linkage of pseudonymised records. Linking to publicly available sources of administrative data and surveys can further enrich healthcare records. Commonly used administrative data available for UK populations include measures of GP practice quality and outcomes from the Quality and Outcomes Framework (QOF),52 deprivation, rurality, and demographics from the 2011 Census,53 and patient experience from the GP Patient Survey.54
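The mechanics of linking on a common pseudonym can be sketched as follows. A salted hash stands in here for the pseudonymisation step; in practice, pseudonymisation and linkage of NHS data are carried out under formal information governance arrangements rather than by ad hoc hashing, and all identifiers and fields below are invented.

```python
import hashlib
import pandas as pd

def pseudonymise(identifier: str, salt: str) -> str:
    """Replace an identifier with a keyed hash. The same salt must be
    applied to every dataset so that records share the same pseudonym."""
    return hashlib.sha256((salt + identifier).encode()).hexdigest()

SALT = "secret-held-by-the-linkage-provider"  # illustrative only

hospital = pd.DataFrame({"nhs_number": ["111", "222"], "admissions": [3, 1]})
primary_care = pd.DataFrame({"nhs_number": ["111", "333"], "prescriptions": [12, 4]})

# Replace the direct identifier with the pseudonym in both sources.
for df in (hospital, primary_care):
    df["pseudo_id"] = df["nhs_number"].map(lambda x: pseudonymise(x, SALT))
    df.drop(columns="nhs_number", inplace=True)

# Records from the two sources now join on the shared pseudonym.
linked = hospital.merge(primary_care, on="pseudo_id", how="inner")
print(linked)
```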

Are there any additional considerations?

It is essential to consider threats to validity when designing an impact evaluation and when interpreting its findings; validity relates to whether an evaluation measures what it claims to measure. See Rothman et al55 for further discussion.

Internal validity refers to whether the effects observed are due to the intervention and not some other confounding factor. Selection bias, which results from the way in which subjects are recruited, or from differing rates of participation due, for example, to age, gender, cultural or socioeconomic factors, is often a problem in non-randomised designs. Care must be taken to account for such biases when interpreting the results of an impact evaluation. Sensitivity analyses should be performed to provide reassurance regarding the plausibility of causal inferences.

External validity refers to the extent to which the results of a study can be generalised to other settings. Understanding the societal, economic, health system, and environmental context in which an intervention is delivered, and which makes its impact unique, is critical when interpreting the results of evaluations, and considering whether they apply to your setting.56 Descriptions of context should be as rich as possible.

The impact of an intervention is often likely to vary depending on the characteristics of patients. Such variation can be usefully explored in subgroup analyses, as in the sketch below.57
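One common way to explore such variation is to add an interaction between the treatment indicator and a prespecified subgroup marker, as in this illustrative sketch on simulated data; the subgroup, names, and effect sizes are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1500
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "frail": rng.integers(0, 2, n),   # illustrative subgroup marker
})
# Simulated outcome: the intervention helps frail patients more.
df["outcome"] = (
    5 - 1 * df["treated"] - 2 * df["treated"] * df["frail"]
    + rng.normal(0, 1, n)
)

# A material treated:frail coefficient suggests the effect differs by
# subgroup; such findings should be prespecified and interpreted
# cautiously (see reference 57).
model = smf.ols("outcome ~ treated * frail", data=df).fit()
print(model.params[["treated", "treated:frail"]])
```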

Clear and transparent reporting, following established guidelines (eg, STROBE58 or TREND59), should describe the intervention, the study population, the assignment of treatment and control groups, and the methods used to estimate impact. Limitations arising from inherent biases or threats to validity should be clearly acknowledged.

Around the world, many interventions designed to improve health and healthcare are under way. Evaluation is an essential part of understanding what impact these changes are having, for whom and in what circumstances, and can help inform future decisions about improvement and further roll out. There is no standard, “one size fits all” recipe for a good evaluation: it must be tailored to the project at hand. Understanding the overarching principles and standards is the first step towards a good evaluation.

Further resources

See The Health Foundation's Evaluation: what to consider (2015)60 for a list of websites, articles, webinars, and other guidance on aspects of impact evaluation, which may help with the planning, interpretation, and development of a successful impact evaluation.5 23 55

Education into practice

  • What interventions have you designed or experienced aimed at transforming your service? Have they been evaluated?

  • What types of routine data are collected about the care you deliver? Do you know how to access them and use them to evaluate care delivery?

  • What resources are available to you to support impact evaluations for interventions?

Footnotes

  • Contributors GMC, SC, ATW and AS designed the structure of the report. GMC wrote the first draft of the manuscript. SC wrote table 2. ATW wrote table 3. AS and GMC critically revised the manuscript for important intellectual content. All authors approved the final version of the manuscript.

  • Competing interests We have read and understood BMJ policy on declaration of interests. All authors work in the Improvement Analytics Unit, a joint project between NHS England and the Health Foundation, which provided support for the work reported in references 13, 37, and 60 of this report.

  • Provenance and peer review: This article is part of a series commissioned by The BMJ based on ideas generated by a joint editorial group with members from the Health Foundation and The BMJ, including a patient/carer. The BMJ retained full editorial control over external peer review, editing, and publication. Open access fees and The BMJ’s quality improvement editor post are funded by the Health Foundation.

  • Patients and/or members of the public were not involved in the creation of this article.

This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/.

References

1. Davis K, Guterman S, Collins S, Stremikis K, Rustgi S, Nuzum R. Starting on the path to a high performance health system: analysis of the payment and system reform provisions in the Patient Protection and Affordable Care Act of 2010. The Commonwealth Fund, 2010. https://www.commonwealthfund.org/publications/fund-reports/2010/sep/starting-path-high-performance-health-system-analysis-payment
2. NHS. NHS Long Term Plan. 2019. https://www.england.nhs.uk/long-term-plan/
3. Djulbegovic B. A framework to bridge the gaps between evidence-based medicine, health outcomes, and improvement and implementation science. J Oncol Pract 2014;10:200-2. doi:10.1200/JOP.2013.001364 pmid:24839282
4. Bickerdike L, Booth A, Wilson PM, et al. Social prescribing: less rhetoric and more reality. A systematic review of the evidence. BMJ Open 2017;7:e013384.
5. Campbell M, Fitzpatrick R, Haines A, et al. Framework for design and evaluation of complex interventions to improve health. BMJ 2000;321:694-6. doi:10.1136/bmj.321.7262.694 pmid:10987780
6. Rickles D. Causality in complex interventions. Med Health Care Philos 2009;12:77-90. doi:10.1007/s11019-008-9140-4 pmid:18465202
7. Hawe P. Lessons from complex interventions to improve health. Annu Rev Public Health 2015;36:307-23. doi:10.1146/annurev-publhealth-031912-114421 pmid:25581153
8. Greenhalgh T, Papoutsi C. Studying complexity in health services research: desperately seeking an overdue paradigm shift. BMC Med 2018;16:95. doi:10.1186/s12916-018-1089-4 pmid:29921272
9. Pawson R, Tilley N. Realistic evaluation. Sage, 1997.
10. Smith J, Wistow G. Learning from an intrepid pioneer: integrated care in North West London. Nuffield Trust comment. https://www.nuffieldtrust.org.uk/news-item/learning-from-an-intrepid-pioneer-integrated-care-in-north-west-london
11. NHS Improvement. Plan, Do, Study, Act (PDSA) cycles and the model for improvement. Handbook of Quality and Service Improvement Tools, 2010.
12. Academy of Medical Royal Colleges. Quality improvement: training for better outcomes. 2016. https://www.aomrc.org.uk/wp-content/uploads/2016/06/Quality_improvement_key_findings_140316-2.pdf
13. Lloyd T, Wolters A, Steventon A. The impact of providing enhanced support for care home residents in Rushcliffe. 2017. http://www.health.org.uk/sites/health/files/IAURushcliffe.pdf
14. Ferris TG, Weil E, Meyer GS, Neagle M, Heffernan JL, Torchiana DF. Cost savings from managing high-risk patients. In: Yong PL, Saunders RS, Olsen LA, eds. The healthcare imperative: lowering costs and improving outcomes: workshop series summary. National Academies Press (US), 2010:301. https://www.ncbi.nlm.nih.gov/books/NBK53910/
15. Greenhalgh T, Papoutsi C. Studying complexity in health services research: desperately seeking an overdue paradigm shift. BMC Med 2018;16:4-9.
16. Moore GF, Audrey S, Barker M, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ 2015;350:h1258. doi:10.1136/bmj.h1258 pmid:25791983
17. Drummond M, Weatherly H, Ferguson B. Economic evaluation of health interventions. BMJ 2008;337:a1204. doi:10.1136/bmj.a1204 pmid:18824485
18. Baiocchi M, Cheng J, Small DS. Instrumental variable methods for causal inference. Stat Med 2014;33:2297-340. doi:10.1002/sim.6128 pmid:24599889
19. Lorch SA, Baiocchi M, Ahlberg CE, Small DS. The differential impact of delivery hospital on the outcomes of premature infants. Pediatrics 2012;130:270-8. doi:10.1542/peds.2011-2820 pmid:22778301
20. Martens EP, Pestman WR, de Boer A, Belitser SV, Klungel OH. Instrumental variables: application and limitations. Epidemiology 2006;17:260-7. doi:10.1097/01.ede.0000215160.88317.cb pmid:16617274
21. Center for Theory of Change. http://www.theoryofchange.org
22. Davidoff F, Dixon-Woods M, Leviton L, Michie S. Demystifying theory and its use in improvement. BMJ Qual Saf 2015;24:228-38. doi:10.1136/bmjqs-2014-003627 pmid:25616279
23. Gertler PJ, Martinez S, Premand P, Rawlings LB, Vermeersch CMJ. Impact evaluation in practice. World Bank Publications, 2017. https://siteresources.worldbank.org/EXTHDOFFICE/Resources/5485726-1295455628620/Impact_Evaluation_in_Practice.pdf
24. Cochrane A. Effectiveness and efficiency: random reflections on health services. London, 1972. https://www.nuffieldtrust.org.uk/research/effectiveness-and-efficiency-random-reflections-on-health-services
25. Portela MC, Pronovost PJ, Woodcock T, Carter P, Dixon-Woods M. How to study improvement interventions: a brief overview of possible study types. Postgrad Med J 2015;91:343-54. doi:10.1136/postgradmedj-2014-003620rep pmid:26045562
26. Stuart EA. Matching methods for causal inference: a review and a look forward. Stat Sci 2010;25:1-21. doi:10.1214/09-STS313 pmid:20871802
27. Austin PC. An introduction to propensity score methods for reducing the effects of confounding in observational studies. Multivariate Behav Res 2011;46:399-424. doi:10.1080/00273171.2011.568786 pmid:21818162
28. Diamond A, Sekhon JS. Genetic matching for estimating causal effects: a general multivariate matching method for achieving balance in observational studies. Rev Econ Stat 2013;95:932-45. doi:10.1162/REST_a_00318
29. Pennington M, Grieve R, Sekhon JS, Gregg P, Black N, van der Meulen JH. Cemented, cementless, and hybrid prostheses for total hip replacement: cost effectiveness analysis. BMJ 2013;346:f1026.
30. Wing C, Simon K, Bello-Gomez RA. Designing difference in difference studies: best practices for public health policy research. Annu Rev Public Health 2018;39:453-69. doi:10.1146/annurev-publhealth-040617-013507 pmid:29328877
31. Venkataramani AS, Bor J, Jena AB. Regression discontinuity designs in healthcare research. BMJ 2016;352:i1216. doi:10.1136/bmj.i1216 pmid:26977086
32. Abadie A, Gardeazabal J. The economic costs of conflict: a case study of the Basque Country. Am Econ Rev 2003;93:113-32. doi:10.1257/000282803321455188
33. Stuart EA. Matching methods for causal inference: a review and a look forward. Stat Sci 2010;25:1-21. doi:10.1214/09-STS313 pmid:20871802
34. McNamee R. Regression modelling and other methods to control confounding. Occup Environ Med 2005;62:500-6, 472. doi:10.1136/oem.2002.001115 pmid:15961628
35. Ho DE, Imai K, King G, Stuart EA. Matching as nonparametric preprocessing for reducing model dependence in parametric causal inference. Polit Anal 2007;15:199-236. doi:10.1093/pan/mpl013
36. Sutton M, Nikolova S, Boaden R, Lester H, McDonald R, Roland M. Reduced mortality with hospital pay for performance in England. N Engl J Med 2012;367:1821-8. doi:10.1056/NEJMsa1114951 pmid:23134382
37. Stephen O, Wolters A, Steventon A. Briefing: the impact of redesigning urgent and emergency care in Northumberland. 2017. https://www.health.org.uk/sites/health/files/IAUNorthumberland.pdf
38. Geneletti S, O'Keeffe AG, Sharples LD, Richardson S, Baio G. Bayesian regression discontinuity designs: incorporating clinical knowledge in the causal analysis of primary care data. Stat Med 2015;34:2334-52. doi:10.1002/sim.6486 pmid:25809691
39. Bernal JL, Cummins S, Gasparrini A. Interrupted time series regression for the evaluation of public health interventions: a tutorial. Int J Epidemiol 2017;46:348-55.
40. Donegan K, Fox N, Black N, Livingston G, Banerjee S, Burns A. Trends in diagnosis and treatment for people with dementia in the UK from 2005 to 2015: a longitudinal retrospective cohort study. Lancet Public Health 2017;2:e149-56.
41. Craig P, Cooper C, Gunnell D, et al. Using natural experiments to evaluate population health interventions: new Medical Research Council guidance. J Epidemiol Community Health 2012;66:1182-6. doi:10.1136/jech-2011-200375 pmid:22577181
42. Mayhew L. On the effectiveness of care co-ordination services aimed at preventing hospital admissions and emergency attendances. Health Care Manag Sci 2009;12:269-84. doi:10.1007/s10729-008-9092-5 pmid:19739360
43. Sterne JA, Hernán MA, Reeves BC, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ 2016;355:i4919. doi:10.1136/bmj.i4919 pmid:27733354
44. Chen YF, Hemming K, Stevens AJ, Lilford RJ. Secular trends and evaluation of complex interventions: the rising tide phenomenon. BMJ Qual Saf 2016;25:303-10. doi:10.1136/bmjqs-2015-004372 pmid:26442789
45. Powell AE, Davies HT, Thomson RG. Using routine comparative data to assess the quality of health care: understanding and avoiding common pitfalls. Qual Saf Health Care 2003;12:122-8. doi:10.1136/qhc.12.2.122 pmid:12679509
46. NHS Digital. Data Access Request Service (DARS). https://digital.nhs.uk/services/data-access-request-service-dars
47. NHS Digital. Secondary Uses Service (SUS). https://digital.nhs.uk/services/secondary-uses-service-sus
48. Medicines and Healthcare products Regulatory Agency and National Institute for Health Research (NIHR). Clinical Practice Research Datalink (CPRD). https://www.cprd.com
53. Office for National Statistics. 2011 Census. https://www.ons.gov.uk/census/2011census
54. NHS England. GP Patient Survey (GPPS). https://www.gp-patient.co.uk/
55. Rothman KJ, Greenland S, Lash T. Modern epidemiology. Lippincott Williams & Wilkins, 2005.
56. Minary L, Alla F, Cambon L, Kivits J, Potvin L. Addressing complexity in population health intervention research: the context/intervention interface. J Epidemiol Community Health 2018;72:319-23. pmid:29321174
57. Sun X, Briel M, Walter SD, Guyatt GH. Is a subgroup effect believable? Updating criteria to evaluate the credibility of subgroup analyses. BMJ 2010;340:c117.
58. von Elm E, Altman DG, Egger M, Pocock SJ, Gotzsche PC, Vandenbroucke JP. The strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. Ann Intern Med 2007;147:573-7.
59. Des Jarlais DC, Lyles C, Crepaz N; TREND Group. Improving the reporting quality of nonrandomized evaluations: the TREND statement. Am J Public Health 2004;94:361-6.
60. The Health Foundation. Evaluation: what to consider. 2015. https://www.health.org.uk/publications/evaluation-what-to-consider
