
Selecting indicators for patient safety at the health system level in OECD countries

Vivienne McLoughlin, John Millar, Soeren Mattke, Margarida Franca, Pia Maria Jonsson, David Somekh, David Bates
DOI: http://dx.doi.org/10.1093/intqhc/mzl030. Pages 14-20. First published online: 5 September 2006

Abstract

Background. Concerns about patient safety have arisen with growing documentation of the extent and nature of harm. Yet there are no robust and meaningful data that can be used internationally to assess the extent of the problem, and there are considerable methodological difficulties.

Purpose. This article describes a project undertaken as part of the Organization for Economic Cooperation and Development (OECD) Quality Indicator Project, which aimed at developing an initial set of patient safety indicators.

Methods. Patient safety indicators from OECD countries were identified and then rated against three principal criteria: importance to patient safety, scientific soundness, and potential feasibility. Although some countries are developing multi-source monitoring systems, these are not yet mature enough for international exchange. This project reviewed routine data collections as a starting point.

Results. Of an initial set of 59 candidate indicators identified, 21 were selected, covering known areas of harm to patients.

Conclusions. This project is an important initial step towards defining a usable set of patient safety indicators that will allow comparisons to be made internationally and will support mutual learning and quality improvement in health care. Measures of harm should be complemented over time with measures of effective improvement factors.

  • patient safety
  • quality indicator

The objective of the Organization for Economic Cooperation and Development (OECD) Health Care Quality Indicator (HCQI) [1] Project is to develop a set of indicators to describe the state of the quality of health care that can be reliably reported across countries using comparable data. The project would be incomplete without the indicators of ‘patient safety’, as there is increasing recognition of error and lack of reliability in biomedical interventions [2].

This article describes the emergence of the supportive science in patient safety and the problems that beset measurement and presents a set of indicators. These were selected by an international panel of government representatives and experts from Australia, Canada, the EU, the OECD, Portugal, Spain and the United States.

Is patient safety a distinct domain of quality?

Safety is a basic tenet of high health care quality, yet it is recognized as a distinctive domain in recent national reports on quality [3,4]. The patient safety domain assumes a reasonable consensus about the effectiveness of treatment and focuses on whether treatments have been delivered safely. Where ‘slips and lapses’ [5] occur across treatment modalities, their impact is not always readily detected using methods designed to compare relative effectiveness.

The term ‘patient safety’ encompasses harm to patients, incidents that may give rise to harm, the antecedents or processes that increase the likelihood of incidents, and the attributes of organizations that help guard against harm and enable rapid recovery when risk escalates.

The Institute of Medicine report [6] defined patient safety as ‘the freedom from accidental injury due to medical care or from medical errors’. Well-known examples include wrong-site surgery or lethal dose administration. Yet the public view patient safety much more broadly. High-profile failures [7] occur where service outcomes are believed to be significantly below an acceptable standard. Media coverage of health care-acquired infections, medication errors, and blood product use has increased. Some argue that criminal acts committed during medical care, such as homicide, rape, and abduction, should be excluded [8], as they involve deliberate violations of a rule or standard of behaviour.

The National Patient Safety Agency (NPSA) in the United Kingdom describes patient safety as ‘the process by which an organization makes patient care safer’ [9]. This involves risk assessment, the reporting and analysis of incidents, and the capacity to implement solutions to minimize the risk of recurrence. A patient safety incident is ‘any unintended or unexpected incident that could have or did lead to harm ...’ [10]. This excludes effects recognized as a risk of treatment. Some incidents include medical errors in planning or execution. The Institute of Medicine defines medical error as ‘the failure of a planned action to be completed as intended or the use of a wrong plan to achieve an aim’ [11].

Without a specific focus on patient safety, strategies to reduce and protect patients from accidental injury may not be developed.

Background

As early as 1991, the Harvard Medical Practice Study documented an alarming incidence of medical errors [12]. Designed to investigate medical negligence, it identified 3–4% of hospital admissions as associated with an adverse event. But patient safety first gained widespread public attention in 1999 with the publication of To Err is Human [2]. This report estimated that adverse events arising from health care would rank as the eighth leading cause of death in the United States [13]. Other countries replicating the Harvard study found similar error rates [12,14–20]. These findings brought patient safety to the top of the policy agenda internationally. In 2000, the Australian Council for Safety and Quality in Health Care (http://www.safetyandquality.org) was established in response to concerns about safety. In the United Kingdom, the Chief Medical Officer’s report in 2000, An Organisation with a Memory [21], documented the need to pay serious attention to patient safety. In 2004, the World Health Organization launched its World Alliance for Patient Safety (http://www.who.int/patientsafety/en) to ensure that learning about safety is shared rapidly internationally.

Out of this policy interest, a science of understanding and improving safety has emerged, with a massive rise in patient safety research over the last decade [22]. Much of this work draws on psychology, sociology, ergonomics, and organizational risk management [23,24]. It reflects the lesson from other industries that protocols, culture, equipment, and design are an integral part of making care safer.

Parallel efforts were started to introduce measurement systems, although few countries have had extensive experience in developing patient safety indicators and fewer still have been able to generate national data on patient safety. Table 1 describes national measurement activities occurring in different countries.

These activities in turn illuminate the considerable problems of definition and measurement in this field [25]. Routine hospital administrative data collection is designed for other purposes, such as billing and operational management, and may not paint a comprehensive picture of safety and quality problems. Adverse event studies rely on medical chart reviews but are expensive and provide only a snapshot of the extent and nature of incidents. The accuracy of these estimates has been widely debated [26]. Some countries have introduced national incident monitoring systems to capture ‘near miss’ incidents, but there is significant under-reporting [27–29]. Although there is still no internationally accepted taxonomy for patient safety incidents [30], some countries have introduced mandatory reporting of sentinel events or confidential national reporting schemes [31]. Information is drawn from many disparate sources, such as coroners’ reports, confidential enquiries into mortality, routine scrutiny of surgical or all deaths, complaint processes, medical insurance data sets, and patient and staff surveys [31]. Progress in this area has been slow [6], but there are initiatives underway that show some promise [10]. This project attempts to further the design and implementation of patient safety monitoring systems by seeking international consensus on which measures to include.

Materials and methods

The time and resource constraints on the project dictated that only existing indicators could be reviewed rather than new indicators developed. The OECD HCQI Project team compiled lists of existing safety indicators from measurement and reporting systems in OECD countries. The sources for the indicators are summarized in Tables 1 and 2.

Table 1. National measurement activities across countries

National measurements of patient safety | Function
The United Kingdom—The National Patient Safety Agency (NPSA) | The agency created an observatory to draw together information from a range of sources [10]. Other national reports on patient safety have been produced, for example by the National Audit Office
Australia—Australian Commission on Safety and Quality in Health Care | A new organization is being established with a reporting function [34]. It takes over from the Australian Council for Safety and Quality in Health Care, which commissioned a report on a sustainable framework for measuring the quality of health care
The European Commission | The SIMPATIE Project will be conducting a stocktake of a range of patient safety activities, including measurement studies [35]
The United States | No comprehensive nationwide monitoring system exists for patient safety [3,4], although the Institute for Healthcare Improvement has initiated a national safety improvement campaign (http://www.ihi.org/IHI/Programs/Campaign)
Canada—The National Patient Safety Institute [36] | A new organization, supported also by the work of the Canadian Institute for Health Information (CIHI), which published a report based on available national-level data [37]
Denmark—National Danish Indicators Project | The new Danish Patient Safety Act requires reporting, and a system to collect and disseminate the learning from these events has been established through the Danish Patient Safety Association [38]

Table 2. Indicator sources

Set name | Description
AHRQ Patient Safety Indicators | This measure set was developed by the University of California at San Francisco (UCSF) for the US Agency for Healthcare Research and Quality (AHRQ). Safety measures were developed using (i) a background literature review, (ii) structured clinical panel reviews of candidate indicators, (iii) expert review of diagnosis codes, and (iv) empirical analyses of potential indicators. These indicators are all derived from hospital administrative data. A full report contains many more indicators that were not selected
AHRQ/CIHI Safety Indicators | AHRQ safety indicators adapted for use in Canada by the Canadian Institute for Health Information (CIHI)
Australian Council for Safety and Quality | The Australian Council for Safety and Quality in Health Care has developed indicators for sentinel events (binomial, catastrophic, and symptomatic of system failure) that have been agreed by all Australian Health Ministers. These indicators were selected on the basis of causing serious harm, having the potential to undermine public confidence in the health system, and warranting robust investigation and analysis. The information system issues are being addressed by each state/territory to enable reporting
Complications Screening Programme BIH | This measure set was developed at the Beth Israel Hospital (BIH) in Boston, USA. The BIH Complications Screening Programme algorithm uses discharge abstract data to identify complications that raise concerns about the quality of hospital care. This set includes 27 complication rate indicators to screen for patterns that could be prevented by improving processes of care
JCAHO IM system: infection control | The indicator measurement (IM) system was developed by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO). It is designed to incorporate continuous performance measurement into the accreditation process and provide periodic feedback to health care organizations. This indicator set measures adverse patient outcomes in infection control
JCAHO sentinel events | This measure set is a collection of sentinel events that the US JCAHO collects through a voluntary reporting process

The set contained a total of 59 candidate indicators that were then reviewed for the following:

  1. their importance to patient safety—severity of impact; robustness of the link to a failure of implementation or planning; the extent of public concern; link to attributes that services can influence; and sensitivity to policy changes;

  2. their scientific soundness—clinical face validity and content validity; and

  3. their potential feasibility—likely data availability across countries; and estimated reporting burden.

Using a structured review process developed by the RAND Corporation, each panel member rated each of the 59 candidate indicators on a scale of 1–9 against the importance and scientific soundness criteria [32]. Because panellists had only limited information about data systems across OECD countries, and thus about feasibility, they were asked to rate feasibility only in three broad categories—likely, possible, and unlikely—based on their previous knowledge. Ratings were summarized and measures of disagreement calculated. The panel then discussed the ratings by teleconference and assigned a final rating to each indicator.

All indicators with a final median score of >7 for both importance and scientific soundness and at least ‘likely’ feasibility were accepted. All indicators with a median score ≤5 for importance or scientific soundness were rejected. The remaining indicators with median ratings of 6 or 7 were selected or rejected on a case-by-case basis through panel discussion.
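
The decision rule described above can be expressed compactly. The following is a minimal illustrative sketch in Python, with hypothetical ratings; the function name and data layout are ours, not the panel's, and the handling of high-scoring indicators with weaker feasibility is an assumption where the text is silent.

```python
from statistics import median

def classify(importance_scores, soundness_scores, feasibility):
    """Apply the panel's published thresholds to one candidate indicator.

    importance_scores, soundness_scores: lists of 1-9 panellist ratings.
    feasibility: one of 'likely', 'possible', 'unlikely'.
    Returns 'accept', 'reject', or 'discuss' (case-by-case panel review).
    """
    imp = median(importance_scores)
    sci = median(soundness_scores)

    # Rejected outright if either criterion has a median of 5 or below.
    if imp <= 5 or sci <= 5:
        return "reject"
    # Accepted if both medians exceed 7 and feasibility is rated 'likely'.
    if imp > 7 and sci > 7 and feasibility == "likely":
        return "accept"
    # Remaining indicators (medians of 6-7, or high scores with weaker
    # feasibility, which the article does not spell out) are assumed here
    # to go to case-by-case panel discussion.
    return "discuss"

# Hypothetical ratings for a well-regarded indicator with likely data availability.
print(classify([8, 9, 8, 7, 9], [8, 8, 9, 8, 8], "likely"))  # -> accept
```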

Results

The final list of 21 indicators selected is summarized in Table 3. Five domains emerged: hospital-acquired infections, operative and post-operative complications, sentinel events, obstetrics, and other care-related events. Infection due to medical care, wound infection, pneumonia in patients on ventilators, and decubitus ulcer were selected, as were technical difficulty with procedures, complications of anaesthesia, post-operative pulmonary embolism and deep vein thrombosis, sepsis, and hip fracture. Sentinel events included wrong-site surgery, retained foreign bodies, and transfusion, equipment, or medication errors. Injury to the neonate or to the mother was selected in obstetrics, and other care-related areas included patient falls.

Table 3. Final list of patient safety indicators

1. Hospital-acquired infections
  • Ventilator pneumonia
  • Wound infection
  • Infection due to medical care
  • Decubitus ulcer
2. Operative and post-operative complications
  • Complications of anaesthesia
  • Post-operative hip fracture
  • Post-operative pulmonary embolism or deep vein thrombosis
  • Post-operative sepsis
  • Technical difficulty with procedure
3. Sentinel events
  • Transfusion reaction
  • Wrong blood type
  • Wrong-site surgery
  • Foreign body left in during procedure
  • Medical equipment-related adverse event
  • Medication errors
4. Obstetrics
  • Birth trauma—injury to neonate
  • Obstetric trauma—vaginal delivery
  • Obstetric trauma—Caesarean section
  • Problems with childbirth
5. Other care-related adverse events
  • Patient falls
  • In-hospital hip fracture or fall

Detailed descriptions of these indicators with operational definitions can be found in the OECD publication: Selecting Indicators for Patient Safety at the Health Systems Level in OECD Countries [33].

Discussion

The 21 indicators that the panellists selected assess clinically important patient safety events that are widely perceived to be indicative of lapses in care, such as procedural complications, birth trauma, and medication errors. The measures, selected by an international expert panel using a structured process, represent progress towards the goal of identifying consensus-based measures for international benchmarking of patient safety. However, this panel considers the set only a preliminary step in a long-term process towards implementing a measurement system for patient safety at the health system level. There are four major challenges to this set: two of a conceptual nature and two related to the feasibility of the measures.

The first conceptual limitation is that the set is restricted to manifest adverse events and leaves out indicators for near misses and also for adherence to safe care processes. The second is that the indicators focus on hospital care, leaving out important areas such as primary care, mental health, nursing home care, and self-care. Both limitations are a result of the restriction of this review to existing indicator sets, which have traditionally focused on adverse outcomes of in-patient care.

The first feasibility problem is data availability. There are strong disincentives for disclosing safety incidents, such as fear of litigation or shame, so few dedicated reporting systems exist. Even where incident monitoring systems capture adverse events and/or ‘near miss’ events, events may be under-reported. Alternative data sources also have limitations. Administrative data commonly lack the detail and completeness to capture such events, and the use of chart reviews is limited by cost. The second feasibility problem is that the set includes events that are very rare and that may also be under-reported in the administrative data commonly used to identify them. Thus, even at the health system level, such events might not occur with sufficient frequency and variance to support inferences about differential safety of care. Key issues for each indicator are summarized in Table 4.
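
To illustrate the rarity problem concretely, the short sketch below uses purely hypothetical figures (an assumed event rate and annual admission volume, not data from this project) and a simple Poisson approximation to show how imprecise a national rate for a very rare event can be.

```python
import math

# Hypothetical inputs for illustration only; not figures from the study.
assumed_rate = 1 / 100_000    # assumed rate of a rare sentinel event per admission
annual_admissions = 500_000   # assumed yearly hospital admissions in a small country

expected_events = assumed_rate * annual_admissions  # about 5 events per year

# Under a Poisson model the standard error of a count is sqrt(count),
# so the relative standard error of the estimated rate is roughly 1/sqrt(count).
relative_se = 1 / math.sqrt(expected_events)

print(f"Expected events per year: {expected_events:.1f}")
print(f"Approximate relative standard error of the rate: {relative_se:.0%}")
# ~45%: with a handful of events per year, even large apparent differences
# between countries are compatible with chance, before considering
# differential under-reporting.
```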

Table 4. Selected indicators, key data issues, and capacity to influence

Selected indicator | Key data issue | Capacity to influence
Ventilator pneumonia | Differential reporting | Ventilator ‘bundle’ of specific interventions
Wound infection | Consistency of severity classification | Strict hygiene including hand washing and use of disinfectants
Infection due to medical care | Variation in coding practices | Hygiene and rational use of antibiotics
Decubitus ulcer | Difficulties of differentiating pre-existing practices | Good quality nursing care
Complications of anaesthesia | Morbid events difficult to classify as avoidable | Procedural improvements
Post-operative hip fracture | Under-reporting in adverse event data | Appropriate prescribing and nursing procedures
Post-operative pulmonary embolism or deep vein thrombosis | Known to frequently go undiagnosed | Appropriate use of anti-coagulants and other preventive measures
Post-operative sepsis | Usually reliably coded and available | Appropriate use of prophylactic antibiotics, good surgical site preparation, careful, sterile surgical techniques, and good post-operative care
Technical difficulty with procedure | Controversy over inclusion and exclusion criteria | Training, work practices, and peer review
Transfusion reaction | Risk related to transfusion rather than infection difficult to differentiate | System redesign of procedures or blood component lock systems
Wrong blood type | Data availability where no programme to quantify non-infection risk | System redesign
Wrong-site surgery | May be under-reported | Procedures to verify patient identification, structured communications, standardized procedures, and timely access to records
Foreign body left in during procedure | May be under-reported | Standardized counting practices and work practices that ensure reduction in fatigue
Medical equipment-related adverse event | May cover errors due to lack of training as well as device failure | Device inspection, maintenance, training check lists, and use of simulation
Medication errors | Data likely to be significantly under-reported in the absence of mandatory reporting systems | Computerized physician order entry systems with clinical decision support, unit dosing, and pharmacy review processes
Birth trauma—injury to neonate | Need for adjustment for high-risk conditions to ensure comparability | Good antenatal and obstetric practice and sound systems
Obstetric trauma—vaginal delivery | Need for adjustment for high-risk conditions to ensure comparability | Good obstetric care
Obstetric trauma—Caesarean section | Need for adjustment for high-risk conditions to ensure comparability | Good obstetric care
Problems with childbirth | Needs consistent definition and reporting practices for complications across countries | Proper pre- and perinatal care and monitoring
Patient falls | Under-reporting and need for risk adjustment | Orientation/training, communication, patient assessment, and reducing environmental risk
In-hospital hip fracture or fall | Under-reporting and need for risk adjustment | Orientation/training, communication, patient assessment, and reducing environmental risk

Implications of this work

The issues of definition and measurement should benefit from the World Alliance for Patient Safety’s streams of work on classification, measurement, and reporting over the next few years (http://www.who.int/patientsafety/en). Where electronic data systems are not available, more labour-intensive techniques are needed, such as critical incident reporting systems, the use of administrative databases, retrospective randomized chart audits, patient recall, and simulations. Even electronic health records do not always contain the fields to adequately capture patient safety data. Data collected internationally are unlikely to be useful for longitudinal analysis in the short term. Reporting levels are likely to vary significantly across countries and over time.

The adoption and use of these indicators may help raise the awareness of harm. Without them, the general public will not be adequately informed or empowered to play a role in improving patient safety.

Promoting continuous case note reviews and incident reporting at the local level will remain the bulwark of safety improvement. It could also lead to the improvement of routinely collected large data sets.

Beyond clinical indicators, measures are also needed that reflect other patient safety issues, such as the national and local infrastructure that exists to promote learning and aspects of organizational performance. There are several typological and dimensional tools available to measure safety culture [9]. This could also include measurement of the implementation of effective safety practices—e.g. the Institute for Healthcare Improvement has designed a range of measures that enable hospitals to track their compliance with safety practices for The Health Foundation’s Safer Patient Initiative (personal communication).

In summary, patient safety is now recognized as a key dimension of health care quality. The OECD has selected an initial set of indicators to encourage further development. It is hoped that this will support international cooperation in improving health care quality around the world.

Acknowledgements

This article describes the key recommendations made by the Patient Safety Panel of the OECD Health Care Quality Indicators Project. It reflects the opinion of the authors and not an official position of the OECD, its member countries, or institutions participating in the project. The full report of the panel proceedings can be found at www.oecd.org/dataoecd.

References
