
A comparison of hospital adverse events identified by three widely used detection methods

James M. Naessens, Claudia R. Campbell, Jeanne M. Huddleston, Bjorn P. Berg, John J. Lefante, Arthur R. Williams, Richard A. Culbertson
DOI: http://dx.doi.org/10.1093/intqhc/mzp027. Pages 301–307. First published online: 17 July 2009

Abstract

Objective To determine the degree of congruence between several measures of adverse events.

Design Cross-sectional study to assess frequency and type of adverse events identified using a variety of methods.

Setting Mayo Clinic Rochester hospitals.

Participants All inpatients discharged in 2005 (n = 60 599).

Interventions Adverse events were identified through multiple methods: (i) Agency for Healthcare Research and Quality-defined patient safety indicators (PSIs) using ICD-9 diagnosis codes from administrative discharge abstracts, (ii) provider-reported events, and (iii) Institute for Healthcare Improvement Global Trigger Tool with physician confirmation. PSIs were adjusted to exclude patient conditions present at admission.

Main outcome measure Agreement of identification between methods.

Results About 4% (2401) of hospital discharges had an adverse event identified by at least one method, and around 38% (922) of these had a provider-reported event. Nearly 43% of discharges with provider-reported adverse events involved skin integrity events, 23% medication events, 21% falls, 1.8% equipment events and 37% miscellaneous events. Patients with adverse events identified by one method were usually not identified by the other methods. Only 97 (6.2%) of hospitalizations with a PSI also had a provider-reported event, and only 10.5% of hospitalizations with provider-reported events had a PSI.

Conclusions Different detection methods identified different adverse events. Findings are consistent with studies that recommend combining approaches to measure patient safety for internal quality improvement. Potential reported adverse event inconsistencies, low association with documented harm and reporting differences across organizations, however, raise concerns about using these patient safety measures for public reporting and organizational performance comparison.

  • adverse events
  • patient safety
  • reported events
  • quality measurement

Background

Attention in the USA has recently focused on the issue of safety in medical care, with hospital complications and patient safety indicators (PSIs) being reported on the internet [1–3], actions by the Centers for Medicare and Medicaid Services (CMS) to eliminate payment for selected hospital-acquired conditions [4] and the requirement by some states that providers report the incidence of the National Quality Forum's list of 28 ‘never’ events [5]. While much of the focus of efforts to improve healthcare quality has been on identifying medical errors regardless of their association with subsequent adverse events, some patient safety advocates have stressed that identifying and reporting adverse events with harm, rather than all errors in care, would lead to safer environments for patients [6].

When measuring patient safety, it is important to differentiate between medical errors and adverse events with patient harm. Not all adverse events are preventable, nor are they always the result of medical errors. Medical errors are mistakes or failures in the process of care. While they have the potential to be harmful, often they are not linked to patient injury [6]. Moreover, when medical errors do lead to adverse events, many are minor in terms of patient harm. By the Institute of Medicine (IOM) definition, an adverse event is always associated with ‘unintended harm to the patient by an act of commission or omission rather than by the underlying disease or condition of the patient’ [1].

Walshe [7] reviewed multiple definitions of adverse events and found agreement on three key characteristics: (i) negativity—adverse events were undesirable; (ii) patient impact—all definitions included negative impact or potential impact to patients; and (iii) causation—the event is a result of some part of the healthcare process, rather than a patient's own actions or disease progression. The IOM recommends that the focus of patient safety external reporting should be on measuring harm rather than measuring error [1]. PSIs, however, often conflate this distinction between process (medical errors) and outcome (adverse events) of care.

Commonly used patient safety measures are derived from different data sources, often with different purposes, methods, dimensions of the care process and levels of patient harm [8]. The Agency for Healthcare Research and Quality (AHRQ) PSIs, for example, are based on retrospective review of ICD-9 diagnosis codes from hospital discharge abstracts and screen for patient safety events that appear to have occurred [9]. Several states, including Minnesota, collect and report the National Quality Forum's list of 28 ‘never’ events based on hospital event reporting systems. Such events are rare and when they occur, most cause serious patient harm including disability and death. CMS has incorporated a process that will not pay for conditions that ‘could reasonably have been prevented through the application of evidence-based guidelines’ [10]. The Leapfrog Group survey collects hospital information on selected conditions not present on admission, such as injuries and pressure ulcers [11].

The objective of this study is to compare patients identified with adverse events using three detection methods currently in wide use in the USA: real-time provider-reported events, PSIs constructed from administrative data and medical record review-based trigger tools. Where feasible, the extent of harm associated with a detected event was estimated and used to classify it as an adverse event.

Methods

We carried out a retrospective cross-sectional study on all hospital inpatients discharged in 2005 (including deaths) from the three Mayo Clinic Rochester hospitals (n = 60 599) to assess adverse events. Because no gold standard exists [8], adverse events were identified using three different methods: (i) AHRQ PSI from administrative discharge abstracts, (ii) events reported to a central system by providers as they occur (provider-reported events), and (iii) medical records-based trigger tools. PSIs use administrative data to flag records suspected of an adverse event. ‘Trigger tools’, a form of medical records review, use a quick review of the medical record for specific criteria (triggers) to generate an enriched sample of cases for more extensive medical record assessment to identify adverse events. The Institute for Healthcare Improvement (IHI) has claimed success in the use of trigger tools to identify and reduce medication errors [12]. Additional information on patient demographics, length of stay, cost, diagnoses and procedures was obtained from administrative data systems. To maintain anonymity, patient identifiers were removed after records were linked from original sources. The study was approved by the Mayo Clinic Institutional Review Board.

Measures and data

Adverse events

For purposes of this analysis, we followed the IOM definition of an adverse event as an event leading to patient harm and caused by medical management rather than the underlying condition of the patient. The three widely used approaches selected for the measurement of adverse events differ in important ways. These differences are described in more detail below.

AHRQ PSIs: The PSIs are a set of computer algorithms developed by AHRQ to identify potential adverse events during the hospital stay using secondary diagnosis codes from hospital discharge abstracts [9]. The PSIs comprise 20 indicators with ‘reasonable face and construct validity, specificity and potential for fostering quality improvement’. Each indicator is defined by selected ICD-9-CM diagnosis or procedure codes on a specific type of case (e.g. surgery, obstetrics or all inpatients) that suggest the occurrence of an adverse event. For example, a surgical patient with a secondary diagnosis of streptococcal septicemia (ICD-9 diagnosis code 038.0) would be considered to have postoperative sepsis (PSI 13). Many PSIs have specific exclusions to reduce the likelihood of false positive cases.

Because of this approach, PSIs may include conditions that were present on admission. To reduce false positives, we flagged each secondary diagnosis as either present on admission or identified during the hospitalization, and we modified the AHRQ computer program to exclude secondary diagnoses present on admission from the PSIs [13]. Of the original 2430 PSI cases, 854 (35.1%) were eliminated as false positives. No further validation or medical record review was performed on the PSI-identified adverse events in this study.
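To make this flagging step concrete, the sketch below shows one way such a check could be coded. It is a minimal illustration under stated assumptions, not the AHRQ program itself: the record layout is invented for the example, and the code set standing in for PSI 13 is abbreviated and hypothetical (the real specification also restricts the eligible population and applies exclusion criteria).

```python
# Minimal sketch (not the AHRQ program): flag a discharge for a
# PSI-style indicator from its secondary diagnoses, skipping any
# condition flagged as present on admission (POA).
from dataclasses import dataclass

@dataclass
class Diagnosis:
    icd9_code: str               # ICD-9-CM diagnosis code
    present_on_admission: bool   # POA flag from the discharge abstract

# Hypothetical, abbreviated code set standing in for an indicator
# such as PSI 13 (postoperative sepsis).
PSI13_CODES = {"038.0", "038.9", "995.91", "995.92"}

def flags_psi(secondary_diagnoses: list[Diagnosis],
              indicator_codes: set[str]) -> bool:
    """True if any secondary diagnosis matches the indicator's code
    set and arose during the hospitalization (i.e. was not POA)."""
    return any(dx.icd9_code in indicator_codes
               and not dx.present_on_admission
               for dx in secondary_diagnoses)

# Example: streptococcal septicemia (038.0) coded during the stay
stay = [Diagnosis("038.0", present_on_admission=False),
        Diagnosis("401.9", present_on_admission=True)]
print(flags_psi(stay, PSI13_CODES))  # True -> candidate PSI case
```

Running the same records through this check with and without the POA test is, in essence, what separates the original 2430 candidate cases from the 1576 retained here.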

Provider-reported events: As part of routine hospital and clinical activities at Mayo Clinic, nurses, physicians, therapists and other employees report patient safety events, including medical errors and ‘near-misses’ (errors caught before they reach the patient), to a registered nurse carrying an ‘event pager’. Data are entered into a central database with a locally developed tool. Harm assessments are based on the reported patient status at the time of the event report; updates to these reported events are rare. The recording RN categorizes each reported event as a medication event, equipment event, fall, skin integrity event or miscellaneous event. Miscellaneous events cover anything not in the previous four groups: for example, identification errors, blood product errors, delays in treatment, dislodged intravenous placements, mislabeling and self-inflicted injuries. In this study, any event with reported harm is defined as a ‘provider-reported event’. Near misses and incidents without harm were excluded as not affecting patient health. All skin integrity events were considered adverse events, as no harm scale was available for them.

Trigger events: Trigger tools are sentinel words or conditions sought in a relatively quick review of the medical record that signal the possible occurrence of an adverse event. The presence of one of these conditions ‘triggers’ a more extensive record review by multiple reviewers, including a physician, to assess the cause of the condition. An example of a trigger is an abnormal prothrombin time with international normalized ratio (INR) >6, indicating a blood clotting problem. If a high INR is discovered, the record is reviewed more thoroughly to determine whether improper warfarin management or another treatment error caused the triggering condition or whether it was due to disease progression. Several sets of trigger tools are now available for detecting errors in intensive care units and general care units as well as in medication management [12, 14].
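As a rough illustration of how a trigger screen narrows the review workload, the sketch below scans a discharge's laboratory results for trigger conditions. The INR > 6 threshold follows the example in the text; the record layout and the second trigger are assumptions for illustration, and a hit only queues the chart for full review rather than establishing harm.

```python
# Sketch of a lab-based trigger screen: a hit queues the chart for
# full review by the nurse reviewers and confirming physician; it
# does not by itself establish that an adverse event occurred.
TRIGGERS = {
    "INR": lambda value: value > 6.0,           # per the text's example
    "glucose_mg_dl": lambda value: value < 50,  # assumed second trigger
}

def triggered_labs(lab_results: dict[str, float]) -> list[str]:
    """Return the names of any lab triggers met by this discharge."""
    return [name for name, rule in TRIGGERS.items()
            if name in lab_results and rule(lab_results[name])]

labs = {"INR": 7.2, "glucose_mg_dl": 98.0}
hits = triggered_labs(labs)
if hits:
    print(f"Queue chart for extensive review; triggers met: {hits}")
```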

For this study, the IHI Global Trigger Tool (GTT) was applied to a random sample of 10 hospital discharges every 2 weeks throughout the year, resulting in a subset of 235 discharges. Qualified nurses reviewed these records to determine whether a trigger word or condition was present and, if so, whether an event with harm was associated with the trigger. The level of harm resulting from the event was then assigned. All adverse events were confirmed by a physician. No attempt was made to assess whether the adverse event was preventable. Thus, for this study, a ‘trigger event’ is an adverse event identified through this method.

Assignment of patient harm

Direct assessment of harm based on the National Coordinating Council for Medication Error Reporting and Prevention classification was obtained for medication, equipment and miscellaneous events from the provider-reported events and for the GTT [15]. Falls were classified into a five-level harm scale of no harm, minor, moderate, severe or contributing to death.
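For illustration, the sketch below encodes the two harm scales just described as simple data structures. The NCC MERP category wording is standard, but the decision to count categories E–I as harm is an assumption made here to mirror the study's exclusion of near misses and no-harm incidents.

```python
# NCC MERP index categories (standard wording, abbreviated), used
# for medication, equipment and miscellaneous events in this study.
NCC_MERP = {
    "A": "circumstances with capacity to cause error",
    "B": "error occurred but did not reach the patient",
    "C": "error reached the patient, no harm",
    "D": "reached the patient, monitoring required",
    "E": "temporary harm, intervention required",
    "F": "temporary harm, initial or prolonged hospitalization",
    "G": "permanent patient harm",
    "H": "intervention required to sustain life",
    "I": "patient death",
}
HARM_CATEGORIES = {"E", "F", "G", "H", "I"}  # assumed harm cut-off

# Five-level scale applied to falls in this study.
FALL_HARM_SCALE = ("no harm", "minor", "moderate", "severe",
                   "contributing to death")

def is_reportable_harm(merp_category: str) -> bool:
    """Treat NCC MERP categories E-I as events with patient harm."""
    return merp_category in HARM_CATEGORIES
```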

Statistical analysis

Data obtained from each approach to detecting adverse events were linked by specific hospitalization before identifiers were removed. The unit of analysis was the patient hospitalization; each hospitalization could have multiple adverse events. Statistical analysis was performed to assess whether the three indicators measure similar aspects of medical management. Agreement between the measures of adverse events was assessed with contingency table analysis using Chi-square tests and/or Fisher's exact tests, depending on the number and rate of cases with adverse events.
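As a mechanical illustration of this agreement analysis, the sketch below cross-tabulates two of the methods and applies both tests. The cell counts are the trigger-versus-provider-reported figures from Table 1 (see Results); counts this small favor Fisher's exact test, and the use of scipy here is an implementation choice, not part of the original analysis.

```python
# Agreement between two detection methods on the trigger sample,
# using the 2 x 2 cell counts from Table 1.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

#                        trigger event: yes   no
table = np.array([[  9,   2],   # provider-reported event: yes
                  [ 56, 168]])  # provider-reported event: no

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)
print(f"chi-square p = {p_chi2:.4g}; Fisher exact p = {p_fisher:.4g}")
```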

Results

Identified events

Overall, 2401 (4.0%) of all discharges had an adverse event identified by one of the two all-hospital-patient methods. AHRQ PSIs, after removing conditions flagged as present on admission, were identified on 1576 (2.6%) of hospital discharges. Almost half of the PSIs were accidental punctures and lacerations (n = 761) associated with medical or surgical procedures. The next most frequent PSIs included postoperative deep vein thrombosis (DVT)/pulmonary embolism (PE), postoperative hemorrhage/hematoma and postoperative respiratory failure.

A provider-reported event affected 922 (1.5%) discharges. Skin integrity events were the most frequent type (n = 399), followed by miscellaneous events, medication events and falls. Finally, among the 235 discharges reviewed with the GTT, 65 (27.7%) were identified as having a trigger adverse event, reflecting the greater sensitivity of the trigger tool in measuring patient safety.

Comparison of hospital adverse events identified using each method

Figure 1 presents the cross-classification of the two all-patient methods of identifying adverse events. Ninety-seven patients had both a PSI and a provider-reported event, representing 6.2% of all PSI patients and 10.5% of all provider-reported event patients. Although PSIs and provider-reported events were statistically significantly associated with one another (a consequence of the large sample size), the actual overlap was relatively small.

Figure 1

Overlap of adverse events between PSI and provider-reported events. n is the number of discharges with an adverse event.

As seen in Table 1, only 6% of the 235 discharges reviewed with the trigger tool were flagged as having an adverse event by another method. Three had a PSI (two with trigger events, one without) and 11 had a provider-reported event (nine with a trigger event, two without). Meanwhile, very few patients with no trigger event had an adverse event identified through another method; one of the cases with a provider-reported event and no trigger event was a skin event. Too few events were identified in this trigger tool sample to allow meaningful statistical comparison.

Table 1

Number and congruence by patient of identified adverse events through three different methods: PSIs, provider-reported events and Global Trigger Tool, based on biweekly random samples of hospitalized patients, 2005

Number of discharges with an event for the trigger sample (n = 235)

                                               Discharges with trigger event
                                               Yes      No      Total
Discharges with provider-reported event
  Yes                                            9        2       11
  No                                            56      168      224
  Total                                         65      170      235
Discharges with patient safety indicator
  Yes                                            2        1        3
  No                                            63      169      232
  Total                                         65      170      235

Cross-classification of types of events

The cross-classification of type of provider-reported event and specific PSI is provided in Tables 2 and 3. Table 2 gives the number of cases with provider-reported events for specific PSIs, whereas Table 3 presents PSI information for specific types of reported events. Approximately 10% of each type of reported event also had a PSI, ranging from 5.8% for reported falls to 29.4% for the 17 reported equipment events. Of the individual PSIs found in more than 20 cases in the year, only decubitus ulcer (PSI 3), failure to rescue (PSI 4), postoperative respiratory failure (PSI 11) and postoperative sepsis (PSI 13) had a provider-reported event in more than 10% of cases. Only 8 of 52 patients with PSI 3, decubitus ulcer, had a provider-reported skin event. None of the 38 patients with a PSI indicating postoperative wound dehiscence or the 81 patients with PSIs indicating obstetrical trauma had any provider-reported events.

Table 2

Congruence of provider-reported adverse events with specific PSIs by hospital discharge based on all discharges, 2005

PSI  Indicator                                             Discharges     Discharges with corresponding     Most frequent type
                                                           with PSI (n)   provider-reported event, n (%)    of reported event
 1   Anesthesia complications                                     1         0 (0)
 2   Death in low-mortality DRG                                   8         1 (12.5)                        Miscellaneous
 3   Decubitus ulcer                                             52         9 (17.3)                        Skin
 4   Failure to rescue                                          161        25 (15.5)                        Miscellaneous
 5   Foreign body left                                            7         3 (42.7)                        Miscellaneous
 6   Iatrogenic pneumothorax                                     49         2 (4.0)                         Miscellaneous/skin
 7   Selected infections                                         82         8 (9.8)                         Miscellaneous
 8   PO hip fracture                                              2         1 (50.0)                        Fall
 9   PO hemorrhage/hematoma                                     124         3 (2.4)                         Medication/skin
10   PO metabolic derangement                                    22         2 (9.1)                         Miscellaneous/skin
11   PO respiratory failure                                      91        16 (17.6)                        Miscellaneous
12   PO PE/DVT                                                  196        17 (8.7)                         Skin
13   PO sepsis                                                   48         6 (12.5)                        Miscellaneous
14   PO wound dehiscence                                         38         0 (0)
15   Accidental puncture/laceration                             761        24 (3.2)                         Miscellaneous
16   Transfusion reaction                                         1         0 (0)
17   Birth trauma—newborn                                         2         0 (0)
18   OB trauma—vaginal delivery with instrumentation             14         0 (0)
19   OB trauma—vaginal delivery without instrumentation          58         0 (0)
20   OB trauma—caesarean section                                  9         0 (0)
     Any PSI                                                   1576        97 (6.2)

  • DRG, diagnosis-related group; PO, postoperative; OB, obstetrical.

Table 3

Congruence of PSIs with specific provider-reported adverse events by hospital discharge based on all discharges, 2005

Provider-reported event   Discharges with event (n)   Discharges with corresponding PSI, n (%)   Most frequent PSI
 Fall                             190                        11 (5.8)                            Failure to rescue
 Equipment                         17                         5 (29.4)                           Failure to rescue
 Medication                       207                        30 (14.5)                           Postoperative respiratory failure
 Miscellaneous                    342                        50 (14.6)                           Failure to rescue
 Skin                             399                        45 (11.3)                           Failure to rescue; postoperative DVT/PE; accidental puncture/laceration
 Any reported event               922                        97 (10.5)

Discussion

Starting with a large database of hospital admissions, we developed a database of hospital adverse events to enable a comparison of three widely used methods for detecting such events. The different methods identified different adverse events, and very few patients were identified by more than one method. Overall, 2.6% of discharges were identified with PSIs, yet only 6.2% of discharges with a PSI also had a provider-reported event, and PSIs did not identify 89.5% of patients with provider-reported adverse events. In the sample of cases reviewed with the GTT, 65 (27.7%) discharges were found to have had a trigger adverse event, of which only 11 (17%) were also detected as provider-reported events or PSIs. Furthermore, only two provider-reported events (one of them a skin event) and one PSI occurred among the trigger tool cases in which no trigger events were found. Although it is not surprising that many patients were not identified as having an adverse event by all three methods, the extremely small extent of overlap is noteworthy. These findings are consistent with other studies in which different methods of event identification flagged different cases for poor quality or patient safety [16, 17]. Poor agreement has also been found between hospital adverse events reported through patient surveys and through medical record review [18]. For quality improvement efforts, these results suggest combining identification approaches to more fully capture possible patient safety issues in the hospital [8, 19].

After eliminating conditions present on admission, PSIs were found to place more emphasis on problems occurring in surgical and procedural practice and were less able to detect problems among medical patients. In their study of PSIs in the Veterans Health Administration system, Rosen et al. [20] noted that adverse events from surgery are more amenable to ICD-9-CM coding than harmful events among medical patients. Over 75% of the PSIs found on discharged patients in our study were related to conditions identified as postoperative or procedurally related (postoperative problems or accidental punctures). In contrast, a 2000 study based on national data reported that only 32% of the 18 PSIs examined were surgically related [21]. Although our method of identifying PSIs eliminated some conditions that were included in the national report (those present on admission), the lack of a close relationship between provider-reported events and identified PSIs, along with inconsistencies in diagnostic coding [22, 23], argues for further study of PSIs before they are used to compare quality and patient safety across institutions. Moreover, not all of the PSIs have been adequately validated at the patient level through medical record review. Others question the value of PSIs because they have not been shown to be consistent with other quality measures at the institution level [24]. In a subsequent review of accidental punctures/lacerations at our institution, the majority of cases flagged with a PSI were found to involve serosal tears or very minor cuts during complex surgeries with multiple adhesions (M. Nyman and J. Naessens, personal communication).

Anecdotally, most provider-reported events are submitted by nurses or other non-physician staff and often relate to events considered nursing issues (e.g. wrong medication given, hospital falls or skin ulcers). The GTT may be able to capture a significant number of adverse events associated with physician care of non-surgical patients, but our study included too few cases to draw any valid conclusions.

Specific event rates in our study were also similar to those in other studies. We found a provider-reported rate of medication events of 3.4 per thousand discharges. Our overall medication error rate during this period was 3.7% of discharges, with 87.9% of these reports indicating no harm to the patient. Senst et al. [25] found a similar level of medication events, affecting 4.2% of admissions, using approaches that included self-reports, computer flags and review of a random selection of records.

This study is subject to several limitations. The findings were obtained at the hospitals of an academic referral center in one community; as such, we cannot isolate the possible influence of local practice patterns. Furthermore, the apparent level of patient safety may vary with the adverse event identification method used. For example, reliance on PSIs may identify more quality and safety issues at institutions with a higher proportion of surgical patients. On the other hand, reliance on provider-reported adverse events depends on the level of harm estimated by the reporter at the time of the report, and routine use of these data may not capture subsequent changes or manifestations of the problem. The extent of provider reporting will also be influenced by the organizational culture; a punitive culture would suppress self-reporting.

Although progress has been made on adopting standardized definitions of adverse events, medical errors and preventable complications, their use in operational settings and in research studies still differs. In our use of provider-reported events, we excluded any reports of potential errors or near misses caught before reaching the patient, as well as errors without reported harm. Provider-reported events, being more sensitive indicators of patient safety during hospitalization, may therefore be better indicators of lower quality of care than PSIs. We also assumed, without further validation, that any detected PSI was associated with a harmful adverse hospital event. Levels of patient harm may vary across types of PSIs, and we were not able to adjust for this factor. Furthermore, because PSIs depend on administrative billing diagnosis codes, all the issues in comparing coding practices between institutions need to be considered [9, 23].

In our experience with Minnesota's mandatory reporting of the National Quality Forum's list of serious adverse events, we report ‘unstageable’ pressure ulcers in addition to stage 3 or 4 ulcers acquired after admission. However, only 25% of the last 16 provider-identified patients with pressure ulcers had an ICD-9-CM secondary diagnosis code for a pressure ulcer (codes 707.00–707.09). These patients typically have multiple morbidities and long hospitalizations, and our administrative system is limited to 15 diagnoses per hospitalization. It is possible that the decubitus ulcer was identified by the coder but was not prioritized highly enough among possible diagnoses to be captured in our repository [26]. Finally, not all PSIs would be preventable with today's technology [22] and therefore would not meet the criterion of identifying a preventable adverse event.

Conclusions

The findings in this study are consistent with arguments for combining identification approaches to fully understand patient safety issues occurring within an organization. Combining approaches to identify adverse events may be less appropriate, however, for developing indicators for public reporting. Differences in the frequencies of identified events can arise from inconsistencies in the definitions used in practice and from the detection method employed. While standard definitions can improve the comparability of reported patient safety events, the possibility remains that the approaches used within organizations and by external agents may be measuring different constructs rather than common safety risks. For example, some of these approaches may be capturing the impact of disease severity on patient outcomes in addition to, or rather than, medical management errors that lead to patient harm.

Through its focus on selected cases, the GTT shows promise in its ability to detect adverse events in a consistent fashion. However, the current trigger tool process remains ‘reviewer intensive’ because thorough chart review is necessary to increase sensitivity. Further research is needed to investigate the value of incorporating trigger criteria into concurrent electronic medical record systems as just-in-time alert messages to providers. Concurrent trigger tools may not only detect adverse events but also enable rapid amelioration of any deleterious effects.

Funding

No extramural funding was used to conduct this research.

Acknowledgements

We wish to thank and acknowledge Sara Hobbs Kohrt for her organization and manuscript preparation. We would also like to thank the reviewers of the manuscript for their insightful comments and revision suggestions.

References
