Using quality indicators to improve hospital care: a review of the literature

Maartje De Vos, Wilco Graafmans, Mieneke Kooistra, Bert Meijboom, Peter Van Der Voort, Gert Westert
DOI: http://dx.doi.org/10.1093/intqhc/mzn059. Pages 119–129. First published online: 20 January 2009

Abstract

Purpose To review the literature concerning strategies for implementing quality indicators in hospital care, and their effectiveness in improving the quality of care.

Data sources A systematic literature study was carried out using MEDLINE and the Cochrane Library (January 1994 to January 2008).

Study selection Hospital-based trials studying the effects of using quality indicators as a tool to improve quality of care.

Data extraction Two reviewers independently assessed studies for inclusion, and extracted information from the included studies regarding the health care setting, the type of implementation strategy and its effectiveness as a tool to improve the quality of hospital care.

Results A total of 21 studies were included. The most frequently used implementation strategy was audit and feedback. The majority of the studies focused on care processes rather than patient outcomes. Six studies evaluated the effects of implementing quality indicators on patient outcomes: in four, implementation was found to be ineffective, in one partially effective and in one effective. Twenty studies focused on care processes, and most reported significant improvement for part of the measured process indicators. The implementation of quality indicators in hospitals is most effective if feedback reports are combined with an educational implementation strategy and/or the development of a quality improvement plan.

Conclusion Effective strategies to implement quality indicators in daily practice in order to improve hospital care do exist, but there is considerable variation in the methods used and the level of change achieved. Feedback reports combined with another implementation strategy seem to be most effective.

Keywords
  • quality indicators
  • quality improvement
  • quality measurement
  • implementation strategy
  • hospital care

Introduction

With increasing frequency, hospitals in various countries report and monitor indicator data in order to improve the quality of care [1–4]. Quality indicators aim to detect sub-optimal care in structure, process or outcome, and can be used as a tool to guide the process of quality improvement in health care [5]. Monitoring health care quality makes hospital care more transparent for physicians, hospitals and patients. Furthermore, it provides information to target quality improvement initiatives. However, the collection of indicator data also places an administrative burden on physicians and hospitals; therefore, the use of this information should be optimized. It is unclear which implementation strategy for quality indicators is optimal, and what effects can be achieved when quality improvement is guided by indicator information.

The implementation of quality indicators as a tool to assist quality improvement requires effective communication strategies and the removal of hindrances [6]. Evidence suggests that audit and feedback based on indicator data can be effective in changing health care professional practice [7, 8]. Monitoring the indicator data may also help to target specific quality improvement initiatives such as educational programs and development of protocols.

The effect of monitoring indicator data to promote quality improvement, and ultimately patient care, has been demonstrated in specific situations. For example, in the Bradford Teaching Hospital in the United Kingdom, feedback of mortality rates resulted in a reduction of the standardized mortality ratio from 0.95 to 0.75 [9].

At present, no clear overview is available of strategies for implementing indicators and their effects on the quality of care in hospitals. Some reviews address the implementation of indicators, but do not focus on hospital care [10, 11]. Another review of the literature is limited to audit and feedback as implementation strategies [8]. With respect to the effectiveness of using indicators to promote quality improvement, previous reviews have focused on specific diseases or medical disciplines, e.g. pneumonia or cardiac surgery [12, 13]. In our review, we focus on hospital care in general, and take into account all implementation strategies described in the literature. The purpose of our study is firstly to review the literature concerning strategies for implementing quality indicators, and secondly to examine their effectiveness in improving the quality of hospital care.

Methods

Data source

A systematic literature search was conducted in MEDLINE and the Cochrane Library for the period from January 1994 to January 2008. We searched for articles published in English or Dutch. The search was limited to randomized controlled trials (RCTs), controlled clinical trials (CCTs) and controlled before–after studies (CBAs), as categorized in MEDLINE. An RCT is the most robust study design for showing the effect of quality improvement strategies [14]. However, as some strategies are not amenable to randomization, we also included non-randomized trials.

The search strategy in MEDLINE combined a truncated search for ‘quality indi*’ with the text words ‘hospital care’ or ‘quality improvement’. In addition, we searched the Cochrane Library using the Medical Subject Heading ‘quality indicators, health care’. The reference lists of all retrieved articles were searched for additional relevant references.

Two reviewers independently assessed the studies for inclusion. In case of disagreement between the two researchers, a third researcher was consulted.

Study selection

Firstly, we selected studies based on the relevance of their focus. Studies reporting the use of quality indicators as a tool to improve hospital care were included. Studies that measured care processes or patient outcomes were also included if the focus was on inpatient care at the level of the hospital, ward or individual specialist. Studies concerned with primary care (e.g. general practitioners), chronic care, mental health and dental care were excluded, because the delivery of care in these settings may differ considerably from the hospital setting.

Secondly, we selected studies based on study design and quality. Studies had to report a baseline and a follow-up measurement, and include both a control and an intervention group. The effects of the implementation strategy had to be quantified, and studies had to be carried out in two or more hospitals to allow generalization of the results.

For those studies that met the inclusion criteria, we classified the implementation strategies in which the information on quality indicators was used directly into the following categories (see Table 1): (1) educational meeting, (2) educational outreach, (3) audit and feedback, (4) development of a quality improvement plan and (5) financial incentives.

Table 1

Classification of implementation strategies

Implementation strategies in which the information on quality indicators was directly used
 Educational meeting: participation in conferences, seminars, lectures, workshops or training sessions. During these meetings, feedback on quality indicators was presented, and study participants discussed how to improve performance.
 Educational outreach: a trained independent person or investigator met with health professionals or managers in their practice setting to provide information (e.g. feedback on quality indicators).
 Audit and feedback: a report including a summary of clinical performance over a specified period of time was given.
 Development of a quality improvement plan: a plan based on indicator data was used to improve the quality of care.
 Financial incentive: rewarding individual health professionals or institutions with higher payments when they improve performance.
Supporting activities
 Distribution of educational material: published or printed recommendations for clinical care or quality improvement were distributed.
 Local opinion leader: professionals named by their colleagues as influential, with emphasis on acting as a local authority.
 Quality improvement facilities: an implementation process organized by a quality improvement team or organization to improve the quality of care, including system support methods (e.g. support by phone or e-mail for quality improvement).

Implementation strategies that did not directly use the information on quality indicators, but supported the implementation, were categorized as ‘distribution of educational material’, ‘local opinion leaders’ and ‘quality improvement facilities’ (see Table 1). An educational meeting was regarded as a supporting activity if the meeting focused on quality improvement techniques rather than on presenting feedback on quality indicators.

Common to all studies on which we focus in this review is the use of key information on the structure, process and outcome of care, and the systematic use of this information to improve the quality of care. Central to the use of quality indicators is the feedback of information. Therefore, in order to summarize the implementation strategies used, we categorized the contribution of feedback to the implementation strategy into ‘receiving no feedback report’, ‘receiving a feedback report only’ and ‘receiving a feedback report combined with another strategy that also used quality indicators as part of the implementation strategy’.

For the studies included, information was collected concerning the health care setting, the methods used to implement quality indicators in hospitals, and their effectiveness in improving the quality of hospital care. The effectiveness of these strategies may be explained by the fact that they are capable of dealing with different barriers simultaneously [15]. We have summarized the barriers reported in some of the studies.

Results

Selection of articles

As a result of the search, 516 studies were identified (see Fig. 1). Of these, 465 were excluded because they did not aim to measure the effect of the use of quality indicators. Four additional articles were obtained from the reference lists. A total of 55 articles were evaluated by two reviewers based on the quality of the studies. Finally, 21 studies were included.

Figure 1

Flow chart of study selection process.

Study characteristics

We included nine RCTs [16–24], two CCTs [25, 26] and ten CBAs [27–36].

The majority of the trials were conducted in the United States (17 studies); the others were carried out in Canada [17], Australia [33], Sweden [16] and Laos [21]. Furthermore, quality indicators were used in a wide range of medical disciplines within hospital care. The majority of studies focused on the use of quality indicators in cardiovascular care (67%) [16, 17, 19, 20, 22–24, 27, 29, 32–36]. Most studies (81%) aimed at improving the quality of care in one specific medical discipline. The sample size showed great variation, from 1 to 379 hospitals in the intervention group (see Table 2).

Table 2

Characteristics and results of the studies included

First author, year | Study design | Clinical area | Methods to implement quality indicators | Effects on care processes | Effects on patient outcome
Pandey et al., 2006 [27] | CBA (CG = 6, IG = 7) | Cardiovascular care | Educ. outreach and educ. meeting vs. chart audit only | No sign. improvement in 6 out of 7 process indicators, except for lipid screening (adj. OR 19.93; 90% CI 2.99–36.86) | Not measured
Carlhed et al., 2006 [16] | RCT (CG = 19, IG = 19) | Cardiovascular care | Real-time feedback report, educ. meetings and QI plan vs. no intervention. Supporting activities (QI facilities, incl. QI team and ongoing support by phone and on-request site visits) | Sign. improvement in 4 out of 5 process indicators: use of ACE inhibitor 1.4% vs. 12.6% (P = 0.002), use of lip. low. 2.3% vs. 7.2% (P = 0.065), use of heparin 5.3% vs. 16.3% (P = 0.010) and use of Cor-Angio 6.2% vs. 18.8% (P = 0.027) | Not measured
Grossbart, 2006 [28] | CBA (CG = 6, IG = 4) | Cardiovascular care, pneumonia, hip/knee | Feedback report and rewarding hospitals with an incentive bonus vs. no intervention | Sign. improvement in composite process indicator scores of 6.7% vs. 9.3% (P < 0.001) | Not measured
Moscucci et al., 2006 [29] | CBA (CG = 7, IG = 5) | Cardiovascular care | Quarterly and annual feedback reports, educ. outreach, educ. meeting, distribution educ. material and QI plan vs. no intervention | Sign. improvement in all 6 process indicators | Sign. improvement in 4 out of 6 outcome indicators: contrast nephropathy (adj. OR 0.59; 95% CI 0.44–0.77), emergency CABG (adj. OR 0.54; 95% CI 0.32–0.90), stroke (adj. OR 0.33; 95% CI 0.16–0.65) and death (adj. OR 0.57; 95% CI 0.40–0.82)
Rosenthal et al., 2005 [30] | CBA (CG = 31, IG = 134) | Cancer screening, mammography, hemoglobin testing | Rewarding health care professionals with an incentive bonus vs. no intervention | No sign. improvement in 2 out of 3 process indicators, except for cervical cancer screening (3.6% improvement, P = 0.02) | Not measured
Beck et al., 2005 [17] | RCT (CG = 38, IG = 38) | Cardiovascular care | Rapid feedback report vs. delayed feedback report | No sign. improvement in any of the 12 process indicators | No sign. improvement in mortality at 30 days after discharge (adj. OR 0.6; 95% CI 0.70–1.8)
Snyder and Anderson, 2005 [31] | CBA (CG = 142, IG = 199) | Cardiovascular care, pneumonia | Feedback report vs. no intervention. Supporting activities (distribution educ. material and QI facilities, incl. assisting in implementing system change) | No sign. improvement in 14 out of 15 process indicators, except for pneumonia immunization (P = 0.005) | Not measured
Landon et al., 2004 [25] | CCT (CG = 25, IG = 44) | HIV infection | Monthly feedback reports, educ. meetings and QI plan vs. no intervention. Supporting activities (QI facilities, incl. QI team and monthly conference calls) | No sign. improvement in 7 out of 8 process indicators, except for screening and prophylaxis Papanicolaou smear (P = 0.06) | Not measured
Horbar et al., 2004 [18] | RCT (CG = 57, IG = 57) | Surfactant, preterm infants | Feedback report vs. no intervention. Supporting activities (educ. meeting on QI techniques and QI facilities, incl. ongoing support by quarterly conference calls and mail discussion list) | Sign. improvement in all 3 process indicators: proportion receiving surfactant in delivery room (adj. OR 5.38; 95% CI 2.84–10.20), proportion receiving first surfactant >2 h after birth (adj. OR 0.35; 95% CI 0.24–0.53) and median time from birth to first dose of surfactant (P < 0.0001) | No sign. improvement in rate of death before discharge
Berner et al., 2003 [19] | RCT (CG = 6, IG1 = 8, IG2 = 7) | Cardiovascular care | Educ. meeting and QI plan (IG1) vs. no intervention. Supporting activities (distribution educ. material; IG2 added opinion leader) | No sign. improvement in 4 out of 5 process indicators, except for antiplatelet medication within 24 h for IG2 vs. CG (adj. OR 1.92; 95% CI 1.19–3.12) and antiplatelet medication within 24 h for IG1 vs. IG2 (adj. OR 1.79; 95% CI 1.09–2.94) | Not measured
Chu et al., 2003 [26] | CCT (CG = 16 (crossover), IG = 20) | Pneumonia | Feedback report, educ. outreach and QI plan. Supporting activities (distribution educ. material and QI facilities, incl. QI training support and site visits) | Sign. improvement in 2 out of 4 process indicators: antibiotics given in emergency department (adj. OR 10.72; 95% CI 3.56–32.30) and blood culture obtained within 4 h (adj. OR 2.48; 95% CI 1.17–5.25) | No sign. improvement in LOS and unadjusted mortality
Ferguson et al., 2003 [20] | RCT (CG = 115, IG1 = 101, IG2 = 107) | Cardiovascular care | Feedback report, QI plan and distribution educ. material (one arm received educ. on beta-blockade (IG1), the other educ. on IMA grafting (IG2)) vs. no intervention. Supporting activities (local opinion leader) | Sign. improvement in 1 out of 2 process indicators: use of preoperative beta-blockade (IG1 vs. CG, P < 0.001; IG2 vs. CG, P = 0.02) | Not measured
Wahlström et al., 2003 [21] | RCT (CG = 12, IG = 12) | Malaria, diarrhea and pneumonia | Educ. meetings vs. no intervention. Supporting activities (educ. meeting on QI techniques and QI facilities, incl. QI team) | Sign. improvement in overall mean process indicator scores for malaria, diarrhea and pneumonia together (OR 0.63; 95% CI 0.16–1.112) | Not measured
Hayes et al., 2002 [22] | RCT (CG = 16, IG = 16) | Cardiovascular care | Educ. outreach vs. feedback report and educ. material. Supporting activities (opinion leader, educ. meeting) | No sign. improvement in 4 out of 5 process indicators, except for discharge counseling for daily weights (OR 2.63; 95% CI 1.14–6.07) | Not measured
Mehta et al., 2002 [32] | CBA (CG = 11, IG = 10) | Cardiovascular care | Educ. outreach, feedback report and QI plan vs. no intervention. Supporting activities (opinion leader (outside hospital), distribution educ. material) | Sign. improvement in 4 out of 8 process indicators: use of aspirin on admission (81% vs. 87%; P = 0.02), use of beta-blockers on admission (65% vs. 74%; P = 0.04), use of aspirin after discharge (84% vs. 92%; P = 0.002) and smoking counseling at discharge (53% vs. 65%; P = 0.02) | Not measured
Scott et al., 2001 [33] | CBA (CG = 112, IG = 1) | Cardiovascular care | Feedback reports and educ. meeting vs. no intervention. Supporting activities (distribution educ. material) | Not measured | Sign. improvement in inpatient mortality (adj. OR 0.59; 95% CI 0.45–0.77)
Hayes et al., 2001 [23] | RCT (CG = 15, IG = 14) | Cardiovascular care | Educ. meeting and QI plan vs. feedback report. Supporting activities (opinion leader and distribution educ. material) | No sign. improvement in any of the 5 process indicators | Not measured
Sauaia et al., 2000 [34] | CBA (CG = 9, IG = 9) | Cardiovascular care | Educ. outreach and QI plan vs. mailed feedback report. Supporting activities (opinion leader) | No sign. improvement in any of the 7 process indicators | Not measured
Ellerbeck et al., 2000 [35] | CBA (CG = 73, IG = 44) | Cardiovascular care | QI plan based on feedback vs. no QI plan based on feedback | Sign. improvement in 3 out of 8 process indicators: aspirin during hospitalization (6% vs. 13%) and at discharge (6% vs. 15%) and use of beta-blockers (14% vs. 22%) | Not measured
Marciniak et al., 1998 [36] | CBA (CG = not given, IG = 379) | Cardiovascular care | Feedback report and QI plan vs. no intervention | Sign. improvement in 3 out of 7 process indicators: use of aspirin at discharge (OR 5.6; 95% CI 2.6–8.7), use of beta-blockers (OR 8.0; 95% CI 1.4–14.6) and smoking counseling (OR 8.5; 95% CI 1.6–15.5) | No sign. improvement in hospital mortality
Soumerai et al., 1998 [24] | RCT (CG = 17, IG = 20) | Cardiovascular care | Educ. meeting vs. mailed feedback report. Supporting activities (local opinion leader and distribution educ. material) | Sign. improvement in 2 out of 4 process indicators: use of oral aspirin (P < 0.04) and use of beta-blockers (P < 0.02) | Not measured
  • CG, number of hospitals in control group; IG, number of hospitals in intervention group; QI, quality improvement; OR, odds ratio; CI, confidence interval; sign., significant; adj., adjusted; incl., including; educ., educational; lip. low., lipid-lowering therapy; Cor-Angio, coronary angiography; IMA, internal mammary artery; LOS, length of stay.

Types of implementation strategies

The methods used to implement quality indicators were classified into implementation strategies in which the information on quality indicators was used directly, or that did not use the information on quality indicators directly, but only supported the implementation, such as the involvement of a quality improvement team.

Table 2 shows the implementation strategies used. The most frequently used implementation strategies in which the information on quality indicators was used directly were audit and feedback (12 studies; 57%) and the development of a quality improvement plan based on quality indicator data (10 studies; 48%). The combination of these two strategies was used in seven studies, often supplemented by educational meetings and/or educational outreach [16, 25, 26, 29, 32]. The most frequently used supporting activity was the distribution of educational material (nine studies). Other supporting activities were the use of a local opinion leader and the development of a quality improvement team.

In most studies (86%), multiple implementation strategies were used. In 14 studies, implementation strategies that related directly to quality indicators were combined with supporting activities. In four studies, only strategies that related directly to quality indicators were used [27–29, 36].

Three studies reported a single implementation strategy in which the information on quality indicators was used directly. The single implementation strategies described were as follows: providing external feedback with an incentive bonus [30], providing immediate feedback [17] and using a quality improvement plan [35].

Most follow-up measurements of process and outcome indicators were performed 6 months after the strategy was implemented [16–18, 20, 21, 23, 25, 32].

Effects of quality indicator use

The studies described different designs, implementation strategies and outcome measurements for assessing the effect of quality indicators. Table 2 summarizes the results per study.

Most studies measured several outcomes, e.g. changes in multiple process indicators. In an attempt to summarize the results of the studies, we divided them into three categories: effective, partly effective and ineffective. We categorized a study as ‘effective’ if more than half of all the outcome measures improved significantly, as ‘partly effective’ if approximately half of the outcomes improved significantly, and as ‘ineffective’ if fewer than half of all the outcomes improved significantly.
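To make this categorization concrete, the rule can be expressed as a small function. The following is a minimal sketch in Python; since the paper does not define ‘approximately half’ precisely, the tolerance band used here is our assumption rather than the authors' stated rule.

```python
# Illustrative sketch of the study-level effectiveness categorization.
# The cut-off for "approximately half" (tolerance below 0.5) is assumed;
# the review does not state an exact boundary.

def categorize_study(n_improved: int, n_outcomes: int, tolerance: float = 0.15) -> str:
    """Label a study by the share of outcome measures that improved
    significantly: >50% -> effective, roughly half -> partly effective,
    clearly fewer than half -> ineffective."""
    share = n_improved / n_outcomes
    if share > 0.5:
        return "effective"
    if share >= 0.5 - tolerance:  # "approximately half" (assumed band)
        return "partly effective"
    return "ineffective"

# Examples drawn from Table 2:
print(categorize_study(4, 8))   # Mehta et al. [32]: 4 of 8 -> partly effective
print(categorize_study(6, 6))   # Moscucci et al. [29]: 6 of 6 -> effective
print(categorize_study(0, 12))  # Beck et al. [17]: 0 of 12 -> ineffective
```

With this assumed band, the labels of the borderline studies in Table 2 (e.g. 3 of 8 indicators improved, categorized as partly effective [35]) are also reproduced.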

Nine RCTs, two CCTs and ten CBAs were included. There was no clear relationship between the study design used and effectiveness. Four out of nine RCTs showed the implementation to be ineffective [17, 19, 22, 23], one CCT was ineffective [25] and four out of ten CBAs did not show clear positive effects [27, 30, 31, 34]. The results of the studies included are reported for three types of outcomes: overall composite scores, patient outcomes and care processes (e.g. hospital mortality and medication prescribing). In two studies, an overall indicator score was measured; these studies showed a statistically significant improvement in the composite process indicator score [21, 28]. Five studies reported patient outcomes as well as care processes [17, 18, 26, 29, 36]. One study measured patient outcomes only [33]. In total, six studies evaluated whether or not quality indicator implementation improved patient outcomes: four were found to be ineffective, one was partly effective and one was categorized as effective (see Table 2). Five studies reported inpatient mortality as an endpoint. Two studies found significant improvements in patient outcomes: reductions in inpatient mortality [29, 33], stroke or transient ischemic attack [29], emergency coronary artery bypass graft (CABG) [29] and contrast nephropathy [29].

In 20 studies, process indicators were used to measure care processes (see Table 2). In each of these studies, more than one process indicator was measured. In three studies, there was no significant improvement in any of the process indicators measured [17, 23, 34]. Two studies reported significant improvements in all process indicators [18, 29]. Most studies reported significant improvements in part of the measured process indicators. Of these, seven studies seemed to be effective or partly effective. These studies mostly reported higher rates of prescribing drugs: angiotensin-converting enzyme (ACE) inhibitors [16], heparin [16], antibiotics in the emergency department [26], beta-blockers [20, 24, 32] and aspirin [24, 32]. In addition, these studies reported on treatments given: lipid-lowering therapy [16], coronary angiography [16], blood culture obtained within 4 h [26] and higher rates of smoking counseling [32].

Not all studies adjusted the analyses for differences in the distribution of other determinants when comparing the effect in the intervention group with that in the control group. Fourteen of the studies reported adjusted outcome measurements at the patient level (age, co-morbidity) and/or hospital level (teaching status/volume). Of these studies, eight were found to be ineffective [17, 19, 22, 23, 25, 27, 31, 34], four were partly effective [18, 20, 26, 32] and only two were categorized as effective [29, 33]. Of the studies using unadjusted outcome measurements, three were found to be effective [16, 21, 28], three were partly effective [24, 35, 36] and one was ineffective [30].

The follow-up measurement period varied from 4 months [27, 35, 36] to 4 years [33]. Studies with a follow-up measurement period of less than 6 months showed less significant improvement in the outcome measures [27, 34–36].

Types of implementation strategies and their effects

In order to summarize the prevailing implementation strategies, we divided them into three categories: receiving no feedback report, receiving a feedback report only and receiving a feedback report combined with another implementation strategy (see Table 3). There seemed to be a relation between the implementation strategies used and the effectiveness of the study (Kruskal–Wallis χ2 = 6.72, P = 0.035); a sketch reproducing this test from the counts in Table 3 follows the table.

Table 3

Types of implementation strategies and their effects

Implementation strategies in which indicator scores were used directly | Effective (a) | Partly effective (b) | Ineffective (c)
No feedback report | 1 | 2 | 6
Feedback report only | 0 | 1 | 2
Feedback report combined with another implementation strategy | 4 | 4 | 1
  • (a) Effective: more than half of all outcomes improved significantly.
  • (b) Partly effective: approximately half of the outcomes improved significantly.
  • (c) Ineffective: less than half of all outcomes improved significantly.
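As a plausibility check, the reported Kruskal–Wallis statistic can be re-derived from the counts in Table 3 by treating effectiveness as an ordinal outcome. The following is a minimal sketch, assuming Python with SciPy; the ordinal coding (ineffective = 0, partly effective = 1, effective = 2) is our assumption about how the categories were ranked, not a detail stated in the paper.

```python
# Re-derive the Kruskal-Wallis test from the counts in Table 3.
# Each study contributes one ordinal effectiveness score:
# ineffective = 0, partly effective = 1, effective = 2 (assumed coding).
from scipy.stats import kruskal

no_feedback       = [2] * 1 + [1] * 2 + [0] * 6  # 9 studies
feedback_only     = [2] * 0 + [1] * 1 + [0] * 2  # 3 studies
feedback_combined = [2] * 4 + [1] * 4 + [0] * 1  # 9 studies

h, p = kruskal(no_feedback, feedback_only, feedback_combined)
print(f"chi2 = {h:.2f}, P = {p:.3f}")  # chi2 = 6.72, P = 0.035, matching the text
```

Under this coding, SciPy's tie-corrected statistic reproduces the value reported in the text.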

Effective or partly effective studies appeared to use feedback reports combined with other implementation strategies. For example, feedback reports in combination with education and the use of a quality improvement plan seemed to be effective [16, 20, 26, 29, 32]. Studies that did not systematically use feedback reports seemed to be less effective [19, 22–24, 27, 30, 34, 35], as did studies using a feedback report only [17, 31].

Of the studies describing an implementation strategy in which the information on quality indicators was used directly, eight reported a single implementation strategy, five of these with additional supporting activities. Only one of these eight studies was effective [21]; this study used monthly educational meetings, including feedback and discussion on performance improvement. Thirteen studies used multifaceted implementation strategies, of which four were effective [16, 28, 29, 33].

Reported barriers

Analyses of barriers to changing practice, such as a review of 76 studies on doctors, have shown that obstacles to change can arise at different levels of the health care system: the patient, the individual professional, the health care team, the health care organization and the wider environment [37].

In our study, we identified the barriers to implementation reported in the included studies. Perceived barriers to change were reported in seven of the studies. In these studies, we identified barriers at different levels of the health care system (see Table 4): the knowledge and cognitions of the individual health care professional (e.g. not being convinced of the evidence), interaction within the team (no mutual accountability and control, no leadership) and the functioning of the hospital (facilities).

Table 4

Studies addressing the perceived barriers

Barriers at different levels | Focus of factors | Barriers | Study
Professional | Knowledge | Unawareness | [19]
Professional | Cognitions | Lack of credible data | [17, 21]
Team or unit | Social influence and leadership | No support from management/physicians | [17, 22]
Hospital | Resources | Lack of resources | [20, 22, 23, 30]

Four studies reported a lack of resources, e.g. time investment and lack of administrative support (see Table 4). Several facilitating factors were also reported, such as the availability of supportive/collaborative management, administrative support, the use of detailed and credible data feedback to evaluate effects and the ability of those receiving feedback to act on it.

Discussion

The two main objectives of this review were to explore the best implementation strategy for quality indicators, and to quantify the effectiveness of using quality indicators as a tool to improve the quality of hospital care. Our results show that the majority of the studies included reported combinations of implementation strategies, in which audit and feedback were most frequently used. Few studies showed significant improvements in all the outcomes measured. Most studies focused on process measures, and reported significant improvements in part of the measured process indicators. Only a few studies focused on the improvement of patient outcomes.

We recognize that significant improvements in patient outcomes are difficult to achieve. In our review, studies with a follow-up measurement period of less than 6 months showed less significant improvements in outcome measures. Short follow up on the effects of the implementation strategies may have contributed to the lack of effectiveness in some studies.

Looking at the types of implementation strategies used and their effects, there does seem to be a link between how quality indicators are used and the effectiveness of the study. Although this relationship was statistically significant (Kruskal–Wallis χ2 = 6.72, P = 0.035), we should be cautious in interpreting it, as the underlying categorization is partly arbitrary. Effective or partly effective studies appeared to use feedback reports combined with other implementation strategies. Receiving a feedback report combined with education and the use of a quality improvement plan seemed to be effective. Less effective were those implementation strategies in which health care providers or managers did not receive a feedback report of quality indicator data. To change practice and improve patient outcomes or provider performance, health care providers should receive feedback on their performance.

It has been suggested that multifaceted implementation strategies are more effective than single implementation strategies [10, 38]. In this review, we also found combinations of implementation strategies to be most effective. However, we could not confirm this conclusively, because only a few studies involved single implementation strategies.

The prevailing view on implementation strategies to improve the quality of care is that they should be tailored to potential barriers [39]. Ideally, possible barriers should be analyzed before the quality improvement implementation strategies are developed, in order to inform both the type and the content of the implementation strategy [39]. Remarkably, none of the studies included reported the translation of a priori identified barriers into tailor-made implementation strategies; barriers were only reported after the strategy was implemented. This may have reduced the effectiveness of the strategies.

The studies included in this review showed a great diversity in outcomes measured. Therefore, in an attempt to summarize the results of these studies, we categorized studies as ‘effective’, ‘partly effective’ and ‘ineffective’. However, the results of this aggregation have to be interpreted with caution. All outcomes were included on an equal basis, but outcomes may be valued differently for their relevance for quality of care. For example, one measure for patient outcome may be of more value than a process measure.

The implications of the findings reported in the present review must be considered within the limits of the study. Firstly, we used strict selection criteria and, as a result, the number of studies included is limited. Studies without a control group were excluded and, consequently, interrupted time series were excluded as well.

Due to our inclusion criteria, we report only on studies with primary quantitative outcome measurements. As a result, insights from qualitative studies fall outside the scope of this paper.

We noted the relatively narrow range of clinical areas studied. While cardiovascular care is an important clinical topic, other important areas, such as intensive care and obstetrics, were not covered by the included studies. As a result, it is difficult to draw generalized conclusions about hospital care as a whole.

Secondly, there is great variation in the quality of the studies; the availability of well-designed studies on this topic is limited. In the results section, we reported that adjustment for differences in the distribution of other determinants varied when comparing the effects in the intervention group with the control group; studies using unadjusted outcome measurements seemed to be more effective than studies using adjusted outcomes. In addition, most studies describe a combination of implementation strategies, which hampers quantification of the effects of separate implementation strategies. Finally, the implementation strategies used in the studies were often poorly described; therefore, we checked them against a standardized list of strategies.

In conclusion, there are many different implementation strategies in which the information on quality indicators is used directly, focusing on feedback, education, etc. Often, these strategies were combined with supporting activities. Receiving a feedback report, combined with education and the development of a quality improvement plan, seemed to be most effective. Effective strategies to implement quality indicators in daily practice in order to improve hospital care do exist, but there is considerable variation in the methods used and the level of change achieved. Based on the present review, receiving a feedback report combined with another implementation strategy is recommended. There is a need for thoroughly designed studies on the implementation of quality indicators to further guide future implementation.
