Using a logic model to design and evaluate quality and patient safety improvement programs

Christine A. Goeschel, William M. Weiss, Peter J. Pronovost
DOI: http://dx.doi.org/10.1093/intqhc/mzs029. Pages 330–337. First published online: 28 June 2012.

Abstract

Quality improvement programs often pose unique project management challenges, including multi-faceted interventions that evolve over time and teams with few resources for data collection. Thus, it is difficult to report methods and results. We developed a program to reduce central line-associated bloodstream infections (CLABSIs) and improve safety culture in intensive care units (ICUs). As previously reported, we worked with 103 Michigan ICUs to implement this program, and they achieved a 66% reduction in the median CLABSI rate and sustained the reduction. This success prompted the spread of this program to Spain, England, Peru and across the USA. We developed a logical framework approach (LFA) to guide project management; to incorporate the cultural, clinical and capacity variations among countries; and to ensure early alignment of the project's design and evaluation. In this paper, we describe the use of the LFA to systematically design, implement and evaluate large-scale, multi-faceted, quality improvement programs.

  • quality management
  • quality measurement
  • measurement of quality
  • design for safety
  • patient safety
  • general methodology

Guidance to successfully implement and evaluate healthcare quality improvement and patient safety programs is limited [1]. These programs pose unique challenges, including multi-faceted interventions that generally evolve over time and often have few resources for data collection, which makes it difficult to report methods and results [2]. We developed a program to reduce central line-associated bloodstream infections (CLABSI) and improve safety culture (Comprehensive Unit-based Safety Program; ‘CUSP’) in intensive care units (ICUs). As previously reported, we worked with 103 Michigan ICUs to implement this program, and they achieved a 66% reduction in the median CLABSI rate [3] and sustained the reduction for over 3 years [4]. Moreover, safety climate scores improved by 10%. This success prompted the spread of this CUSP/CLABSI program to Spain, England, Peru and nationwide in the USA.

In these initiatives, we use a logical framework approach (LFA) to guide project management; to incorporate the cultural, clinical and capacity variations among countries; and to ensure early alignment of the project's design and evaluation [5].

In this paper, we use our ICU CUSP/CLABSI program in Michigan as a case study to describe how project directors can use the LFA to systematically design, implement and evaluate large-scale, multi-faceted, quality improvement programs.

Logical framework approach

The LFA helps project teams increase the likelihood that the programs they plan, implement and evaluate will achieve the desired results. The LFA's value includes summarizing the program's ideas in one document, which facilitates transparency, communication and collaboration with stakeholders [6, 7]. Logic models are common in public health, although many organizations outside of public health have adapted this approach. Program logic models track program efforts from beginning to end.

The LFA includes several best practices for project management, such as management by objective, participatory planning, management by exception and backward planning [8, 9]. ‘Management by objective’ is a disciplined decision-making process that describes what changes are needed and why, and determines what activities and resources are required before any action occurs. It defends against the common quality improvement mistake of selecting solutions before defining problems [10].

The LFA prompts program designers to document this multi-faceted information in one report (with visuals) so managers, funders and other stakeholders can participate in a critical review. This transparency allows persons with varying perspectives to engage in ‘participatory planning’ by reviewing the comprehensive ‘logic’ and helping shape the program before implementation.

The LFA includes an evaluation plan for tracking a project's progress along its continuum from design and resource acquisition to achievement of desired results. This allows project managers to identify problems early and implement interventions to keep the project on course. The evaluation plan follows the ‘management by exception’ principle [11] which focuses on a limited set of indicators that can signal an emerging problem and seeks a smaller quantity of higher quality data. A LogFrame or matrix is a tool used to develop the project evaluation plan. The LogFrame shows the project steps for both the vertical and horizontal logic needed to develop, monitor and evaluate the project. The LogFrame architecture we use is just one of many possible variations.

Developing the strategy: vertical logic

A pyramid may be used to understand the vertical logic of the LogFrame (Fig. 1).

The vertical logic depicted by this pyramid suggests that, for any program, many resources are required to achieve a limited set of Results, which in turn contribute to the narrowly focused program Goal. The LFA relies on a ‘backward planning’ process to develop this vertical logic. The project developer or team defines the concise Goal first and then works ‘backwards’ to identify all the components required to achieve the goal. Each level of vertical logic is subject to assumptions that are described during development of the horizontal logic.

The Goal describes the desired long-term impact to which the program contributes (e.g. reduce ICU morbidity and mortality). We deliberately use the word contribute because individual programs may be ‘necessary’ but may not be singularly ‘sufficient’ to achieve the goal.

Results (‘the improvements we want to see that will move us toward the goal’) are the desired measurable changes in health outcomes that should occur if the program objectives are met. Once the program development team defines desired results, they should look at empirical evidence to select program objectives. Scientific evidence should link the objectives with the desired results.

Objectives (‘what we intend to do to get the results we want’) describe measurable changes in behavior, coverage of services and/or the health system expected in response to the program. Each objective has its own LogFrame matrix.

Outputs (demonstrated attitudes, beliefs and behaviors in response to adaptive program activities, and measures of process participation/compliance in response to technical program activities). For example, if one technical program activity (or service) is to train ICU staff on the Science of Patient Safety, then an output of the program is the number of ICU staff who received this service (training) compared with those who were eligible to receive it. If an adaptive activity is asking staff to speak up when they identify unit safety hazards, an output may be the frequency with which such reports are made.

Activities (‘processes designed and implemented’) to help achieve each defined objective and generate the outputs. Programs should strive for a limited set of activities to minimize the burden of implementing and sustaining the interventions.

Inputs (‘the resources needed to do the work’). Inputs for any successful project are diverse, and thus they are depicted at the base of the logical framework pyramid.
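To make these levels concrete, the following is a minimal sketch (in Python) of the vertical logic represented as a simple data structure, populated with fragments of the ICU example; the class and field names, and the sample inputs, are illustrative assumptions rather than part of the LFA or of the Michigan program.

```python
# Illustrative sketch only: a LogFrame's vertical logic as a data structure.
# Class and field names are hypothetical, not an established LFA library.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Objective:
    statement: str                                        # measurable change expected in response to the program
    outputs: List[str] = field(default_factory=list)      # attitudes/behaviors and process participation
    activities: List[str] = field(default_factory=list)   # processes implemented to generate the outputs
    inputs: List[str] = field(default_factory=list)       # resources needed to do the work


@dataclass
class LogFrame:
    goal: str                    # desired long-term impact the program contributes to
    results: List[str]           # measurable changes in health outcomes
    objectives: List[Objective]  # each objective gets its own LogFrame matrix


# Backward planning: state the Goal first, then work down the pyramid.
icu_logframe = LogFrame(
    goal="Reduce ICU morbidity and mortality",
    results=["Reduce the incidence of CLABSIs"],
    objectives=[
        Objective(
            statement="Increase the percent of ICUs with an effective CUSP team",
            outputs=["Percent of eligible ICU staff trained in the Science of Patient Safety"],
            activities=["Train ICU staff on the Science of Patient Safety"],
            inputs=["Training materials", "Protected staff time"],
        )
    ],
)
```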

LogFrame example: ICU project case study

The two goals of the ICU Project were to reduce mortality and length of stay among patients, and the intermediate result to help achieve the goals was to reduce the incidence of CLABSIs (Fig. 2). To achieve the result, the project developers established five objectives:

  1. Increase the percent of ICUs with an effective CUSP team.

  2. Increase the percent of ICUs that ‘learn from defects’.

  3. Increase the percent of ICUs with improved teamwork and communication.

  4. Increase the percent of ICUs with fully equipped central-line carts.

  5. Increase the percent of ICUs that routinely practice evidence-based bloodstream infection prevention (use checklist, monitor infections, investigate each infection).

Figure 2

Summary LogFrame for CLABSI reduction.

The first three objectives were ‘adaptive’ interventions, which support a change in the safety context or ‘culture’ within the ICU. The final two objectives were the ‘technical’ interventions that directly targeted the evidence-based structure and process capacity of the ICU. The project developers expected the adaptive interventions to contribute directly or indirectly to the intermediate result, since they assumed that teams with strong adaptive skills were more likely to be successful in adhering to the program's technical interventions.

Selecting objectives and interventions is a critical aspect of the LFA. We used our previously published method for translating evidence into practice (TRiP) to select objectives and interventions to reduce CLABSI [12]. The resources needed to design, implement and evaluate translational programs are substantial. The model assumes that centralized researchers support the technical program (e.g. summarize the research evidence and develop performance measures) and local teams implement the technical and adaptive work (e.g. tailor interventions to the local context and process of work). The TRiP steps include: (i) summarize the evidence, (ii) identify local barriers to implementation (understand the process and context of work), (iii) measure performance and (iv) ensure all patients reliably receive the intervention.

Each of the objectives, intermediate results and goals is measurable, logically linked and provides the basis for an evaluation plan. Figure 2 summarizes the ‘strategy for reduction in CLABSIs’ in the Johns Hopkins CUSP/CLABSI Program as used in Michigan. For each objective, we used a separate LogFrame matrix in the form of a table to describe in detail the tactics we believed would be necessary to achieve the objective. The horizontal logic structure (described later) defines how we intended to evaluate each objective and supports the evaluation process.

LogFrame matrix: developing interventions

The LogFrame matrix (Fig. 3) provides two left-side columns to describe the vertical logic for Objective 1 which is to increase the percent of ICUs with an effective CUSP team. The three right-hand columns are part of the horizontal logic (described in the next section) that supports the evaluation plan for Objective 1.

Figure 3

Objective 1 LogFrame matrix.

The top three rows (Goal, Result and Objective) are ‘strategy’: they link Objective 1 (increase the percent of ICUs with an effective CUSP team) to the targeted intermediate result of reducing CLABSI, which in turn contributes to the goal of reducing mortality and length of stay for ICU patients. The bottom three rows are ‘tactics’, describing how the program plans to achieve this strategy: what outputs are needed to achieve the objective, what activities are needed to create these outputs and what inputs or resources are needed to carry out the planned activities.

The outputs for this example are the attitudes, beliefs and behaviors considered sufficient, given the assumptions, to demonstrate achievement of the specified objective (to increase the number of ICUs with an effective CUSP team). The beneficiaries of the ICU Project are patients, the hospitals with ICUs participating in the Project and their staff. LogFrame matrices and evaluation plans for Objectives 2–5 of the ICU Project are available by contacting the corresponding author.

Developing an evaluation plan: horizontal logic

The vertical logic provides the conceptual structure and operational definitions for each level of the program, and the horizontal logic describes what is needed to accomplish each level. The horizontal logic builds directly from the vertical logic and supports the program evaluation plan. There are three critical elements of the evaluation plan: Key Assumptions, Objectively Verifiable Indicators and Means of Verification (MoV), each with a dedicated column in the matrix. Each critical element is described below using an example from the LogFrame matrix for Objective 1 of the Michigan ICU project (increase the percent of ICUs with an effective CUSP team).
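As a rough illustration of how these three columns attach to the vertical logic, the sketch below represents one row of a LogFrame matrix in Python; the field names and the sample indicator, data source and assumption are hypothetical stand-ins, not the actual contents of the Objective 1 matrix.

```python
# Illustrative sketch only: one row of a LogFrame matrix, combining the
# vertical-logic statement with the three horizontal-logic columns.
from dataclasses import dataclass, field
from typing import List


@dataclass
class LogFrameRow:
    level: str                    # "Goal", "Result", "Objective", "Output", "Activity" or "Input"
    description: str              # the vertical-logic statement for this level
    indicators: List[str] = field(default_factory=list)             # objectively verifiable indicators
    means_of_verification: List[str] = field(default_factory=list)  # data sources / information systems
    key_assumptions: List[str] = field(default_factory=list)        # external conditions outside the manager's control


objective_1_row = LogFrameRow(
    level="Objective",
    description="Increase the percent of ICUs with an effective CUSP team",
    indicators=["Percent of participating ICUs with a functioning CUSP team"],
    means_of_verification=["Team rosters reported to the project coordinating center"],
    key_assumptions=["Hospital leadership provides protected time for team members"],
)
```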

Clarifying assumptions

Key assumptions are the ‘external’ factors, outside the program manager's control, that the development team believes must hold for achievement of one LogFrame level (e.g. Activities) to lead to achievement of the next higher level (e.g. Outputs) (Fig. 4). Assumptions are typically rooted in knowledge that may come from research, experience or theory. It is important to consider these assumptions when monitoring and evaluating a program. Assumptions are sometimes incorrect, or external factors that made an assumption true at the beginning of the program change while the program is underway. If assumptions do not hold during the course of a program, the tactics or strategy may need to change. Our experience suggests that defining assumptions at the beginning of a project is difficult and is often a skipped step and a missed opportunity. Clarifying assumptions at the outset improves the chances of program success.

Figure 4

Assumptions for Objective 1 LogFrame matrix.

Defining objectively verifiable indicators

Objectively verifiable indicators are signs or signals demonstrating whether each level of the vertical logic is or is not being achieved (Fig. 5) [13].

Figure 5

Objectively verifiable indicators for Objective 1 LogFrame matrix.

The ‘management by exception’ principle applies here. Rather than using scarce management resources to monitor all details of the program, focus on a limited set of indicators that can show whether the program is achieving the overall goals and objectives or signal an emerging problem. Indicators should be limited in number and scope to monitor problems emerging in either the vertical logic or the key assumptions of your LogFrame matrix.

Objectively verifiable indicators may be either process or outcome measures. Shojania provided a critique of patient safety measurement systems currently available in healthcare [14]. Each method and type of data has relative strengths and weaknesses. Program designers should consider the variety of metrics and available measurement systems capable of supporting the overarching program goal and the designated intermediate results. Moreover, they must weigh the benefits and the burdens of data collection against the quality of the data. All measures, especially for the goals and results, should be robust and valid. Different measures may be suited for different purposes.
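As a simple numerical illustration, the sketch below computes one outcome indicator and one process indicator of the kind discussed above; the figures are invented, and expressing the outcome as CLABSIs per 1,000 central-line days is the conventional form of the measure rather than a detail specified in this paper.

```python
# Illustrative sketch only: computing an outcome indicator and a process
# indicator. All numbers below are invented for the example.

def clabsi_rate_per_1000_line_days(infections: int, central_line_days: int) -> float:
    """Outcome indicator: CLABSIs per 1,000 central-line days."""
    return 1000.0 * infections / central_line_days


def percent_of_icus_meeting_objective(icus_meeting: int, icus_participating: int) -> float:
    """Process indicator: e.g. percent of participating ICUs with an effective CUSP team."""
    return 100.0 * icus_meeting / icus_participating


print(clabsi_rate_per_1000_line_days(infections=4, central_line_days=2500))        # 1.6
print(percent_of_icus_meeting_objective(icus_meeting=85, icus_participating=103))  # ~82.5
```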

Identifying MoV

The MoV are the data sources and information systems that will provide the numerator and denominator data, if needed, for the objectively verifiable indicators specified for each objective (Fig. 6). Data quality control processes are an important component of the MoV. In the design phase, it is useful to focus on the quality (rather than the quantity) of data, considering resource limitations. Data quality can be maximized through the use of standard data collection forms, thorough staff training and an easy-to-use but well-structured database. Clearly defined data elements and routine audits of both data collection and data entry allow for quick mitigation of error. Missing data should be identified and corrected with system-based controls. Appropriate statistical methods and sensitivity analyses should be used to manage the effects of missing data and outliers and to portray the results accurately [15].

Figure 6

MoV for indicators for Objective 1 LogFrame matrix.
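To illustrate the kind of system-based data-quality controls described above, the following sketch assumes monthly unit-level submissions are held in a pandas DataFrame; the column names and example records are hypothetical.

```python
# Illustrative sketch only: routine checks for missing and implausible data
# in hypothetical monthly unit-level submissions.
import pandas as pd

submissions = pd.DataFrame({
    "unit_id":      ["ICU-01", "ICU-02", "ICU-03"],
    "month":        ["2005-03", "2005-03", None],   # missing month flags an entry error
    "clabsi_count": [1, None, 0],                   # missing count must be queried, not assumed to be zero
    "line_days":    [410, 388, -5],                 # a negative denominator is impossible
})

# Identify missing data so it can be corrected at the source.
missing = submissions[submissions[["month", "clabsi_count", "line_days"]].isna().any(axis=1)]

# Simple range/validity checks on the numerator and denominator.
invalid = submissions[(submissions["line_days"] <= 0) | (submissions["clabsi_count"] < 0)]

print("Records with missing fields:", list(missing["unit_id"]))
print("Records failing range checks:", list(invalid["unit_id"]))
```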

Deciding what to collect, when to collect it and how to use quality improvement data

Understanding the impact of quality improvement programs and evaluating their benefit is complex. In our experience, it is useful to consider the burden of data collection against several criteria such as:

  1. What is the current state? (calls for baseline data collection of key performance/outcome indicators)

  2. What risk is associated with the current state (i.e. doing nothing)?

    1. Clinical risk (patient harm; poor outcomes)

    2. Financial risk (costs of care; reimbursement; ability to attract and retain staff)

    3. Reputational risk (comparison of outcomes, patient satisfaction and value across hospitals or clinics)

  3. What resources are available to collect, analyze and report data, and are clinicians supported in efforts to use such data to improve care?

Questions of whether data collection should be continuous or episodic require careful consideration of what the data show. When outcomes are good and performance is stable, episodic measurement is an efficient and appropriate approach. Yet quality improvement programs must always consider the burden of potential harm when determining the frequency of monitoring.

An equally important question is how to use the data that are collected. In our experience, data intended to influence change must be routinely and transparently shared with everyone who is expected to be part of the change process. Clinicians, administrators, patients and families can all be motivated to understand and participate in quality improvement programs by data that are clear, concise and accompanied by a meaningful explanation and a sincere request for input on how to use the data to improve healthcare delivery. We often use run charts as well as simple signs in public places that report results, for example, posting on bulletin boards in staff lounges: ‘how many weeks since our last CLABSI’. Executives take data to board meetings, and managers are encouraged to take all program data to staff meetings and discuss it with caregivers, gathering their ideas on how to improve program fidelity.
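A small sketch of how such data might be shared follows: a run chart of monthly rates and the ‘weeks since our last CLABSI’ summary mentioned above. The monthly rates and dates are invented, matplotlib is assumed to be available, and the specific plotting code is illustrative rather than the program's own tooling.

```python
# Illustrative sketch only: a simple run chart and a bulletin-board summary.
# The monthly rates and dates below are invented.
from datetime import date
import matplotlib.pyplot as plt

months = ["2005-01", "2005-02", "2005-03", "2005-04", "2005-05", "2005-06"]
rate_per_1000_line_days = [2.7, 2.1, 1.6, 0.8, 0.0, 0.0]

plt.plot(months, rate_per_1000_line_days, marker="o")
plt.axhline(sum(rate_per_1000_line_days) / len(rate_per_1000_line_days), linestyle="--")  # center line
plt.ylabel("CLABSIs per 1,000 central-line days")
plt.title("Unit run chart: monthly CLABSI rate")
plt.savefig("clabsi_run_chart.png")

# Simple summary for the staff-lounge bulletin board.
last_clabsi = date(2005, 4, 12)
weeks_since = (date(2005, 6, 30) - last_clabsi).days // 7
print(f"{weeks_since} weeks since our last CLABSI")
```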

Understanding whether the program ‘worked’ is multifaceted, yet QI resources are typically limited. Thus, we aim to rigorously measure select, narrowly defined clinical indicators that evidence suggests are tied to outcomes, robustly analyze the data and report it transparently. In addition, we assess staff perception of quality and safety on an annual basis using the Agency for Healthcare Research and Quality (AHRQ) hospital survey of patient safety (HSOPS). This instrument is freely available in many languages and provides insights on whether clinicians believe that care is improving.

Program-level evaluation is complex. In quality improvement programs, where control groups are often not available and randomized controlled trials are not typically feasible, adequacy assessments provide program managers with a relatively inexpensive and feasible method to answer the question ‘did the expected changes take place?’. Adequacy evaluations such as those used in our ICU program may be pre/post or cross-sectional, carried out on a single occasion during or at the end of the program. They may also be longitudinal, requiring baseline data and repeated measures for detecting trends. Adequacy evaluations do not allow one to infer that changes, if they occurred, were due to the program, but they are often useful for providing reassurance that expected goals are being met, satisfying funders and guiding future efforts.
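As a minimal sketch of a pre/post adequacy assessment, the code below uses invented figures and asks only whether the expected change took place, mirroring the caveat that such an evaluation does not establish that the program caused the change.

```python
# Illustrative sketch only: a pre/post adequacy assessment with invented data.

def rate_per_1000_line_days(infections: int, line_days: int) -> float:
    return 1000.0 * infections / line_days

baseline = rate_per_1000_line_days(infections=19, line_days=7000)  # pre-implementation period
post = rate_per_1000_line_days(infections=6, line_days=7200)       # post-implementation period

relative_reduction = 100.0 * (baseline - post) / baseline
print(f"Baseline {baseline:.2f} vs post {post:.2f} CLABSIs per 1,000 line-days "
      f"({relative_reduction:.0f}% relative reduction)")
# An adequacy assessment answers 'did the expected change take place?';
# it does not show that the program caused the change.
```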

Adaptation of ICU CUSP/CLABSI program

To adapt the CUSP/CLABSI program for effective use in other countries, the proposed Activities and consequent Inputs and Outputs (the interventions) may vary by setting. What is likely to be similar, or the same, from site to site is the overall strategy (Goals, Results and Objectives), and the indicators for these may not require much adaptation. If a goal is to improve performance across settings, it is essential that the stated goal and targeted results are similar in each setting to allow for standardization of indicators, data collection forms, training for data collection, databases and MoV across settings. This would make it easier and more cost-effective to develop a global strategic program while encouraging local modification of the interventions, especially the adaptive elements that are context sensitive.

In conclusion, we believe that the use of a logic model is an effective approach to plan, manage and evaluate large-scale quality improvement programs. Because journals often limit the space available for detailed descriptions of quality improvement interventions, researchers could publish their logic models and LogFrames, allowing for greater learning and providing a mechanism to disseminate their results.

Funding

This research was supported by a grant from the Agency for Healthcare Research and Quality (Michigan ICU Project, 1 UC1 HS14246-01), and a contract from the World Health Organization Patient Safety Program.

Conflict of interest

C.A.G. and P.J.P. receive honoraria plus travel reimbursement from various not-for-profit entities to speak on improving quality.

Acknowledgements

The authors wish to thank Christine G. Holzmueller, BLA, medical writer/editor in the Armstrong Institute for Patient Safety and Quality for her help reviewing and editing this manuscript. She did not receive additional compensation for this assistance.

References
