
Using clinical indicators to facilitate quality improvement via the accreditation process: an adaptive study into the control relationship

Sheuwen Chuang, Peter P. Howley, Stephen Hancock
DOI: http://dx.doi.org/10.1093/intqhc/mzt023. Pages 277–283. First published online: 14 April 2013

Abstract

Objective The aim of the study was to determine accreditation surveyors' and hospitals' use and perceived usefulness of clinical indicator reports and the potential to establish the control relationship between the accreditation and reporting systems. The control relationship refers to instructional directives, arising from appropriately designed methods and efforts towards using clinical indicators, which provide a directed moderating, balancing and best outcome for the connected systems.

Design Web-based questionnaire survey.

Setting Australian Council on Healthcare Standards' (ACHS) accreditation and clinical indicator programmes.

Results Seventy-three of 306 surveyors responded. Half used the reports always/most of the time. Five key messages were revealed: (i) report use was related to availability before on-site investigation; (ii) report use was associated with the use of non-ACHS reports; (iii) a clinical indicator set's perceived usefulness was associated with its reporting volume across hospitals; (iv) simpler measures and visual summaries in reports were rated the most useful; (v) reports were deemed to be suitable for the quality and safety objectives of the key groups of interested parties (hospitals' senior executive and management officers, clinicians, quality managers and surveyors).

Conclusions Implementing the control relationship between the reporting and accreditation systems is a promising expectation. Redesigning processes to ensure reports are available in pre-survey packages and refined education of surveyors and hospitals on how to better utilize the reports will support the relationship. Additional studies on the systems' theory-based model of the accreditation and reporting system are warranted to establish the control relationship, building integrated system-wide relationships with sustainable and improved outcomes.

Keywords

  • clinical indicators
  • control relationship
  • quality measurement and reporting
  • accreditation
  • systems theory
  • feedback

Introduction

The use of accreditation systems to improve healthcare quality and patient safety has increased internationally [1–4]. Quality measurement and reporting systems (hereafter referred to simply as quality measurement systems) which incorporate clinical indicators have also become more visible aspects of hospitals' improvement efforts [5–9]. These systems involve considerable resources and are believed to improve quality [6, 7, 10, 11]. Despite the interest in achieving quality through quality measurement systems, performance on such systems has been found to correlate poorly with satisfying accreditation requirements [12–16]. In cases where hospitals achieve accreditation and their quality measurement system's approaches are deemed acceptable, only partial, inconsistent and conflicting success in improving quality has resulted [7, 10].

Chuang and Inder's (2009) systems theory-based approach to improving patient safety and quality in healthcare systems characterized open, dynamic complex systems as suites of interrelated subsystems kept in a state of dynamic equilibrium by feedback loops of information and control. A holistic healthcare systems relationship model was developed to provide a framework for evaluating the effectiveness of the interrelated subsystems of the healthcare system, namely the accreditation, quality measurement and hospital systems, on improvements in health care [17]. The model revealed a control relationship between the quality measurement and accreditation systems as crucial to creating a positive association between these two systems and achieving continuous improvement in patient safety. The control relationship refers to instructional directives between systems, arising from appropriately designed methods and efforts by each system's responsible agent towards using the clinical indicators, which provide a directed moderating, balancing and best outcome for the systems it connects. The role of accreditation surveyors was thus identified as a key system component in activating this relationship. If reports stemming from the quality measurement system were utilized by hospitals to guide quality improvement and referenced by surveyors to assess quality improvement in hospitals, then surveyors could produce valuable feedback to hospitals via the accreditation process. This would propagate the continuity of healthcare quality and patient safety improvement.

This study was undertaken to facilitate a better understanding of the existing status of this important, but undeveloped, control relationship, and thus the potential to initiate the system-wide redesign of the healthcare system's components towards establishing the control relationship. The aims of the study were to determine: (i) the extent to which surveyors used the existing reports when accrediting hospitals; (ii) the perceived utility of the reports; (iii) surveyors' perspectives on the extent to which hospitals used the reports and (iv) the potential to improve the accreditation and quality measurement systems.

Methods

A model of the feedback architecture between the accreditation, quality measurement and hospital systems (Fig. 1) identifies key system components to be tested to understand the existing status of the control relationship. In Fig. 1, the accreditation system encompasses the components within the largest of the dotted rectangles. The two key system component inputs to the accreditation process are the clinical indicator reports, generated by the quality measurement system, and the accreditation agency. The latter has three subcomponents: the chosen accreditation standards, surveyors and method for investigation. The accreditation process consists of preparation, on-site investigation and evaluation. Its results become inputs for the hospital system.

Figure 1

Key components of the accreditation-quality measurement system to be tested for their effectiveness. CI = clinical indicator. QMS = quality measurement system. Accredn = accreditation.

Theoretically, appropriate interaction between system components will result in the desired best outcomes. This study focuses primarily on the clinical indicator reports and their contribution to the accreditation surveyor component. The study also considers the accreditation agency, specifically the effects of its training programmes for, and preparation of, surveyors and hospital staff, and how these components interact with the clinical indicator reports. These components are labelled in bold in the model, with interactions of interest stated as questions to be addressed within this study. The provision of the pre-survey package and the surveyor training programmes are sub-processes of the accreditation agency's surveyor preparation. If the reports provided to the surveyors and hospitals are inadequate or poorly used, then the system will underperform. Hence, knowing the existing status of each of these components is critical in determining the condition of the potential control relationship in such accreditation and quality measurement systems. A questionnaire to address these uncertainties was designed and administered to accreditation surveyors.

Setting

The Australian Council on Healthcare Standards (ACHS) has a well-established national accreditation programme. It has provided strong support for Australian healthcare and is one of the most commonly cited programmes in the world [18, 19]. The programme at the time of the survey was based upon the ACHS's Evaluation and Quality Improvement Programme, 4th edition (EQuIP4) standards. The EQuIP4 standards are designed to guide organizations through a four-year cycle of self-assessment, organization-wide survey and periodic review conducted by a multidisciplinary team of surveyors [20]. According to the 2007–2008 National Accreditation Report [21], 454 organizations participated in on-site investigations under the EQuIP4 standards.

ACHS surveyors are health-industry professionals or consumers. They undergo annual training in reviewing hospitals against the EQuIP4 standards. As at 31 December 2008, the primary backgrounds of the ACHS's accreditation surveyors were: nursing (38%), administration and management (31%), medical and dental fields (24%), allied health (4%) and consumers (3%). Of the total surveyor days in 2008, 57% were conducted by surveyors not otherwise employed who received an ACHS honorarium per day of survey, while 43% were conducted by volunteer employer-sponsored surveyors [21].

ACHS clinical indicator reports

In addition to providing a national accreditation scheme, the ACHS supports hospitals by providing a list of clinical indicators from which hospitals may choose to collect data to submit for analysis and reporting via the ACHS's online performance indicator reporting tool. In the Australasian clinical indicator report 2001–2008, data were received from 689 Australian and New Zealand hospitals on 363 clinical indicators across 23 specialties (clinical indicator sets) [22]. This is the largest source of clinical indicator data in Australia and New Zealand.

Each year, hospitals that have contributed clinical indicator data to the ACHS receive two kinds of reports. The first is a ‘six-monthly’ report which provides aggregated results across all contributing hospitals and comparisons with the individual hospital's ‘peer’ organizations. The comparisons include confidence intervals of rates, observed and expected counts of events of interest and numbers of hospitals contributing data, for each clinical indicator. Hospitals reporting four or more 6-month periods of data for a given clinical indicator also receive an annual ‘trend’ report. This second report shows comparative information for the period covered, starting from as early as 2001. The six-monthly report provides simpler statistical comparisons and is less descriptive and shorter than the trend report.

The trend report compares individual hospital performance with the entire system of organizations, based upon means, 20th and 80th centile rates, and provides trend analyses of their 6-monthly rates. The report considers both within-hospital and between-hospital variations [23]. It contains graphical and numerical comparisons, including cumulative counts of observations above that expected for the hospital, and indicates the hospital's performance (including measures of statistical significance) relative to itself and to other hospitals contributing the clinical indicator data.
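
To make these report quantities concrete, the following minimal Python sketch (illustrative only, not the ACHS's published methodology; all figures are invented) computes a period rate with a normal-approximation 95% confidence interval, the expected count implied by a peer-group rate, and the cumulative excess (observed minus expected) count across 6-month periods.

    import math

    # Invented 6-monthly data for one hospital and one clinical indicator:
    # (events, denominator) pairs, e.g. adverse events per eligible admissions.
    periods = [(12, 950), (9, 1010), (15, 980), (11, 940)]

    # Invented peer-group rate aggregated across all contributing hospitals.
    peer_rate = 0.010

    cumulative_excess = 0.0
    for i, (events, n) in enumerate(periods, start=1):
        rate = events / n
        # Normal-approximation 95% confidence interval for the period rate.
        se = math.sqrt(rate * (1 - rate) / n)
        lo, hi = max(0.0, rate - 1.96 * se), rate + 1.96 * se
        # Expected count had the hospital performed at the peer-group rate.
        expected = peer_rate * n
        cumulative_excess += events - expected
        print(f"Period {i}: rate {rate:.4f} (95% CI {lo:.4f} to {hi:.4f}), "
              f"observed {events}, expected {expected:.1f}, "
              f"cumulative excess {cumulative_excess:+.1f}")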

These ACHS reports are designed as screening tools that can alert hospitals to possible problems or opportunities to improve patient care. They are available to accreditation surveyors before their survey.

Questionnaire

The study surveyed ACHS surveyors to assess the utility and perceived value of the reports. The survey questionnaire was tested in November 2008 for face validity by three experienced surveyors who were senior healthcare workers in medicine and nursing. To ensure respondents reflected Australian accreditation, ACHS international surveyors who did not conduct surveys in Australia, and consumer surveyors who did not have experience working in hospitals, were excluded from the survey. The questionnaire was emailed to 306 ACHS surveyors in January 2009 and responses were collected the following month. The survey covered surveyors' characteristics, including their experience, professional background and employment status; their survey experiences in 2008, including the use of reports in preparation for accreditation surveys, training resources utilized and how reports were obtained; and the relative use of the reports by hospitals.

Data analysis

Sample bias was assessed through comparison with available population surveyor characteristics. Descriptive analyses of the survey results are reported within three themes explored from the surveyors' perspective: (i) the relative use of the existing reports by surveyors; (ii) the usefulness and suitability of the existing reports and (iii) the relative use of these reports by hospitals. The use (‘always’ or ‘most of the time’), or not, of each of the 6-monthly and trend reports, and the usefulness of particular sections of these reports, were tested for association with the following nine factors using Pearson Chi-square tests (an illustrative computation follows the list):

  1. Surveyor type (Honorarium, Employer-sponsored);

  2. Number of surveys undertaken in previous 12 months (≤2, 3–5, ≥6);

  3. Years as a surveyor (≤2, 3–7, ≥8);

  4. Size of surveyed hospitals (representing whether surveyor had surveyed large hospitals (>450 beds));

  5. Professional background [Medical/Dentistry (Allied Health), Nursing, Management/Administration];

  6. Employment status (full-time, part-time, retired);

  7. Received report in pre-survey package (always, sometimes, never);

  8. Used non-ACHS reports (yes, no);

  9. Received ACHS-specific training on role of CIs (yes, no).
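
As a sketch of the kind of test used here (not the authors' code or data), the fragment below runs a Pearson chi-square test of association between use of a report and receipt of the report in the pre-survey package, using scipy's standard contingency-table routine on invented counts.

    from scipy.stats import chi2_contingency

    # Invented 2 x 3 contingency table: rows are report use ('used' =
    # always/most of the time); columns are how often the report appeared
    # in the pre-survey package (always, sometimes, never).
    table = [
        [17, 12, 7],   # used the report
        [4, 21, 9],    # did not use the report
    ]

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, df = {dof}, P = {p:.3f}")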

Results

Sample appropriateness and surveyor characteristics

Seventy-three completed surveys were received, a response rate of 24%. The distribution of responding surveyors' professional backgrounds was not statistically significantly different from the population (P = 0.67). Fifty-six percent of respondents were honorarium surveyors and 43% employer-sponsored, reflecting the known population percentages of total surveyor days of 57 and 43%, respectively. The distribution of surveyors' employment status, number of years of experience and number of surveys conducted in 2008 indicates that all categories were well represented (see Table 1).
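
A minimal sketch of how such a representativeness check might be performed (the paper does not give its computation; the category alignment below is an assumption, so it will not reproduce P = 0.67): a chi-square goodness-of-fit test comparing respondents' professional backgrounds (Table 1) with the population proportions reported earlier, renormalized after excluding consumer surveyors.

    from scipy.stats import chisquare

    # Respondent counts by background from Table 1 (n = 71 after two missing
    # values): nursing, medicine/dentistry, administration/management,
    # allied health.
    observed = [33, 21, 15, 2]

    # Population proportions in the same order (consumers excluded and the
    # remainder renormalized); an assumed alignment for illustration.
    pop = [0.38, 0.24, 0.31, 0.04]
    expected = [p / sum(pop) * sum(observed) for p in pop]

    stat, p_value = chisquare(observed, f_exp=expected)
    print(f"chi-square = {stat:.2f}, P = {p_value:.2f}")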

Table 1

Characteristics of respondents

Characteristic                              Frequency   Percentage
Total                                              73          100
Professional background
 Nursing                                           33           46
 Medicine/dentistry                                21           30
 Administration/management                         15           21
 Allied health (two missing values)                 2            3
Current employment
 Full-time                                         46           63
 Employed part-time                                15           21
 Retired (two missing values)                      12           16
No. of years as an ACHS surveyor
 ≤2                                                15           21
 3–7                                               22           30
 ≥8                                                36           49
Surveys conducted in the last 12 months
 1–2                                               21           29
 3–5                                               32           44
 ≥6                                                20           27

Surveyors reported surveying between 1 and 18 different specialties, the average being 5. Eighty-seven percent of surveyors had experience accrediting acute general hospitals, 38% with each of mental health/psychiatry and day-procedure centres, 32% with community health/home-care facilities and 25% with rehabilitation/disability care, with the remaining specialist categories each between 13 and 23%, except Indigenous Health (6%) and Sexual Health (1%).

Survey preparation by ACHS

The ACHS provides surveyors a pre-survey package and training resources for clinical indicators. Seventy-eight percent of surveyors indicated that ACHS training and education programmes had contributed to their understanding of clinical indicators, with 22% learning about clinical indicators solely from non-ACHS sources, including their own experience. Only 30% of surveyors reported receiving the ACHS reports in their pre-survey package in all cases, a further 47% in some cases and 23% in no cases. Where surveyors did not receive the reports from the ACHS, half indicated that the hospital provided the reports in its evidence folders and half that they received them from the hospital upon request.

Theme one: the relative use of ACHS reports by accreditation surveyors

Of 71 respondents, 51% used the six-monthly report and 45% the trend report always or most of the time. Surveyors who used the six-monthly report were also likely to have used the trend report (P < 0.0001).

Surveyors' use of ACHS reports was related to the reports' availability prior to on-site investigations

Among those who always received the 6-monthly report in the pre-survey package, 81% reported using the report, compared with 36% of those who sometimes received the report and 44% of those who never received the report (P = 0.005). Among those who always received the trend report in the pre-survey package, 76% reported using the report, compared with 28% of those who sometimes received the report and 44% of those who never received the report (P = 0.003).

Surveyors who used the ACHS reports also showed a propensity to use other non-ACHS indicator reports

The use of non-ACHS reports was statistically significantly positively associated with the use of the 6-monthly report (P = 0.04); no other factors were statistically significantly associated (P > 0.12).

Theme two: the usefulness and suitability of the ACHS reports

The surveyors' perceived usefulness of clinical indicator sets was positively associated with how broadly hospitals reported upon them. The top eight clinical indicator sets rated particularly useful by surveyors also had the most hospitals reporting data in the Australasian clinical indicator report 2001–2008 [22], see Table 2.

Table 2

Perceived usefulness and reporting volume of clinical indicator sets

Clinical indicator set     No. of surveyors identifying   % of          No. of hospitals
                           set as useful (n = 73)         respondents   reporting data
Hospital-wide              30                             41            460
Infection control          30                             41            320
Adverse drug reactions     19                             26            174
Surgery                    18                             25            192
Emergency medicine         16                             22            211
Obstetric                  16                             22            180
Day surgery                15                             21            400
Anaesthesia                15                             21            308

The simpler measures and visual summaries in ACHS reports were rated the most useful

Sixty-four percent indicated the simple clinical indicator rates, 62% the identification of outliers and statistical significance, and 32% the excess count (i.e. the difference between the observed and expected counts for the hospital) as the most useful sections of the 6-monthly report. Surveyors using non-ACHS reports were more likely to rate the simple clinical indicator rates as most useful (P = 0.03). Surveyors undertaking the greatest number of surveys (≥6) in 2008 were more likely than those undertaking fewer to rate outliers and statistical significance as most useful (P = 0.04). Surveyors who had surveyed larger hospitals were more likely to identify the excess count as most useful (P = 0.03).

Fifty-eight percent indicated the summary pages, 56% the trend in rates and 38% outlier identification as the most useful sections of the trend report. No statistically significant predictors were identified for indicating the summary pages as most useful. Honorarium surveyors were more likely than employer-sponsored surveyors to rate the trend in rates as most useful (P = 0.008). Surveyors employed full-time were less likely than part-time and retired surveyors to rate the trend in rates (P = 0.05) and outlier identification (P = 0.02) as most useful.

The ACHS reports were deemed to be suitable for the quality and safety objectives of the four key groups of interested parties

Eighty-six percent of surveyors judged the reports to be ‘very’ or ‘moderately’ appropriate for senior executive officials, 89% for safety and quality managers, 85% for surveyors and 75% for clinicians (partially reflecting previous reports of clinicians seldom using quality reports [24, 25]). These consistently high proportions were not statistically significantly different (P = 0.71). Further, of those who only ‘sometimes’ used the reports, a large majority, ranging from 70 to 90%, indicated that the reports were very or moderately appropriate for each of the groups.

Theme three: the relative use of ACHS reports by hospitals

Half (28 of 55) of surveyors indicated that the 6-monthly reports were used by the hospitals' staff in ‘most’ or ‘all’ cases. Slightly more than half (28 of 48) indicated that the trend reports were used in ‘most’ or ‘all’ cases. The rates for each report's use increased to 96% when the response ‘at least in some cases’ was included. Neither report was perceived to be used more than the other (P = 0.88).

Twenty-five of the 51 respondents who described hospitals' comments arising during discussion of the reports indicated that hospitals had strictly positive attitudes towards the use of the reports, 10 indicated strictly negative attitudes and 16 reported mixed experiences. The negative comments included a lack of understanding of the reports' relevance to the hospital and concerns that the costs outweighed the benefits.

Discussion

The ACHS provides reports and training to surveyors and hospitals. Despite their use not being mandatory, the existing mechanisms support moderate to high percentages of surveyors using clinical indicators. These positive results identify the potential to develop the desired control relationship. Its effect, however, would be diminished by the incomplete data feedback caused by a lack of involvement by clinicians or others. Identifying ways to improve report use is therefore an important consideration.

Berwick (1996) [26] advocated that every system is perfectly designed to achieve the results it achieves. The five key messages must be addressed to build upon the noted strengths and overcome the lack of association between accreditation and quality measurement outcomes; these systems must be appropriately redesigned. The accreditation agency can do so by:

  • redesigning processes to ensure that ACHS reports are included in surveyors' pre-survey packages;

  • designing a mechanism for encouraging hospitals to report clinical indicators consistently;

  • enhancing the training and education of surveyors and hospitals to increase utilization of ACHS reports.

The quality measurement system can support the desired system-wide improvement through:

  • training and technical support services in using and understanding the reports;

  • refining and developing the reports, in consultation with users, in order to increase their use by hospitals and surveyors.

An international perspective

There is international interest in including clinical indicators as part of the accreditation process, and mechanisms vary across countries. Four of the most often cited national accreditation programmes [18, 19] were compared.

  1. The Joint Commission (JC) in the USA and Accreditation Canada are examples of accreditation bodies that have integrated a mandatory requirement for hospitals to provide core performance measures as part of the accreditation process, to help focus on-site survey evaluation activities [27–34]. The JC has done so through its ORYX® programme and through the integration of measurement data into its Priority Focus Process for the on-site survey [28, 29]. Accreditation Canada has done so through its Qmentum programme, combining indicator data with ‘instrument’ data obtained through questionnaires completed by representative samples of clients, staff, leadership and/or other key stakeholders [34].

  2. Haute Autorité de Santé, France, has mandatory accreditation for all its hospitals and has connected many of its accreditation standards to indicators. There are 13 criteria that must be satisfied to achieve certification, of which four have indicators linked to them. In total, there are 14 indicators connected with accreditation criteria [35, 36].

  3. The ACHS, Australia, provides the 6-monthly and trend reports should a hospital choose to contribute its clinical indicator data. The contribution of these reports to a hospital's self-evaluation and quality improvement efforts is relied upon to instigate clinical indicator data collection within hospitals and thus inclusion in the accreditation process.

Integrating indicators into the accreditation process is a worldwide trend involving varying approaches to incorporating and using indicators. A robust design for future accreditation and quality measurement systems in any nation should be based on similar adaptive control relationship studies, in light of the approaches and achievements of other countries.

Study limitations

The sample size reduces the ability to identify potential predictors. This was partially addressed by aggregating across categories for each potential predictor. The response rate of 24% was less than desired, although such rates are not uncommon. Importantly, the sample was representative of the population [37], based on the population figures available for comparison.

Surveyors provided their perception of the hospitals' use of the reports based upon interactions and observations during the accreditation process; however, they did not identify how the reports were used. Additional data are required to address this appropriately.

Conclusion

The study indicates that establishing the control relationship between the quality measurement and accreditation systems is a promising expectation. Other system components which play critical roles in the feedback loops, and international trends towards the mandatory inclusion of indicators within the accreditation process [38], should be considered.

Overcoming the tendency for fragmented research is crucial in improving the accreditation-quality measurement system's impact on patient safety and quality of care. This study's systems theory-based model of the accreditation and quality measurement system lays the foundation for future studies.

Funding

This work was supported in part by the Academy of the Social Sciences in Australia - International Science Linkages Programme, Department of Innovation, Industry, Science and Research, Australia.

Acknowledgements

Thanks to the ACHS for allowing the survey to be conducted and for its ongoing support of these improvement projects, and to the journal reviewers and editors and Prof. Robert Gibberd for their constructive feedback.
