
Evaluating accreditation

Charles D. Shaw
DOI: http://dx.doi.org/10.1093/intqhc/mzg092 · Pages 455–456 · First published online: 6 December 2003

A global study for the World Health Organization in 2000 identified 36 nationwide health care accreditation programmes [1]. Starting with the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) in the United States in 1951, such programmes have spread worldwide, and their number has doubled every five years since 1990. Development has been especially marked in Europe, which now accounts for half of all programmes.

The traditional model of voluntary, independent accreditation is rapidly being adapted into a government-sponsored or even statutory tool for control and public accountability. Many countries, especially in Eastern Europe, are beginning to use accreditation as an extension of statutory licensing for institutions.

In this context of growth and innovation, the National Agency for Healthcare Accreditation and Evaluation (ANAES) in France, described by Daucourt and Michel in their paper ‘Results of the first 100 accreditation procedures in France’ [2] in this issue of the International Journal for Quality in Health Care, should provide useful lessons to accreditors and governments worldwide. It is expansively ambitious and innovative, but also governmentally slow and expensive; in 2001 its accreditation programme accounted for three-quarters of the €16.5 million combined expenditure of ten national programmes in Europe. Other governments, at least those that would contemplate a statutory programme, should watch Paris for signs of the value of this investment, in terms of its impact on the public/private health system.

The problem is that, in an increasingly evidence-based world, we have aggregated very little hard data about:

  • The uptake or market share of individual accreditation programmes at national level, and their impact on the health system,

  • The consistency, compatibility and validity of programmes as a basis for comparing health care providers, such as across Europe, and

  • The costs and benefits of individual programmes to health care providers.

‘Considering the amount of time and money spent on organisational assessment, and the significance of the issue to governments, it is surprising that there is no research into the cost-effectiveness of these schemes’ [3]. Most established programmes have been subjected to internal [4,5] or external evaluation of their impact [6–8], but few of these evaluations have used comparable methods to permit synthesis. There is ample evidence that hospitals rapidly increase compliance with the published standards in the months prior to external assessment, and that they improve organizational processes; there is less evidence that this brings benefits in terms of clinical process and outcome (K. Timmons, 2001, S. Whittaker, 2001, unpublished data).

The ALPHA principles and standards of the International Society for Quality in Health Care have codified the experience of many established accreditation programmes into empirical guidance on how health care standards may be developed, defined and independently assessed [9]. Another ISQua working group has set out to identify and evidence the environmental factors (such as culture, incentives, regulation) which are associated with success or failure of the general model of accreditation in a given country [10]. But the process–outcome link, which neatly legitimizes the measurement of clinical practice against evidence-based guidelines as a proxy for clinical outcome, has not been established between institutional accreditation and performance.

Several factors make accreditation more difficult to evaluate than a clinical technology:

  • The ‘endpoints’ of accreditation are hard to define, and vary according to the expectations of users and observers; there are many potential ‘products’ (e.g. institutional control, organizational development, professional regulation, financial allocation, public accountability),

  • Individual programmes vary around a common model, e.g. with respect to scope, standards, assessment, packaging and pricing; they are not a homogeneous population,

  • ‘Accreditation’ is not a single technology but a cluster of activities which interact to produce documented processes and organizational changes; process–outcome links may, however, be demonstrated for component interventions and summated as a proxy for overall impact, and

  • Case-control studies of institutional accreditation require a large, supportive but uncontaminated universe to sample, compare and monitor over many months; few countries offer this opportunity.

The relative priorities of national accreditation programmes are influenced by local social, political, economic and historical factors. In developed countries the common emphasis is on evaluation and improvement of safety, clinical effectiveness, consumer information, staff development, purchaser intelligence and accountability—and reduction in variation. In developing countries, the emphasis is on establishing basic facilities and information, and improving access in an environment where there may be no established culture of professional responsibility, and very limited resources available for staffing, equipment and buildings.

Some programmes in North America accredit entire health networks and are beginning to shift from individual health care towards population health; some recent governmental programmes in Europe address public health priorities (such as cardiac health and cancer services) by assessing local performance, from preventive to tertiary services, against national service frameworks. In such programmes, measures should include the application of evidence-based medicine (process) and population health gain (outcome), but many health determinants (e.g. housing, education, poverty) remain outside the scope of health care accreditation programmes.

Given these uncertainties and variations, any objective measures of individual accreditation programmes should be welcomed in the public domain. Daucourt and Michel [2] have used the summary reports of 100 hospitals, published on the Internet, to quantify the common concerns of ANAES surveys (which are very similar to those in other countries). They refer in passing to how long the programme has taken to develop, to long turnaround times for reports (nine months) and to inconsistencies between survey teams in report writing, which may account for some of the variation in the number of recommendations and reservations. These are realities faced by any programme, especially a new one. The study does not bridge the research chasm, but it may encourage others to examine new and established programmes systematically in order to document and share empirical evidence on the impact of external assessment against standards.

References