
A performance assessment framework for hospitals: the WHO regional office for Europe PATH project

J. Veillard, F. Champagne, N. Klazinga, V. Kazandjian, O. A. Arah, A.-L. Guisset
DOI: http://dx.doi.org/10.1093/intqhc/mzi072. Pages 487–496. First published online: 9 September 2005

Abstract

Objective. The World Health Organization (WHO) Regional Office for Europe launched in 2003 a project to develop and disseminate a flexible and comprehensive tool for the assessment of hospital performance, referred to as the performance assessment tool for quality improvement in hospitals (PATH). The project aims to support hospitals in assessing their performance, questioning their own results, and translating them into actions for improvement, by providing hospitals with tools for performance assessment and by enabling collegial support and networking among participating hospitals.

Methods. PATH was developed through a series of four workshops gathering experts representing some of the most valuable experiences in hospital performance assessment worldwide. An extensive review of the literature on hospital performance projects was carried out, more than 100 performance indicators were scrutinized, and a survey was conducted in 20 European countries.

Results. Six dimensions were identified for assessing hospital performance: clinical effectiveness, safety, patient centredness, production efficiency, staff orientation and responsive governance. The following outcomes were achieved: (i) definition of the concepts and identification of key dimensions of hospital performance; (ii) design of the architecture of PATH to enhance evidence-based management and quality improvement through performance assessment; (iii) selection of a core and a tailored set of performance indicators with detailed operational definitions; (iv) identification of trade-offs between indicators; (v) elaboration of descriptive sheets for each indicator to support hospitals in interpreting their results; (vi) design of a balanced dashboard; and (vii) strategies for implementation of the PATH framework.

Conclusion. PATH is currently being piloted in eight countries to refine its framework before further expansion.

  • delivery of health care
  • Europe
  • hospitals
  • performance indicators
  • performance measurement
  • quality improvement tools

The World Health Report 2000 [1] identified three overall goals of a health care system: achieving good health for the population, ensuring that health services are responsive to the public and ensuring fair payment systems. The hospital has a central role in achieving these goals [2]. The organization, configuration and delivery of health care services clearly affect the performance of the overall health system.

More specifically, the restructuring of health care services in several European countries aims at increasing accountability, cost effectiveness, sustainability, and quality improvement, and involves a growing interest in patient satisfaction. These reforms reflect a quest throughout Europe for more efficient and effective hospital care, while maintaining hospital functioning at quality levels acceptable to those served. Such reforms are best based on scientific evidence and better-practice models to enhance hospitals’ performance.

Emphasis should therefore be put on developing systems that monitor the performance of health care providers, especially hospitals, as they consume more than half of the overall health care budget in most European countries [3]. Within such systems, monitoring health care quality improvement is both desirable and urgent [4]. These systems are still poorly developed across Europe [5].

It thus seemed important for the World Health Organization (WHO) Regional Office for Europe to gather evidence in the field of hospital performance and contribute to the development of new frameworks, to enhance greater accountability and stimulate continuous quality improvement.

The Regional Office launched in 2003 a new project for the benefit of its 52 member states, aiming to develop and disseminate a flexible and comprehensive framework for the assessment of hospital performance and referred to as the Performance Assessment Tool for quality improvement in Hospitals (PATH).

This article describes the first stage of this project, namely the development of an overall framework for hospital performance assessment. Stages 2 (pilot implementation in eight countries) and 3 (expansion of PATH) will be carried out in 2004/2005. We present below the methods and approach used to develop the tool, the results of the work performed, and the conclusion, which points out the main lessons drawn from the project and the next steps concerning PATH pilot implementation.

Purpose and orientations of the project

There are two principal uses of indicator systems: as a summative mechanism for external accountability and verification in assurance systems and as a formative mechanism for internal quality improvement [6].

The purpose of the PATH project is to support hospitals in assessing their performance, questioning their own results, and translating them into actions for improvement. This is achieved by providing hospitals with tools for performance assessment and by enabling support to and networking among participating hospitals. Performance assessment is conceived in this project as a quality management tool, that is, a tool to be used by hospital managers for the evaluation and improvement of hospital services (formative and supportive perspectives, as shown in Figure 1, cell A). In the short term, the PATH project aims only at national or subnational data comparisons. Nevertheless, experiences in quality improvement through performance assessment could be shared at international level among participating hospitals. In the midterm, data standardization could allow international comparisons.

Figure 1

Taxonomy of quality assessment systems [21].

The WHO Regional Office for Europe supports 52 individual member states in initiatives related to the development of hospital quality standards and accreditation processes (Figure 1, cell B: supportive of continuous quality improvement with an external source of control) and in improvements in hospital accountability and performance management in the public sector through public reporting of performance indicators and quality-based purchasing (Figure 1, cell D: punitive or summative context with an external source of control). WHO itself is usually not involved in the internal evaluation of hospitals in member states (Figure 1, cell C).

WHO literature on hospital performance assessment was reviewed [1,7–11], and the main policy orientations were identified and taken into consideration during the development of the PATH framework.

WHO strategic orientations are encompassed in six interrelated dimensions: clinical effectiveness, safety, patient centredness, responsive governance, staff orientation, and efficiency. WHO advocates a multidimensional approach to hospital performance: all dimensions are considered interdependent and are to be assessed simultaneously. This multidimensional approach forms the basis of the definition of hospital performance within the PATH project.

In the PATH framework, satisfactory hospital performance is defined as the maintenance of a state of functioning that corresponds to societal, patient, and professional norms. High hospital performance should be based on professional competence in the application of current knowledge, available technologies and resources; efficiency in the use of resources; minimal risk to the patient; responsiveness to the patient; and optimal contribution to health outcomes.

Within the health care environment, high hospital performance should further address the responsiveness to community needs and demands, the integration of services in the overall delivery system, and commitment to health promotion. High hospital performance should be assessed in relation to the availability of hospitals’ services to all patients irrespective of physical, cultural, social, demographic, and economic barriers.

Based on WHO orientations and on the identification of six dimensions of hospital performance, a review of literature was carried out. The dimensions of hospital performance were defined and the subdimensions identified.

An indicator was defined as ‘a measurable element that provides information about a complex phenomenon (e.g. quality of care) which is not itself easily captured’ [12].

Methods and approach

The development of PATH is split into three stages (2003–2005):

  1. analysis of different models and performance indicators currently in use worldwide and agreement on a comprehensive framework for assessing hospital performance (2003);

  2. PATH pilot implementation in eight countries (Belgium, Denmark, France, Lithuania, Poland, Slovakia in Europe plus two voluntary countries outside Europe, Canada and South Africa) to assess the feasibility and usefulness of the strategy used to evaluate hospital performance (2004);

  3. definition of guidelines to support countries in the implementation of the framework and creation of national and/or international benchmarking networks (2005).

The development process of the PATH framework included reviews of the literature, workshops with international experts, and a survey in 20 European countries. Thirty-one experts from 15 countries (western and central European countries, Australia, South Africa and North America), representing some of the most valuable experiences in hospital performance assessment worldwide, met in four workshops. These experts built the framework based on evidence gathered in background articles and on their own experience.

A conceptual model of performance was elaborated to identify dimensions and subdimensions of performance. Next, a list of 100 hospital performance indicators was compiled through a review of the literature. Indicators were assessed against a series of criteria by the expert panel using a nominal group technique. Indicator selection was based on evidence gathered through the literature review and on the survey carried out in 20 countries.

This process was iterative in the sense that even though agreement on the conceptual model preceded and guided indicator selection, analysis of the evidence on various performance indicators led to refinements of the conceptual model. Furthermore, even though the main process of indicator selection was one of progressive elimination, narrowing a comprehensive set to a parsimonious one of 20–25 indicators, new indicators had to be sought and introduced throughout the process as new evidence was gathered.

The overall work was performed through five objectives, which were to

  1. develop a comprehensive theoretical model sustaining WHO orientations in the field of hospital performance and identifying trade-offs between dimensions;

  2. establish a limited list of indicators (100) allowing a preliminary discussion by the expert committee;

  3. build a comprehensive operational model, shaped on the conceptual model, with indicators assessing dimensions and subdimensions of performance previously agreed on;

  4. ascertain face, content and construct validity of the set of indicators as a whole;

  5. support hospitals in collecting data and interpreting their own results to move from measurement to assessment to action for quality improvement.

Conceptual model (dimensions, subdimensions, how they relate to each other)

The conceptual model was built by considering and analysing: (i) WHO policies relevant to hospital performance; (ii) WHO literature related to health care systems performance and hospital performance; (iii) published conceptual models of performance; and (iv) published information on various international experiences in hospital performance assessment systems. During workshops, experts discussed this background material and defined dimensions of hospital performance underlying the PATH framework.

The WHO strategic orientations are encompassed in the six interrelated dimensions of the PATH conceptual model, namely: clinical effectiveness, safety, patient centredness, responsive governance, staff orientation, and efficiency. Two transversal perspectives (safety and patient centredness) cut across four dimensions of hospital performance (clinical effectiveness, efficiency, staff orientation, and responsive governance) (Figure 2). For instance, safety relates to clinical effectiveness (patient safety), staff orientation (staff safety), and responsive governance (environmental safety), whereas patient centredness relates to responsive governance (perceived continuity), staff orientation (interpersonal aspect items in patient surveys), and clinical effectiveness (continuity of care within the organization). Dimensions and subdimensions of hospital performance are described in Table 1.

Figure 2

The PATH theoretical model for hospital performance.

Table 1

Description of the dimensions and subdimensions of hospital performance

Dimension | Definition | Subdimensions
Clinical effectiveness | Clinical effectiveness is a performance dimension wherein a hospital, in line with the current state of knowledge, appropriately and competently delivers clinical care or services to, and achieves desired outcomes for, all patients likely to benefit most [22,23]. | Conformity of processes of care; outcomes of processes of care; appropriateness of care
Efficiency | Efficiency is a hospital’s optimal use of inputs to yield maximal outputs, given its available resources [24,25]. | Appropriateness of services; input related to outputs of care; use of available technology for best possible care
Staff orientation | Staff orientation is the degree to which hospital staff are appropriately qualified to deliver required patient care, have the opportunity for continued learning and training, work in positively enabling conditions, and are satisfied with their work [23,26]. | Practice environment; perspectives and recognition of individual needs; health promotion activities and safety initiatives; behavioural responses and health status
Responsive governance | Responsive governance is the degree to which a hospital is responsive to community needs, ensures care continuity and coordination, promotes health, is innovative, and provides care to all citizens irrespective of racial, physical, cultural, social, demographic or economic characteristics [2]. | System/community integration; public health orientation [27]
Safety | Safety is the dimension of performance wherein a hospital has the appropriate structure, and uses care delivery processes that measurably prevent or reduce harm or risk to patients, health care providers and the environment, and which also promote the notion [22,23]. | Patient safety; staff safety; environment safety
Patient centredness | Patient centredness is a dimension of performance wherein a hospital places patients at the centre of care and service delivery by paying particular attention to patients’ and their families’ needs, expectations, autonomy, access to hospital support networks, communication, confidentiality, dignity, choice of provider, and desire for prompt, timely care [24]. | Client orientation; respect for patients

The dimensions selected are a synthesis of different organizational performance theories [13,14]. Table 2 shows how the six interrelated dimensions of the conceptual model encompass most organizational performance theories.

Table 2

Mapping the six dimensions of hospital performance into known organizational performance theories

Dimension | Corresponding organizational performance theories
Clinical effectiveness | Rationale of professionals
Patient centredness | Rationale of patient experience and patient satisfaction
Efficiency | Internal resource model and resource acquisition model
Safety | Fault-driven model
Staff orientation | Human relations model
Responsive governance | Strategic constituencies and social legitimacy models

Operational model (core set of indicators and how indicators relate to each other)

Criteria for indicator selection, as described in Table 3, were agreed on through consensus among the experts. Specifically, four working groups were asked to score each individual indicator, using a nominal group technique, ranking them on a scale from 1 to 10 according to importance, relevance and usefulness; reliability and validity; and burden of data collection. The criteria addressed not only the selection of individual indicators but also the characteristics of the set of indicators as a whole.

Table 3

Criteria for indicator selection

Level | Criteria | Issue addressed by the criterion
Set of indicators | Face validity | Is the indicator set acceptable as such by its potential users?
Set of indicators | Content validity | Are all the dimensions covered properly?
Set of indicators | Construct validity | How do indicators relate to each other?
Indicators | Importance and relevance | Does the indicator reflect aspects of functioning that matter to users and are relevant in the current health care context?
Indicators | Potential for use (and abuse) and sensitivity to implementation | Are hospitals able to act upon this indicator if it reveals a problem?
Measurement tools | Reliability | Is there demonstrated reliability (reproducibility) of data?
Measurement tools | Face validity | Is there a consensus among users and experts that this measure is related to the dimension (or subdimension) it is supposed to assess?
Measurement tools | Content validity | Does the measure relate to the subdimension of performance it is supposed to assess?
Measurement tools | Contextual validity | Is this indicator valid in different contexts?
Measurement tools | Construct validity | Is this indicator related to other indicators measuring the same subdimension of hospital performance?
Measurement tools | Burden of data collection | Are data available and easy to access?
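The scoring step described above, in which four working groups rank each indicator from 1 to 10 against the selection criteria, amounts to a simple score aggregation. The sketch below illustrates one plausible way to do it; the indicator names, scores, and cut-off are all hypothetical and not taken from the PATH project.

```python
# Hypothetical sketch of aggregating expert-group scores (1-10 scale)
# for indicator shortlisting; names, scores and cut-off are illustrative.
from statistics import mean

# Scores from four working groups, per criterion, per indicator.
scores = {
    "caesarean_section_rate": {
        "importance_relevance": [9, 8, 9, 8],
        "reliability_validity": [8, 9, 8, 8],
        "burden_of_collection": [7, 8, 7, 8],  # higher = lighter burden
    },
    "breastfeeding_at_discharge": {
        "importance_relevance": [7, 6, 7, 7],
        "reliability_validity": [6, 7, 6, 6],
        "burden_of_collection": [5, 6, 5, 6],
    },
}

def overall_score(per_criterion):
    """Average each criterion across groups, then average the criteria."""
    return mean(mean(group_scores) for group_scores in per_criterion.values())

CUT_OFF = 7.0  # illustrative threshold for the shortlist
shortlist = [name for name, crit in scores.items()
             if overall_score(crit) >= CUT_OFF]
print(shortlist)  # ['caesarean_section_rate']
```

In the actual project the nominal group technique also involved discussion rounds between scoring rounds; the aggregation above only captures the final arithmetic.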

Indicators are grouped into two ‘baskets’:

  1. a ‘core’ basket gathering a limited number of indicators relevant, responsive and valid in most contexts, relying on sound scientific evidence, for which data are available or easy to collect in most European countries;

  2. a ‘tailored’ basket gathering indicators suggested only in specific contexts because of varying availability of data, varying applicability (e.g. teaching hospitals, rural hospitals) or varying contextual validity (cultural, financial, organizational settings).

The final sets of indicators were obtained through the following steps.

  1. Current national/regional performance assessment systems and their field applications were screened to establish a preliminary comprehensive list of 100 potential indicators. Experts scrutinized the list and proposed some refinements (dropping and adding some indicators).

  2. Dimensions or subdimensions that were not properly covered were identified, and the literature was further reviewed to identify indicators properly covering these areas.

  3. An extensive review of the literature was carried out; evidence was collected for each of the 100 pre-selected indicators on the rationale for use, prevalence, validity and reliability, current scope of use, suggested and demonstrated relationships with other performance indicators, and potential exogenous factors.

Available evidence on the validity of the various indicators varied greatly. For some dimensions and indicators, such as clinical effectiveness, indicator validity was well documented based on numerous research studies and empirical experiences. For others, such as responsive governance and staff orientation, little previous research or experiences could be drawn upon to support indicator selection. In those cases, expert judgement was relied upon.

  4. A survey was carried out in 20 countries in May 2003. It aimed to define the availability of indicators, their relevance in different national contexts, their potential impact on quality improvement, and the burden of data collection. Eleven responses were received, from Albania, Belgium, Denmark, Estonia, Finland, France, Georgia, Germany, Ireland, Lithuania, and Slovakia. Surveys were filled in either by individuals or by large working groups. Respondents were asked to provide a global evaluation of the above four characteristics of a measurement system for a ‘representative’ hospital in their country.

  5. The empirical findings of the survey are considered crucial for reconciling theory with practice and for developing a strategy to monitor the applicability of the model to different health care systems, because the literature used to develop the PATH framework and select the indicators focused mainly on Anglo-Saxon contexts; the applicability of the tools to other contexts is consequently unknown. The survey constituted a preliminary input from relevant countries. Further input will be made possible through the pilot implementation phase.

  6. Based on evidence gathered through the review of the literature and the survey, experts selected indicators and classified them into the core or the tailored basket.

  7. A final workshop was organized to amend indicator selection and guarantee the content validity of the set of indicators as a whole. This meant that an indicator with a higher data collection burden or a lower degree of validity could still be included in the model, because no indicator entirely satisfied all selection criteria.

Results

The PATH framework

The PATH framework includes

  1. a conceptual model of performance (dimensions, subdimensions, and how they relate to each other);

  2. criteria for indicator selection;

  3. two sets of indicators (including rationale, operational definition, data collection issues, support for interpretation);

  4. an operational model of performance (how indicators relate to each other, to explanatory variables and quality improvement strategies relating to the indicator, potential reference points);

  5. strategies for feedback of results to hospitals, mainly through a balanced dashboard;

  6. educational material to support further scrutiny of indicators (e.g. surveys of practices) and dissemination of results within hospitals;

  7. strategies to foster benchmarking of results between participating hospitals and practices.

The PATH sets of indicators

An important distinction between ‘reflective’ and ‘formative’ indicators was drawn. Formative indicators (causes) lead to changes in the value of the latent variable, whereas reflective indicators (effects) result from changes in the latent variable. This distinction is essential for the assessment of validity and for the interpretation and use of indicators: it indicates whether an indicator should be acted upon directly for quality improvement purposes, or whether an underlying quality process should be acted upon, with the indicator improving as a result. It therefore has strong implications when quality improvement initiatives are developed based on indicator results.

The list of indicators included in the operational model was restricted to a final core set of 24 performance indicators and a tailored set of 27 indicators. The core set has been designed to allow international benchmarking in the future, once data quality is considered good enough. A summary of the operational definitions is presented in Table 4. Full operational definitions are available as an appendix with the online version of this article. The tailored set of performance indicators was selected by the expert panel, like the core set, based on scientific evidence and on the country survey, but intentionally omits operational definitions. Hospitals and/or countries can use tailored indicators and include them in their dashboards, but these indicators have to be defined operationally by them, with WHO technical support. Tailored indicators are included to reflect hospital- or country-specific priorities and are not to be used for comparisons at international level.

Table 4

PATH core set of hospital performance indicators

Dimension/subdimension | Performance indicator | Numerator | Denominator

Clinical effectiveness and safety
Appropriateness of care | Caesarean section delivery | Total number of cases within the denominator with Caesarean section | Total number of deliveries
Conformity of processes of care | Prophylactic antibiotic use for tracers: results of audit of appropriateness | Version 1: total number of audited medical records with evidence of over-use of antibiotics (too early and/or too long, too high dose, too broad spectrum) in comparison with hospital guidelines. Version 2: total number of audited medical records with evidence of under-use of antibiotics in comparison with hospital guidelines | Total number of medical records audited for a specific tracer operative procedure
Outcomes of care and safety processes | Mortality for selected tracer conditions and procedures | Total number of cases in the denominator who died during their hospital stay | Total number of patients admitted for a specific tracer condition or procedure
Outcomes of care and safety processes | Readmission for selected tracer conditions and procedures | Total number of cases within the denominator who were admitted through the emergency department after discharge (within a fixed follow-up period) from the same hospital, with a readmission diagnosis relevant to the initial care | Total number of patients admitted for a selected tracer condition
Outcomes of care and safety processes | Admission after day surgery for selected tracer procedures | Number of cases within the denominator who had an overnight admission | Total number of patients who have an operation/procedure performed in the day procedure facility or having a discharge intention of one day
Outcomes of care and safety processes | Return to higher level of care (e.g. from acute to intensive care) for selected tracer conditions and procedures within 48 h | Total number of patients in the denominator who are unexpectedly (once or several times) transferred to a higher level of care (intensive or intermediary care) within 48 h (or 72 h to account for the week-end effect) of their discharge from a high level of care to an acute care ward | Total number of patients admitted to an intensive or intermediary care unit
Outcomes of care and safety processes | Sentinel events | Binary variable A: existence of a formal procedure to register sentinel events. Binary variable B: existence of a formal procedure to act upon sentinel events + description of procedures

Efficiency
Appropriateness of services | Day surgery, for selected tracer procedures | Total number of patients undergoing a tracer procedure who have it performed in the day procedure facility
Productivity | Length of stay for selected tracers | Median length of stay in number of days of hospitalization; day of admission and day of discharge count as 1 day
Use of capacity | Inventory in stock, for pharmaceuticals | Total value of inventory at the end of the year for pharmaceuticals | Total expenditures for pharmaceuticals during the year / 365
Use of capacity | Intensity of surgical theatre use | Number of patient hours under anaesthesia | Number of theatres × 24 h

Staff orientation and staff safety
Perspective and recognition of individual needs | Training expenditures | Direct cost of all activities dedicated to staff training | Average number of employees on payroll during the period (alternative: average number of full-time employees)
Health promotion and safety initiatives | Expenditures on health promotion activities | Direct cost of all activities dedicated to staff health promotion (as per list) set up in 2003 | Average number of employees on payroll during the period (alternative: average number of full-time employees)
Behavioural responses | Absenteeism: short-term absenteeism | Number of days of medically or non-medically justified absence of 7 days or less in a row, excluding holidays, among nurses and nurse assistants | Total full-time-equivalent nurses and nurse assistants × number of contractual days per year for full-time staff (e.g. 250)
Behavioural responses | Absenteeism: long-term absenteeism | Number of days of medically or non-medically justified absence of 30 days or more, excluding holidays, among nurses and nurse assistants | Total full-time-equivalent nurses and nurse assistants × number of contractual days per year for full-time staff (e.g. 250)
Staff safety | Percutaneous injuries | Number of cases of percutaneous injuries reported in the official database or to occupational medicine in 1 year (includes needlestick and sharp-device injuries) | Average number of full-time-equivalent staff and non-salaried physicians
Staff safety | Staff excessive weekly working time | For each week, number of full-time staff (nurses and nurse assistants) who worked more than 48 h, summed over all weeks in the period under study | Total number of weeks available during the period under study (total number of days during the period − statutory holidays) × number of full-time employees

Responsive governance and environmental safety
System integration and continuity | Average score on perceived continuity items in patient surveys | Calculated from the questionnaire survey currently used in the hospital. Not for international or national comparisons but for follow-up within the organization; if standard surveys are used in a country, national benchmarking is proposed
Public health orientation: health promotion | Breastfeeding at discharge | Total number of mothers included in the denominator breastfeeding at discharge | Total number of deliveries fulfilling the criteria for inclusion

Patient centredness
Overall perception/satisfaction | Average score on overall perception/satisfaction items in patient surveys | Calculated from the questionnaire survey currently used in the hospital. Not for international or national comparisons but for follow-up within the organization; if standard surveys are used in a country, national benchmarking is proposed
Interpersonal aspects | Average score on interpersonal aspect items in patient surveys | Calculated from the questionnaire survey currently used in the hospital; an average score is computed for all items relating to interpersonal aspects. Not for international or national comparisons but for follow-up within the organization; if standard surveys are used in a country, national benchmarking is proposed
Client orientation: access | Last-minute cancelled surgery | Total number of patients whose surgery was cancelled or postponed by more than 24 h during the period under study and who meet inclusion criteria | Total number of patients admitted for surgery during the period under study and who meet inclusion criteria
Client orientation: information and empowerment | Average score on information and empowerment items in patient surveys | Calculated from the questionnaire survey currently used in the hospital; an average score is computed for all items relating to patient information and empowerment. Not for international or national comparisons but for follow-up within the organization; if standard surveys are used in a country, national benchmarking is proposed
Client orientation: continuity | Average score on continuity of care items in patient surveys | Calculated from the questionnaire survey currently used in the hospital; an average score is computed for all items relating to continuity of care. Not for international or national comparisons but for follow-up within the organization; if standard surveys are used in a country, national benchmarking is proposed
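The numerator/denominator definitions in Table 4 reduce each rate-based core indicator to a simple ratio. A minimal sketch, using hypothetical counts, for two of them (Caesarean section delivery and short-term absenteeism):

```python
# Minimal sketch computing two Table 4 indicators as numerator/denominator
# ratios; all counts below are hypothetical, for illustration only.

def rate(numerator: int, denominator: int) -> float:
    """Generic indicator rate; Table 4 defines what goes in each part."""
    if denominator == 0:
        raise ValueError("denominator must be non-zero")
    return numerator / denominator

# Caesarean section delivery:
#   numerator   = deliveries by Caesarean section
#   denominator = total deliveries
caesarean_rate = rate(numerator=210, denominator=1_000)

# Short-term absenteeism (nurses and nurse assistants):
#   numerator   = days of absence of 7 days or less in a row
#   denominator = full-time equivalents x contractual days/year (e.g. 250)
fte_nurses = 120
short_term_absence_days = 900
absenteeism_rate = rate(short_term_absence_days, fte_nurses * 250)

print(f"Caesarean section rate: {caesarean_rate:.1%}")   # 21.0%
print(f"Short-term absenteeism: {absenteeism_rate:.1%}") # 3.0%
```

The arithmetic is deliberately trivial; in practice the work lies in applying the inclusion criteria that Table 4 attaches to each numerator and denominator.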

When selecting indicators, their potential use for quality improvement was considered central. According to the multidimensional and integrated model of performance (Figure 2), the main message to convey to hospitals assessing their performance is that it is inappropriate to interpret indicators in isolation. The performance model developed within this project is a conceptualization of hospital functioning, itself a diverse and complex phenomenon that is not easy to capture. An isolated view of hospital performance is therefore not only inappropriate but also dangerous, which justifies the use of a balanced dashboard to assess hospital performance.

The PATH balanced dashboard

The purpose of the balanced dashboard is to give meaning to the indicator results and to guide decision-making and quality improvement. The reporting scheme relates results to external references as well as to internal comparisons over time, and gives guidance on interpretation.

The balanced dashboard is organized in embedded levels. Its detailed specifications will be defined during the pilot implementation of the project, with continuous feedback from the field to ensure that the tool is genuinely valuable and usable by hospitals. The design of reports must follow the interests and authority of the users and the structure of accountability and authority within the institution.
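One way to picture the embedded levels and the reporting scheme described above is a nested structure: dimension, then indicator, then the result alongside its internal comparison over time and its external reference. This is purely an illustrative sketch — the actual dashboard specification was left to the pilot phase — and all names and figures here are hypothetical.

```python
# Hypothetical dashboard fragment: dimension -> indicator -> result,
# with an internal comparison (previous period) and an external reference.
dashboard = {
    "Client orientation: access": {
        "last_minute_cancelled_surgery": {
            "current": 0.030,             # this period's rate
            "previous": 0.045,            # internal comparison over time
            "external_reference": 0.025,  # e.g. a benchmarking-network value
        },
    },
}


def flag(indicator: dict) -> str:
    """Crude interpretation aid: flag an indicator that sits above its
    external reference or has worsened since the previous period."""
    if indicator["current"] > indicator["external_reference"]:
        return "above reference"
    if indicator["current"] > indicator["previous"]:
        return "worsening"
    return "ok"
```

In this sketch the cancelled-surgery rate improved internally (0.030 vs. 0.045) yet still exceeds the external reference, so `flag` returns "above reference" — an example of why the paper insists that indicators are flags for interpretation, not verdicts.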

Conclusions

The PATH framework strongly emphasizes the internal use of indicators because ‘neither the dynamics of selection nor the dynamics of improvement (through quality measurement) work reliably today. . . the barriers are not just in the lack of uniform, simple and reliable measurements, they also include a lack of capacity among the organizations and individuals acting on both pathways’ [15].

PATH is a flexible and comprehensive framework that should be relevant in different national contexts, even though hospital performance is acknowledged to be a complex and multidimensional phenomenon. It essentially contains two sets of evidence-based indicators for use in European hospitals and suggests strategies for their use in hospital performance assessment [16].

The value of PATH lies not only in the interest of individual hospitals in improving the way they assess their own performance; it also pursues the goal of building on the dynamics of national and international comparisons through benchmarking networks, possibly at the national level in the short term and the international level in the medium term.

The international dimension of the project is paramount and was perceived as a strong incentive for hospitals to participate in the pilot implementation. The international component is not limited to international comparisons of indicator results, which remain constrained by varying contexts even though there is growing evidence that generic hospital performance indicator rates may be comparable worldwide [17]. By joining PATH, hospitals become part of an international network for sharing better practices for quality improvement. International networking will be fostered through tools such as newsletters, a list-server, or a web page.

In the next phase of the project, PATH will be pilot-tested from March 2004 to November 2005 in eight countries: six in Europe (Belgium, Denmark, France, Lithuania, Poland, Slovakia) and two beyond (Canada, South Africa). The purpose of the pilot implementation is to evaluate the usefulness of the tool as a whole (especially its assessment strategies), the burden of data collection for participating hospitals, and its potential for adoption across Europe.

Ultimately, PATH should support hospitals in moving from measurement to interpretation to action for quality improvement. Within this project, indicators are therefore considered flags requiring cautious interpretation in the light of local circumstances [6] ('performance indicators do not measure performance, people do' [18]) and are intended primarily to give directions for action to hospital managers and hospital professionals at large. Furthermore, PATH will also contribute to improving information systems and data quality, will reinforce the credibility of performance measurement systems and hospitals' confidence in the data they need to assess their performance, and will promote their accountability [19,20].

Acknowledgments

We thank the following experts for their contributions to this article: Adalsteinn D. Brown (Canada—University of Toronto), Manuela Brusoni (Italy—University of Bocconi), Mohammed Hoosen Cassimjee (South Africa—Pietermaritzburg Metropolitan Hospital Complex and Midlands Region), Brian T. Collopy (Australia—CQM Consultants), Thomas Custers (The Netherlands—University of Amsterdam), Mila Garcia-Barbero (World Health Organization Regional Office for Europe), Pilar Gavilán (Spain—Catalan Institute for Health), Oliver Gröne (World Health Organization Regional Office for Europe), Svend Juul Jorgensen (Denmark—National Board of Health), Vytenis Kalibatas (Lithuania—National Blood Centre), Isuf Kalo (World Health Organization Regional Office for Europe), Johann Kjaergaard (Denmark—Copenhagen Hospital Corporation), Itziar Larizgoitia (World Health Organization), Pierre Lombrail (France—Nantes University Hospital), Ehadu Mersini (Albania—Ministry of Health), Etienne Minivielle (France—INSERM), Sergio Pracht (Spain—Europatient Foundation), Anne Rooney (United States of America—Joint Commission International), Laura Sampietro-Colom (Spain—Catalan Institute for Health), Irakli Sasania (Georgia— M. Iashvili Children’s Central Hospital), Henner Schellschmidt (Germany—Wissenschaftliches Institute der AOK), Charles Shaw (United Kingdom—Private Consultant), Rosa Suñol (Spain—Foundation Avedis Donabedian), and Carlos Triginer (World Health Organization Regional Office for Europe).

References
