
A performance assessment framework for hospitals: the WHO regional office for Europe PATH project

J. Veillard , F. Champagne , N. Klazinga , V. Kazandjian , O. A. Arah , A.-L. Guisset
DOI: http://dx.doi.org/10.1093/intqhc/mzi072. Pages 487–496. First published online: 9 September 2005


Objective. In 2003 the World Health Organization (WHO) Regional Office for Europe launched a project to develop and disseminate a flexible and comprehensive tool for the assessment of hospital performance, the Performance Assessment Tool for quality improvement in Hospitals (PATH). The project aims to support hospitals in assessing their performance, questioning their own results, and translating them into actions for improvement, by providing hospitals with tools for performance assessment and by enabling collegial support and networking among participating hospitals.

Methods. PATH was developed through a series of four workshops that gathered experts representing the most valuable experience on hospital performance assessment worldwide. An extensive review of the literature on hospital performance projects was carried out, more than 100 performance indicators were scrutinized, and a survey was conducted in 20 European countries.

Results. Six dimensions were identified for assessing hospital performance: clinical effectiveness, safety, patient centredness, production efficiency, staff orientation and responsive governance. The following outcomes were achieved: (i) definition of the concepts and identification of key dimensions of hospital performance; (ii) design of the architecture of PATH to enhance evidence-based management and quality improvement through performance assessment; (iii) selection of a core and a tailored set of performance indicators with detailed operational definitions; (iv) identification of trade-offs between indicators; (v) elaboration of descriptive sheets for each indicator to support hospitals in interpreting their results; (vi) design of a balanced dashboard; and (vii) strategies for implementation of the PATH framework.

Conclusion. PATH is currently being piloted in eight countries to refine the framework before further expansion.

  • delivery of health care
  • Europe
  • hospitals
  • performance indicators
  • performance measurement
  • quality improvement tools

The World Health Report 2000 [1] identified three overall goals of a health care system: achieving good health for the population, ensuring that health services are responsive to the public and ensuring fair payment systems. The hospital has a central role in achieving these goals [2]. Obviously, the organization, configuration and delivery of health care services impact on the performance of the overall health system.

More specifically, the restructuring of health care services in several European countries aims at increasing accountability, cost effectiveness, and sustainability, strengthens quality improvement strategies, and reflects a growing interest in patient satisfaction. These reforms highlight a quest throughout Europe for more efficient and effective hospital care while maintaining hospital functioning at quality levels acceptable to those served. Such reforms are best based on scientific evidence and better-practice models to enhance hospitals’ performance.

Emphasis should therefore be put on systems that monitor the performance of health care providers, especially hospitals, which consume more than half of the overall health care budget in most European countries [3]. In this context, monitoring health care quality improvement is both desirable and urgent [4]. Yet such systems are still poorly developed across Europe [5].

It thus seemed important for the World Health Organization (WHO) Regional Office for Europe to gather evidence in the field of hospital performance and to contribute to the development of new frameworks, in order to enhance accountability and stimulate continuous quality improvement.

In 2003 the Regional Office launched a new project for the benefit of its 52 member states, aiming to develop and disseminate a flexible and comprehensive framework for the assessment of hospital performance, referred to as the Performance Assessment Tool for quality improvement in Hospitals (PATH).

This article describes the first stage of this project, namely the development of an overall framework for hospital performance assessment. Stages 2 (pilot implementation in eight countries) and 3 (expansion of PATH) will be carried out in 2004/2005. Below we present the methods and approach used to develop the tool, the results of the work performed, and the conclusions, which highlight the main lessons drawn from the project and the next steps towards PATH pilot implementation.

Purpose and orientations of the project

There are two principal uses of indicator systems: as a summative mechanism for external accountability and verification in assurance systems and as a formative mechanism for internal quality improvement [6].

The purpose of the PATH project is to support hospitals in assessing their performance, questioning their own results, and translating them into actions for improvement. This is achieved by providing hospitals with tools for performance assessment and by enabling collegial support and networking among participating hospitals. Performance assessment is conceived in this project as a quality management tool, that is, a tool to be used by hospital managers for the evaluation and improvement of hospital services (the formative and supportive perspective shown in Figure 1, cell A). In the short term, the PATH project aims only at national or subnational data comparisons. Nevertheless, experiences in quality improvement through performance assessment could be shared internationally among participating hospitals, and in the medium term data standardization could allow international comparisons.

Figure 1

Taxonomy of quality assessment systems [21].

The WHO Regional Office for Europe supports its 52 member states in initiatives related to the development of hospital quality standards and accreditation processes (Figure 1, cell B: supportive of continuous quality improvement with an external source of control) and in improvements in hospital accountability and performance management in the public sector through public reporting of performance indicators and quality-based purchasing (Figure 1, cell D: punitive or summative context with an external source of control). WHO itself is usually not involved in the internal evaluation of hospitals in member states (Figure 1, cell C).

WHO literature on hospital performance assessment was reviewed [1,7–11], and the main policy orientations were identified and taken into consideration during development of the PATH framework.

WHO strategic orientations are encompassed in six interrelated dimensions: clinical effectiveness, safety, patient centredness, responsive governance, staff orientation, and efficiency. WHO advocates a multidimensional approach to hospital performance: all dimensions are considered interdependent and are to be assessed simultaneously. This multidimensional approach forms the basis of the definition of hospital performance within the PATH project.

In the PATH framework, satisfactory hospital performance is defined as the maintenance of a state of functioning that corresponds to societal, patient, and professional norms. High hospital performance should be based on professional competence in the application of present knowledge, available technologies, and resources; efficiency in the use of resources; minimal risk to the patient; responsiveness to the patient; and optimal contribution to health outcomes.

Within the health care environment, high hospital performance should further address the responsiveness to community needs and demands, the integration of services in the overall delivery system, and commitment to health promotion. High hospital performance should be assessed in relation to the availability of hospitals’ services to all patients irrespective of physical, cultural, social, demographic, and economic barriers.

Based on WHO orientations and on the identification of the six dimensions of hospital performance, a review of the literature was carried out. The dimensions of hospital performance were defined and their subdimensions identified.

An indicator was defined as ‘a measurable element that provides information about a complex phenomenon (e.g. quality of care) which is not itself easily captured’ [12].

Methods and approach

The development of PATH is split into three stages (2003–2005):

  1. analysis of different models and performance indicators currently in use worldwide and agreement on a comprehensive framework for assessing hospital performance (2003);

  2. PATH pilot implementation in eight countries (Belgium, Denmark, France, Lithuania, Poland, Slovakia in Europe plus two voluntary countries outside Europe, Canada and South Africa) to assess the feasibility and usefulness of the strategy used to evaluate hospital performance (2004);

  3. definition of guidelines to support countries in the implementation of the framework and creation of national and/or international benchmarking networks (2005).

The development process of the PATH framework included reviews of the literature, workshops with international experts, and a survey in 20 European countries. Thirty-one experts from 15 countries (western and central European countries, Australia, South Africa, and North America), representing the most valuable experience on hospital performance worldwide, met in four workshops. These experts built the framework based on evidence gathered in background articles and on their own experience.

A conceptual model of performance was elaborated to identify dimensions and subdimensions of performance. Next, a list of 100 hospital performance indicators was compiled through a review of the literature. The expert panel assessed the indicators against a series of criteria using a nominal group technique. Indicator selection was based on evidence gathered through the literature review and on the survey carried out in 20 countries.

This process was iterative in the sense that even though agreement on the conceptual model preceded and guided indicator selection, analysis of the evidence on various performance indicators led to refinements of the conceptual model. Furthermore, even though the main process of indicator selection was one of progressive elimination, from a comprehensive set down to a parsimonious one of 20–25 indicators, new indicators had to be sought and introduced throughout the process as new evidence was gathered.

The overall work was performed through five objectives, which were to

  1. develop a comprehensive theoretical model sustaining WHO orientations in the field of hospital performance and identifying trade-offs between dimensions;

  2. establish a limited list of indicators (100) to allow a preliminary discussion by the expert committee;

  3. build a comprehensive operational model, shaped on the conceptual model, with indicators assessing dimensions and subdimensions of performance previously agreed on;

  4. ascertain face, content and construct validity of the set of indicators as a whole;

  5. support hospitals in collecting data and interpreting their own results to move from measurement to assessment to action for quality improvement.

Conceptual model (dimensions, subdimensions, how they relate to each other)

The conceptual model was built by considering and analysing: (i) WHO policies relevant to hospital performance; (ii) WHO literature related to health care systems performance and hospital performance; (iii) published conceptual models of performance; and (iv) published information on various international experiences in hospital performance assessment systems. During workshops, experts discussed this background material and defined dimensions of hospital performance underlying the PATH framework.

The WHO strategic orientations are encompassed in the six interrelated dimensions of the PATH conceptual model, namely: clinical effectiveness, safety, patient centredness, responsive governance, staff orientation, and efficiency. Two transversal perspectives (safety and patient centredness) cut across the four other dimensions of hospital performance (clinical effectiveness, efficiency, staff orientation, and responsive governance) (Figure 2). For instance, safety relates to clinical effectiveness (patient safety), staff orientation (staff safety), and responsive governance (environmental safety), while patient centredness relates to responsive governance (perceived continuity), staff orientation (interpersonal aspects in patient surveys), and clinical effectiveness (continuity of care within the organization). The dimensions and subdimensions of hospital performance are described in Table 1.

Figure 2

The PATH theoretical model for hospital performance.

Table 1

Description of the dimensions and subdimensions of hospital performance

The dimensions selected are a synthesis of different organizational performance theories [13,14]. Table 2 shows how the six interrelated dimensions of the conceptual model encompass most organizational performance theories.

Table 2

Mapping the six dimensions of hospital performance into known organizational performance theories

Operational model (core set of indicators and how indicators relate to each other)

Criteria for indicator selection, described in Table 3, were agreed on through consensus among the experts. Specifically, four working groups were asked to score each individual indicator, using a nominal group technique, and to rank indicators on a scale from 1 to 10 according to importance, relevance and usefulness, reliability and validity, and burden of data collection. The criteria addressed not only the selection of individual indicators but also the characteristics of the set of indicators as a whole.
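The scoring step can be illustrated with a minimal sketch. The criterion names, the rule of inverting the burden score so that higher always means better, and the example ratings below are all assumptions for illustration; the article does not specify how the criterion scores were combined.

```python
# Hypothetical sketch of aggregating expert ratings for one candidate indicator.
# Each expert scores the indicator from 1 to 10 on several criteria; the
# 'burden' score (10 = heaviest data collection burden) is inverted so that
# higher always means better before the criteria are averaged.

def aggregate_scores(ratings):
    """Average each criterion across experts, inverting 'burden'."""
    agg = {}
    for criterion in ratings[0]:
        mean = sum(r[criterion] for r in ratings) / len(ratings)
        agg[criterion] = 11 - mean if criterion == "burden" else mean
    return agg

def overall(agg):
    """Unweighted mean of the aggregated criterion scores."""
    return sum(agg.values()) / len(agg)

# Four invented expert ratings for one indicator:
ratings = [
    {"importance": 9, "validity": 8, "burden": 3},
    {"importance": 8, "validity": 7, "burden": 4},
    {"importance": 9, "validity": 9, "burden": 2},
    {"importance": 7, "validity": 8, "burden": 3},
]
agg = aggregate_scores(ratings)
score = overall(agg)
```

A real panel would weigh criteria differently and discuss divergent scores face to face, which a simple mean cannot capture; structuring exactly that discussion is the point of the nominal group technique.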

Table 3

Criteria for indicator selection

Indicators are grouped into two ‘baskets’:

  1. a ‘core’ basket gathering a limited number of indicators that are relevant, responsive, and valid in most contexts, rely on sound scientific evidence, and for which data are available or easy to collect in most European countries;

  2. a ‘tailored’ basket gathering indicators suggested only in specific contexts because of varying availability of data, varying applicability (e.g. teaching hospitals, rural hospitals) or varying contextual validity (cultural, financial, organizational settings).
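As a sketch, the two-basket rule can be expressed as a simple predicate. The attribute names and the boolean rule below are illustrative only; the actual assignment combined expert judgement, literature evidence, and the country survey.

```python
# Hypothetical sketch of the core/tailored classification described above.

def classify(indicator):
    """Return 'core' if the indicator is widely valid and measurable in
    most contexts, otherwise 'tailored' (usable only in specific settings)."""
    if (indicator["evidence_sound"]
            and indicator["valid_in_most_contexts"]
            and indicator["data_widely_available"]):
        return "core"
    return "tailored"

# Two invented example indicators:
indicators = [
    {"name": "readmission rate", "evidence_sound": True,
     "valid_in_most_contexts": True, "data_widely_available": True},
    {"name": "teaching-specific measure", "evidence_sound": True,
     "valid_in_most_contexts": False, "data_widely_available": False},
]
baskets = {i["name"]: classify(i) for i in indicators}
```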

The final sets of indicators were obtained through the following steps.

  1. Current national/regional performance assessment systems and their field applications were screened to establish a preliminary comprehensive list of 100 potential indicators. Experts scrutinized the list and proposed refinements, dropping some indicators and adding others.

  2. Dimensions or subdimensions that were not properly covered were identified, and the literature was reviewed further to identify indicators covering these areas properly.

  3. An extensive review of the literature was carried out; for each of the 100 pre-selected indicators, evidence was collected on the rationale for use, prevalence, validity and reliability, current scope of use, suggested and demonstrated relationships with other performance indicators, and potential exogenous factors.

Available evidence on the validity of the various indicators varied greatly. For some dimensions and indicators, such as clinical effectiveness, validity was well documented in numerous research studies and empirical experiences. For others, such as responsive governance and staff orientation, little previous research or experience could be drawn upon to support indicator selection; in those cases, expert judgement was relied upon.

  4. A survey was carried out in 20 countries in May 2003. It aimed to assess the availability of indicators, their relevance in different national contexts, their potential impact on quality improvement, and the burden of data collection. Eleven responses were received, from Albania, Belgium, Denmark, Estonia, Finland, France, Georgia, Germany, Ireland, Lithuania, and Slovakia. Surveys were completed either by individuals or by large working groups. Respondents were asked to provide a global evaluation of the above four characteristics of a measurement system for a ‘representative’ hospital in their country.

The empirical findings of the survey were considered crucial to reconcile theory with practice and to develop a strategy for monitoring the applicability of the model to different health care systems, because the literature used to develop the PATH framework and select the indicators focused mainly on Anglo-Saxon contexts, so the applicability of the tools to other contexts is unknown. The survey constituted a preliminary input from the countries concerned; further input will be gathered during the pilot implementation phase.

  5. Based on the evidence gathered through the review of the literature and the survey, experts selected indicators and classified them into the core or the tailored basket.

  6. A final workshop was organized to amend the indicator selection and guarantee the content validity of the set of indicators as a whole. Because no indicator entirely satisfied all selection criteria, an indicator with a higher data collection burden or a lower degree of validity could still be included in the model.


The PATH framework

The PATH framework includes

  1. a conceptual model of performance (dimensions, subdimensions, and how they relate to each other);

  2. criteria for indicator selection;

  3. two sets of indicators (including rationale, operational definition, data collection issues, support for interpretation);

  4. an operational model of performance (how indicators relate to each other, to explanatory variables and quality improvement strategies relating to the indicator, potential reference points);

  5. strategies for feedback of results to hospitals, mainly through a balanced dashboard;

  6. educational material to support further scrutiny of indicators (e.g. surveys of practices) and dissemination of results within hospitals;

  7. strategies to foster benchmarking of results between participating hospitals and practices.

The PATH sets of indicators

An important distinction was drawn between ‘reflective’ and ‘formative’ indicators. Formative (causative) indicators lead to changes in the value of the latent variable, whereas reflective (effect) indicators result from changes in the latent variable. This distinction is essential for assessing validity and for interpreting and acting on indicators: it signals whether the indicator itself should be acted upon directly for quality improvement, or whether an underlying quality process should be acted upon, with improvement in the indicator following. It therefore has strong implications when quality improvement initiatives are developed on the basis of indicator results.

The list of indicators included in the operational model was restricted to a final core set of 24 performance indicators and a tailored set of 27 indicators. The core set has been designed to allow international benchmarking in the future, once data quality is considered good enough. A summary of the operational definitions is presented in Table 4; full operational definitions are available as an appendix with the online version of this article. The tailored set of performance indicators was selected by the expert panel on the same basis as the core set (scientific evidence and the country survey) but deliberately lacks operational definitions. Hospitals and/or countries can use tailored indicators and include them in their dashboards, but they must define these indicators operationally themselves, with WHO technical support. Tailored indicators are included to reflect hospital- or country-specific priorities and are not to be used for comparisons at the international level.

Table 4

PATH core set of hospital performance indicators

When selecting indicators, their potential use for quality improvement was considered central. According to the multidimensional and integrated model of performance (Figure 2), the main message to convey to hospitals assessing their performance is that it is inappropriate to interpret indicators in isolation. The performance model developed within this project is a conceptualization of hospital functioning, which is itself a diverse and complex phenomenon, not easy to capture. An isolated view of hospital performance is therefore not only inappropriate but also dangerous, which justifies the use of a balanced dashboard to assess hospital performance.

The PATH balanced dashboard

The purpose of the balanced dashboard is to give meaning to results and to guide decision-making and quality improvement. The reporting scheme relates results to external references as well as to internal comparisons over time and gives guidance on interpretation.

The balanced dashboard is organized in embedded levels. The detailed specifications of the dashboard are to be defined during the pilot implementation of the project, with constant feedback from the field to ensure that the tool is genuinely valuable and usable by hospitals. The design of reports must follow the interests and authority of the users and the structure of accountability and authority within the institution.
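The idea of embedded levels can be sketched as a drill-down structure, from a hospital-wide view to dimensions to individual indicators. All names, values, and reference points below are invented for illustration; the real specifications were left to the pilot phase.

```python
# Illustrative drill-down structure for a dashboard with embedded levels:
# hospital -> dimension -> indicator. Values and references are invented.

dashboard = {
    "clinical_effectiveness": {
        "readmission_rate": {"value": 4.2, "trend": "stable", "reference": 5.0},
    },
    "staff_orientation": {
        "absenteeism_rate": {"value": 6.1, "trend": "rising", "reference": 4.5},
    },
}

def flag_indicators(board):
    """List (dimension, indicator) pairs whose value is worse than the
    external reference point (assuming lower is better for these examples).
    These are flags for closer scrutiny, not verdicts on performance."""
    flags = []
    for dimension, indicators in board.items():
        for name, cell in indicators.items():
            if cell["value"] > cell["reference"]:
                flags.append((dimension, name))
    return flags
```

In this invented example, only the absenteeism rate exceeds its reference point and would be flagged for interpretation, consistent with the paper's view of indicators as flags rather than measures of performance.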


The PATH framework strongly emphasizes the internal use of indicators because ‘neither the dynamics of selection nor the dynamics of improvement (through quality measurement) work reliably today. . . the barriers are not just in the lack of uniform, simple and reliable measurements, they also include a lack of capacity among the organizations and individuals acting on both pathways’ [15].

PATH is a flexible and comprehensive framework, which should be relevant in different national contexts even if hospital performance is acknowledged to be a complex and multidimensional phenomenon. It essentially contains two sets of evidence-based indicators for use in European hospitals and suggests ways for its strategic use in hospital performance assessment [16].

The value of PATH lies not only in helping individual hospitals improve the way they assess their own performance; it also pursues the goal of building the dynamics of national and international comparison through benchmarking networks, at the national level in the short term and at the international level in the medium term.

The international dimension of the project is paramount and was perceived as a strong incentive for hospitals to participate in the pilot implementation. The international component is not limited to international comparisons of indicator results, which remain constrained by varying contexts even though there is growing evidence that generic hospital performance indicator rates might be comparable worldwide [17]. By joining PATH, hospitals become part of an international network sharing better practices for quality improvement. International networking will be fostered through tools such as newsletters, a list server, and a web page.

In the next phase of the project, PATH will be pilot-tested from March 2004 to November 2005 in eight countries: six in Europe (Belgium, Denmark, France, Lithuania, Poland, and Slovakia) and two beyond (Canada and South Africa). The purpose of the pilot implementation is to evaluate the usefulness of the tool as a whole (especially its assessment strategies), the burden of data collection for participating hospitals, and the tool's potential for adoption across Europe.

Ultimately, PATH should support hospitals in moving from measurement to interpretation to action for quality improvement. Within this project, indicators are therefore considered flags requiring cautious interpretation in the light of local circumstances [6] (‘performance indicators do not measure performance, people do’ [18]) and are intended primarily to give direction for action to hospital managers and hospital professionals at large. Furthermore, PATH will also contribute to the improvement of information systems and data quality, and will reinforce the credibility of performance measurement systems, hospitals’ confidence in the data they need to assess the way they perform, and their accountability [19,20].


We thank the following experts for their contributions to this article: Adalsteinn D. Brown (Canada—University of Toronto), Manuela Brusoni (Italy—University of Bocconi), Mohammed Hoosen Cassimjee (South Africa—Pietermaritzburg Metropolitan Hospital Complex and Midlands Region), Brian T. Collopy (Australia—CQM Consultants), Thomas Custers (The Netherlands—University of Amsterdam), Mila Garcia-Barbero (World Health Organization Regional Office for Europe), Pilar Gavilán (Spain—Catalan Institute for Health), Oliver Gröne (World Health Organization Regional Office for Europe), Svend Juul Jorgensen (Denmark—National Board of Health), Vytenis Kalibatas (Lithuania—National Blood Centre), Isuf Kalo (World Health Organization Regional Office for Europe), Johann Kjaergaard (Denmark—Copenhagen Hospital Corporation), Itziar Larizgoitia (World Health Organization), Pierre Lombrail (France—Nantes University Hospital), Ehadu Mersini (Albania—Ministry of Health), Etienne Minvielle (France—INSERM), Sergio Pracht (Spain—Europatient Foundation), Anne Rooney (United States of America—Joint Commission International), Laura Sampietro-Colom (Spain—Catalan Institute for Health), Irakli Sasania (Georgia—M. Iashvili Children’s Central Hospital), Henner Schellschmidt (Germany—Wissenschaftliches Institut der AOK), Charles Shaw (United Kingdom—Private Consultant), Rosa Suñol (Spain—Foundation Avedis Donabedian), and Carlos Triginer (World Health Organization Regional Office for Europe).

