
Repeated measurements of generic indicators: a Danish national program to benchmark and improve quality of care

Peter Qvist, Lisbeth Rasmussen, Birgitte Bonnevie, Thomas Gjørup
DOI: http://dx.doi.org/10.1093/intqhc/mzh028. Pages 141–148. First published online: 29 March 2004


Objective. To measure performance on the basis of generic (non-diagnosis-related) standards of care developed in a Danish national quality improvement program in departments of internal medicine, and to determine the power of repeated national audits to raise levels of performance.

Design. Multifaceted intervention: national audits in 2001 and 2002 based on the standards of the program, combined with direct contact with heads of departments and a national conference to discuss audit results.

Setting. Seventy-nine and 82 wards in 2001 and 2002, respectively, covering 71% of Danish hospitals receiving medical emergencies. The wards participated on a voluntary basis.

Participants. A total of 3950 emergency admissions were included in the first audit round and 4068 in the second. Patients were included regardless of diagnosis.

Main outcome measures. Correct initial diagnostic assessment, early interdisciplinary action plans, correct drug prescriptions, waiting times for examinations, documented patient information, readmissions, and content and processing time for discharge letters.

Results. For the 70 wards participating in both rounds, the general level of performance improved significantly between the two audits: the proportion of patients with correct initial diagnostic assessment increased from 75.9% to 79.4%, the proportion of patients with correct drug prescriptions increased from 83.8% to 85.9%, and the proportion of sufficiently informed patients increased from 32.4% to 36.2% (P < 0.05). The proportion of medical records containing action plans for selected clinical problems (nutritional and functional problems, fever, and treatment of pain) increased from 72.8% to 75.9% (P < 0.05). Length of stay in hospital was significantly related to a correct initial assessment and to waiting time for examinations. Wards with a common medication chart for physicians and nurses had significantly more correct drug prescriptions than wards that did not use a medication chart. Fifty-four (75%) of the participating departments indicated that the result of the first audit round had led to organizational changes in the department.

Conclusion. Professional self-regulation guided by a multidisciplinary audit tool developed in cooperation with professionals can improve quality of care. It is possible to conduct and repeat a national audit on a voluntary basis.

  • clinical audit
  • internal medicine
  • performance indicators
  • benchmarking

Emergency medical admissions account for a substantial proportion of hospital admissions [1]. Many patients suffer from chronic diseases and the clinical pathways for these patients are often complex. It is therefore vital for both quality of care and for optimal use of resources that the clinical processes and the organization function well during hospital stay, as well as in the coordination of care across treatment settings.

The Good Medical Department Program is a Danish national program aimed at reducing variation and improving quality of care in departments of internal medicine [2,3]. The project develops multidisciplinary standards based on everyday clinical processes of care. The standards encompass the organization of treatment across care settings and are intended to be relevant for all types of patients, regardless of diagnosis. Performance indicators that allow objective measurement and benchmarking are developed for selected standards. The standards are developed in close cooperation between the project Secretariat and health care professionals from the medical departments participating in the program.

The project was started in 2000 and is funded by the Danish counties and the Copenhagen Hospitals’ Corporation. A consultant and a chief nurse head the project Secretariat. The steering committee of the project consists of representatives from the health authorities, the counties, the Copenhagen Hospitals’ Corporation, the Danish Board of Health, the Health Ministry, and a number of professionals. Once a year the project coordinates a national audit of quality based on indicator measurement of selected standards. The participating departments report anonymous data to a central database that allows the departments to know whether they are performing satisfactorily according to the standards. The database enables the departments to compare their performance with similar departments and to follow their own development in quality over time. The departments participate on a voluntary basis, and improvement of quality is based on professional self-regulation.

In the present paper we report the results of the first two audits, performed in 2001 and 2002. The results of the first audit are compared with the results of the second audit to assess whether performance has improved over the last year.


Methods

Participating wards

All Danish departments of internal medicine with responsibility for emergency admissions were invited to participate in the program. Participating departments could enrol one or more of their wards. Departments from all 14 Danish counties and the Copenhagen Hospitals’ Corporation participated (Table 1).

Table 1

Participating wards in both the first (2001) and the second audit (2002)

Standards and indicators

Generic standards covering hospital and care pathways across treatment settings were drawn up by the project Secretariat and, whenever possible, these standards were evidence-based. All standards were confirmed by a multidisciplinary team of 20 experts and by the scientific societies for which the standards in question were relevant. Before the audit, registration of indicators was piloted in five volunteer departments and the data collection form was adjusted accordingly. Subsequently, an inter-observer variation study was performed, with collection of data for all indicators for 50 consecutive patients by internal as well as external reviewers. Observer agreement reached a mean value of 84.2% for the chosen indicators (range 64–98%). Questions regarding indicators with relatively low agreement were adjusted or addressed specifically in the written instructions to minimize observer variation. The overall assessment in terms of good versus poor performance did not differ between internal and external reviewers.

Examples of standards regarding the planning of care are shown in Table 2.

Table 2

Example of standards for care planning

Audit rounds

The data collection form was developed from selected standards with special focus on multidisciplinary planning, medication (prescription errors), in-hospital waiting times, documented patient information and readmission. Selection of the topics and corresponding specific indicators was done by the Secretariat, supported by dedicated professionals. Several considerations were taken into account in the process of choosing indicators, including scientific evidence and suspicion of quality problems or variations in performance. Only topics accepted by professionals as important and topics that should be documented in the records were included. Table 3 shows items covered by the two audit rounds in January 2001 and January 2002.

Table 3

Items covered by the two audit rounds

In each round a doctor and a nurse from each of the participating wards collected data from 50 consecutive patients; four wards collected data on slightly fewer (38–49) patients in the second round. Detailed explanatory guidelines were used to aid completion of the audit forms. Indicator data were recorded electronically and reported to the project database. Both nurses’ and doctors’ records, together with other relevant documentation, were used as the basis for data collection. Reviewing each case took an average of 10 minutes.

Five months after reporting the results from the first audit round, the heads of the participating departments were asked whether the result of the audit would lead to organizational or other specific changes in clinical practice.

Between the two audit rounds a national conference was held. The wards that had shown a high level of performance gave presentations on local organization and processes of care.

Data analysis and presentation

Statistical methods

SPSS for Windows was used for the statistical analysis. The Mann–Whitney U-test and the chi-square test were used to test for significant differences between ward-specific results and for differences between the first and the second study. Confidence intervals were estimated at the 95% level. Relations between selected variables were estimated using linear and logistic regression for numerical and binary variables, respectively.
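The chi-square comparison of two audit rounds can be sketched as follows. This is an illustrative sketch only, not the authors' SPSS analysis; the counts are hypothetical, chosen to mirror the reported rise in correct initial assessments (75.9% to 79.4% across roughly 3500 cases per round).

```python
# Pearson chi-square test on a 2x2 table: did the proportion of patients
# meeting a standard change between the two audit rounds?
# Counts below are hypothetical illustrations, not the study's raw data.

def chi_square_2x2(success1, n1, success2, n2):
    """Pearson chi-square statistic for comparing two proportions."""
    a, b = success1, n1 - success1        # round 1: met / not met
    c, d = success2, n2 - success2        # round 2: met / not met
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

chi2 = chi_square_2x2(2657, 3500, 2779, 3500)
# The critical value for 1 degree of freedom at the 5% level is 3.84,
# so a statistic above that indicates a significant change (P < 0.05).
print(round(chi2, 2), chi2 > 3.84)
```

With samples of this size, even a few percentage points of absolute change produce a statistic well above the 5% critical value, which is why the relatively small national improvements reported below reach significance.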

Presentation of level of performance after each audit round

Aggregated national average indicator values for all the participating wards, together with confidential reports for each ward, were presented on the project website (http://www.dgma.dk, accessed 2 December 2003) 2 months after completion of the audit. Participating departments gained access to their own results via a password. Data concerning prescription of medicine are used in this paper to illustrate how results were presented to the participating wards: we examined the medical record to see whether the first three prescriptions listed there were complete. The following items were assessed: unambiguous name of drug, route, total daily dose, and administration intervals. A prescription was considered complete if all four items were present. The wards indicated whether a joint medication chart for nurses and physicians was in use. The proportion of prescriptions fulfilling each of the above-mentioned standards and the proportion of complete prescriptions were calculated for each ward. A graph ranking the wards was produced for each medication indicator and for the proportion of complete prescriptions (Figure 1). Information on how to gain access to national and local results was distributed to both managers and staff members through newsletters. Similar graphs were prepared for the rest of the measurements.

Figure 1

The percentages of complete prescriptions by ward. Note: the single result highlighted provides an example of how a ward accessing the data by means of a password could identify its own result.
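The all-or-nothing completeness check described above can be sketched as follows. The field names and example prescriptions are hypothetical, not taken from the study's data set, and the confidence interval uses a simple normal approximation rather than whatever SPSS procedure the authors applied.

```python
import math

# A prescription counts as complete only if all four assessed items
# are documented. Field names here are illustrative assumptions.
ITEMS = ("drug_name", "route", "daily_dose", "intervals")

def complete(prescription):
    """True if all four items are present and non-empty."""
    return all(prescription.get(item) for item in ITEMS)

def proportion_with_ci(prescriptions, z=1.96):
    """Share of complete prescriptions with a normal-approximation 95% CI."""
    n = len(prescriptions)
    p = sum(complete(rx) for rx in prescriptions) / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

ward = [
    {"drug_name": "furosemide", "route": "oral",
     "daily_dose": "40 mg", "intervals": "x1"},
    {"drug_name": "digoxin", "route": "oral",
     "daily_dose": "", "intervals": "x1"},   # daily dose missing
]
p, lo, hi = proportion_with_ci(ward)
print(p)  # 0.5: only one of the two example prescriptions is complete
```

Ranking wards for a graph like Figure 1 then amounts to sorting the per-ward proportions in descending order.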

Change in level of performance between the first and the second audit round: national level

For each audit round, all cases were pooled. The difference in the proportion of cases meeting the requirements and the difference in means were calculated for each standard, and 95% confidence intervals (CIs) were estimated.
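The pooled round-to-round comparison described above can be sketched as a difference in proportions with a normal-approximation 95% CI. The counts are hypothetical, chosen to mirror the reported rise in complete prescriptions (83.8% to 85.9%, roughly three prescriptions per patient across 3500 patients per round).

```python
import math

# Difference between two pooled proportions with a 95% confidence
# interval (normal approximation). Counts are illustrative only.

def diff_proportions_ci(success1, n1, success2, n2, z=1.96):
    """Return (difference, CI lower bound, CI upper bound)."""
    p1, p2 = success1 / n1, success2 / n2
    diff = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

diff, lower, upper = diff_proportions_ci(8799, 10500, 9020, 10500)
# A confidence interval that excludes zero indicates a significant
# change between the two audit rounds at the 5% level.
print(f"diff={diff:.3f}, 95% CI [{lower:.3f}, {upper:.3f}]")
```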

Changes in national results are calculated on the basis of results from the 70 wards participating in both measurements.


Results

Participating wards

A total of 79 wards representing 62 departments of internal medicine from 47 hospitals participated in the first audit round (Table 1). The 47 hospitals accounted for 71% of all hospitals in Denmark receiving medical emergencies. Data on 3950 cases were submitted. Eighty-two wards, 70 of which had participated in the first round, participated in the second audit. Data on 4068 cases were reported. No difference in age, gender, or diagnosis was found between the cases included in the first and the second audit.

Items covered in only one of the audit rounds

As seen in Table 3, most indicators were included in both rounds for comparison purposes. Exceptions were in-hospital waiting time and indicators for communication across care settings; these items will be re-audited in the next round (2003).

In-hospital waiting times for selected examinations ordered during the first 24 hours of admission are listed in Table 4. The median waiting time and range are shown for each examination.

Table 4

Examinations prescribed during the first 24 hours of admission

There was a significant relationship between length of stay in hospital and waiting time for examinations in the hospital in question (P < 0.001).

The number of assessments made by specialists from other departments is shown in Table 5. For the most frequently requested consultation, assessment by a therapist, the median waiting time was 1 day (range 0–14 days).

Table 5

Evaluations by specialists from other departments

The documentation of predefined requirements for the content of discharge letters (such as examination results, medication list, and need for follow-up) was recorded in the second round. In total, 61.3% (95% CI 59.8–62.8%; range between wards 23.0–98.0%) of the discharge summaries were complete. The discharge summary was forwarded to the general practitioner an average of 3.4 days after the patient’s discharge from hospital (Figure 2). Ward-level average processing times ranged from 0 to 12.2 days.

Figure 2

Processing time for discharge letters by ward. Note: the single result highlighted provides an example of how a ward accessing the data by means of a password could identify its own result.

Changes in organization between the two audit rounds

Fifty-four of the 72 interviewed ward managers (75%) stated that organizational changes were made or planned in 2001 as a direct consequence of the results from the first measurement. Changes included the introduction of safer medication systems, the use of screening tools and checklists, and improvements in staff education, cooperation, and teamwork. For several wards, participation in the studies led to local quality improvement projects addressing one or more of the topics covered by the audits.

Change in level of performance between the first and the second audit round: national level

Changes in performance for the 70 wards participating in both rounds are shown in Table 6 for 10 selected indicators. On a national level, six out of 10 indicators had significantly higher values and only one had a lower value in the second compared with the first measurement. Examples with respect to medication and planning of care are given below.

Table 6

Difference in level of performance between audit 2001 and audit 2002: national level for all indicators used in both years


Medication

The proportion of complete prescriptions was 83.3% (95% CI 82.6–84.0%). The proportion varied among the wards from 30% (95% CI 22.5–37.5%) to 100% (Figure 1). Wards using a joint medication chart for nurses and physicians had significantly more complete prescriptions than wards that did not use such charts (P < 0.0001). In the second round, a small but significant rise (from 83.8% to 85.9%) in the national average for complete prescriptions was seen (Table 6).

Planning of clinical pathways

The initial clinical assessment of the patient was considered correct if it agreed with the final assessment made by the ward. Length of stay in hospital was significantly shorter for patients whose primary diagnostic proposals were correct (P < 0.05).

Action plans for selected clinical problems (Table 3) were generally more adequate for common symptoms like pain and fever than for functional and nutritional problems. Plans were considered incomplete if description of a problem was not followed by a decision on the action(s) to be taken. Plans with respect to action on described clinical problems were significantly more complete in the second round compared with the first round (from 72.8 to 75.9%).

Change in level of performance between the first and the second audit round: ward level

Evaluation of changes in performance for each of the 70 wards participating in both rounds was made for the above-mentioned 10 indicators used in both rounds. Two wards were stable, good performers, with at least half of the indicator values above the national average (and none below) in both rounds. Thirty-one wards (44%) were characterized by having at least one significant indicator improvement (and no corresponding deterioration on other items). For the remaining wards performance was either unchanged or had declined slightly.

In general there was good correlation between the statements from ward managers regarding organizational changes and the improvements documented between the first and the second study. Only rarely were significant improvements seen in departments without a specific focus on the topic in question.


Discussion

Interdepartmental medical audit enables clinicians to benchmark current processes and outcomes against best practice [4]. Data from national and hospital episode statistics are often used when departments need to know how well they are doing compared with others [5]. The Good Medical Department Program is the first national program based on audit of records that compares quality of care in departments of internal medicine. The audit tool was developed so that it could be used for all types of internal medicine patients. A large number of medical patients are chronically ill and suffer from several diseases. A complete evaluation of the quality of care pathways will therefore not be achieved if the evaluation is founded on specific diagnoses alone.

Several interventions aimed at improving quality have been studied, but no final answer has been found as to how the best results are obtained [6]; the general view, however, is that a multifaceted strategy is needed [6–8]. Obviously, we cannot know whether quality has improved unless it is measured to begin with. Feedback from audit in combination with regional education has improved standards of care in patients with stroke [8]. In the present study, feedback from audit was combined with interviews and the participation of health care providers in a workshop that included discussion of practice. Improvement was based on professional self-regulation, thus limiting the costs of the program. Changes were achieved without outside control.

It has been claimed that ideally an audit should be independent of those providing the care [9]. As part of the study, inter-rater reliability was examined by using an independent auditor. Reliability was high, as found in a previous study [10], and level of performance was not found to be higher when assessed by the wards themselves.

To ensure wide support, the standards and indicators were developed in cooperation with professionals and scientific societies. Even though the departments participated on a voluntary basis, >70% of Danish hospitals were involved in the program within the first 2 years. Apparently the managers of the departments recognized that improvement in the daily working processes was an important issue for clinical governance.

The data comparing changes in the national level of performance show improvements across most indicators, but considerable and serious variations remain. Because of the relatively large total number of patients included in the study (approximately 3500 patients per round in the 70 wards that participated twice), statistical significance could be reached with relatively small absolute differences between the two measurements. Furthermore, it was not possible to include a control group in these studies. Accordingly, it is difficult to prove a direct causal connection between our strategy and the documented improvements. Impressions from the interviews, alongside the fact that improvements were seen for several important topics, indicate that many departments made real progress as a result of participating in the program.

According to the manager interviews, the majority intended to improve, but some did not succeed in obtaining significant results within the first year. This emphasizes the importance of following up on results to identify reasons for suboptimal performance in order to choose the most effective intervention [11].

A vital aspect of The Good Medical Department Program is that outcome of treatment is linked to processes of care, and that important processes of care must be documented in the case records. Using clinical documentation as the basis for performance measurement and assessment of clinical quality implies accepting a strong link between documented care and the care actually provided. As mentioned earlier, we included only indicators that dedicated professionals agreed should be documented in doctors’ or nurses’ records. Organizational factors have been shown to be related to errors in medication prescription [12]; this was confirmed in our studies. The communication processes between hospitals and general practitioners, such as timeliness of receipt of discharge summaries, information about issues requiring follow-up, and treatment provided in hospital, are of great importance to the quality of the care given by general practitioners [13,14]. In this study, waiting time for examinations was related to length of hospital stay. Looking at specific diagnoses, other researchers have shown that early systematic discharge planning, clinical guidelines, and fast-track treatment have reduced length of stay in hospital and readmission rates, morbidity in patients with chest pain and stroke, and morbidity in patients undergoing surgical procedures [15–18].

This study included acutely admitted medical patients regardless of diagnosis. There is no reason to doubt that improving the planning of care pathways for these patients will lead to better use of resources, as has been shown for patients with well recognized, specific diagnoses.

This study provided evidence that even anonymous feedback on performance measures can stimulate change and lead to significant improvements. An important step to be taken is the advancement of electronic patient records, which automatically collect data for audit prospectively. The program is now available for other specialities, i.e. surgery, gynaecology, and obstetrics, and the program is expected to be part of the Danish model for quality improvement in the health care system that is to be launched in 2006.

