
Meeting the ambition of measuring the quality of hospitals' stroke care using routinely collected administrative data: a feasibility study

William L. Palmer, Alex Bottle, Charlie Davie, Charles A. Vincent, Paul Aylin
DOI: http://dx.doi.org/10.1093/intqhc/mzt033. Pages 429–436. First published online: 13 April 2013.

Abstract

Objective To examine the potential for using routinely collected administrative data to compare the quality and safety of stroke care at a hospital level, including evaluating any bias due to variations in coding practice.

Design A retrospective cohort study of English hospitals' performance against six process and outcome indicators covering the acute care pathway. We used logistic regression to adjust the outcome measures for case mix.

Setting Hospitals in England.

Participants Stroke patients (ICD-10 I60–I64) admitted to English National Health Service public acute hospitals between April 2009 and March 2010, accounting for 91 936 admissions.

Main Outcome Measure The quality and safety were measured using six indicators spanning the hospital care pathway, from timely access to brain scans to emergency readmissions following discharge after stroke.

Results There were 182 occurrences of hospitals performing statistically differently from the national average at the 99.8% significance level across the six indicators. Differences in coding practice appeared to only partially explain the variation.

Conclusions Hospital administrative data provide a practical and achievable method for evaluating aspects of stroke care across the acute pathway. However, without improvements in coding and further validation, it is unclear whether the cause of the variation is the quality of care or the result of different local care pathways and data coding accuracy.

  • quality indicators
  • measurement of quality
  • safety indicators
  • patient safety

Introduction

Measuring the quality of stroke care is a priority area for the National Health Service (NHS) in England. In July 2010, the Government cited stroke survival rates and emergency readmissions to hospital within 28 days of discharge for stroke as examples of its intentions for the NHS to measure performance ‘against results that really matter’ [1]. Financial incentives have also been introduced for hospitals to provide thrombolysis [2], early scans and care on a specialist stroke unit [3]. In 2010, a national report on stroke care highlighted both the enduring variations in hospital treatment and that some levels of care remain unacceptably low [4]. This phenomenon is not restricted to England [5] and, as a result, research has called for indicators to measure stroke care performance [6, 7].

To date, the most comprehensive and high-profile study to benchmark stroke care in English hospitals has been the National Stroke Sentinel Audit (NSSA). A further bespoke data collection was required after the introduction of mandatory measures regarding stroke care in the NHS operating framework for 2008–09 and 2010–11 [8]. Since 2000, the National Centre for Health Outcomes Development (NCHOD) has published a compendium of some 300 indicators, including 3 indicators based on administrative data specific to hospital stroke care (number of admissions, readmissions and mortality), with a further 6 measures relating to stroke prevention [9].

Administrative data have been suggested as offering the potential to evaluate the quality of hospital care [10]. In this study we investigate the feasibility of using Hospital Episode Statistics (HES), a database containing details of all admissions to NHS hospitals in England, to evaluate the quality of stroke care at a hospital level. The intention of the study was not to explicitly compare the use of administrative data to other sources, such as clinical audits, but instead to explore whether this readily available resource could provide an alternative, additional method for measuring the quality of hospital stroke care, robust to known issues in coding practice. Specifically, the objectives of the study were to evaluate the following: (1) the hospital-level variation in the measures, in terms of statistical outliers, (2) the influence of bias introduced by commonly cited variations in the coding of the underlying data and (3) convergent validity, in terms of the degree to which theoretically similar measures correlate with one another.

Methods

Developing indicators

The first stage of the study was to identify existing measures of the quality of stroke care and apply these to HES data, followed by an evaluation of the robustness of these indicators. A literature review was conducted to identify indicators of stroke care that could be applied to hospital administrative data, with a subset of six chosen for application here. The indicators were chosen to cover the hospital stroke medical care pathway and include process measures (e.g. scanning rates), intermediate outcome measures amenable to specific aspects of care (e.g. pneumonia rates, where higher rates suggest that fewer patients receive swallow assessments) and final outcome measures (e.g. emergency readmission rates) (Fig. 1).

Figure 1

Positioning of the stroke indicators across the care pathway.

The selected indicators were then refined following a review process including a clinical coding specialist, administrative data experts and clinicians as well as being published online as part of an open consultation [11]. Details of the indicators are included in Table 1.

Table 1

Details of stroke indicators used in analysis

| Indicator | Description | Rationale | Denominator exclusion criteria (admissions not counted in denominator) | Numerator inclusion criteria (admissions meeting the denominator criteria with the following) |
|---|---|---|---|---|
| Same-day and by-next-day scanning | Proportion of stroke patients receiving a brain scan on the same day as (or by the day after) admission | Brain scanning should be performed immediately (when indicated) or as soon as possible [15]. Hospitals receive incentive payment for immediate scanning | Patients who die on the day of admission (same-day scan) or within one day (by-next-day scan) | OPCS codes for CT or MRI brain scan (U05.1/2 or U21.1/2 with Z01.9) |
| Thrombolysis | Proportion of stroke patients receiving thrombolysis treatment | Measure of provision of thrombolysis. Hospitals receive payment for providing thrombolysis | Patients outside thrombolysis licence (age range of 18–80) | OPCS codes for thrombolysis (fibrinolytic drugs, X83.3) |
| Aspiration pneumonia | Proportion of stroke patients contracting aspiration pneumonia | Swallow assessments can reduce the likelihood of patients having pneumonia. A study suggested some 35% of deaths that occur after acute stroke are caused by pneumonia [25] | None | ICD-10 codes for aspiration pneumonia, J69.0 (due to food and vomit) and J69.8 (other solids and liquids), in primary or secondary diagnosis fields. Cases excluded if these codes appear in episodes that end before the first stroke episode |
| 30-day in-hospital mortality | Proportion of stroke patients dying in hospital within 30 days of admission | Some mortality following stroke is potentially avoidable | None | Flag for death at discharge and length of stay after stroke <30 days |
| Discharge to usual place of residence within 56 days | Proportion of stroke patients discharged to their usual place of residence within 56 days of admission | Proxy measure for successful outcome of rehabilitation and availability of on-going care. For instance, patients receiving better physical therapy have been found to have increased likelihood of being discharged home [26] | Admissions that end in death | Length of stay after stroke <56 days, and admission source and discharge destination suggesting return to usual place of residence |
| 30-day emergency readmissions (all cause) | Proportion of stroke patients readmitted within 30 days of discharge from hospital | Some readmissions are potentially avoidable [27] with, for instance, infections being a leading cause for readmission of stroke patients [28]. Hospitals can be financially penalized for readmissions | Admissions that end in death | Emergency admissions within 0–29 days of discharge |

Application of indicators

The HES database includes some 14 million records every year, with each record covering the continuous period during which the patient is under the care of one consultant [Finished Consultant Episode (FCE)]. Diagnoses are recorded using the International Statistical Classification of Diseases and Related Health Problems, tenth version (ICD-10), and procedures are coded using the Office of Population Censuses and Survey's Classification of Surgical Operations and Procedures, fourth version (OPCS-4) [12, 13].

The next stage was to extract details of stroke admissions (ICD-10: I60-2 subarachnoid, intracerebral and other nontraumatic intracranial haemorrhages; I63 cerebral infarction and I64 unspecified stroke) from 1 April 2009 to 31 March 2010. The assumptions used in the algorithm for identifying strokes in this study—based on previous studies, consultation with clinical coders and review of coding guidance—are included in the Supplementary data [12, 14]. For instance, if a patient receives more than one episode of care (FCE) during treatment for their stroke, including at different hospitals, these episodes are grouped together into a single admission record (‘superspell’). Where a patient is transferred between hospitals, the corresponding performance against the measure is scored against only the first hospital.
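The episode-grouping rule described above can be sketched as follows. This is an illustrative reconstruction, not the study's actual extraction code, and the field names (`patient_id`, `admit_date`, `discharge_date`, `hospital`) stand in for the real HES columns:

```python
from itertools import groupby

def build_superspells(episodes):
    """Group consecutive FCEs for one patient into admission-level
    'superspells', crossing hospital transfers. Field names are
    illustrative, not the actual HES column names."""
    superspells = []
    # sort each patient's episodes chronologically
    episodes = sorted(episodes, key=lambda e: (e["patient_id"], e["admit_date"]))
    for _, eps in groupby(episodes, key=lambda e: e["patient_id"]):
        eps = list(eps)
        current = [eps[0]]
        for ep in eps[1:]:
            # treat an episode starting on or before the previous discharge
            # date as a continuation (including inter-hospital transfers)
            if ep["admit_date"] <= current[-1]["discharge_date"]:
                current.append(ep)
            else:
                superspells.append(current)
                current = [ep]
        superspells.append(current)
    # performance is attributed to the first hospital in each superspell
    return [{"hospital": s[0]["hospital"], "episodes": s} for s in superspells]
```

Under this rule, a patient transferred from hospital A to hospital B mid-treatment contributes a single admission scored against hospital A only.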

Once an extract had been obtained using these criteria, we calculated crude (unadjusted) and case-mix-adjusted rates for every acute NHS hospital across each of the measures. The formulae for these rates are described in the Supplementary data. For the adjusted rates, a logistic regression was used to calculate an expected number of numerator events based on the case mix for each hospital to account for age, sex, socio-economic deprivation quintile, number of previous admissions, co-morbidities (Charlson index), month of discharge, ethnic group, source of admission (including whether admitted as an emergency or elective patient) and stroke type (4-digit ICD-10 diagnosis code). Process measures (scanning and thrombolysis) are reported here as unadjusted (crude) rates. We plotted the crude and adjusted rates using funnel plots with 95 and 99.8% control limits and identified outliers.
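The case-mix adjustment amounts to indirect standardization: a patient-level model predicts each patient's probability of the outcome, and per-hospital sums of those probabilities give the expected counts. A minimal sketch, using a single categorical stratum as a stand-in for the paper's full logistic regression covariates (age, sex, deprivation and so on):

```python
import numpy as np

def adjusted_rate(strata, hospital, outcome):
    """Indirectly standardized rate per hospital: expected events come
    from national outcome rates within each case-mix stratum (a simple
    stand-in for the paper's patient-level logistic regression).
    Returns national_rate * observed / expected for each hospital."""
    strata = np.asarray(strata)
    hospital = np.asarray(hospital)
    outcome = np.asarray(outcome, dtype=float)
    national_rate = outcome.mean()
    # national event rate within each case-mix stratum
    stratum_rate = {s: outcome[strata == s].mean() for s in np.unique(strata)}
    expected_per_patient = np.array([stratum_rate[s] for s in strata])
    rates = {}
    for h in np.unique(hospital):
        mask = hospital == h
        observed = outcome[mask].sum()
        expected = expected_per_patient[mask].sum()
        rates[h] = national_rate * observed / expected
    return rates
```

A hospital admitting sicker patients accrues a larger expected count, so its adjusted rate is pulled back towards the national average rather than penalized for its case mix.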

Coding practice

There is a risk that hospitals' performance for these indicators might be largely affected by variation in the way hospitals code their data rather than due to differences in the quality and safety. As such we investigated, at a hospital level, two proxies for the consistency of coding practice and evaluated the relationship between any coding variation and hospital performance. We hypothesized that:

  1. ‘coding depth’ would bias performance on the aspiration pneumonia measure, with the coding practice in some hospitals increasing the likelihood that they record secondary diagnoses and, therefore, identify complications and comorbidities and

  2. use of the ICD-10 diagnosis code I64 (unspecified stroke) could bias performance on the mortality measure, since different use of this code may affect the risk adjustment for the outcome measures. We also predicted a potential relationship between scanning rates and the use of this unspecified diagnosis code, since conducting a brain scan is the principal way to determine stroke type and, therefore, whether any of the specific stroke codes (i.e. other than I64) can be used.

To measure the consistency of ‘coding depth’, we calculated, by hospital, the average number of distinct diagnosis codes per admission. For use of ICD-10 code I64, we calculated, by hospital, the proportion of strokes recorded using this diagnosis code. We calculated correlation coefficients and significance between these indicators of coding practice and relevant quality measures. We re-ran the regression analysis for where the coding issue had a substantial impact, instead fitting generalized linear models with a hospital-level variable to account for variations in coding practice, and again plotted on funnel charts to investigate whether the same hospitals would be identified as outliers.
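The two coding-practice proxies are straightforward to compute from admission-level data; a sketch under assumed data structures (the input format is illustrative):

```python
import numpy as np

def coding_depth(diagnoses_by_admission):
    """Mean number of distinct diagnosis codes per admission: the
    hospital-level proxy for coding depth used in the paper."""
    return float(np.mean([len(set(codes)) for codes in diagnoses_by_admission]))

def pearson_r(x, y):
    """Pearson correlation coefficient between two hospital-level series,
    e.g. coding depth vs. aspiration pneumonia rate."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])
```

The I64 proxy is analogous: the per-hospital share of stroke admissions whose diagnosis code is I64 rather than a specific stroke code.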

Inter-measure correlations

We also compared hospitals' performance across the different indicators to evaluate our hypothesis that certain indicators would be correlated with, for example, good management of a stroke unit likely to affect all the quality and safety indicators to some extent. Specifically, we predicted two clinical explanations for correlations:

  1. two indicators measure different events on a defined clinical pathway, e.g. scanning and thrombolysis rates or

  2. an outcome indicator reflects the results of a process indicator, e.g. mortality (outcome) and scanning (process).

To test this hypothesis, we investigated associations between hospitals' performance by calculating the coefficient and statistical significance of the correlation.

Analysis

All regression analyses were conducted using SAS version 9.2 using either the PROC LOGISTIC or PROC GLIMMIX procedures. Statistical outliers were identified using funnel plots based on templates provided by the network of public health observatories (www.apho.org.uk). For correlations, the Pearson's correlation coefficient (r) and statistical significance (P) were calculated using Microsoft Excel 2010.
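The funnel plot control limits for a proportion can be sketched with the normal approximation to the binomial; the APHO templates may use exact binomial limits, so this is an illustration of the idea rather than their exact formula:

```python
import math

def funnel_limits(p, n, z):
    """Approximate funnel plot control limits around national proportion p
    for a hospital with n eligible patients, using the normal approximation
    to the binomial. z = 1.96 gives the 95% limits; z = 3.09 approximates
    the 99.8% limits used in the paper."""
    half_width = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), min(1.0, p + half_width)
```

Because the half-width shrinks with 1/sqrt(n), small hospitals must deviate much further from the national rate before being flagged as outliers, which is the point of the funnel shape.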

Results

Across 147 acute English NHS hospitals, we identified 91 936 stroke admissions in the period April 2009–March 2010. Of these, 2522 (2.7%) died on the same day as admission, 15 846 (17.2%) died within 30 days of admission and 19 721 (21.5%) died before discharge. Of those patients meeting the inclusion criteria (see Table 2), 69.7% were scanned within 1 day of admission, 2.6% received thrombolysis, 5.3% had aspiration pneumonia, 72.8% were discharged to their normal place of residence and 11.0% were readmitted as an emergency within 30 days of discharge. Each stroke identified resulted in, on average, 2.3 episodes of care (FCEs), and fewer than one in seven (13.7%) of the patients received care for their stroke in more than one hospital.

Table 2

Number of hospitals identified as statistical outliers

| Indicator | National rate (%) (147 hospitals) | Range (%) (min–max) | Higher than average (P < 0.001) | Higher than average (P < 0.025) | Lower than average (P < 0.001) | Lower than average (P < 0.025) |
|---|---|---|---|---|---|---|
| Measures of high quality care (higher values are good) | | | | | | |
| Same-day scan | 47.1 | 20.4–79.3 | 38 | 47 | 38 | 50 |
| By-next-day scan^a | 69.7 | 41.5–86.9 | 13 | 32 | 13 | 28 |
| Combined scan^a,b | n/a | n/a | 9 | 23 | 10 | 24 |
| Thrombolysis | 2.6 | 0–16.8 | 13 | 24 | 63 | 75 |
| Discharge to usual place of residence within 56 days | 72.8 | 54.9–85.4 | 0 | 7 | 2 | 9 |
| Measures of poor quality care (lower values are good) | | | | | | |
| Aspiration pneumonia | 5.3 | 1.6–12.5 | 11 | 21 | 14 | 27 |
| 30-day in-hospital mortality | 17.2 | 10.1–23.1 | 0 | 8 | 3 | 10 |
| 30-day emergency readmissions (all cause) | 11.0 | 6.1–17.3 | 0 | 4 | 0 | 9 |
| Total^a | | | 62 | 111 | 120 | 180 |
  • ^a Total excludes by-next-day scan and combined scan.

  • ^b Combined scan was counted as P < 0.001 if both scan indicators met this level and as P < 0.025 if both indicators met at least P < 0.025.

Variations by hospital

Displaying the hospital-level data on funnel plots highlighted the variation in performance (Fig. 2). With the exception of the measure for emergency readmissions, all the indicators identified at least one hospital as having performance outside the 99.8% control limits (Table 2). If variations in performance were due to chance alone, we would expect approximately four hospitals outside the control limits at the 95% level and none at the 99.8% level (represented in Table 2 as 4 in each of the P < 0.025 columns and 0 in the P < 0.001 columns).
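The expected chance counts follow directly from the tail probabilities; a rough check, assuming independent two-sided tests (an illustration, not the paper's code):

```python
# Expected number of hospitals flagged by chance alone, per side,
# across 147 independent hospitals.
hospitals = 147
per_side_95 = hospitals * 0.025   # 95% limits: ~3.7 high and ~3.7 low
per_side_998 = hospitals * 0.001  # 99.8% limits: ~0.15 per side, i.e. ~none
```

This is why the 182 hospitals flagged at the 99.8% level across six indicators cannot plausibly be attributed to random variation alone.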

Figure 2

Funnel plots of hospital-level indicators for 2009–10. Each dot represents a hospital. The horizontal line refers to the national average; the short-dashed line to the P < 0.025 significance level and the long-dashed line to the P < 0.001 significance level.

Coding practice

At a hospital level, the average number of distinct diagnosis codes used per admission ranged from 5.0 to 10.7. There was a statistically significant but weak correlation (r = 0.26, P = 0.002) between this measure of coding depth and performance against the aspiration pneumonia measure. Of the 25 hospitals identified at the 99.8% level in the original regression, 20 (80.0%) were again flagged as outliers at this significance level when coding practice was included in the regression, and no hospitals were newly flagged as performing statistically differently in the opposite direction.

Secondly, across hospitals the proportion of strokes diagnosed as ICD-10 code I64 (unspecified stroke) varied from 0.2 to 42.6%; however, there was negligible correlation (r = −0.13, P = 0.12) between use of this code and performance on the outcome measure of in-hospital mortality within 30 days. There was a statistically significant but weak correlation (r = −0.17, P = 0.04) between the proportion of patients without a specific stroke diagnosis and the hospital's 1-day scanning rates; this association was expected, as a scan is required to determine whether a stroke is ischaemic or haemorrhagic.

Inter-measure correlations

Across the indicators, there were six pairs of indicators that had a statistically significant correlation at the 95% level, of which two were significant at the 99.8% level (Table 3).

Table 3

Coefficient of correlation between pairs of indicators

| | By-next-day scan | Thrombolysis | Aspiration pneumonia | 30-day in-hospital mortality | Discharge to usual place of residence within 56 days | 30-day emergency readmissions (all cause) |
|---|---|---|---|---|---|---|
| Same-day scan | 0.77** | 0.17* | 0.20* | −0.22* | −0.06 | 0.02 |
| By-next-day scan | | 0.16 | 0.06 | −0.18* | 0.00 | −0.09 |
| Thrombolysis | | | −0.10 | −0.06 | 0.12 | −0.12 |
| Aspiration pneumonia | | | | −0.06 | −0.31** | 0.08 |
| 30-day in-hospital mortality | | | | | −0.15 | −0.05 |
| Discharge to usual place of residence within 56 days | | | | | | 0.00 |
  • Correlation significant at the 95% level marked with ‘*’; those at 99.8% level with ‘**’.

Discussion

The results show the potential for using hospital administrative data to meet the Government's intention to measure stroke care and, moreover, to highlight potentially significant variations in the quality and safety across the care pathway. Six measures of quality and safety, covering the acute stroke care pathway, were applied to English hospital administrative data identifying 91 936 strokes in 1 year. Five of the six indicators identified hospitals with statistically outlying performance at the 99.8% level.

For most of the measures, there is no clinical consensus or guideline on what actual levels are acceptable. The exception is access to a scan, for which extant guidelines recommend that all stroke patients should receive brain imaging ‘within a maximum of 24 h after onset of symptoms’; against this standard, the observed performance of 69.7% of patients receiving a scan within 1 day of admission is unacceptable [15].

By using funnel plots and this control limit, we were able to account for random variation; if the deviation were entirely due to random variation, one would expect to identify only one outlier for every three to four measures (given that there are 147 hospitals in the sample). Beyond chance, differences in performance may be due to case mix, how the data were collected or the quality of care [16]. Given that the ambition was to compare hospitals on the basis of the last of these factors, the other two needed to be accounted for.

Previous work has highlighted the importance of adjusting for case mix when comparing performance in stroke care [17]. We accounted for case mix using the patient-level logistic regression to calculate the expected number of events for the outcome measures. However, at a patient level, some significant case-mix factors for stroke, such as severity of stroke and pre-stroke function [18], are not directly recorded within the data, and therefore some of the variation may still be caused by differences in case mix, although any bias will be diminished by the large number of stroke patients admitted at each hospital, ranging from 171 to 1532. One specific issue relating to case mix originates from stroke care being increasingly delivered in regional networks, whereby certain hospitals are responsible for the urgent care of patients, while other hospitals may take responsibility for rehabilitation. In these cases, the ambulance service has bypass protocols so that patients eligible for urgent treatment are taken directly to the designated hospital, irrespective of whether there is a nearer hospital, thereby introducing case-mix bias.

The variation in performance due to how data were collected is harder to disaggregate. Some of the possible variations in coding practice would mimic recognized alternative hospital pathways, and where procedure codes, such as that for thrombolysis, are not recorded, it will appear that the treatment has not been provided. However, we showed that two coding issues that were central to the assumptions for extracting the data explained only a small proportion of the differences in performance against the measures. There are few studies specifically investigating the accuracy of the coding of stroke care in healthcare systems using the ICD-10 framework; however, one article on coding in England found that coding of stroke diagnoses was excellent, and elsewhere administrative data have been recommended for use in tracking progress and identifying problems for further review [19, 20]. Previous studies have suggested that coding is improving and, specifically in the instance of stroke care, the fact that some of the codes are new (for example, the scanning procedure codes were only introduced in 2006) means that initial under-use (and so under-reporting) of these codes can be expected, with recording improving over time [21].

Our hypothesis that similar measures of quality would be correlated could explain the significant correlations that we identified between: same-day scanning and next-day scanning (positive correlation); same-day scanning and thrombolysis (positive); scanning rates and mortality (negative); and pneumonia and discharge to usual place of residence within 56 days (negative). The one unexpected, statistically significant result was the positive correlation between same-day scanning (an indicator of good care) and aspiration pneumonia rates (an indicator of poor care). A plausible explanation is that a hospital with more comprehensive coding practices is more likely to record both the scanning procedure code and pneumonia diagnosis code compared with a hospital with less rigorous coding.

In the NSSA, there is already a comprehensive tool for measuring hospital performance in stroke care. However, the underlying data are self-reported and based on a small sample (around 60 patients per hospital), and the NSSA's 2-year reporting cycle, coupled with the time lag before publication, prohibits its use as a real-time monitoring tool. A recent positive development has been the introduction of the Stroke Improvement National Audit Programme, which collects real-time data on stroke patients, although this covers only the first 3 days of care and does not include all hospitals. HES has the advantages of being longitudinal and timely, covering all hospital admissions and being relatively cheap, costing £1 per record to collect compared with around £10–£60 per record for clinical registers [22]. Whilst NCHOD has made use of some of the potential advantages of administrative data in assessing a limited number of aspects of stroke care, its indicators are only updated annually and, at the time of writing, its hospital mortality from stroke indicator was 2 years out of date.

The analysis of coding practice highlighted variations in the data-recording practice of hospitals, even where guidance exists. For example, the current ICD-10 Clinical Coding Instruction Manual ‘directs the coder that on emergency admissions for strokes it is of paramount importance that the coder assigns the code for stroke in the primary position’ [12]. However, variations in the number of stroke codes recorded in secondary diagnosis fields, above what would be expected from different prevalence of stroke as a co-morbidity, suggest that differing proportions of stroke are recorded in secondary diagnosis fields from hospital to hospital. Similarly, an English study evaluating the accuracy of the coding of stroke diagnoses found that the cause of errors was predictable, with confusion in the coding of different types of stroke. With limited evidence of consistency across hospitals in the likelihood of subarachnoid haemorrhages (ICD-10 I60) being recorded as an unspecified stroke (I64), we included these within the extract, even though such strokes often require different care pathways. This implies that further guidance and training may be needed to ensure consistency in coding [19]. One recent suggestion to facilitate an improvement is to develop better relationships between coders and clinicians [23].

We explicitly outline our assumptions for identifying strokes in HES data so that they can form the basis of a debate within the medical and coding practitioner community about how to further develop these indicators and the data set itself. Likewise there should be a review of whether some of the coding rules, such as the guidance to not record procedures undertaken before the decision to admit has been taken, should be amended. In particular, this current rule could introduce some bias where hospitals have differing procedures for admitting patients and might also result in an underestimate in, for example, scanning and thrombolysis rates.

Whilst previous studies have used hospital administrative data to measure the performance of aspects of stroke care, none has brought together the multiple facets of the care pathway; most have instead focused on one area, such as mortality, and on national trends rather than hospital comparisons. A combination of process and outcome measures was used in this study, thereby benefiting from the advantages of process measures (which tend to be more sensitive to differences in the quality of care and offer a clear action for improvement) and of outcome measures (greater intrinsic interest, high face validity and the ability to reflect all aspects of care, including those that are otherwise difficult to measure, such as technical expertise and operator skill) [16, 24].

This exploratory study shows that HES provides the facility to record some key process and outcome measures across care pathways in a cheap and timely manner. These results could be linked to structural measures, such as existence of specialist stroke units or a 24-h service, to investigate their effectiveness. However, there are important limitations to the data, such as lack of coding consistency and information on stroke severity. Therefore, the utilization of such measures—whether, for example, for a hospital's internal benchmarking or by regulators to identify potential quality issues—must be in proportion to the confidence over the validity of the individual indicators.

Funding

The work was supported by the National Audit Office and National Institute for Health Research. The Dr Foster Unit at Imperial College London is funded by a grant from Dr Foster Intelligence (an independent health service research organization). The Dr Foster Unit at Imperial is affiliated with the Imperial Centre for Patient Safety and Service Quality at Imperial College Healthcare NHS Trust, which is funded by the National Institute of Health Research. The Department of Primary Care & Public Health at Imperial College is grateful for support from the National Institute for Health Research Biomedical Research Centre Scheme, and the National Institute for Health Research Collaboration for Leadership in Applied Health Research & Care scheme. The funding organizations had no role in the design and conduct of the study; collection, management, analysis and interpretation of the data; and preparation, review or approval of the manuscript. We have approval under Section 251 (formerly Section 60) granted by the National Information Governance Board for Health and Social Care (NIGB, formerly the Patient Information Advisory Group). We also have approval for using these data for research from the South East Research Ethics Committee.

Acknowledgements

Sue Eve-Jones, Director, Professional Association of Clinical Coders, UK. Karen Taylor OBE, Research Director, Deloitte UK and former Director of Health Value for Money Audit, National Audit Office.
