
Comparing processes of stroke care in high- and low-mortality hospitals in the West Midlands, UK

Mohammed A. Mohammed, Jonathan Mant, Louise Bentham, James Raftery
DOI: http://dx.doi.org/10.1093/intqhc/mzh088, pp. 31–36. First published online: 24 January 2005


Objective. There are wide variations in hospital-specific mortality for stroke. The aim of this study was to investigate whether there were differences in quality of care when a group of hospitals with high standardized mortality ratios (SMRs) in nationally published league tables were compared with a group with low SMRs.

Design. Retrospective case note review of a random sample of patients from hospitals with high and low mortality according to published league tables.

Setting. Eight hospitals in the West Midlands, UK.

Participants. 702 patients admitted to hospital with acute stroke during the year 2000–2001.

Main outcome measures. Process measures derived from the Intercollegiate Stroke Audit Package.

Results. Crude 30 day mortality was 25% (99/402) in ‘top’ ranking hospitals and 38% (113/300) in ‘bottom’ ranking hospitals (P < 0.001). Bottom hospitals performed significantly (P < 0.001) less well on four out of seven indicators of process of care relating to the patients’ first 24 hours in hospital: assessment of eye movements, assessment of visual fields, screening for swallowing disorders, and sensory testing. However, analysis at the individual hospital level showed that this was largely due to poor performance in one hospital with high mortality. If this outlier was omitted, there was little relationship between process of care and SMR. No significant differences were found in care provided after 24 hours. Nevertheless, even in ‘top’ ranking hospitals only 47% of stroke patients had at least 50% of their hospital stay in a stroke/rehabilitation unit and only 40% were on aspirin within 48 hours.

Conclusions. Our results show that there is scope for improving the quality of stroke care irrespective of where a hospital ranks in terms of mortality. The lack of association between SMR and quality of care as assessed by process measures casts some doubt over the value of ranking hospitals in terms of stroke SMR.

  • mortality league tables
  • outcome
  • process of care
  • quality of care
  • stroke


In England, publication of a stroke mortality league table by an independent commercial company (Dr Foster Ltd) generated considerable interest in the national press [1,2] because it showed wide variation between hospitals [3]. Dr Foster’s analysis is based on hospital standardized mortality ratios (SMRs), comparing observed with expected in-hospital mortality, adjusting for age, gender, and length of stay, with a supporting regression analysis taking deprivation into account for the years 1995–2001. However, Dr Foster’s results are difficult to interpret, since they are based on routinely collected Hospital Episode Statistics (HES), which are prone to error [4] and can only allow very basic case mix adjustment. It is not possible to say whether the hospitals with lower mortality are ‘better’, which was implied by the rankings, or whether such hospitals simply treat patients with less severe illness. Consequently, there is controversy over the extent to which publication of hospital-specific mortality rates such as these informs or misleads. There are four major categories of explanation that might account for the observed variations: differences in the type of patient being treated (i.e. further unmeasured differences in case mix); differences in how the data were collected; chance; or differences in the quality of care provided [5]. It has been argued that it is unlikely that variations on this scale could be attributed to differences in the care provided, since they far exceed the likely effects of proven interventions, such as provision of stroke unit care, and therefore that the likeliest explanation is residual confounding due to incomplete adjustment for case mix [5]. An alternative view is that many aspects of stroke care which have not been subjected to randomized controlled trials, but yet constitute ‘common sense’, may account for some of this mortality difference [6].
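The SMR construction described above can be sketched in a few lines. This is an illustrative calculation with made-up counts, not Dr Foster’s actual model, which also involves the case mix adjustment and supporting regression described in the text.

```python
# Illustrative sketch only: an SMR compares observed deaths with the
# number expected under a reference model, scaled so that 100 means
# "exactly as expected". The counts below are invented; Dr Foster's
# model additionally adjusts for age, gender, length of stay and
# deprivation.
def smr(observed_deaths: int, expected_deaths: float) -> float:
    """Standardized mortality ratio: 100 * observed / expected."""
    return 100.0 * observed_deaths / expected_deaths

print(round(smr(50, 40)))  # 125: more deaths than expected
print(round(smr(45, 50)))  # 90: fewer deaths than expected
```

An SMR above 100 therefore flags a hospital with more in-hospital deaths than its case mix would predict, and one below 100 flags fewer.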
If the variations do reflect differences in how stroke care is provided in hospitals, this would have important implications both for the provision of stroke services, and how to monitor them. The aim of this study was to explore the extent to which differences in quality of care might be associated with differences in observed stroke mortality in ‘high’ and ‘low’ mortality hospitals according to the Dr Foster Ltd rankings.


The commissioners of the study, the West Midlands R&D directorate of the NHS Executive, recruited seven hospitals into the study based on the Dr Foster SMR rankings [7]—four of the highest ranking and three of the lowest ranking West Midlands hospitals (see Table 1). A fourth low-ranking hospital was also recruited, but was subsequently excluded because of non-random patient selection (see below).

Table 1

Comparison of Dr Foster SMR and hospital-specific 30 day mortality in our study sample

| Hospital | Dr Foster case-mix adjusted stroke SMR¹ {rank}² | Study sample crude 30 day mortality n/N [missing]³ (%) | 95% CI |
| --- | --- | --- | --- |
| ‘Top’ hospitals | | | |
| 1 | 90 {22} | 19/100 [3] (19.0) | 11.3–26.7 |
| 2 | 93 {31} | 22/100 [1] (22.0) | 13.9–30.1 |
| 3 | 95 {44} | 27/101 [3] (26.7) | 18.1–35.4 |
| 4 | 88 {54} | 31/101 [2] (30.7) | 21.7–39.7 |
| ‘Bottom’ hospitals | | | |
| 5 | 129 {173} | 42/100 [3] (42.0) | 32.3–51.7 |
| 6 | 121 {171} | 31/99 [2] (31.3) | 22.2–40.4 |
| 7 | 107 {120} | 40/101 [1] (39.6) | 30.1–49.1 |
  • 1 Study hospital groups (‘top’ and ‘bottom’) derived from Dr Foster’s SMRs, which compared observed versus expected in-hospital deaths after adjusting for age, gender and length of stay with a supporting regression analysis taking deprivation into account over 1995–2001.

  • 2 Rank of hospital as provided in [2], from a list of 173 hospitals.

  • 3 [ ] indicates number of missing cases, i.e. where 30-day mortality was not known.
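The 95% confidence intervals in Table 1 are numerically consistent with a simple normal-approximation (Wald) interval for a binomial proportion. The paper does not state which interval the authors used, so the formula below is an assumption, checked against the reported figures.

```python
import math

def wald_ci_95(deaths: int, n: int) -> tuple[float, float]:
    """Approximate 95% CI for a proportion, in percentage points.

    Assumes the normal-approximation (Wald) formula
    p +/- 1.96 * sqrt(p * (1 - p) / n); the paper does not say
    which interval construction the authors actually used.
    """
    p = deaths / n
    half_width = 1.96 * math.sqrt(p * (1 - p) / n)
    return (100 * (p - half_width), 100 * (p + half_width))

# Hospital 1 in Table 1: 19/100 deaths.
low, high = wald_ci_95(19, 100)
print(round(low, 1), round(high, 1))  # 11.3 26.7, as reported
```

The same formula reproduces the other rows, e.g. 42/100 deaths at hospital 5 gives 32.3–51.7.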

Our ‘top’ hospitals consisted of one large teaching hospital (976 beds), two medium-sized acute hospitals (340 and 488 beds) and one small acute hospital (146 beds). Our ‘bottom’ hospitals consisted of one large acute hospital (776 beds) and two medium-sized hospitals (330 and 557 beds) [8].

For each hospital, we obtained a list of all patients admitted with a stroke during a 12 month period, April 2000–March 2001 (the Dr Foster rankings included this year’s data), identified via the hospital information systems using the International Classification of Disease (ICD-10) codes I61, I63 and I64. Each patient on a given hospital’s list was allocated a study number, and 100 numbers were randomly selected for each hospital. Where the notes could not be obtained, or if the admission did not relate to acute stroke, the case was substituted by the next case on the random list. One of the ‘bottom’ ranking hospitals did not follow our prescribed sampling procedure and so selected patients non-randomly. Consequently this hospital was excluded from the analysis.
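A minimal sketch of this sampling procedure, under the assumption that one random ordering drives both the sample and the substitution list (the identifiers and list size below are hypothetical): shuffle the full admission list once, take the first 100 cases, and use the remainder, in order, as substitutes for cases that prove ineligible or whose notes cannot be found.

```python
import random

def random_case_list(patient_ids: list[str], seed: int = 42) -> list[str]:
    """Return the admissions in a single random order.

    The first 100 entries form the sample; the later entries are the
    ordered reserve used to substitute ineligible cases.
    """
    rng = random.Random(seed)   # fixed seed for reproducibility
    order = list(patient_ids)
    rng.shuffle(order)
    return order

# Hypothetical hospital list of 400 stroke admissions.
admissions = [f"case-{i:03d}" for i in range(1, 401)]
ordered = random_case_list(admissions)
sample, reserve = ordered[:100], ordered[100:]
print(len(sample), len(reserve))  # 100 300
```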

Data were extracted from case notes by our trained research nurse (L.B.), using the widely used and validated Intercollegiate Stroke Audit Package, which collects data on case mix and on process of care, based on nationally agreed clinical standards [9]. We chose this data collection tool because it is comprehensive, well established, has adequate inter-rater reliability, and is widely used in the UK and elsewhere [10–12,14]. Throughout the data collection and analysis phases of the study our case note data extractor (L.B.) was blinded to the league table position of each hospital.

We focused on six aspects of case mix, based on the Oxfordshire Community Stroke Project [13] and validated in three independent cohorts [11,12,14]. The case mix factors were: age; whether the patient lived alone before the stroke; whether the patient was independent in simple activities of daily living (ADL) pre-stroke; and whether, at the time of maximum severity after stroke, the patient could lift both arms against gravity, could walk without the help of another person and was fully conscious [13]. Process of care measures were derived from the Intercollegiate Stroke Audit Package, which is based upon nationally agreed standards of best practice relating both to assessments performed during the first 24 hours, and subsequent care [15]. They included seven process of care indicators for the first 24 hours: (i) whether a clear diagnostic description of the likely site of cerebral lesion had been documented, (ii) whether brain imaging had been performed within 24 hours, (iii) whether the patient’s consciousness level was assessed, (iv) whether the patient was screened for swallowing disorders, (v) whether the patient’s visual fields were assessed, (vi) whether the patient’s eye movements were assessed and (vii) whether the patient had sensory testing. For the post-24 hours period of care six process of care indicators were assessed: (i) whether aspirin was given within 48 hours of admission, (ii) whether a swallowing assessment by a speech and language therapist had been undertaken within 72 hours of admission, (iii) whether a physiotherapist’s assessment had been undertaken within 72 hours of admission, (iv) whether the patient had been cared for by a stroke team within 7 days of admission, (v) whether the patient had at least half their stay in a stroke/rehabilitation unit, and (vi) whether their hospital stay was limited to one or two wards only.
Thirty-day mortality was ascertained from hospital case-notes, supplemented by a separate follow-up exercise involving review of the data held on the hospital computer system and the NHS Tracing service [16].


The null hypothesis we were testing was that there would be no difference in quality of care between high- and low-mortality hospitals as defined in the Dr Foster SMR rankings (Table 1). Due to power considerations, we made the a priori decision that our principal analysis would compare process measures at the group level (i.e. high SMR group versus low SMR group) rather than compare differences between individual hospitals. Differences in proportions between ‘top’ and ‘bottom’ hospital groups were tested using the binomial test for proportions with a correction for continuity.
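The stated test can be reconstructed as the familiar pooled two-sample z-test for a difference in proportions with a continuity correction; the exact formula variant the authors used is an assumption. Applied to the overall 30 day mortality counts reported later (99/402 ‘top’ versus 113/300 ‘bottom’), it reproduces the reported P < 0.001.

```python
import math

def two_proportion_p(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sided P value for a difference in two proportions.

    Uses the pooled z-test with a continuity correction; assumed (not
    confirmed by the paper) to match the "binomial test for
    proportions" named in the text.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    correction = 0.5 * (1 / n1 + 1 / n2)      # continuity correction
    z = max(abs(p1 - p2) - correction, 0.0) / se
    return math.erfc(z / math.sqrt(2))        # 2 * (1 - Phi(z))

# Overall 30 day mortality: 99/402 'top' versus 113/300 'bottom'.
p_value = two_proportion_p(99, 402, 113, 300)
print(p_value < 0.001)  # True, consistent with the reported result
```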

In our study, a data item was deemed missing if, despite a thorough search of the case-notes, it could not be found. For all process of care indicators, the denominator excludes patients who were not eligible (see footnotes to Tables 3 and 5) and so denominator sizes vary.

Table 3

Comparison of care in ‘top’ and ‘bottom’ hospitals within the first 24 hours of admission

| Process of care indicator | ‘Top’ hospitals n (%) | ‘Bottom’ hospitals n (%) | P value |
| --- | --- | --- | --- |
| Visual fields assessed¹ | 86/235 (37) | 34/164 (21) | 0.001 |
| Clear diagnostic description of likely site of cerebral lesion | 144/397 (36) | 99/298 (33) | 0.45 |
| Brain imaging performed² | 121/337 (36) | 85/229 (37) | 0.84 |
| Screened for swallowing disorders¹ | 145/251 (58) | 63/169 (37) | 0.0001 |
| Sensory testing performed¹ | 148/251 (59) | 71/176 (40) | 0.0002 |
| Eye movements assessed | 265/385 (69) | 166/292 (57) | 0.0018 |
| Consciousness level assessed | 366/392 (93) | 271/294 (92) | 0.65 |
  • 1 Patients excluded if impaired level of consciousness/communication documented.

  • 2 Denominator restricted to patients in whom a CT scan was indicated within 24 hours according to the National Clinical Guideline for Stroke [19].

Table 5

Comparison of care in ‘top’ and ‘bottom’ hospitals after 24 hours

| Process of care indicator | ‘Top’ hospitals n (%) | ‘Bottom’ hospitals n (%) | P value |
| --- | --- | --- | --- |
| Aspirin within 48 hours¹ | 133/330 (40) | 79/236 (33) | 0.12 |
| At least 50% of stay on stroke or rehabilitation unit | 181/387 (47) | 122/291 (42) | 0.24 |
| Stay limited to one or two wards | 226/384 (59) | 151/295 (51) | 0.06 |
| Cared for by stroke team within 7 days² | 205/334 (61) | 143/231 (62) | 0.94 |
| Swallowing assessed by S&LT within 72 hours³ | 158/248 (64) | 118/186 (63) | 0.95 |
| Physiotherapist assessment within 72 hours⁴ | 224/341 (66) | 160/229 (70) | 0.34 |
  • Stroke team, recognized multidisciplinary team of clinicians specialized in stroke.

  • S&LT, speech and language therapist.

  • 1 Not applicable if patient died within 48 hours, or patient had intra-cerebral haemorrhage or aspirin was contra-indicated.

  • 2 Not applicable if patient died or discharged within 7 days.

  • 3 Not applicable if swallowing documented as normal, patient still unconscious, has died within 72 hours or is receiving palliative care.

  • 4 Not applicable if patient has died within 72 hours or is receiving palliative care.


While there were clear differences between the Dr Foster SMRs of ‘top’ and ‘bottom’ hospitals, the differences in 30 day mortality between the hospitals in our sample were not so great (Table 1). Indeed, the mortality in ‘top’ hospital 4 and ‘bottom’ hospital 6 in our sample was virtually the same. This reflects the fact that our study was based on a sample of 1 year’s data, whereas the Dr Foster data were based on all 6 years’ data. Nevertheless, overall there were significant differences in the 30 day mortality between ‘top’ and ‘bottom’ hospitals in our study sample (99/402, 25% versus 113/300, 38%; P < 0.001).

The characteristics of our study patients are shown in Table 2. With the exception of the proportion of patients who were fully conscious (69% versus 58%, P = 0.005), the differences in patient case mix factors at ‘top’ and ‘bottom’ hospitals were not statistically significant. However, for half of the case mix items (see Table 2), there were considerable amounts of missing data, with significantly less complete recording in ‘bottom’ hospitals (pre-stroke independence in ADL, P = 0.002; no loss of power in both arms, P = 0.002; and able to walk, P = 0.009) compared with ‘top’ hospitals (see Table 2).

Table 2

Case mix profiles in ‘top’ and ‘bottom’ hospitals

| Case mix factor | ‘Top’ hospitals (n = 402) n (%) | ‘Top’ missing data n (%) | ‘Bottom’ hospitals (n = 300) n (%) | ‘Bottom’ missing data n (%) | P value |
| --- | --- | --- | --- | --- | --- |
| Mean [median] age (years) | 76 [78] | 0 (0) | 75 [77] | 0 (0) | 0.06¹ |
| Pre-stroke independence in ADL² | 288 (85) | 64 (16) | 192 (86) | 76 (25) | 0.96 |
| No loss of power in either arm³ | 80 (26) | 98 (24) | 41 (21) | 106 (35) | 0.23 |
| Fully conscious³ | 275 (69) | 1 (0) | 173 (58) | 3 (1) | 0.006 |
| Able to walk³ | 66 (24) | 130 (32) | 36 (21) | 126 (42) | 0.45 |
| Lived alone pre-stroke | 132 (34) | 6 (1) | 112 (38) | 6 (2) | 0.23 |
  • 1 Student’s t-test comparing mean ages, d.f. = 699, t = 1.861.

  • 2 ADL, activities of daily living.

  • 3 At time of maximum severity after stroke.

Process of care data

Aspects of process of care within the first 24 hours after admission are shown in Table 3. Where an item was inapplicable for a given patient, that patient has been excluded from the denominator (see footnotes to Table 3). Recording of eye movements, screening for swallowing disorders, assessment of visual fields, and sensory testing were all significantly more frequent in the ‘top’ hospitals. The proportion of patients in both sets of hospitals who had a CT scan within 24 hours was low (Table 3), though 81% (320/393) of patients in ‘top’ hospitals compared with 78% (226/289) in ‘bottom’ hospitals had a CT scan at some point during the admission (excluding those patients who died within 24 hours of admission; P = 0.35).

Table 4 shows the process of care data within the first 24 hours for each hospital. The differences in process of care between ‘top’ and ‘bottom’ hospitals are largely driven by one outlier hospital (5), which also had the highest Dr Foster SMR and the highest mortality in our sample. If hospital 5 is omitted, there are no clear differences between ‘top’ and ‘bottom’ hospitals on the process of care measures, with the exception of screening for swallowing disorders (145/251, 58% in ‘top’ hospitals versus 45/109, 41% in ‘bottom’ hospitals excluding hospital 5; P = 0.0057).

Table 4

Adherence to given process of care standards for each hospital, n (%)

| Process of care indicator | ‘Top’ 1 | ‘Top’ 2 | ‘Top’ 3 | ‘Top’ 4 | ‘Bottom’ 5 | ‘Bottom’ 6 | ‘Bottom’ 7 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| First 24 hours of care | | | | | | | |
| Visual fields assessed | 28/63 (44) | 13/50 (26) | 31/61 (51) | 14/61 (23) | 4/57 (7) | 11/49 (22) | 19/58 (33) |
| Clear diagnostic description of likely site of cerebral lesion | 37/99 (37) | 27/97 (28) | 37/100 (37) | 43/101 (43) | 38/99 (38) | 29/99 (29) | 32/100 (32) |
| Brain imaging performed | 35/89 (39) | 24/89 (27) | 24/68 (35) | 38/91 (42) | 34/76 (45) | 23/72 (32) | 28/81 (35) |
| Screened for swallowing disorders | 31/66 (47) | 29/53 (55) | 45/66 (68) | 40/66 (61) | 18/60 (30) | 27/58 (47) | 18/51 (35) |
| Sensory testing performed | 43/68 (63) | 30/52 (58) | 45/66 (68) | 30/65 (46) | 15/59 (25) | 32/55 (58) | 24/62 (39) |
| Eye movements assessed | 66/95 (69) | 62/96 (65) | 68/95 (72) | 69/99 (70) | 47/99 (47) | 61/96 (64) | 58/97 (60) |
| Consciousness level assessed | 90/96 (94) | 93/98 (95) | 92/99 (93) | 91/99 (92) | 90/100 (90) | 87/95 (92) | 94/99 (95) |
| Post 24 hours of care | | | | | | | |
| Aspirin within 48 hours | 29/81 (36) | 31/86 (36) | 29/79 (37) | 44/84 (52) | 27/80 (34) | 32/83 (39) | 20/73 (27) |
| At least 50% of stay on stroke or rehabilitation unit | 47/95 (49) | 47/95 (49) | 34/98 (35) | 53/99 (54) | 32/98 (33) | 50/95 (53) | 40/98 (41) |
| Stay limited to one or two wards | 64/98 (65) | 45/92 (49) | 63/99 (64) | 54/95 (57) | 44/98 (45) | 43/98 (44) | 64/99 (65) |
| Cared for by stroke team within 7 days | 55/89 (62) | 45/81 (56) | 44/79 (56) | 61/85 (72) | 49/75 (65) | 58/85 (68) | 36/71 (51) |
| Swallowing assessed by S&LT within 72 hours | 29/63 (46) | 46/60 (77) | 36/56 (64) | 47/69 (68) | 45/67 (67) | 34/60 (57) | 39/59 (66) |
| Physiotherapist assessment within 72 hours | 62/89 (70) | 71/89 (80) | 54/84 (64) | 37/79 (47) | 35/73 (48) | 67/80 (84) | 58/76 (76) |
  • Stroke team, recognized multidisciplinary team of clinicians specialized in stroke.

  • S&LT, speech and language therapist.

Aspects of care after 24 hours are summarized in Table 5. None of the differences between ‘top’ and ‘bottom’ hospitals were statistically significant. Table 4 shows these data at the individual hospital level. There is widespread variation between hospitals, but this is not associated with the hospital’s Dr Foster ranking.


Our findings show that there is scope for improving the quality of stroke care, including in hospitals with lower mortality. For example, less than half of the stroke patients (i) had at least 50% of their hospital stay in a stroke/rehabilitation unit and (ii) were on aspirin within 48 hours. There were process differences between ‘top’ and ‘bottom’ hospitals at the aggregate level in terms of care within the first 24 hours, but these were driven by one outlier hospital with high mortality and poor process of care. If this hospital is excluded, there is little relationship between hospital SMR rank and the quality of care as assessed by process of care measures, with the exception of screening for swallowing disorders.

There were widespread differences between hospitals in adherence to process of care standards. These differences cannot be accounted for by case mix, since each process either should be applied to all patients (e.g. swallowing assessment) or to all those in whom it is appropriate (i.e. conscious patients), which we could take account of because our data on consciousness were complete.

How should the lack of correlation between process of care and hospital-specific mortality be interpreted? Firstly, some of the process measures used in this study would not be expected to directly improve survival. Other process measures that might be expected to be associated with improved survival include prompt recognition and treatment of complications such as dehydration, pneumonia, and thrombo-embolism, and better use of preventive measures such as compression stockings in immobilized patients. Such an interpretation is consistent with an analysis by Evans et al. [17] of factors associated with better outcome in patients entered into a trial comparing stroke unit care with other models of specialized input. Stroke unit care (which was associated with lower mortality) was associated with higher rates of assessment of factors such as swallowing on initial assessment, as well as greater use of simple early interventions such as oxygen therapy, antipyretics, and measures to reduce aspiration and promote early nutrition, with lower subsequent complication rates in terms of stroke progression and chest infection. Nevertheless, some of the process measures we used do have a proven (aspirin within 48 hours; admission to stroke unit) [18] or plausible (screening for swallowing disorders within 24 hours) link to outcome. Therefore, a second possible interpretation is that any associations between process of care and outcome have been masked by case mix differences. This is highly plausible given that the adjustments made in the Dr Foster mortality league tables were limited to age, gender, length of stay and deprivation, and there was too much missing case mix data in our own data set to perform adequate case mix adjustment. A third possible interpretation is that the correlation between process and outcome might have been greater if we had ranked the hospitals in terms of observed mortality as opposed to their published SMRs.
Studies where this has been done have tended to find closer correlations than we did. For example, a retrospective study in five Scottish hospitals found significantly higher mortality in one hospital as compared with the others after adjustment for case mix [19], and noted that this was associated with suboptimal care including lack of early CT scanning. A recent study (n = 181) of three New Zealand hospitals also found a significant relationship between overall process of care measures and survival [12] (mean process of care score in non-survivors versus survivors 57% versus 64%, P = 0.004). We did not analyse our data in this way since the purpose of the study was to test the validity of the Dr Foster Ltd rankings, which are based on data from 5 years in order to increase statistical power. However, if circumstances change over 5 years, these rankings may have traded timeliness for statistical power.


In agreement with previous studies [14], this study shows considerable scope for improving the quality of stroke care, irrespective of underlying mortality. Although we were unable to adjust mortality for measured patient case mix factors (because of the volume of missing data), our finding that differences in the processes of care are largely unrelated to hospital mortality ranking raises questions over the validity of mortality-based league tables for stroke as a measure of hospital quality. The study also has implications for research. The associations observed between processes of care in the first 24 hours and outcome, albeit driven by one hospital, provide some evidence that ‘taking acute stroke care seriously’ [6] might lead to a reduction in early mortality. However, the evidence base for how best to manage stroke in the first 24 hours is limited. While some aspects of acute care such as thrombolysis and blood pressure control can be subjected to randomized controlled trials, it would be difficult to justify randomization of simple ‘first aid’ measures such as keeping someone hydrated, or maintaining their airway. A case can be made for large non-randomized studies to explore the association between outcome and simple aspects of acute stroke management, building on the work of Evans et al. [17]. This would generate a better understanding of what aspects of the process of acute care are important and hence enable care of stroke patients to be improved, whether within or outside an acute stroke unit.

