
The Picker Patient Experience Questionnaire: development and validation using data from in-patient surveys in five countries

CRISPIN JENKINSON, ANGELA COULTER, STEPHEN BRUSTER
DOI: http://dx.doi.org/10.1093/intqhc/14.5.353 | Pages 353–358 | First published online: 1 October 2002

Abstract

Objective. The purpose of this study was to develop and test a core set of questions to measure patients’ experiences of in-patient care. Questions were selected from the bank of items developed for use in in-patient surveys undertaken by the Picker Institute for the purposes of assessing the quality of care.

Design. The data reported here come from surveys of patients who had attended acute care hospitals in five countries: the United Kingdom, Germany, Sweden, Switzerland, and the USA. Questionnaires were mailed to patients’ homes within 1 month of discharge, either to all patients, or to a random sample, discharged during a specified period.

Sample. A total of 62 925 questionnaires were returned, with response rates of 65% (UK), 74% (Germany), 63% (Sweden), 52% (Switzerland), and 46% (USA).

Results. Fifteen items were selected from the bank of questions included in the Picker in-patient questionnaires. These items have a high degree of face validity and, when summed into an index, show a high degree of construct validity and internal consistency reliability.

Discussion. Fifteen items derived from the longer-form Picker in-patient survey have been found to provide a meaningful picture of patient experiences of health care, and constitute the 15-item Picker Patient Experience Questionnaire (PPE-15). These questions comprise a core set that should be measured in all in-patient facility surveys. The PPE-15 represents a step forward in the measurement of patient experience as it provides a core set of questions around which further optional modules may be added. Scores are easy to interpret and actionable.

Conclusion. This small set of questions could be incorporated into in-patient surveys in different settings, enabling the comparison of hospital performance and the establishment of national or international benchmarks.

  • in-patient experiences
  • Picker Patient Experience Questionnaire
  • PPE-15

There is increasing interest in eliciting feedback from patients to highlight aspects of care that need improvement and to monitor performance and quality of care. Governments and regulatory authorities in some countries now require hospitals to organize patient surveys at regular intervals. In England, the Department of Health has launched a programme of national surveys in which every NHS Trust is required to survey its patients once a year [1], whilst in Switzerland the National Coordination and Information Office for Quality Improvement has recommended that Picker survey instruments assessing patient experiences of health care be administered in 300 hospitals on an annual basis. In England, indicators of patients’ experience are to be included in the set of national performance indicators, results of patient surveys must be reported in an annual Patients’ Prospectus, and Trusts will be expected to show that they have absorbed and acted on the results. For these purposes, a standard set of questions is required to compare performance, both between hospitals and over time.

Whatever the pharmaceutical and technological advances of the past 50 years, the patient’s experience of illness and medical care is at the heart of one of the most fundamental purposes of clinical medicine, namely to relieve human suffering. Traditionally, assessments of medical care did not take account of patients’ reports. Instead, health care provision was assessed purely in terms of technical and physiological reports of outcomes [2]. More recently, however, health care systems have sought to achieve a balance in services that offer not only clinically effective and evidence-based care, but that are also judged by patients as acceptable and beneficial [3,4]. A number of survey questionnaires have been used for such purposes, but these have primarily elicited information on satisfaction with services. Questionnaires that ask patients to rate their care in terms of how satisfied they are tend to elicit very positive ratings, which are not sensitive to problems with the specific processes that affect the quality of care delivery [5]. A more valid approach is to ask patients to report in detail on their experiences by asking them specific questions about whether or not certain processes and events occurred during the course of a specific episode of care. This type of questionnaire can provide results that can be easily interpreted and acted upon [6]. Building on extensive qualitative research to determine which aspects of care are important to patients, the Picker Institute developed standardized instruments to measure the quality of care in relation to particular domains [7].

The Picker adult in-patient questionnaire consists of a number of sections asking patients about their condition, demographic details, and aspects of their health care experience. The conceptual basis and design of this questionnaire have been described in full elsewhere [8]. Briefly, development of the instrument involved consultation with experts, a systematic literature review, and patient focus groups and in-depth interviews to determine the issues most salient to patients in health care encounters. Following in-depth interviews with patients in the different countries to test comprehension, the questionnaire was redrafted and piloted extensively before the final versions were produced.

The basic instrument contains a total of 40 standard items included in questionnaires containing around 100 questions, the exact number depending on the country in which they are used and the requirements of individual hospitals. (An additional four standard items are used for patients who underwent surgical procedures, but these have been omitted from the analysis reported here.) Each item is coded for statistical analysis as a dichotomous ‘problem score’, indicating the presence or absence of a problem. A problem is defined as an aspect of health care that could, in the eyes of the patient, be improved upon. Examples of questions from the instrument, and how they are coded as problem scores, appear in Table 1. The items are then grouped, on the basis of their face validity, into eight dimensions that have been shown previously to represent the most salient issues in patients’ experience of hospital care [8]. The content of the questions on which problem scores are based, and the dimensions into which they are grouped, are documented in Table 2.
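To make the coding concrete, here is a minimal sketch in Python of how responses to the first question in Table 1 could be mapped to a dichotomous problem score; the function and the response sets are ours, not part of the Picker instrument, and the mapping is item-specific because the responses coded as a problem differ from question to question.

```python
from typing import Optional

# Response labels for the first question in Table 1. The black boxes in the
# table correspond to responses coded as a 'problem'; 'I had no need to ask'
# is treated as not applicable and excluded from scoring.
PROBLEM_RESPONSES = {"Yes, sometimes", "No"}
NOT_APPLICABLE = {"I had no need to ask"}

def problem_score(response: str) -> Optional[int]:
    """Return 1 if the response indicates a problem, 0 if not, None if not applicable."""
    if response in NOT_APPLICABLE:
        return None
    return 1 if response in PROBLEM_RESPONSES else 0

print(problem_score("Yes, always"))           # 0 -> no problem
print(problem_score("Yes, sometimes"))        # 1 -> a problem
print(problem_score("I had no need to ask"))  # None -> excluded
```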

Table 1

Examples of questions from the Picker PPE-15 survey showing derivation of problem scores¹

When you had important questions to ask a doctor, did you get answers you could understand?
  □ Yes, always
  ▪ Yes, sometimes
  ▪ No
  □ I had no need to ask

Sometimes in hospital one doctor or nurse will say one thing and another will say something quite different. Did this happen to you?
  ▪ Yes, often
  ▪ Yes, sometimes
  □ No

Did doctors talk in front of you as if you weren’t there?
  ▪ Yes, often
  ▪ Yes, sometimes
  □ No

Did you want to be more involved in decisions made about your care and treatment?
  ▪ Yes, often
  ▪ Yes, sometimes
  □ No

  • ¹ Black boxes indicate responses coded as a ‘problem’.

Table 2

Dimensions of patients’ experience in the Picker adult in-patient questionnaire and items within each dimension

Information and education
 Not given enough information in accident and emergency unit
 Delay in admission to ward not explained
 Doctors’ answers to questions not clear
 Nurses’ answers to questions not clear
 Test results not clearly explained
Coordination of care
 Emergency care not well organized
 Admission process not well organized
 Long wait to go to ward
 Not told which doctor was in overall charge of care
 Staff gave conflicting information
 Scheduled tests or procedures not performed at appointed time
Physical comfort
 Didn’t get help to go to bathroom/toilet
 Had to wait too long after pressing call button
 Had to wait too long for pain medicine
 Staff did not do enough to control pain
 Not given right amount of pain medicine
Emotional support
 Doctor didn’t discuss anxieties or fears
 Didn’t always have confidence and trust in doctors
 Didn’t always have confidence and trust in nurses
 Not easy to find someone to talk to about concerns
Respect for patient preferences
 Doctors sometimes talked as if I wasn’t there
 Nurses sometimes talked as if I wasn’t there
 Not sufficiently involved in decisions about treatment and care
 Not always treated with respect and dignity
Involvement of family and friends
 Family didn’t get opportunity to talk to doctor
 Family not given enough information about condition
 Family not given information needed to help recovery
Continuity and transition
 Purpose of medicines not fully explained
 Not told about medication side effects
 Not told about danger signals to watch for at home
 Not told when to resume normal activities
Overall impression
 Courtesy of admissions staff not good
 Courtesy of doctors not good
 Availability of doctors not good
 Courtesy of nurses not good
 Availability of nurses not good
 Doctor/nurse teamwork not good
 Overall care received not good
 Would not recommend this hospital to friends/family

Methods

Data sources

The purpose of the study was to design a core set of items from the Picker adult in-patient questionnaire, a short form of the original, which could be used to make comparisons between hospitals and for monitoring trends over time. The data reported here come from Picker surveys of patients who had attended acute care hospitals in five countries: the United Kingdom, Germany, Sweden, Switzerland, and the USA. Questionnaires including each of the 40 standard items were mailed to patients’ homes within 1 month of discharge, either to all patients or a random sample discharged during a specified period. A covering letter explained the purpose of the survey, and a stamped addressed envelope was included. Those patients who did not reply within 2–3 weeks of the initial mailing were sent a reminder postcard and, if this elicited no reply within the following 2–3 weeks, they were sent another copy of the questionnaire.

Criteria used to develop the PPE-15

The following criteria were adopted.

  1. Items included in the instrument should be applicable to as many respondents as possible (e.g. questions on emergency admissions will not be applicable to in-patients who were planned admissions).

  2. The index created by a subset of items selected from the original 40-item measure should be highly correlated with the number of items coded as ‘problems’ on the original measure. It has been suggested that, ideally, short form measures should correlate with longer form ‘parent’ measures at 0.9 or above [9].

  3. The subset of items should retain adequate internal consistency reliability. Evidence suggests that Picker survey instruments have high levels of internal consistency reliability [10]; the level generally regarded as adequate is 0.7 [11], assessed here using the KR-20 statistic (effectively Cronbach’s alpha for dichotomous variables).

  4. Item–total correlations, corrected for overlap, should exceed 0.3 for items within a measure [12].

The above criteria should hold not only in the UK dataset, but also in the datasets from the other countries containing the same 40 items. A sketch of how criteria 2–4 can be checked on a matrix of problem scores is given below.
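As an informal illustration of how criteria 2–4 might be checked, the sketch below uses Python with NumPy on a simulated matrix of dichotomous problem scores. The function names and the simulated data are ours and are not part of the Picker methodology; missing responses are ignored for simplicity, and Pearson rather than Spearman correlations are used for brevity.

```python
import numpy as np

def kr20(items: np.ndarray) -> float:
    """KR-20 internal consistency for dichotomous items.
    `items` is an (n_respondents, k_items) 0/1 array with no missing data."""
    k = items.shape[1]
    p = items.mean(axis=0)                     # proportion reporting a problem per item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed problem count
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total_var)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the total score excluding that item
    (i.e. 'corrected for overlap')."""
    totals = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], totals - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])

def short_vs_long(items: np.ndarray, short_idx: list) -> float:
    """Correlation between a candidate short-form index and the number of
    problems reported on the full item set (criterion 2)."""
    return np.corrcoef(items[:, short_idx].sum(axis=1), items.sum(axis=1))[0, 1]

# Simulated data: 500 respondents, 40 dichotomous problem scores, with a
# common respondent-level propensity to report problems so that items correlate.
rng = np.random.default_rng(0)
propensity = rng.beta(2, 5, size=500)[:, None]
full = (rng.random((500, 40)) < propensity).astype(int)

print(kr20(full))                              # criterion 3: adequate if >= 0.7
print(corrected_item_total(full).min())        # criterion 4: each item >= 0.3
print(short_vs_long(full, list(range(15))))    # criterion 2: ideally >= 0.9
```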

Results

Sample sizes and response rates to surveys are detailed in Table 3. Basic patient characteristics are detailed in Table 4. More detailed information on the surveys and characteristics of respondents has been reported elsewhere [13].

Table 3

Samples and response rates, Picker in-patient surveys, 1998–2000

Country     | No. of hospitals | Total sample size | Exclusions (returned undelivered) | Responses (completed questionnaires received) | Response rate (%)
UK          |   5              |   3592            |  146                              |  2249                                         | 65
Germany     |   6              |   3716            |   96                              |  2663                                         | 74
Sweden      |   9              |   5306            |  104                              |  3274                                         | 63
Switzerland |   9              |  13939            |   83                              |  7163                                         | 52
USA         | 272              | 103426            | 4212                              | 47576                                         | 46
Table 4

Characteristics of hospital patient samples in each of the five countries (values shown are percentages)

Characteristic        | UK (%) | Germany (%) | Sweden (%) | Switzerland (%) | USA (%)
Female                | 53.5   | 53.1        | 49.6       | 51.5            | 55.1
Under 65 years of age | 51.2   | 58.2        | 43.8       | 58.4            | 49.7
Planned admission     | 50.4   | 61.7        | 38.5       | 54.8            | 50.2

Twenty-five items were removed from the original 40 questions, either because they were not applicable to a large proportion of respondents or because their removal increased the reliability of the instrument. The 15 topics measured by the resulting instrument are listed in Table 5. The percentage of respondents reporting a problem on each topic, broken down by country, is shown in Table 6. Reliability estimates, broken down by country, are shown in Table 7; all estimates were 0.80 or above.

Table 5

Problems identified by the 15 questions included in the PPE-15

Item | Item content
1    | Doctors’ answers to questions not clear
2    | Nurses’ answers to questions not clear
3    | Staff gave conflicting information
4    | Doctor didn’t discuss anxieties or fears
5    | Doctors sometimes talked as if I wasn’t there
6    | Not sufficiently involved in decisions about treatment and care
7    | Not always treated with respect and dignity
8    | Nurses didn’t discuss anxieties and fears
9    | Not easy to find someone to talk to about concerns
10   | Staff did not do enough to control pain
11   | Family didn’t get opportunity to talk to doctor
12   | Family not given information needed to help recovery
13   | Purpose of medicines not explained
14   | Not told about medication side effects
15   | Not told about danger signals to look for at home
Table 6

Percentage of respondents indicating a problem on each item of the PPE-15 (values shown are percentages)

Item no. | UK (%) | Switzerland (%) | Sweden (%) | Germany (%) | USA (%)
1        | 28.1   | 12.7            | 21.6       | 17.5        | 23.9
2        | 24.1   | 10.9            | 15.3       | 13.0        | 28.7
3        | 23.3   | 14.6            | 17.7       | 15.4        | 17.9
4        | 15.1   |  5.1            |  8.2       | 11.7        | 15.9
5        | 34.1   | 17.8            | 35.7       | 23.7        | 23.6
6        | 32.6   | 18.6            | 31.2       | 26.2        | 32.4
7        | 30.6   | 17.6            | 28.6       | 27.6        | 33.5
8        | 29.7   | 11.4            | 13.6       | 10.0        | 12.5
9        | 59.3   | 35.5            | 53.3       | 45.9        | 36.9
10       | 20.1   |  9.0            | 11.1       | 12.9        | 17.3
11       | 32.8   | 15.2            | 14.1       | 17.3        | 27.6
12       | 38.3   | 16.7            | 22.0       | 27.8        | 25.5
13       | 23.2   | 11.1            | 16.5       | 16.5        | 13.7
14       | 35.8   | 31.2            | 44.4       | 31.5        | 29.4
15       | 59.9   | 33.8            | 46.7       | 44.2        | 31.9
Table 7

Internal consistency reliability and descriptive statistics on the PPE-15

                                 | UK    | Switzerland | Sweden | Germany | USA
Internal consistency reliability | 0.86  | 0.83        | 0.80   | 0.85    | 0.87
Mean                             | 33.79 | 18.52       | 26.64  | 25.20   | 25.06
SD                               | 27.31 | 19.48       | 21.42  | 23.99   | 25.27
Respondents (N)                  | 1846  | 6164        | 2844   | 1867    | 44493

Item–total correlations (Spearman) were good. The recommended level of 0.3, corrected for overlap, was achieved for all items in the UK (0.42–0.54), Switzerland (0.36–0.49), and Germany (0.34–0.58). However, one problem score fell below this criterion in Sweden and the USA: question 8 (‘Doctors sometimes talked as if I wasn’t there’) achieved a correlation of 0.23 for Sweden and 0.25 for the USA. All other correlations exceeded the criterion of 0.3 (Sweden 0.32–0.48; USA 0.42–0.57). As removal of this item did not substantially increase reliability, and as it has high face validity as an important aspect of patient care, it was retained in the final measure.

The PPE-15 index was highly correlated with the total number of items selected as ‘problems’ on the original measure [correlations ranged from 0.93 (P < 0.001) for Sweden to 0.95 (P < 0.001) for the UK, Switzerland, Germany, and the USA].

The PPE-15 is reproduced in the Appendix.

Discussion

Patients’ experiences of health and medical care are at the very core of the purpose of clinical medicine. If medical treatments succeed only in a limited technical sense, but without any benefit to those receiving them, then interventions have failed. For this reason subjective health status measures are used to assess the impact of medicine on the well-being of patients [14]. Similarly, feedback on patients’ experiences of health care is sought in order to determine priorities for quality improvement. Increasingly, measurement of patients’ experience is also seen as an important component of performance assessment. The PPE-15 provides a basic set of questions that should be applicable in all hospitals, and relevant to all patients. The data presented here demonstrate high internal consistency reliability across countries, and a very high association with the ‘gold standard’ total problem score from the long form of the instrument.

It is important to note that the PPE-15 is not intended to stand alone as a 15-question survey instrument. To ensure coherence and fluency it will usually be necessary to add introductory questions (e.g. about mode of admission), filter questions (e.g. to identify whether the patient suffered pain before asking how it was dealt with), and demographic questions. The PPE-15 has been selected for inclusion in the standard questionnaire to be used in the national surveys of NHS Trusts in England. The basic version of the NHS survey instrument comprises a total of 30 questions, including the 15 PPE items.

The PPE-15 was designed to be easily and quickly completed by patients, and to enable straightforward scoring. To this end the instrument is brief, and a simple additive scoring algorithm has been adopted. Previous research has suggested that complex weighting systems can lead to errors in computation and, perhaps more significantly, that they rarely produce results substantially different from those of simpler additive approaches [15,16].

The PPE-15 yields scores at two levels. First, individual items can be used to examine specific aspects of patient experience; such information can directly inform policy decisions within any given hospital. For example, if patients report problems communicating with staff, programmes could be put in place to monitor and improve the situation. Second, the items can be combined into a summary index. A sketch of both forms of scoring is given below.
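As a rough sketch of both forms of scoring in Python: the per-item figures are simply the percentage of respondents reporting a problem on each item, while the summary index shown here is the percentage of the items a respondent answered that were coded as problems. The 0–100 scaling of the index is our reading of the descriptive statistics in Table 7, not a published scoring rule, and the array and function names are ours.

```python
import numpy as np

def item_problem_rates(scores: np.ndarray) -> np.ndarray:
    """Percentage of respondents reporting a problem on each item.
    `scores` is an (n_respondents, 15) array of 1 (problem), 0 (no problem)
    or np.nan (item not applicable or not answered)."""
    return 100 * np.nanmean(scores, axis=0)

def summary_index(scores: np.ndarray) -> np.ndarray:
    """Simple additive summary per respondent: the percentage of the items
    they answered that were coded as problems (assumed 0-100 scaling)."""
    return 100 * np.nanmean(scores, axis=1)

# Toy example: 4 respondents, first 5 of the 15 items shown.
scores = np.array([
    [1, 0, 0, np.nan, 1],
    [0, 0, 0, 0,      0],
    [1, 1, np.nan, 1, 1],
    [0, 1, 0, 0,      np.nan],
])
print(item_problem_rates(scores))  # problem rate per item, in %
print(summary_index(scores))       # first respondent: 2 problems / 4 answered = 50.0
```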

The PPE-15 represents a step forward in the measurement of patient experience as it provides a basic core around which further optional modules can be added. Scores are easy to interpret and actionable. It constitutes a minimum dataset of issues that are important to patients and that hospital managers and health care professionals might reasonably be expected to address. The results presented here indicate that many patients did not receive optimal care. Ongoing application of this survey instrument could be used to monitor these basic aspects of service over time, and hopefully aid in quality improvement.

Appendix

The PPE-15

1. When you had important questions to ask a doctor, did you get answers that you could understand?

Yes, always/Yes, sometimes/No/I had no need to ask

2. When you had important questions to ask a nurse, did you get answers that you could understand?

Yes, always/Yes, sometimes/No/I had no need to ask

3. Sometimes in a hospital, one doctor or nurse will say one thing and another will say something quite different. Did this happen to you?

Yes, often/Yes, sometimes/No

4. If you had any anxieties or fears about your condition or treatment, did a doctor discuss them with you?

Yes, completely/Yes, to some extent/No/I didn’t have any anxieties or fears

5. Did doctors talk in front of you as if you weren’t there?

Yes, often/Yes, sometimes/No

6. Did you want to be more involved in decisions made about your care and treatment?

Yes, definitely/Yes, to some extent/No

7. Overall, did you feel you were treated with respect and dignity while you were in hospital?

Yes, always/Yes, sometimes/No

8. If you had any anxieties or fears about your condition or treatment, did a nurse discuss them with you?

Yes, completely/Yes, to some extent/No/I didn’t have any anxieties or fears

9. Did you find someone on the hospital staff to talk to about your concerns?

Yes, definitely/Yes, to some extent/No/I had no concerns

10. Were you ever in pain?

Yes/No

If yes…

Do you think the hospital staff did everything they could to help control your pain?

Yes, definitely/Yes, to some extent/No

11. If your family or someone else close to you wanted to talk to a doctor, did they have enough opportunity to do so?

Yes, definitely/Yes, to some extent/No/No family or friends were involved/My family didn’t want or need information/I didn’t want my family or friends to talk to a doctor

12. Did the doctors or nurses give your family or someone close to you all the information they needed to help you recover?

Yes, definitely/Yes, to some extent/No/No family or friends were involved/My family or friends didn’t want or need information

13. Did a member of staff explain the purpose of the medicines you were to take at home in a way you could understand?

Yes, completely/Yes, to some extent/No/I didn’t need an explanation/I had no medicines—go to question 15

14. Did a member of staff tell you about medication side effects to watch for when you went home?

Yes, completely/Yes, to some extent/No/I didn’t need an explanation

15. Did someone tell you about danger signals regarding your illness or treatment to watch for after you went home?

Yes, completely/Yes, to some extent/No

Footnotes

  • Address reprint requests to C. Jenkinson, Picker Institute Europe, King’s Mead House, Oxpens Road, Oxford OX1 1RX, UK. E-mail: crispin.jenkinson@pickereurope.ac.uk
