
Do older patients and their family caregivers agree about the quality of chronic illness care?

Erin R. Giovannetti, Lisa Reider, Jennifer L. Wolff, Kevin D. Frick, Chad Boult, Don Steinwachs, Cynthia M. Boyd
DOI: http://dx.doi.org/10.1093/intqhc/mzt052. Pages 515–524. First published online: 26 August 2013.

Abstract

Objective Family caregivers often accompany patients to medical visits; however, it is unclear whether caregivers rate the quality of patients' care similarly to patients. This study aimed to (1) quantify the level of agreement between patients' and caregivers' reports on the quality of patients' care and (2) determine how the level of agreement varies by caregiver and patient characteristics.

Design Cross-sectional analysis.

Participants Multimorbid older (aged 65 and above) adults and their family caregivers (n = 247).

Methods Quality of care was rated separately by patients and their caregivers using the Patient Assessment of Chronic Illness Care (PACIC) instrument. The level of agreement was examined using a weighted kappa statistic (Kw).

Results Agreement of caregivers' and patients' PACIC scores was low (Kw = 0.15). Patients taking ten or more medications per day showed less agreement with their caregivers about the quality of care than patients taking five or fewer medications (Kw = 0.03 and 0.34, respectively, P < 0.05). Caregivers who reported greater difficulty assisting patients with health care tasks had less agreement with patients about the quality of care being provided when compared with caregivers who reported no difficulty (Kw = −0.05 and 0.31, respectively, P < 0.05). Patient–caregiver dyads had greater agreement on objective questions than on subjective questions (Kw = 0.25 and 0.15, respectively, P > 0.05).

Conclusion Patient–caregiver dyads following a more complex treatment plan (i.e. taking many medications) or having more difficulty following a treatment plan (i.e. having difficulty with health care tasks) had less agreement. Future qualitative research is needed to elucidate the underlying reasons patients and caregivers rate the quality of care differently.

  • quality of care
  • caregiver
  • primary care

Introduction

There is growing recognition of the need to improve the quality of chronic illness care [1]. An important perspective on the quality of chronic illness care can be provided by those who experience it [2]. Informal caregivers (family and unpaid friends who assist a patient) play an important role in managing patients' health care [3, 4] and often accompany patients to physician office visits [5]. These caregivers are well positioned to provide additional information on the quality of patients' chronic illness care. However, this perspective has rarely been studied [6–8].

Caregivers are typically asked to evaluate the quality of patients' care as proxy respondents when patients are unable to respond for themselves [9]. These proxy responses are intended to be interchangeable with patients' ratings [10]. However, several studies have shown that patients and proxies almost never show perfect agreement, and the level of agreement between proxies and patients varies across different domains of health and quality of care [9, 11–15]. For example, proxy and patient responses are more likely to agree on objective and observable phenomena, such as functional impairment [16] and physical symptoms [14, 15, 17]. They are less likely to agree about subjective issues, such as quality of life [12, 18] and satisfaction with overall health care [8].

Although informative, the research on caregivers as ‘proxy respondents’ does not tell the whole story about how informal caregivers might respond as ‘external raters’ of health care quality. As external raters, informal caregivers are asked to make their own assessment of the quality of care rather than respond as if they were the patient [10, 19]. Understanding the concordance between caregivers' external rating of patients' health care quality and patients' self-rating of quality is particularly important because the patient does not necessarily provide the gold-standard response. In a medically complex older population, with high rates of mild cognitive impairment and dependence on family caregivers, it is possible that the caregiver's rating of the quality of care may be more accurate than the patient's.

A growing number of patient-reported quality-of-care measures are being developed and used around the world to improve the quality of chronic illness care. This study expands on that methodology by comparing patients' self-reports and their caregivers' external ratings of the quality of chronic illness care in the primary care setting. Specifically, the study aims to (1) quantify the level of agreement between patients' and caregivers' reports on the quality of the patients' care and (2) describe how the patient–caregiver level of agreement varies by patient and caregiver characteristics.

Methods

Study design

We conducted a cross-sectional analysis of baseline data from patient–caregiver dyads enrolled in a cluster-randomized controlled trial of a model of primary care (Guided Care).

Parent study description (Guided Care)

The Guided Care intervention was designed to enhance the quality of health care for high-risk older adults by integrating a specially trained registered nurse into primary care practices. The patient eligibility criteria for the Guided Care study were (1) community-dwelling, (2) aged 65 or above, (3) seen by a participating physician within the previous year and (4) identified as at risk of incurring high health care costs. Patients were identified by analyzing health insurance claims with the Hierarchical Condition Category predictive model; eligible patients were in the highest quartile of risk for incurring high health care costs in the next year [20, 21]. Exclusions were (1) cognitive impairment with no legal representative, (2) inability to participate in the baseline or follow-up interviews and (3) inability to speak English. Among eligible patients, 38% consented to participate in the baseline survey and be randomized to the Guided Care intervention or control group for three years (N = 904). Following informed consent, participating patients were administered an in-person baseline interview. All baseline interviews were conducted in the patients' homes by trained, closely supervised professional interviewers who used computer-assisted interviewing technology; 10% of interviews underwent reliability testing (see Boult et al. for more details on the parent study) [22].

Study sample

During the baseline interview, patients enrolled in Guided Care were asked whether they received assistance with activities of daily living (ADL), instrumental activities of daily living (IADL) or health care tasks from a family member or unpaid friend. Patients who reported receiving assistance were then asked to identify the person who helped the most (their ‘caregiver’). These identified caregivers were then invited to participate in the Guided Care study and baseline interview. Patients identified 353 eligible caregivers; 86% consented to participate (N = 308). Following consent, baseline interviews were conducted with caregivers in the caregivers' homes by the trained interviewers. If the patient and caregiver lived together, efforts were made to conduct the baseline interview with the caregiver and patient separately. The time between the patient and caregiver interviews averaged 26 days (see Wolff et al. for more detail) [23].

Outcome measure

Quality of care was assessed using the Patient Assessment of Chronic Illness Care (PACIC), a validated measure of patients' perceptions of the quality of chronic illness care [24]. It consists of 20 Likert-scaled items that ask about the frequency with which specific care processes occurred during the past 6 months (response categories: almost never = 1, generally not = 2, sometimes = 3, most of the time = 4 and almost always = 5). For caregivers, the PACIC questions were modified to inquire about the caregiver's perceptions of the quality of chronic illness care (e.g. ‘Over the past six months, when (PATIENT) received care for his/her chronic illness, he/she was…’). Caregivers were also given the opportunity to respond ‘don't know’ on any PACIC item. Items on the PACIC that were answered by the patient or caregiver with responses of ‘don't know’ or ‘refuse’ were counted as missing.

Previous validation studies using confirmatory factor analysis have shown that the PACIC measures a one-dimensional construct [25]. Therefore, a median PACIC item score (the median response across all cases to a single item) and a median respondent score (the median response across all 20 PACIC items for a single respondent) were constructed. We hypothesized a priori that agreement would differ across objective and subjective items, based on previous research [12, 14, 15]. To distinguish objective PACIC items from subjective items, we convened a panel of experts including both geriatricians and health services researchers. Through a Delphi process, questions were categorized into (1) those that could be answered objectively, (2) those that inquired about communication about health care and (3) those that were subjective in nature.
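
To illustrate this scoring, the following minimal Python sketch (not the study's code) computes a median respondent score, treating ‘don't know’ and ‘refuse’ answers as missing and flagging respondents who are missing 25% or more of the items, the exclusion rule applied in the Results. The ‘DK’/‘REF’ codes are hypothetical labels used only for the example.

```python
import numpy as np
import pandas as pd

# PACIC response categories, as listed above.
LIKERT = {"almost never": 1, "generally not": 2, "sometimes": 3,
          "most of the time": 4, "almost always": 5}

def median_respondent_score(responses, max_missing_frac=0.25):
    """Median response across the 20 PACIC items for one respondent.

    'Don't know' / 'refuse' answers (here coded "DK"/"REF") are treated as
    missing; respondents missing 25% or more of the items are returned as
    NaN, mirroring the exclusion rule used in the study.
    """
    vals = pd.to_numeric(pd.Series(responses), errors="coerce")  # DK/REF -> NaN
    if vals.isna().mean() >= max_missing_frac:
        return np.nan
    return float(vals.median(skipna=True))

# Hypothetical respondent who answered 18 items and said "don't know" twice.
example = [2, 3, 2, 2, "DK", 3, 2, 4, 2, 3, 2, 2, 3, "DK", 2, 3, 2, 2, 3, 2]
print(median_respondent_score(example))  # -> 2.0
```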

Exploratory variables

The baseline interviews with patients also collected information regarding sociodemographic characteristics (age, gender, race, education and financial situation), health (SF-36 [26] and number of medications), self-reported chronic conditions (hypertension, angina, congestive heart failure, heart attack, stroke, asthma, chronic obstructive pulmonary disease, arthritis, sciatica, diabetes, cancer, osteoporosis, hip fracture and dementia) and the nature of assistance received from caregivers (number of ADL, IADL and health care tasks). The baseline interviews with caregivers also collected information about sociodemographic characteristics, including age, gender, education, relationship to patient, co-residence with the patient and average hours of care provided in a typical week. Caregivers were also administered the Caregiver Strain Index (CSI), a 13-item index (range: 0–24) originally developed to screen for caregiver strain after hospital discharge of an elderly family member [27]. Patients and caregivers were administered the Health Care Task Difficulty (HCTD) scale, which assessed their difficulty in performing (or assisting with) health care management tasks such as taking medication, visiting health care providers and managing medical bills. Using this scale, caregivers and patients were categorized into no (HCTD = 0), low (HCTD = 1), medium (HCTD = 2) or high (HCTD = 3+) difficulty groups [3].
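
To make the HCTD grouping explicit, a minimal sketch with made-up scores (not the study's code) could apply the no/low/medium/high cut points as follows:

```python
import pandas as pd

# Hypothetical HCTD scores for a few respondents; the cut points
# (0 = no, 1 = low, 2 = medium, 3+ = high) follow the text above.
hctd = pd.Series([0, 1, 3, 2, 5, 0], name="HCTD")
difficulty = pd.cut(hctd, bins=[-1, 0, 1, 2, float("inf")],
                    labels=["no", "low", "medium", "high"])
print(pd.concat([hctd, difficulty.rename("difficulty group")], axis=1))
```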

Analysis

To quantify agreement, a weighted kappa statistic was calculated. The weighted kappa (Kw) statistic describes the level of agreement, corrected for agreement by chance and adjusted for the ordinal nature of the response categories. Weights are based on the degree of difference in response: for example, a patient and caregiver whose responses differed by one category (almost never vs. generally not) would be weighted 0.25, whereas a pair whose responses differed by four categories (almost never vs. almost always) would be weighted 1.00. A higher Kw indicates higher agreement (Kw > 0.40 is considered ‘moderate agreement’) [28]. Analyses were conducted using Stata v.11 [29]. Confidence intervals were estimated using bootstrapping techniques with the kapci command [30].
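
To make the weighting scheme concrete, the sketch below re-implements a linearly weighted kappa with a percentile-bootstrap confidence interval in Python. It is an illustration only (the study's analyses were run in Stata with the kapci command), and the toy response vectors are hypothetical stand-ins for patient and caregiver answers to a single PACIC item coded 1–5.

```python
import numpy as np

def weighted_kappa(x, y, k=5):
    """Linearly weighted kappa for two ordinal ratings coded 1..k.

    Disagreement weights are |i - j| / (k - 1): a one-category difference
    (e.g. 'almost never' vs. 'generally not') is weighted 0.25 and a
    four-category difference ('almost never' vs. 'almost always') 1.00.
    """
    x, y = np.asarray(x, dtype=int), np.asarray(y, dtype=int)
    obs = np.zeros((k, k))
    for xi, yi in zip(x, y):                 # observed cross-classification
        obs[xi - 1, yi - 1] += 1
    obs /= obs.sum()
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))   # chance agreement
    i, j = np.indices((k, k))
    w = np.abs(i - j) / (k - 1)              # linear disagreement weights
    denom = (w * exp).sum()
    # Undefined when both raters give the same constant rating; treat as 1.0.
    return 1.0 if denom == 0 else 1.0 - (w * obs).sum() / denom

def bootstrap_ci(x, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for Kw; dyads are resampled together."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x), np.asarray(y)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), len(x))
        stats.append(weighted_kappa(x[idx], y[idx]))
    return tuple(np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)]))

# Hypothetical patient/caregiver responses to one PACIC item (1-5 scale).
pt = [2, 3, 2, 4, 1, 2, 3, 5, 2, 2]
cg = [3, 3, 2, 2, 2, 4, 3, 4, 1, 3]
kw = weighted_kappa(pt, cg)
lo, hi = bootstrap_ci(pt, cg)
print(f"Kw = {kw:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```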

Initially, we calculated a patient–caregiver Kw for each PACIC item to examine the variability in agreement across items. The lack of significant variability across individual items allowed us to calculate a median patient–caregiver Kw reflecting overall agreement between patients and caregivers across all 20 items. The association between caregiver/patient characteristics and agreement was assessed by comparing Kw and the corresponding 95% CI across characteristics; a significant difference was indicated by non-overlapping 95% CIs. To allow for comparison, all continuous variables (age, SF-36, number of chronic conditions, hours of care and CSI) were dichotomized at the sample median.
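
Continuing the sketch above (illustrative only; the data and variable names are hypothetical, and the functions weighted_kappa and bootstrap_ci come from the previous block), subgroup agreement could be compared by splitting dyads at the sample median of a continuous characteristic and checking whether the two bootstrap confidence intervals overlap:

```python
import numpy as np

def compare_agreement_by_subgroup(pt, cg, covariate):
    """Split dyads at the sample median of a continuous characteristic and
    compare patient-caregiver agreement (Kw) between the two subgroups.
    A difference is flagged as significant when the two 95% CIs do not
    overlap, mirroring the rule described in the text."""
    pt, cg, cov = map(np.asarray, (pt, cg, covariate))
    above = cov >= np.median(cov)
    out = {}
    for label, mask in (("below median", ~above), ("at/above median", above)):
        kw = weighted_kappa(pt[mask], cg[mask])
        lo, hi = bootstrap_ci(pt[mask], cg[mask])
        out[label] = {"Kw": kw, "95% CI": (lo, hi)}
    (lo1, hi1), (lo2, hi2) = (out[k]["95% CI"] for k in out)
    out["significant"] = hi1 < lo2 or hi2 < lo1   # non-overlapping CIs
    return out

# Hypothetical covariate (e.g. medications per day) for the same toy dyads.
meds = [3, 12, 4, 9, 11, 2, 14, 6, 10, 5]
print(compare_agreement_by_subgroup(pt, cg, meds))
```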

A weighted kappa is limited by the prevalence of the trait being measured (e.g. a measure of a rare trait is likely to have a low kappa). To account for the possible bias inherent in the kappa statistic, we conducted a sensitivity analysis with polychoric correlations. The polychoric correlation is used when examining correlation across ordinal categories [31].

Results

From the original 308 patient–caregiver dyads, we excluded dyads in which the caregiver served as the patient's proxy respondent (N = 41). We also excluded dyads in which the patient and/or caregiver had a missing response (no response or a ‘don't know’ response) to 25% or more of the PACIC items (N = 20). The resulting sample size for analysis was 247 patient–caregiver dyads.

The agreement between patients and their caregivers on individual PACIC items is described in Table 1. Kw values were uniformly low, ranging from 0.06 to 0.30. In general, agreement between patients and caregivers was higher for the objective PACIC items (Kw = 0.25, 95% CI: 0.16 to 0.33) than for the communication items (Kw = 0.17, 95% CI: 0.08 to 0.26) and the subjective items (Kw = 0.15, 95% CI: 0.06 to 0.24), although these differences were not significant. Fig. 1 shows the distribution of median respondent scores on the PACIC for patients versus caregivers: the median response across all the PACIC items for each patient is shown on the y-axis versus the corresponding median response of the patient's caregiver on the x-axis. The scatter plot shows very little agreement between patients and caregivers. The Kw for agreement between patients and caregivers on the overall PACIC was 0.15 (95% CI: 0.06 to 0.24).

Table 1 Agreement between patients and caregivers on the PACIC scale (N = 247 patient–caregiver dyads)

Items begin: ‘When I/PATIENT received care for my/his/her chronic illness over the past 6 months, I/PATIENT was…’

| Item | Kw | Median response (a), PT | Median response (a), CG | % Missing (b) |
| --- | --- | --- | --- | --- |
| Objective items | | | | |
| Given a list of things I/PATIENT should do to improve my/his/her health. | 0.20 | 2 | 3 | 2 |
| Given a copy of my/his/her treatment plan. | 0.30 | 3 | 3 | 4 |
| Contacted after a visit to see how things were going. | 0.13 | 2 | 3 | 2 |
| Referred to a dietician, health educator or counselor. | 0.12 | 2 | 2 | 0 |
| ALL OBJECTIVE ITEMS (c) | 0.25 | 2 | 2.5 | |
| Communication items | | | | |
| Asked for my/his/her ideas when we made a treatment plan. | 0.11 | 2 | 2 | 4 |
| Given choices about treatment to think about. | 0.12 | 2 | 3 | 1 |
| Asked to talk about any problems with my/his/her medicines or their effects. | 0.13 | 3 | 3 | 4 |
| Asked to talk about my/his/her goals in caring for my/his/her illness. | 0.06 | 2 | 3 | 0 |
| Encouraged to go to a specific group or class to help me cope with my/his/her chronic illness. | 0.25 | 2 | 2 | 2 |
| Asked questions about my/his/her health habits. | 0.19 | 3 | 3 | 3 |
| Asked how my/his/her chronic illness affects my/his/her life. | 0.13 | 4 | 4 | 3 |
| Encouraged to attend programs in the community that could help me/him/her. | 0.24 | 1 | 2 | 1 |
| Told how my/his/her visits with other types of doctors, like the eye doctor or surgeon, helped my/his/her treatment. | 0.27 | 2 | 3 | 4 |
| Asked how my/his/her visits with other doctors were going. | 0.15 | 2 | 2 | 4 |
| ALL COMMUNICATION ITEMS (c) | 0.17 | 2 | 2 | |
| Subjective items | | | | |
| Satisfied that my/his/her care was well organized. | 0.16 | 2 | 2 | 2 |
| Shown how what I/PATIENT did to take care of my/his/her illness influenced my/his/her condition. | 0.13 | 2 | 3 | 2 |
| Helped to set goals to improve my/his/her eating or exercise. | 0.19 | 4 | 4 | 2 |
| Sure that my/his/her doctor or nurse thought about my/his/her values and my/his/her traditions when they recommended treatments to me. | 0.19 | 2 | 3 | 7 |
| Helped to make a treatment plan I/PATIENT could do in my/his/her daily life. | 0.13 | 2 | 2 | 4 |
| Helped to plan ahead so I/PATIENT could take care of my/his/her illness even in hard times. | 0.23 | 2 | 2 | 4 |
| ALL SUBJECTIVE ITEMS (c) | 0.15 | 3 | 3 | |
| AGGREGATE PACIC AGREEMENT | 0.15 | 2 | 2.5 | |
  • PT, patient; CG, caregiver.

  • (a) PACIC response categories: 1 = almost never; 2 = generally not; 3 = sometimes; 4 = most of the time; 5 = almost always.

  • (b) Missing refers to the proportion of patient–caregiver dyads that were not included in the analysis because the patient or caregiver did not respond to the question or responded ‘don't know’.

  • (c) Aggregate kappa values reflect the level of agreement between the subscale's calculated median scores for patients and caregivers.

Figure 1

This scatter plot shows the median PACIC response for each patient–caregiver dyad in the sample. The data points are presented with a slight jitter (a small random offset from the actual value) to allow for easier viewing.

Table 2 displays the level of agreement between caregivers' and patients' aggregate median PACIC scores according to patient characteristics. Only gender and number of medications per day were significantly associated with the level of agreement. Male patients had significantly less agreement with their caregivers (Kw = 0.05, 95% CI: −0.05 to 0.15) than female patients (Kw = 0.24, 95% CI: 0.15 to 0.33). Similarly, patients taking ten or more medications a day had significantly less agreement with their caregivers (Kw = 0.03, 95% CI: −0.11 to 0.17) than patients taking five or fewer medications (Kw = 0.34, 95% CI: 0.17 to 0.51). We also explored the level of agreement by each chronic condition self-reported in the baseline survey and found no significant differences (data not shown).

Table 2 Agreement between patient and caregiver median PACIC scores, by patient characteristics

| Patient characteristic | N | % | Kw | 95% CI | Median response, PT | Median response, CG |
| --- | --- | --- | --- | --- | --- | --- |
| Gender | | | | | | |
| Male | 117 | 47 | **0.05** | (−0.05, 0.15) | 2.0 | 3.0 |
| Female | 130 | 53 | **0.24** | (0.15, 0.33) | 2.0 | 2.5 |
| Age | | | | | | |
| <77 years | 111 | 45 | 0.15 | (0.01, 0.29) | 2.5 | 3.0 |
| 77+ years | 136 | 55 | 0.14 | (0.03, 0.25) | 2.0 | 2.5 |
| Race | | | | | | |
| Non-White | 130 | 53 | 0.10 | (−0.02, 0.22) | 2.0 | 2.0 |
| White | 117 | 47 | 0.17 | (0.06, 0.28) | 3.0 | 3.0 |
| Education | | | | | | |
| <12 years | 79 | 32 | 0.26 | (0.09, 0.43) | 2.0 | 2.5 |
| 12+ years | 168 | 68 | 0.11 | (0.00, 0.22) | 2.0 | 3.0 |
| Finances (a) | | | | | | |
| Enough money | 214 | 87 | 0.14 | (0.06, 0.22) | 2.0 | 2.5 |
| Not enough money | 33 | 13 | 0.28 | (0.03, 0.53) | 2.0 | 2.5 |
| No. of chronic conditions | | | | | | |
| 0–4 | 110 | 45 | 0.16 | (0.00, 0.33) | 2.0 | 2.5 |
| 5–11 | 137 | 55 | 0.14 | (0.02, 0.26) | 2.0 | 3.0 |
| Physical health (SF-36) | | | | | | |
| <35 | 110 | 45 | 0.22 | (0.12, 0.32) | 2.0 | 2.8 |
| 35+ | 137 | 55 | 0.10 | (−0.04, 0.24) | 2.0 | 2.5 |
| Mental health (SF-36) | | | | | | |
| <49 | 117 | 47 | 0.14 | (0.01, 0.27) | 2.0 | 2.5 |
| 49+ | 130 | 53 | 0.16 | (0.06, 0.26) | 2.0 | 2.8 |
| ADL help | | | | | | |
| No | 185 | 75 | 0.12 | (0.02, 0.22) | 2.0 | 2.5 |
| Yes | 62 | 25 | 0.26 | (0.09, 0.43) | 2.0 | 2.5 |
| IADL help | | | | | | |
| No | 48 | 19 | 0.30 | (0.15, 0.45) | 2.0 | 3.0 |
| Yes | 199 | 81 | 0.11 | (0.00, 0.23) | 2.0 | 2.5 |
| HCT help | | | | | | |
| 0–4 tasks | 54 | 22 | 0.19 | (0.02, 0.36) | 2.0 | 2.0 |
| 5–6 tasks | 51 | 21 | 0.22 | (0.04, 0.40) | 2.0 | 2.5 |
| 7–8 tasks | 64 | 26 | 0.04 | (−0.16, 0.24) | 2.0 | 3.0 |
| 9+ tasks | 74 | 30 | 0.19 | (0.05, 0.33) | 2.5 | 3.0 |
| Number of meds | | | | | | |
| <5 | 45 | 18 | **0.34** | (0.17, 0.51) | 3.0 | 3.0 |
| 5–10 | 136 | 55 | 0.15 | (−0.04, 0.34) | 2.0 | 2.5 |
| 10+ | 66 | 27 | **0.03** | (−0.11, 0.17) | 2.0 | 3.0 |
  • Numbers in bold highlight a significant difference between Kw values across a characteristic.

  • PT, patient; CG, caregiver; Kw, weighted kappa statistic; ADL, activities of daily living; IADL, instrumental activities of daily living; HCT, health care tasks.

  • (a) Question asked: ‘How much money do you have at the end of the month?’

Table 3 shows the level of agreement between caregivers' and patients' aggregate median PACIC scores according to caregiver characteristics. Only the level of caregiver HCTD was significantly associated with the level of agreement between patients and caregivers. Caregivers reporting a high level of HCTD had significantly less agreement with patients about the quality of care (Kw = −0.05; 95% CI: −0.22 to 0.10) than caregivers reporting no difficulty (Kw = 0.31, 95% CI: 0.20 to 0.42). Sensitivity analyses using polychoric correlation did not change these results.

Table 3 Agreement between patient and caregiver median PACIC scores, by caregiver characteristics

| Caregiver characteristic | N | % | Kw | 95% CI | Median response, PT | Median response, CG |
| --- | --- | --- | --- | --- | --- | --- |
| Gender | | | | | | |
| Male | 73 | 30 | 0.29 | (0.13, 0.45) | 2.0 | 2.5 |
| Female | 174 | 70 | 0.10 | (0.00, 0.20) | 2.0 | 2.5 |
| Age | | | | | | |
| <60 years | 110 | 45 | 0.22 | (0.04, 0.40) | 2.5 | 2.5 |
| 60+ years | 137 | 55 | 0.11 | (0.00, 0.22) | 2.0 | 2.5 |
| Education | | | | | | |
| <12 years | 34 | 14 | 0.13 | (0.0, 0.26) | 2.0 | 2.5 |
| 12 or more years | 213 | 86 | 0.32 | (0.10, 0.54) | 2.0 | 2.5 |
| Finances (a) | | | | | | |
| Enough money | 219 | 89 | 0.14 | (0.02, 0.26) | 2.0 | 3.0 |
| Not enough money | 28 | 11 | 0.31 | (0.08, 0.54) | 3.0 | 2.0 |
| Relationship | | | | | | |
| Spouse | 126 | 51 | 0.12 | (−0.03, 0.27) | 2.0 | 3.0 |
| Son | 29 | 12 | 0.28 | (0.06, 0.50) | 3.0 | 3.0 |
| Daughter | 67 | 27 | 0.21 | (0.06, 0.36) | 2.0 | 2.5 |
| Other | 25 | 10 | −0.04 | (−0.26, 0.18) | 2.0 | 2.5 |
| Co-residence | | | | | | |
| No | 70 | 28 | 0.09 | (−0.08, 0.27) | 2.0 | 2.3 |
| Yes | 177 | 72 | 0.18 | (0.07, 0.29) | 2.0 | 3.0 |
| Hours of care per week | | | | | | |
| <14 h | 112 | 45 | 0.08 | (−0.02, 0.18) | 2.0 | 2.5 |
| 14+ h | 135 | 55 | 0.22 | (0.10, 0.33) | 2.0 | 2.5 |
| Strain | | | | | | |
| CSI <6 | 118 | 48 | 0.17 | (0.06, 0.28) | 2.0 | 2.5 |
| CSI 6+ | 129 | 52 | 0.14 | (0.02, 0.26) | 2.5 | 2.5 |
| HCT difficulty | | | | | | |
| None | 106 | 43 | **0.31** | (0.20, 0.42) | 2.0 | 2.5 |
| Low | 48 | 19 | 0.08 | (−0.14, 0.29) | 2.0 | 3.0 |
| Medium | 33 | 13 | 0.05 | (−0.22, 0.31) | 2.0 | 2.5 |
| High | 60 | 24 | **−0.05** | (−0.22, 0.10) | 2.5 | 3.0 |
  • Numbers in bold highlight a significant difference between Kw values across a characteristic.

  • PT, patient; CG, caregiver; Kw, weighted kappa statistic; HCT, health care tasks.

  • (a) Question asked: ‘How much money do you have at the end of the month?’

Overall, caregivers rated the quality of care more highly than patients (median respondent scores for patients and caregivers were 2 and 2.5, respectively). To further explore the direction of the differences between patients' and caregivers' median scores, we generated histograms of the number of caregivers rating care above and below each median patient response (see Fig. 2). Across all patients, 41.3% (102/247) of caregivers rated care higher than patients and 32.8% (81/247) of caregivers rated care lower than patients. The magnitude of discordant responses between patients and caregivers varied; 25% (62/247) of patient–caregiver dyad responses differed by 2 or more points in median PACIC score. Among the 102 caregivers rating care higher than patients, 41 rated care at least 2 points higher than patients. Among the 81 caregivers rating care lower than patients, 21 rated care at least 2 points lower than patients.

Figure 2

These histograms show the distribution of differences in median PACIC score (caregiver score minus patient score) for different patient median scores. Black bars represent the number of caregivers who rated care more highly than patients (n = 102); gray bars represent caregivers who rated care the same as patients (n = 64); and white bars represent caregivers who rated care as worse than patients (n = 81). PACIC, Patient Assessment of Chronic Illness Care instrument; CG, caregiver; PT, patient.

Discussion

Agreement between patients' and caregivers' ratings of the quality of chronic illness care was low. The low level of agreement was consistent across items in the PACIC and across patient and caregiver subgroups. To our knowledge, no previous study has examined concordance between patient and caregiver assessments of the quality of chronic illness care. However, similar to studies of patient–proxy agreement on domains such as quality of life, disability and symptom severity, our study demonstrated that patient–caregiver dyads had a lower level of agreement on subjective PACIC questions and better agreement on objective questions [9, 12, 13]. Despite this difference, even for objective questions such as ‘Did you receive a copy of your treatment plan?’, agreement between patients and caregivers was still low (Kw = 0.30). Also in keeping with prior research on patient–proxy agreement, caregivers who were experiencing more difficulty assisting with health care tasks had lower levels of agreement compared with caregivers who reported no difficulty [9, 12, 14]. However, caregivers experiencing difficulty assisting with health care tasks still rated care quality, on average, more highly than patients.

Although previous studies of patient–proxy agreement have shown that patients in worse health tend to show a lower level of agreement with proxies in areas such as quality of life, disability and satisfaction with care, we did not see a significant difference in PACIC agreement for patients in better health compared with patients in worse health (measured by the SF-36 and the number of chronic conditions). We did, however, observe a significant relationship between the number of medications and agreement. Patients who took 10 or more medications a day were less likely to agree with their caregivers about the quality of chronic illness care than patients who took 5 or fewer medications. This finding suggests that agreement may be more closely linked with the complexity of patients' treatment plans than with their health status.

We are not aware of published literature examining agreement between patients and external raters on the quality of care; however, we can explore these results in the context of previous studies of patient and proxy agreement on satisfaction with care. Previous studies of patient–proxy agreement have shown that proxies tend to rate satisfaction with health care higher than patients do [7, 8]. In this study, caregivers did not consistently report higher satisfaction with care on the individual PACIC satisfaction item (‘satisfied that my/his/her care was well organized’); however, caregivers tended to rate the quality of care more favorably than patients overall. Among caregivers who disagreed with patients by 2 or more points, 34% (21/62) rated care at least 2 points lower than patients, whereas the remaining 66% (41/62) rated care at least 2 points higher.

This study demonstrated significant levels of disagreement between patients and their caregivers about the quality of chronic illness care being received in a primary care office. Future qualitative research is needed to elucidate the underlying reasons patients and caregivers rate the quality of care differently. Specifically, qualitative research is needed to understand when disagreement between caregivers and patients is due to measurement error (e.g. the caregiver did not know how to answer the question or the patient was less accurate because of cognitive impairment) and when the difference between patient and caregiver responses reflects a meaningful difference in opinion about the care being provided.

Limitations

There are several limitations to this study that are important to consider. First, we were not able to confirm whether the caregivers accompanied the patients to physician office visits and observed the care that was provided. To account for this limitation, we gave caregivers the option to respond ‘don't know’ to any of the PACIC questions; caregivers who responded ‘don't know’ to 25% or more of the PACIC questions were excluded from analysis. We therefore made the assumption that, by answering the questions, the caregiver felt knowledgeable about the quality of care the patient received. Second, the patient sample examined in this study included only high-risk older adults (i.e. patients at high risk of heavy utilization of health care in the coming year). The complex health needs of these patients likely influenced the way in which they rated the quality of their care. While this limits generalizability, the findings are relevant to aging and higher-risk populations with multiple comorbidities [32]. Finally, it is important to note the limitations of the kappa statistic used in this analysis. The kappa statistic can be influenced by the relative frequency of the event and is lower for rare events. Although we conducted a sensitivity analysis using a polychoric correlation that showed the same results, caution must be used when comparing kappa statistics across studies.

Conclusion

The role of family caregivers in primary care is increasingly recognized as important, and some policies and strategies explicitly discuss not just patient-centered care, but patient- and family-centered care [33–35]. As informal caregivers increasingly play a role in the health care management of patients, they can offer a unique perspective that complements the patient's assessment of the quality of chronic illness care [10, 19]. Unlike previous studies that have explored caregiver/proxy and patient concordance on measures of quality of life, symptoms, pain or function, we do not presume that the patient's assessment of quality of care is the gold standard. It is possible that for some patient–caregiver dyads, the caregiver's report may be more accurate than the patient's report. This may be particularly true for caregivers who are providing substantial support in managing the patient's health care or for patients with cognitive impairment. However, outside of end-of-life care, very few health care quality survey measures are designed to solicit the family caregiver's perspective on the care experience.

Internationally, patient assessment of the quality of care is routinely used in pay-for-performance [36–38], quality improvement [39, 40] and value-based purchasing [41]. This research suggests that, for older adults who are frequently accompanied by family caregivers to medical office visits, health care quality surveys that focus solely on the patient perspective present a limited view of the full health care experience. More quantitative and qualitative research is needed to understand how caregivers assess the quality of care and how their assessments differ from patients' assessments [38]. This research can be used to inform the design of new health care quality surveys that explicitly elicit the unique perspectives of patients and their family caregivers on the experience of care, presenting a more complete picture of the quality of patient- and family-centered care.

Conflict of interest

None declared.

Funding

E.R.G. completed this work during a fellowship at the Johns Hopkins School of Medicine with support from the National Institute of Aging (NIH NIA T32 A600021 052507). This work was additionally supported by the Agency for Healthcare Research and Quality 1R21HS017650-01. C.M.B. was supported by the Paul Beeson Career Development Award Program (NIH NIA K23 AG 032910, AFAR, the John A. Hartford Foundation, the Atlantic Philanthropies, the Starr Foundation and an anonymous donor). J.L.W. was supported by The NIH NIMH 5K01MH82885-2. The data used in this project were supported by the John A. Hartford Foundation, the Agency for Healthcare Research and Quality, NIH NIA R01 HS014580-01A1, the Jacob and Valeria Langeloth Foundation, Kaiser-Permanente Mid-Atlantic States, Johns Hopkins HealthCare and the Roger C. Lipitz Center for Integrated Health Care.

Acknowledgements

The authors acknowledge the invaluable contributions to this study made by Johns Hopkins Community Physicians, MedStar, Battelle Centers for Public Health Research, the Centers for Medicare & Medicaid Services and all of the participating patients, caregivers, physicians and Guided Care nurses.

Footnotes

  • This work was previously presented at the 2010 meeting of the American Public Health Association in Denver, CO.

References
