
The impact of information disclosure on quality of care in HMO markets

Kyoungrae Jung
DOI: http://dx.doi.org/10.1093/intqhc/mzq062 | Pages 461–468 | First published online: 12 October 2010

Abstract

Objective To examine the impact of voluntary information disclosure on quality of care in Health Maintenance Organization (HMO) markets in the USA.

Setting Commercial HMOs in the USA that collected a set of standardized quality measures, the Health Plan Employer Data and Information Set (HEDIS), between 1997 and 2000 (1062 HMO-years). After collecting the HEDIS data, some HMOs disclosed their HEDIS quality scores to the public (disclosing HMOs), whereas others declined to disclose the information (non-disclosing HMOs).

Design A secondary data analysis based on 4 years of quality scores of HMOs. The study uses non-disclosing plans as a control group. A treatment-effects model is used to address a potential bias associated with voluntary disclosure decisions by HMOs.

Main Outcome Measure(s) The study focuses on 13 HEDIS clinical indicators. On the basis of these indicators, a plan-level composite score and four domain scores were constructed. The four domains are childhood immunizations, treatments/exams for chronic conditions, screening tests and maternity services.

Results Public disclosure leads to an increase of 0.72 composite score units, which corresponds to approximately 7 percentage points on the original quality scale (0–100%). The degree of quality improvement differed by type of service.

Conclusions Public release of quality information had a significant and positive effect on quality in HMO markets during the earlier years of the voluntary disclosure program; however, the improvement was not universal across all quality measures.

  • quality information
  • public disclosure
  • quality of care
  • HMO markets

Introduction

Public reporting of quality information is a growing movement in the current US healthcare system. Almost every state has quality reporting programs for hospitals, and the Centers for Medicare and Medicaid Services (CMS) has initiated public disclosure programs for hospitals, managed care plans, nursing homes and home health agencies. Information disclosure has also received attention in other countries. For example, in South Korea, a national insurance review agency has been publicly releasing information on antibiotic use rates among healthcare organizations since 2006 [1]. Several countries have attempted to develop and collect quality measures capturing consumer experience with healthcare systems [2].

The rationale behind information disclosure is compelling. Economic theory suggests that if consumers cannot observe quality of services, providers will not have an incentive to improve quality [3]. Thus, it is expected that publicly releasing relevant information will help consumers recognize differences in quality among individual providers, which will in turn motivate providers to improve quality. Some also suggest that public reporting may directly influence healthcare providers to change their practice because it serves as a mechanism to provide feedback on their practice: if providers are informed about competitors’ quality scores, they may be motivated to improve quality [4]. With these expectations, information disclosure has been adopted in varied healthcare settings.

As public reporting rapidly expands, several studies have examined the effects of information disclosure on quality in hospital or nursing home markets but have produced mixed results [5–9]. In Health Maintenance Organization (HMO) markets, some studies reported improvements in HMOs’ quality scores following nation-wide public disclosure programs [10, 11]. However, because none of these studies used control groups, the reported improvement in HMO quality cannot be fully attributed to public disclosure initiatives. A recent study evaluated a reporting program for Medicare HMOs, using the Medicare Fee-For-Service (FFS) sector as a control group, and reported no significant positive effect of public reporting on quality [12]. However, the Medicare FFS sector may not be a proper comparison group because other quality initiatives for FFS beneficiaries may have been implemented during the study period.

These mixed results from prior studies leave the role of information in quality inconclusive. Complementing the existing literature, my study estimates an empirical model to examine whether public reporting leads to quality improvement, using a unique data set available from a voluntary disclosure program in HMO markets. All the HMOs in the study collected quality information; however, after collecting the data, some plans declined to disclose it to the public. During the first 4 years of the program, about 75% of the HMOs that collected the data voluntarily disclosed it. The availability of quality data for non-disclosers provides an opportunity to use them as a comparison group. I address a potential bias associated with HMOs’ voluntary disclosure decisions using a treatment-effects model.

HEDIS background

Since the growth of managed care plans in the late 1980s, the quality of care provided in managed care systems has been an important policy issue. Although HMOs provide comprehensive care for a lower premium than traditional FFS insurance, they raised concerns about low quality. This increased the attention devoted to assessing and disclosing the quality of care offered by HMOs [13]. The demand for disclosing quality information also came from HMOs themselves, which wished to demonstrate their quality improvement efforts [14]. Although individual HMOs had developed quality improvement activities, the entire HMO industry was criticized for providing low-quality care, and some HMOs wanted to demonstrate their commitment to quality [15].

Since 1996, the National Committee for Quality Assurance (NCQA), an independent, not-for-profit organization dedicated to improving healthcare quality, has collected and reported performance data on HMOs through the Health Plan Employer Data and Information Set (HEDIS), a set of standardized quality measures encompassing several dimensions of HMO quality. Collecting HEDIS data and reporting them to NCQA is voluntary, and after the data are submitted to NCQA, public disclosure of the HEDIS scores is also voluntary. Some HMOs did not disclose after collecting the data, and I use these non-disclosing plans as a comparison group.

Methods

Data for the study were supplied by NCQA, which provides information on an HMO's disclosure status and HEDIS scores in a given year. The study sample consists of commercial HMOs that submitted HEDIS data to NCQA in any year between 1997 and 2000. The number of HMOs in the sample is 382. About 80% of these HMOs have more than 2 years of HEDIS data. Using an HMO-year as the unit of analysis (i.e. treating an HMO's quality data in a given year as a separate observation), the total number of observations is 1062 (HMO-years). Depending on year, 12–34% of HMOs declined to disclose after collecting the data. Table 1 presents the number of disclosing and non-disclosing HMOs by year.

Table 1

The number of disclosers and non-disclosers by year

HEDIS reporting year | Disclosers, n (%) | Non-disclosers, n (%) | Total, n (%)
1997 | 194 (87.8) | 27 (12.2) | 221 (100.0)
1998 | 174 (66.4) | 88 (34.0) | 262 (100.0)
1999 | 209 (69.7) | 92 (30.3) | 300 (100.0)
2000 | 220 (78.8) | 59 (21.2) | 279 (100.0)
Total | 797 (75.1) | 265 (24.9) | 1062 (100.0)

The HEDIS data were merged with the InterStudy's Competitive Edge file, which contains information on HMO characteristics and enrollment. HMOs that submitted HEDIS data during the 4 years represented about half of the HMOs in the InterStudy file, and these HEDIS-reporting plans accounted for 77% of the total enrollment in the InterStudy HMOs. Variables from other data sources were then added to this merged file. The Area Resource File provides county-level socio-demographic characteristics. The 2000 US Census and Annual Reports from Health Policy Tracking Services were used for information on resident mobility and state mandates, respectively.

Variables that are available at the market level were aggregated to the HMO level using weighted averages across all the markets where the HMO operates, with the HMO's share of total enrollment in each market as the weight. Markets were defined using ‘health service areas’ (HSAs), which are based on hospital admissions [16]. This aggregation approach has been commonly used in the prior literature on US HMO markets [17, 18]. The HSA-based market definition improves upon purely geographic definitions because each HSA includes at least one hospital, which is necessary for HMOs to form provider networks [18].
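
As a concrete illustration of this weighting scheme, the following Python sketch aggregates hypothetical market-level variables to the plan-year level using enrollment shares as weights. The DataFrame and column names (market_df, plan_id, year, enrollment, pct_college, income_pc) are illustrative assumptions, not names from the study's data files.

```python
import pandas as pd

def aggregate_to_plan(market_df, market_vars):
    """Collapse market-level variables to the HMO-year level using each
    market's share of the plan's total enrollment as the weight."""
    df = market_df.copy()
    # Weight = market enrollment / plan's total enrollment in that year.
    df["weight"] = df["enrollment"] / df.groupby(["plan_id", "year"])["enrollment"].transform("sum")
    # Enrollment-weighted average of each market-level variable.
    for var in market_vars:
        df[var] = df[var] * df["weight"]
    return df.groupby(["plan_id", "year"])[market_vars].sum().reset_index()

# Example (hypothetical columns): plan_df = aggregate_to_plan(market_df, ["pct_college", "income_pc"])
```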

HMO quality is measured by a summary index and domain-level quality scores derived from the HEDIS indicators. I use a summary measure because plan-level composite scores capture the plan's general policy over quality provision and common technical capacity across all types of care [19]. It also helps draw a conclusion about plans’ quality decisions, and it has been reported to produce more intuitive results than individual measures [20]. Domain-level scores allow me to examine the effect of reporting on quality by type of services.

Summary and domain scores are constructed from process-oriented clinical quality indicators in HEDIS. Using clinical care measures is meaningful for this study because they are hard for consumers to observe without publicized information. Potential case-mix problems in quality comparisons are mitigated because the eligible population is specifically defined for each indicator. I use 13 indicators that are measured consistently across all 4 years.

Table 2 presents descriptive statistics for all measures included in the study. A childhood vaccination measure (measles, mumps and rubella) and prenatal care have the highest quality scores (over 80%). The eye exam measure for diabetic patients has the poorest scores (below 45%). The mean quality scores are significantly higher among disclosers than among non-disclosers (P < 0.001 for all measures). The measures are grouped into four domains based on factor analysis: childhood immunizations, treatments/exams for chronic conditions, screening tests and maternity services.

Table 2

Descriptive statistics of the HEDIS measures by disclosure status

HEDIS measures | Disclosers (N = 797): n, mean(a) (SD) | Non-disclosers (N = 265): n, mean(a) (SD)
Childhood immunization
 DTP (diphtheria, tetanus, pertussis) | 791, 80.7 (10.4) | 260, 70.7 (15.3)
 Hepatitis B | 784, 78.9 (11.7) | 260, 68.7 (17.4)
 H Influenza type B (HIB) | 789, 83.3 (9.8) | 260, 74.4 (13.7)
 MMR (measles, mumps, rubella) | 790, 88.5 (6.6) | 260, 83.1 (8.4)
 Polio (OPV) | 791, 85.6 (9.2) | 260, 76.7 (15.0)
 Combined immunization 1 | 777, 67.9 (13.1) | 259, 57.6 (16.1)
 Combined immunization 2 | 740, 58.8 (16.3) | 256, 50.2 (17.8)
 Domain average | 77.6 (9.9) | 68.8 (13.7)
Screening tests
 Mammogram | 794, 73.7 (6.9) | 260, 69.3 (6.7)
 Pap smear test | 785, 73.3 (8.7) | 262, 67.4 (10.7)
 Domain average | 73.5 (7.2) | 68.3 (7.8)
Treatments/exams for chronic illnesses
 Beta-blocker treatments | 550, 78.3 (16.1) | 154, 71.1 (16.6)
 Eye exam for diabetes | 784, 44.3 (14.4) | 256, 35.9 (11.3)
 Domain average | 60.4 (13.1) | 52.7 (10.9)
Maternity services
 Check-up after delivery | 744, 70.1 (15.7) | 243, 64.3 (19.3)
 Prenatal care in the first trimester | 778, 86.3 (10.1) | 254, 80.6 (15.1)
 Domain average | 77.8 (11.4) | 72.5 (14.7)
Summary measure
 Composite score(b) | 797, 0.17 (0.77) | 265, −0.49 (0.91)
 Weighted average | 797, 71.55 (8.68) | 265, 65.26 (9.74)
  • SD, standard deviation.

  • (a) Differences in the mean scores between disclosers and non-disclosers are statistically significant for all measures (P < 0.001).

  • (b) The composite score is constructed as a weighted average of standardized domain scores. The weight is the proportion of enrollees who are eligible for the measures in each domain, and weights are normalized to sum to one.

Once HMOs agree to disclose their quality scores, they cannot select which measures to disclose; however, because plans are not required to report scores for measures with fewer than 30 cases, 4.8% of values were missing overall. I imputed the missing values with predicted values from a regression model that uses the other quality scores available for a plan [21]. To derive a composite score, I first calculated the average domain scores in each year. Each average domain score was standardized to enable comparisons across domains. The composite score was then calculated as a weighted average of the four standardized domain scores, with the weight equal to the proportion of enrollees eligible for the measures in each domain (weights normalized to sum to 1). The overall mean of the composite score is 0.004, and the standard deviation is 0.85.
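
The following Python sketch illustrates the composite-score construction described above, under stated assumptions: it takes a hypothetical plan-year DataFrame that already contains average domain scores and counts of eligible enrollees per domain (column names are illustrative), and it standardizes domain scores within year, which is my reading of the text rather than a documented detail.

```python
import pandas as pd

DOMAINS = ["immunization", "chronic", "screening", "maternity"]

def composite_score(plan_df):
    """Standardize each domain score and combine them with eligibility weights."""
    df = plan_df.copy()
    for d in DOMAINS:
        grp = df.groupby("year")[f"{d}_score"]
        # Standardizing within year is an assumption; the paper states only that
        # domain scores were standardized to enable comparisons across domains.
        df[f"{d}_z"] = (df[f"{d}_score"] - grp.transform("mean")) / grp.transform("std")
    # Weight = share of the plan's eligible enrollees in each domain, normalized to sum to 1.
    elig = df[[f"{d}_eligible" for d in DOMAINS]]
    weights = elig.div(elig.sum(axis=1), axis=0)
    z = df[[f"{d}_z" for d in DOMAINS]]
    df["composite"] = (z.to_numpy() * weights.to_numpy()).sum(axis=1)
    return df
```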

The basic empirical model is written as:

QUALit = α + γ DISCit + Xitβ + Tt + εit  (1)

where QUALit measures the ith plan's quality score at time t, DISCit indicates whether the plan disclosed HEDIS scores to the public in year t, and Xit represents all control variables, including demand shifters, marginal cost factors and market environments. Time dummies (Tt) capture year-specific effects that are common to all plans. Standard errors are adjusted for clustering at the plan level.

An issue in this analysis is how to deal with the possibility of unobserved variables that influence both plan quality and disclosure status. For example, some plans may decide to disclose based on the preferences of the physicians with whom they contract, and the physicians arguing for disclosure may be high-quality providers. Consumers’ health risk also affects both quality and disclosure: sick people may have high demand for quality, but plans in markets with high-risk consumers are less likely to disclose because they are concerned about adverse selection [22]. Because variables representing physician characteristics and consumers’ health risk are incompletely measured, DISCit and εit will be correlated, and the ordinary least squares (OLS) estimate of the disclosure effect (γ) will be biased.

To address this problem, I use a treatment-effects approach, since disclosure is a binary variable [23]. The first step of this approach is to estimate a probit model of HMOs’ disclosure decisions and to form selection terms known as inverse Mills ratios (IMRs). These terms are then added to the second-stage quality model—Equation (1)—to correct the potential bias described above (i.e. the endogeneity problem). The first-stage model includes all covariates used in the second-stage quality model. In addition, to meet the exclusion restriction of the treatment-effects approach, the first-stage probit model should include identifying variables (IVs) that are strong predictors of disclosure but do not influence quality (i.e. these IVs are excluded from the second-stage model). As IVs, I use two variables: the proportion of residents in a county who moved within the past 5 years (resident mobility) and the number of state mandates on plan benefits. These variables are identified from prior literature that developed an economic model to explain HMOs’ disclosure decisions [22]. A sketch of the two-step estimator appears immediately below.
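
The following Python sketch illustrates the two-step treatment-effects estimator in general terms; it is not the author's code. It assumes a hypothetical DataFrame df with illustrative column names (qual for the composite score, disc for disclosure, plan_id for clustering, mobility and mandates for the identifying variables) and a placeholder list of controls, and its second-stage standard errors are not corrected for the estimated selection term.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

controls = ["pct_college", "income_pc", "hmo_penetration"]  # placeholder covariates

# Stage 1: probit for the disclosure decision, including the identifying variables.
Z = sm.add_constant(df[controls + ["mobility", "mandates"]])
probit_fit = sm.Probit(df["disc"], Z).fit()
xb = np.asarray(Z @ probit_fit.params)  # linear index from the probit model

# Selection terms (inverse Mills ratios): phi/Phi for disclosers,
# -phi/(1 - Phi) for non-disclosers.
imr = np.where(df["disc"] == 1,
               norm.pdf(xb) / norm.cdf(xb),
               -norm.pdf(xb) / (1.0 - norm.cdf(xb)))

# Stage 2: quality equation with the selection term added; the identifying
# variables are excluded. Standard errors are clustered at the plan level
# (not adjusted for the generated regressor in this simple sketch).
X = sm.add_constant(pd.concat([df[["disc"] + controls],
                               pd.Series(imr, index=df.index, name="imr")], axis=1))
te_fit = sm.OLS(df["qual"], X).fit(cov_type="cluster",
                                   cov_kwds={"groups": df["plan_id"]})
print(te_fit.summary())
```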

First, the resident mobility variable captures the extent to which consumers have experience with plans [24]. If consumers are inexperienced, they may turn to publicized information to learn about plan quality. Thus, low-quality plans may choose not to disclose their information in markets with inexperienced consumers. While influencing HMOs’ disclosure decisions, this variable is unlikely to contribute to HMO quality.

Second, the state mandates variable is used because consumers’ expectations about plan quality may be high in markets with stringent state regulations on plan benefits. During the study period, many state mandates were introduced, motivated by the managed care backlash of the 1990s [25]. A national survey found that the public supported government regulation to address the backlash [26]. If state governments responded to the backlash by implementing mandates, consumers’ perceptions of HMO quality in such states may have improved, which may have led some low-quality plans to withhold their quality information. Because most mandates concerned plan benefits, such as coverage of inpatient services after delivery, they are unlikely to affect HEDIS clinical quality.

The results on the IVs from the first-stage probit estimation are presented in Table 3 (the bottom rows of the second column; the results on other covariates from the first-stage model are not shown). The coefficients of both variables are significant in the first-stage model (P < 0.05), which implies that they are strong predictors of disclosure decisions by HMOs.

Table 3

Regression results of the quality model

Variables | OLS: coefficient (robust SE) | 95% CI | P-value | Treatment-effects model: coefficient (robust SE) | 95% CI | P-value
Public Disclosure | 0.39 (0.07)* | 0.26, 0.53 | 0.000 | 0.73 (0.24)* | 0.25, 1.20 | 0.003
Demand factors
 Percent college educated | −0.01 (0.01) | −0.04, 0.02 | 0.566 | −0.01 (0.01) | −0.03, 0.02 | 0.615
 Income per capita (1000) | −0.01 (0.02) | −0.05, 0.02 | 0.462 | −0.01 (0.01) | −0.04, 0.01 | 0.343
 Percent under poverty line | −0.03 (0.02) | −0.06, 0.01 | 0.151 | −0.02 (0.01) | −0.05, 0.01 | 0.128
 Percent whites | 0.00 (0.00) | −0.01, 0.00 | 0.353 | 0.00 (0.00) | −0.01, 0.00 | 0.240
 Percent unemployed | −0.09 (0.04)* | −0.17, −0.01 | 0.036 | −0.08 (0.03)* | −0.15, −0.02 | 0.013
Risk factors
 Death rates from CVD/DM (per 100 000) | 0.01 (0.00)* | 0.00, 0.02 | 0.001 | 0.01 (0.00)* | 0.00, 0.02 | 0.000
 Death rates among people under 65 (per 100 000) | 0.00 (0.00)* | −0.01, 0.00 | 0.012 | 0.00 (0.00)* | −0.01, 0.00 | 0.001
Marginal cost factors
 RN wage ($ per hour) | −0.01 (0.02) | −0.06, 0.04 | 0.684 | 0.00 (0.02) | −0.05, 0.04 | 0.840
 Medicare part A payment rates ($ per beneficiary) | 0.00 (0.00) | 0.00, 0.00 | 0.968 | 0.00 (0.00) | 0.00, 0.00 | 0.893
 Medicare part B payment rates ($ per beneficiary) | 0.00 (0.00) | 0.00, 0.00 | 0.987 | 0.00 (0.00) | 0.00, 0.00 | 0.984
 Population density (per 100 square miles) | 0.01 (0.02) | −0.03, 0.05 | 0.690 | 0.00 (0.01) | −0.03, 0.03 | 0.848
 Plan age | 0.01 (0.00)* | 0.00, 0.02 | 0.006 | 0.01 (0.00)* | 0.00, 0.02 | 0.005
 Medicare (binary variable) | 0.06 (0.06) | −0.06, 0.17 | 0.352 | 0.04 (0.05) | −0.06, 0.13 | 0.438
 Medicaid (binary variable) | −0.11 (0.07) | −0.24, 0.01 | 0.079 | −0.12 (0.05)* | −0.21, −0.02 | 0.016
 Number of new counties | 0.00 (0.00) | 0.00, 0.00 | 0.251 | 0.00 (0.00) | 0.00, 0.00 | 0.063
 Staff/group model HMO (binary variable) | 0.42 (0.10)* | 0.22, 0.61 | 0.000 | 0.40 (0.10)* | 0.20, 0.59 | 0.000
 IPA model HMO (binary variable) | −0.07 (0.06) | −0.20, 0.06 | 0.273 | −0.10 (0.05) | −0.20, 0.00 | 0.052
 National affiliation (binary variable) | −0.07 (0.08) | −0.24, 0.10 | 0.399 | −0.06 (0.06) | −0.17, 0.06 | 0.311
 BCBS affiliation (binary variable) | 0.02 (0.09) | −0.16, 0.20 | 0.865 | 0.02 (0.07) | −0.11, 0.15 | 0.771
 Federal qualification (binary variable) | 0.08 (0.06) | −0.04, 0.20 | 0.211 | 0.09 (0.05) | −0.01, 0.18 | 0.071
 For-profit HMO (binary variable) | −0.25 (0.08)* | −0.40, −0.10 | 0.001 | −0.23 (0.06)* | −0.35, −0.11 | 0.000
Market environments
 HMO penetration (%) | 0.01 (0.00)* | 0.00, 0.02 | 0.010 | 0.01 (0.00)* | 0.00, 0.02 | 0.010
 Number of competitors | −0.01 (0.01) | −0.02, 0.00 | 0.155 | −0.01 (0.01) | −0.02, 0.00 | 0.155
 Proportion of large firms (>500) | 0.04 (0.05) | −0.05, 0.13 | 0.420 | 0.03 (0.04) | −0.03, 0.10 | 0.323
 Competitors’ disclosure rates | 0.00 (0.00)* | 0.00, 0.01 | 0.024 | 0.00 (0.00) | 0.00, 0.01 | 0.076
 Competitors’ quality | 0.24 (0.07)* | 0.10, 0.38 | 0.001 | 0.22 (0.06)* | 0.10, 0.33 | 0.000
Model parameters
 Inverse Mills ratio | | | | −0.20 (0.14) | −0.48, 0.08 | 0.160
First-stage disclosure model estimates(a)
 Percent of move-in people within 5 years (in-migration) | −0.06 (0.01)* | −0.09, −0.04 | 0.000
 Number of state mandates | −0.06 (0.03)* | −0.11, 0.00 | 0.044
  • OLS, ordinary least squares regression. Standard errors are clustered at the plan level in all the models.

  • (a) The first-stage model includes all covariates included in the quality model in addition to the two variables presented.

  • *P < 0.05.

For other covariates, I included consumers’ socio-demographic characteristics as demand factors: per capita income, percent of the population below the poverty line, percent unemployed, percent whites and percent college educated. These variables have been shown to be associated with HMO quality [27, 28]. Consumers’ health risk also captures demand for quality, as sick people may have a high willingness to pay for quality. However, health risk also represents a cost burden to plans because sick people use more resources, so plans in high-risk markets may offer low quality. The health risk of HMO markets is measured by death rates from cardiovascular disease (CVD) and diabetes mellitus (DM), supplemented by death rates among people under age 65.

As marginal cost shifters, I used plan age, wage rates for registered nurses (RNs), population density and Medicare payment rates. Offering Medicare or Medicaid products is also included because quality initiatives by public programs may have spill-over effects, reducing the costs of improving quality for commercial enrollees. The number of counties that a plan began to serve within the previous 4 years measures costs related to market entry. As plan-level cost factors, I used HMO model type, federal qualification, affiliation with Blue Cross and Blue Shield (one of the largest national insurance firms in the USA), affiliation with other national firms, and profit status [18].

Market characteristics used in the model include market competition (measured by the number of HMOs), HMO penetration rates, the percent of large establishments (>500 employees), competitors’ average quality scores, and competitors’ disclosure rates.

Results

Table 3 reports the estimates of the impact of information disclosure on composite quality scores. The first and second columns present the coefficients from OLS analysis and the treatment-effects regression, respectively.

The disclosure variable has significant and positive effects on quality in both models. The OLS estimate indicates that disclosure was associated with an increase of 0.40 composite score units (P < 0.001). This estimate is within the range reported in a prior study that conducted an OLS analysis using one year of HEDIS data [29]. The marginal effect of disclosure is larger in the treatment-effects model than in the OLS model, as indicated by the negative coefficient of the IMR. This implies that some variables affecting plans’ disclosure and quality decisions in opposite directions are not captured in the OLS model. A potential category of such omitted variables is consumers’ health risk. Plans may offer high-quality care in markets with sick consumers who have a high willingness to pay for quality [30]. However, plans in such markets may withhold their quality information if they are concerned about attracting high-risk consumers [22]. Although the model includes two mortality rates to control for health risk factors, they are unlikely to completely capture consumers’ health risk.

The coefficient from the treatment-effects model indicates that public disclosure leads to a 0.73 increase in the composite score (P = 0.003). This corresponds to about 7 percentage points on the original quality scale (0–100%) and a relative improvement of about 10%. This estimate lies at the upper end of the range reported in prior work: a recent study in nursing home settings found that the relative improvement in quality after public reporting varied from <1 to 9%, depending on the quality measure [9]. The relatively large effect found in my study may reflect the fact that the analysis is based on data from the initial years of the public reporting movement, when quality issues began to receive wide attention and incentive-based quality improvement programs such as public reporting were emerging.

The coefficients of other covariates generally have plausible signs, and the results from the OLS and treatment-effects models are very close. First, the death rate from CVD/DM is positively related to quality, suggesting that plans in markets with sick consumers offer high quality in response to market demand. Both CVD and diabetes are common chronic conditions with well-developed management guidelines, and plans in markets with high death rates from CVD/DM may have organized initiatives to treat patients with those conditions (e.g. disease management programs). However, serving markets with high death rates among people under 65 is negatively associated with quality. Since the study sample consists of commercial plans, their target population is under age 65, and overall death rates among this population represent the general health risk of a market. Thus, this variable may capture unmeasured health risk of a market whose costs exceed the increased demand, such as environmental health risks or accident rates.

Next, I explored a different approach to address the endogeneity problem—an instrumental variables method, which purges the component of the error term that is correlated with both plans’ disclosure decisions and quality scores (a sketch follows below). The estimate of the disclosure variable from the instrumental variables analysis (0.70) was very similar to the estimate from the treatment-effects model (0.73). Further, I conducted a treatment-effects analysis that adds HMO dummies to the quality model. Inclusion of HMO dummy variables partially addresses the endogeneity issue by controlling for any time-invariant plan-specific effects that affect both quality and disclosure decisions. The coefficient of disclosure from this model was 0.63 (P = 0.009), which is similar to the estimate from the treatment-effects model without plan dummies. These similar estimates across different approaches increase confidence that endogeneity is properly addressed in my study.
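
A minimal sketch of this 2SLS robustness check, assuming the same hypothetical DataFrame df and column names as in the earlier treatment-effects sketch; the linearmodels package is used here only as one convenient way to run 2SLS with plan-level clustering, and the paper does not specify the software actually used.

```python
import statsmodels.api as sm
from linearmodels.iv import IV2SLS

controls = ["pct_college", "income_pc", "hmo_penetration"]  # placeholder covariates

iv_fit = IV2SLS(
    dependent=df["qual"],
    exog=sm.add_constant(df[controls]),
    endog=df["disc"],                          # disclosure treated as endogenous
    instruments=df[["mobility", "mandates"]],  # resident mobility and state mandates
).fit(cov_type="clustered", clusters=df["plan_id"])
print(iv_fit.summary)
```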

Table 4 reports the coefficients of disclosure from the regression analyses with domain scores. The OLS estimate is similar across all four domains, lying between 0.34 and 0.48 (P < 0.001 for all domains). In the treatment-effects models, the coefficient of disclosure varies by domain: the coefficients are significant in all models except the analysis of the screening tests domain. This is consistent with the descriptive data (Table 2), which show that this domain had the smallest difference in quality scores between disclosers and non-disclosers. Screening tests have long been a target of quality improvement efforts, and quality initiatives focused on screening tests may have existed before the introduction of the HEDIS program, adopted by both disclosers and non-disclosers. I also conducted domain-level analyses adding plan dummies to the quality model and found that the coefficients of disclosure were all significant and similar to those from the treatment-effects model without plan dummy variables, again except for the screening tests domain, where the coefficient of disclosure was insignificant and close to zero.

Table 4

Domain-level analysis: the effect of disclosure on quality

Domain model | OLS: coefficient (robust SE) | 95% CI | P-value | Treatment-effects model: coefficient (robust SE) | 95% CI | P-value
Chronic illness treatments | 0.43 (0.07)* | 0.29, 0.56 | 0.000 | 0.75 (0.28)* | 0.20, 1.30 | 0.008
Screening tests | 0.38 (0.08)* | 0.23, 0.54 | 0.000 | 0.40 (0.28) | −0.15, 0.95 | 0.158
Maternity services | 0.34 (0.09)* | 0.16, 0.52 | 0.000 | 0.88 (0.33)* | 0.24, 1.52 | 0.007
Childhood immunization | 0.48 (0.08)* | 0.32, 0.64 | 0.000 | 1.06 (0.30)* | 0.46, 1.66 | 0.000
  • OLS, ordinary least squares regression. Standard errors are clustered at the plan level in all the models.

  • *P < 0.05.

Finally, to check whether these results were driven by the construction of the summary index and domain scores, I estimated the models using individual quality indicators as the outcome measures and obtained consistent results: significant and positive effects were found for the quality indicators in the domains that showed significant results. Further, to assess whether imputation of missing values (4.8%) influenced the study results, I used the average quality score based only on non-missing values as the outcome measure and found similar results: the OLS estimate indicated that public reporting was associated with a 5 percentage point increase in quality scores, and the treatment-effects estimate suggested that public reporting would lead to an increase of about 10 percentage points in quality scores.

Discussion

Although public disclosure of quality information is becoming a common component in the current US healthcare system, solid evidence about the effect of information on quality is limited. Using a unique data set from earlier years of a voluntary disclosure program, I estimated the impact of disclosure on quality in commercial HMO markets.

The analysis found positive effects of disclosure on HMO quality. This finding supports the argument that public release of quality information may lead to quality improvement. As discussed, the larger effect in the treatment-effects model suggests that variables having opposite effects on quality and disclosure are omitted in the OLS analysis. Health risk factors are potential omitted variables. In this study, I found that the death rate from CVD/DM was positively related to plan quality. However, high-quality plans in markets with high mortality rates from CVD/DM tended not to disclose [22].

The results indicate that the effect of disclosure on quality depends on the type of service. Most prior studies also report that quality improvement after public reporting is not universal across all quality measures [8, 9]. Although some are concerned that HMO plans may provide suboptimal quality on chronic care measures following information disclosure [31], my study finds that disclosure has positive effects on the quality of chronic care services. This may be because HMO plans do not believe that disclosing high scores on specific quality measures would attract high-risk enrollees. Or the HEDIS measures may not be subject to distorted motives: HEDIS indicators are based on services delivered in primary care settings, and some chronic care indicators capture services that prevent further progression of chronic disease rather than high-cost procedures to treat those conditions. Alternatively, plans may act on selection incentives through disclosure decisions rather than quality decisions [22]. Under voluntary disclosure, plans have discretion over the disclosure choice and can thus choose not to disclose while providing high quality, if adverse selection is a concern.

Several limitations of this study should be noted. First, this study uses data from the earlier years of the HEDIS program, which may have contributed to the relatively large estimated impact of disclosure on quality. During these years, intense attention was paid to improving performance on the measures identified in disclosure programs. An examination of public data from the Medicare HEDIS program shows large improvement in quality during the initial years of the program, with the rate of improvement decreasing over the years. Further, the earlier years of HEDIS data cover only a few categories of health services; HEDIS measures have since expanded to broader categories of services. Analysis of data from a longer time period would be informative for understanding whether disclosure has positive effects on quality across a more diverse range of services.

Second, it has been shown that disclosure programs in hospital markets involved undesirable effects—turning away patients whose expected outcomes are unfavorable [6, 7]. No evidence of this type of selection incentive has been reported for HEDIS, probably because HEDIS defines the specific population eligible for each indicator. It is, however, possible that plans selectively enroll consumers with high socioeconomic profiles to earn high scores, because such consumers are likely to receive recommended care without effort by the plans. This possibility of changes in enrollee profiles is not considered in this study.

Third, the improvement in quality scores may partially reflect plans’ enhanced ability to measure quality. Since data to capture quality-measuring capacity are unavailable, I could not explore this possibility. However, including plan and year dummy variables controls for plan-specific capacity and year-specific improvements in measurement.

Finally, this study is limited to examining improvements in quality measures included in the HEDIS data. There has been a concern that public reporting initiatives may lead providers to reallocate their resources only to quality measures that are reported to the public, without improving overall quality or patients’ health outcomes. Although important, this is beyond the scope of this study and needs to be clarified in future research.

In conclusion, given the current movement toward public reporting, the finding of a positive role of information in quality is encouraging. This result suggests that public reporting may serve as a mechanism to improve quality. In future research, evaluating which types of information or presentation methods are effective in improving quality, and eventually health outcomes, will help identify an efficient approach to information disclosure.

Acknowledgements

I thank seminar participants at the 6th World Congress of Health Economics Meeting in Copenhagen and at the Health Policy and Management seminar at the University of Minnesota. I am grateful to the National Committee for Quality Assurance (NCQA) for making its Health Plan Employer Data and Information Set available.

References
