
The perceived impact of public reporting hospital performance data: interviews with hospital staff

Joanne M. Hafner, Scott C. Williams, Richard G. Koss, Brette A. Tschurtz, Stephen P. Schmaltz, Jerod M. Loeb
DOI: http://dx.doi.org/10.1093/intqhc/mzr056. Pages 697–704. First published online: 12 August 2011


Objective To assess perceptions about the value and impact of publicly reporting hospital performance measure data.

Design Qualitative research.

Setting and participants Administrators, physicians, nurses and other front-line staff from 29 randomly selected Joint Commission-accredited hospitals reporting core performance measure data.

Methods Structured focus-group interviews were conducted to gather hospital staff perceptions of the perceived impact of publicly reporting performance measure data.

Results Interviews revealed six common themes. Publicly reporting data: (i) led to increased involvement of leadership in performance improvement; (ii) created a sense of accountability to both internal and external customers; (iii) contributed to a heightened awareness of performance measure data throughout the hospital; (iv) influenced or re-focused organizational priorities; (v) raised concerns about data quality and (vi) led to questions about consumer understanding of performance reports. Few differences were noted in responses based on hospitals' performance on the measures.

Conclusions Public reporting of performance measure data appears to motivate and energize organizations to improve or maintain high levels of performance. Despite commonly cited concerns over the limitations, validity and interpretability of publicly reported data, the heightened awareness of the data intensified the focus on performance improvement activities. As the healthcare industry has moved toward greater transparency and accountability, healthcare professionals have responded by re-prioritizing hospital quality improvement efforts to address newly exposed gaps in care.

  • quality measurement
  • quality management
  • quality improvement
  • qualitative methods
  • leadership
  • performance measurement
  • public reporting
  • performance improvement


Introduction
Evidence-based performance measures are considered well suited for performance improvement purposes, since they can often identify specific deficiencies in processes of care that can then be readily addressed, with a salutary effect on clinical outcomes. Since 2002, Joint Commission-accredited hospitals have reported their performance on nationally standardized clinical measures of quality to satisfy both regulatory (Centers for Medicare & Medicaid Services (CMS)) and accreditation requirements. These measures have been endorsed by the National Quality Forum [1] and adopted by the Hospital Quality Alliance [2]. Since 2004, hospital performance on the measures has been publicly reported through both The Joint Commission (www.qualitycheck.org) and CMS (www.hospitalcompare.hhs.gov) websites. US hospitals are not alone in reporting performance data on clinical indicators: such data are increasingly used by ministries of health in European countries and Canada both to track the quality of healthcare delivered by government health systems and to inform their citizen consumers [3–7].

One of the most commonly stated objectives for publicly reporting a hospital's performance data is to stimulate performance improvement in hospital-delivered care processes [8, 9], and public reporting of this type of data has been associated with improved performance [8, 10–13]. Clinical performance measure rates for US hospitals have shown steady improvement [14–17]. This improvement trend was reported well before public reporting efforts were in place [18, 19], making it difficult to assess the unique impact of public reporting on improvement. For the purposes of this study, however, the actual impact of public reporting is less relevant than its perceived impact.

Public reporting of hospital data is now an accepted reality. Research from various non-healthcare industries indicates that the decision to change systems or processes (i.e. performance improvement) is often dependent upon the perceptions of the individuals who must implement the process changes [20, 21]. Little is known, however, about the perceptions of hospital staff regarding the public reporting of performance measures or how their perceptions influence the actions taken in response to measure data [22, 23]. Our study sought to qualitatively explore hospital staff perceptions about public reporting of performance measure data and investigate how these perceptions influenced their actions. Additionally, we sought to examine how (or if) a hospital's performance measure rates influence staff perceptions about the measures and the public reporting of the data.


Methods
This study was conducted as part of a multi-phased project. On-site interviews were conducted with hospital representatives using a focus-group format to qualitatively assess hospital staff perceptions related to nationally standardized performance measures. This study reports on a subset of interview questions that were specifically focused on the impact of publicly reporting standardized measures.


A total of 36 Joint Commission-accredited hospitals were invited to participate in an on-site focus-group interview with Joint Commission research staff. These hospitals were drawn from a pool of 555 hospitals that had previously responded to a Joint Commission electronic survey addressing staff perceptions of performance measures, and were selected based upon an analysis of their results on nationally standardized acute myocardial infarction (AMI), heart failure and pneumonia performance measures. In order to identify hospitals with a wide range of performance on the measures, data from all 555 hospitals were analyzed by measure set. For each measure set, five hospital statistics were calculated using the hospital's quarterly data from 3Q2002 through 4Q2003: (i) the baseline rate for each measure in the set (defined as the 3Q2002 rate); (ii) the overall rate for each measure in the set (combining all six quarters of data); (iii) the rate of change for each measure in the set (the slope obtained by regressing each hospital's quarterly rates over time); (iv) the measure set composite score (the aggregated number of numerator cases for all measures in the set divided by the aggregated number of denominator cases over the study period) and (v) the total size of the measure set population over the study period. These statistics were then used in a k-means cluster analysis to identify performance clusters among the hospitals [24]. Twelve clusters were produced, based upon the number of clusters that appeared distinguishable in multivariate plots of the data. This analysis yielded a pool of hospitals broadly representative of all hospitals submitting data to The Joint Commission (e.g. hospitals with consistently high overall performance on all measures, hospitals performing consistently below average on all measures, hospitals with low baseline performance but a high rate of improvement over time and hospitals with high rates on some measures but low rates on others). Thirty-six hospitals (three from each performance cluster) were then randomly selected and invited to participate in the study interviews.
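As an illustration of this selection procedure, the sketch below (Python, with simulated data — the counts, the standardization step and the random seed are assumptions for the example, not the study's actual data or code) computes the five per-hospital statistics for one measure set, clusters hospitals with k-means into 12 groups and samples three hospitals per cluster:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Simulated quarterly numerator/denominator counts for one measure set:
# six quarters (3Q2002-4Q2003) for each of 555 hospitals (illustrative only).
n_hospitals, n_quarters = 555, 6
denom = rng.integers(20, 400, size=(n_hospitals, n_quarters))
numer = (denom * rng.uniform(0.6, 1.0, size=denom.shape)).astype(int)
rates = numer / denom

# The five hospital statistics described in the text:
baseline = rates[:, 0]                                     # (i) 3Q2002 rate
overall = rates.mean(axis=1)                               # (ii) six-quarter rate
slope = np.polyfit(np.arange(n_quarters), rates.T, 1)[0]   # (iii) rate of change
composite = numer.sum(axis=1) / denom.sum(axis=1)          # (iv) composite score
population = denom.sum(axis=1)                             # (v) population size

# Standardizing before k-means is an assumption; the paper does not
# specify how the five statistics were scaled.
features = StandardScaler().fit_transform(
    np.column_stack([baseline, overall, slope, composite, population]))

# The study settled on 12 clusters after inspecting multivariate plots.
clusters = KMeans(n_clusters=12, n_init=10, random_state=0).fit_predict(features)

# Randomly select three hospitals from each cluster for invitation.
invited = np.concatenate([
    rng.choice(np.where(clusters == k)[0], size=3, replace=False)
    for k in range(12)])
print(len(invited))  # 36 invited hospitals
```

The composite score in step (iv) is a case-weighted aggregate (summed numerators over summed denominators), so high-volume measures dominate it, unlike the simple mean of measure rates.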

E-mail invitations were distributed to the Joint Commission contact within each of the 36 selected organizations requesting an interview with hospital staff involved with the collection and reporting of performance measure data; no other direction was given regarding meeting attendees. The e-mail stated the purpose of the interview and explained that participation was voluntary, that respondents would not be identified beyond the study, that only aggregated data would be reported and that participation (or lack thereof) would not influence the hospital's accreditation status.

Qualitative interviews

In order to ascertain whether or not publicly reporting performance measure data stimulated performance improvement efforts, and to investigate staff perceptions of publicly reporting data, seven interview questions were used to address the topic of public reporting:

  1. In what ways do you believe public reporting of performance measure data has influenced your institution's quality improvement activities?

  2. What individuals or groups have made use of the publicly reported data for your hospital? How have they used the data?

  3. In what ways have healthcare professionals made use of publicly reported data? Please describe the kinds of questions that have been posed by healthcare professionals.

  4. Has your institution received media inquiries about its performance measure data? If so, what type(s) of media and what type(s) of questions?

  5. Please describe the kinds of questions that have been posed by consumers.

  6. Have inquiries come from other groups? Please describe the types of groups and kinds of inquiries.

  7. Does your hospital ask consumers (i.e. patients) how they selected your hospital for services? If so, have responses been related to performance measure data? In what ways?

These questions were part of a more comprehensive (42-item) interview tool designed to guide a broader discussion of performance measure issues. Not all questions in the guide were asked directly, as respondents often commented on a topic before the corresponding question was reached, obviating the need to ask it. However, interviewees were given the opportunity to answer or elaborate on all the questions during the course of the interview session. It is also important to note that, because the interviews were conducted in a group format with multiple staff members at each hospital, responses frequently represented the consensus opinion of the group. For example, one or two staff members would provide a response, which was recorded, while other participants were observed to nod or verbally agree. No attempt was made to formally solicit a response from each participant; conflicting opinions were likewise recorded as responses, and no attempt was made to resolve disagreements or to bring the group to consensus. After each interview, research staff compared the scribe's and interviewer's notes of the recorded responses; discrepancies were reviewed and adjudicated until staff consensus was reached, resulting in a single interview transcript for each hospital.

Interview analysis

Interview responses were organized using NVivo 7.0, SP4 software from QSR International Pty. Ltd. (1999–2007). Notations, using a combination of the hospital identifier (the hospital's Joint Commission organization number) and the responder's role in the organization, such as physician, nurse, administrator, etc., were attached to each response.

In order to aggregate hospital responses into thematic categories, a coding schema was developed based on the themes that emerged from the initial review of the responses to the seven questions. Three randomly selected interviews were coded using the initial coding schema. A team of three researchers reviewed each response both individually and again as a group to apply the coding schema. As new themes emerged, the coding schema was updated and themes were modified to capture subtle differences and commonalities within the responses. This iterative process continued until confidence was established in the consistency with which the coding schema was applied. All interviews were then coded using the final coding schema.

To determine whether differences in staff perceptions about the impact of publicly reporting performance measure data were related to the hospital's performance on nationally reported measures, the mean rate for each performance measure set was calculated for the individual hospitals. Because hospitals were initially selected to participate based upon variation in their performance measure rates (i.e. assignment to performance clusters was based upon differences in measure rates, rates of improvement, etc.), the hospitals could be readily categorized into performance-based groups. Measure rates equal to or below the group mean were identified as ‘low’, and measure rates above the mean were identified as ‘high’. Hospitals with performance measure rates that were above the mean for all of their reported measures were identified as ‘high performers’; hospitals with performance measure rates equal to or below the mean for all of their reported measures were identified as ‘low performers’; hospitals with measure rates that were both above and below the mean were identified as ‘mixed performers’. Interview responses were then examined within the context of the performance measure set and assigned to one of the three performance groups (high, low or mixed).
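The grouping rule just described can be sketched as follows (a minimal illustration; the hospital identifiers and rates are invented for the example, not taken from the study):

```python
from statistics import mean

# Illustrative per-measure-set composite rates (%); None = set not reported.
hospital_rates = {
    "H1": {"AMI": 95.8, "HF": 85.9, "PN": 83.3},
    "H2": {"AMI": 89.7, "HF": 70.6, "PN": 65.7},
    "H3": {"AMI": None, "HF": 85.0, "PN": 70.0},
}

# Group mean for each measure set, across hospitals that reported it.
set_means = {
    ms: mean(r[ms] for r in hospital_rates.values() if r[ms] is not None)
    for ms in ("AMI", "HF", "PN")
}

def performance_group(rates, means):
    """'high' = above the mean on every reported measure set; 'low' = at or
    below the mean on every reported set; otherwise 'mixed'."""
    flags = [rates[ms] > means[ms] for ms in rates if rates[ms] is not None]
    return "high" if all(flags) else "low" if not any(flags) else "mixed"

groups = {h: performance_group(r, set_means) for h, r in hospital_rates.items()}
# groups == {"H1": "high", "H2": "low", "H3": "mixed"}
```

Note that, as in the study, a hospital missing a measure set is classified only on the sets it reported, and ties with the mean count toward 'low'.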


Results
Of the 36 hospitals contacted, 29 agreed to participate in the on-site interviews, which occurred between March and August 2006. All hospitals were located in urban areas but varied by size, geographic location and teaching status (Table 1). A total of 201 hospital staff participated, with 1–12 staff (average = 7) attending the interview session at each hospital. Participants represented a wide range of hospital departments and disciplines, including physicians (n = 23), staff nurses (n = 96), administrators (n = 66), pharmacists (n = 3) and others (n = 10); many had dual roles, including quality improvement responsibilities, and each participant had either direct or indirect responsibility for performance measure data collection and reporting. Participants also varied significantly in terms of their administrative responsibilities: 32 were executives, 67 were managers and 91 were non-managers. Interview sessions lasted from 1.5 to 2 h [25].

Table 1

Performance measure rates and demographic characteristics of interviewed hospitals (n = 29)

Composite measure rates (AMI, heart failure, pneumonia) are for 2005.

Hospital  AMI^a (%)  Heart failure^a (%)  Pneumonia^a (%)  Teaching  Type of ownership  Bed size  Region
⋮
19        89.8       72.2                 83.6             No        Non-profit         Large     S Atlantic
20        —          63.0                 80.5             No        Non-profit         Medium    S Atlantic
21        95.8       85.9                 83.3             No        Non-profit         Large     S Atlantic
22        93.5       84.7                 83.5             No        Non-profit         Medium    S Atlantic
23        89.7       70.6                 65.7             Yes       Non-profit         Medium    S Atlantic
24        —          82.4                 89.3             No        Non-profit         Large     S Atlantic
25        92.2       78.6                 77.6             Yes       Non-profit         Large     S Atlantic
26        91.3       64.2                 78.0             No        Non-federal        Large     S Atlantic
27        89.5       75.8                 80.3             No        Church             Medium    S Atlantic
28        91.2       65.6                 78.3             No        Non-profit         Small     S Atlantic
29        92.4       81.7                 83.0             No        Non-federal        Small     S Atlantic
  • Demographic information was obtained from the American Hospital Association Annual Survey Database FY2005 edition. MSA, metropolitan statistical area. AMI, acute myocardial infarction. Bed size: small < 100 beds; medium = 100–299; large > 300 beds.

  • ^a Dash (—) denotes that measure set data were not reported.

For the interviewed hospitals, the calculated mean performance measure rates for AMI, heart failure and pneumonia were 92.4, 78.1 and 82.3%, respectively. In 2005, the mean national performance measure rates for AMI, heart failure and pneumonia were 92.7, 78.2 and 81.7%, respectively. Nine of the interviewed hospitals reported performance measure rates above the mean for all measures (‘high performers’) and seven hospitals reported measure rates equal to or below the mean (‘low performers’) for all reported measures. The remaining 13 hospitals' reported measure rates were both below and above the mean (‘mixed performers’). All but five hospitals reported data for all three measure sets.

In interviews involving both leadership and front-line staff, the more detailed responses to questions were proffered by those in leadership roles with the front-line staff in attendance affirming the response either with non-verbal cues or with simple one-word responses.

Responses to the questions addressing the impact of publicly reporting data revealed six recurrent themes. These themes centered on (i) the increased involvement of senior leadership; (ii) a sense of accountability to both internal and external customers; (iii) a heightened awareness of the data; (iv) a re-focusing of priorities; (v) data quality concerns that focused on the validity and interpretability of the reported data and (vi) consumer understanding of performance measure data. Few differences in the substance or tone of the comments about publicly reported data were noted in the responses based upon the hospital's performance group.

Increased involvement of senior leadership

When asked, ‘In what ways do you believe public reporting of performance measure data has influenced your institution's quality improvement activities?’, 17 participants (10 nurses and seven administrators, representing 13 hospitals) indicated that public reporting motivated senior management to ‘pay closer attention’ to the performance measure data and become more involved in their organization's quality improvement activities. One administrator commented that public reporting ‘has been a plus … everyone is more aware of what the issues are … we use the data to effect the way we do quality care’. Interviewees stated that public reporting also provided the justification and assistance needed to secure additional resources for quality initiatives, as exemplified by these comments: ‘We have a lot more support from administration and the medical staff’ (nurse); ‘it allows the administration to give us the backing we need’ (administrator); ‘public reporting has forced us to put resources in place’ (administrator). One physician also reported that his CEO ‘is committed to transparency and publishes internal data on the internet’. No differences were noted, however, between the responses of staff from high-performing and low-performing hospitals.

Accountability to both internal and external customers

Accountability to both internal and external customers emerged as a theme at 28 of the 29 hospitals. Staff reported that the data were used for benchmarking by ‘everyone from staff to the Board’. As one nurse stated, her staff used the data to verify changes and improvements in performance and ‘if the numbers do not look good, we look to see what we can do to bring them up’.

Public reporting stimulated interest and involvement with quality improvement activities. Administrators and physicians alike credited public reporting with creating ‘a lot of engagement and a desire to improve’. As one administrator noted, ‘When you tell a surgeon his numbers are going to be out there, you get their attention and they ask what they need to do!’, and as one physician added, this was perceived as ‘ … a good effect’. Physicians not only became more engaged in quality improvement, they also disseminated information. One nurse reported that frequently the physicians, ‘offer suggestions (for process improvements) based on what they see at other hospitals’.

The focus on external accountability to patients, the local community or third-party payers varied among hospital interviewees. Comments by staff from five organizations were focused on their desire ‘to do the right thing’ for their patients. One nurse summarized it this way, ‘we have a responsibility to the public to report data’; ‘[It is] important for the community to know [about our] positive performance data and what process improvement activities are in place in the hospital (to address low performance data)’. Staff at these hospitals indicated that there has been an increase in consumer awareness of publicly reported data. Only one individual, however, actually reported that their patients specifically ‘choose to come (to their hospital) because of the publicly reported data’.

Interviewees expressed a concern that local media typically tended to focus on ‘why the numbers are low and rarely on why they are high’. Media scrutiny of performance data has added to the sense of competition in local markets. Two nurses noted that, ‘it makes us energized to do better (knowing) there are people out there looking at our data’. Interviewees at 15 hospitals were more focused on their obligations to third-party payers and as one physician noted, ‘Insurance companies and managed care organizations are looking at quality; Contracts with physicians now have targets based on performance measures’. Physicians and administrators reported incorporating performance metric targets into the hospital's physician contracts in efforts to ‘raise the bar’ and thereby secure higher reimbursement rates as insurers implement pay-for-performance criteria. Comments from a physician (in a low-performing hospital) and an administrator (from a mixed-performing hospital) both suggested that some hospitals are ‘hanging on by their fingernails’ as insurers ‘drive consumers to hospitals who are good performers’.

Heightened awareness of performance measure data

The public reporting of performance measure data was credited with increasing awareness of the importance of the data throughout all levels of an organization. For staff at five (high-performing) hospitals, results were viewed as ‘a way for us to shine’. Physicians from three of these hospitals reported using their hospital's data in speaking engagements with ‘committees and local medical groups’. These staff proudly reported receiving calls from other institutions asking ‘how we've done what we've done’.

This heightened awareness of publicly reported performance measure data stimulated more internally directed responses and sharpened the organization's focus on all their reported data. Performance measure rates above the national mean were perceived as enhancing the hospital's standing within the community, as one nurse noted, it provided ‘more credibility to the hospital's performance’. Concomitantly, publicly reported data stimulated the awareness of the need to critically analyze and respond to data that fell below the national mean. As two nurses noted, ‘it forced us to look at it, to compare it. Before it just sat there, now it drives us to do better’.

Interviewees from five (lower performing) hospitals also commented that this new awareness stimulated staff to become more involved in improving the hospital's performance numbers. As one nurse stated, ‘knowing that the public is aware of the scores makes us more energized to do better’. Even staff who expressed doubt about the extent of consumer awareness of the data, noted that ‘public reporting has opened some eyes’ and, as one administrator reported, ‘I don't think anyone believed it was going to be public until [the] newspaper article - that's when people gasped!’

Re-focused priorities

The public reporting of performance measure data created a sense of urgency within the organizations to either improve, if rates were below the national average, or maintain (or continue to improve upon) current high-performing levels. As one nurse noted, ‘Once you are a top 100 hospital, [you] want to stay there’. An administrator stated that ‘without public reporting, we would be less zealous’. Comments from seven interviewees (from four hospitals) repeatedly noted that public reporting helped to ‘prioritize our focus areas’ and shift the goals on their corporate agenda, often moving quality issues to the top of the list. Two administrators from these same hospitals concurred, ‘it created a sense of urgency to reach better outcomes quicker’ by putting ‘quality in the forefront’.

Data quality concerns

The validity and reliability of publicly reported data was a concern expressed by interviewees in 17 of the 29 organizations. Interviewees from five hospitals reported that they were frequently defending their scores as they were challenged with questions from staff and physicians who asked, ‘What is the evidence that supports the data, and is it reliable?’ These challenges were typically reported regardless of the hospital's overall performance, although there were some differences among respondents from higher and lower performing hospitals. Interviewees from the higher performing hospitals tended to use these interactions as learning opportunities. As one nurse remarked, ‘The doctors have challenged the numbers, but that's part of the learning process.’ Interviewees from hospitals with lower measure rates were more concerned about the timeliness of the data than their higher performing counterparts. Likewise, skepticism and concern regarding the methodological accuracy with which data are collected was voiced somewhat more frequently by physicians and staff from lower performing hospitals.

Interviewees at eight hospitals expressed concerns about the ‘limitations of the data’, noting that the data did not represent ‘a global picture of what the patient may experience’ at their hospitals. These staff expressed frustration with comparisons made between their hospital and other organizations, often suggesting that the comparisons were not valid since hospital size and caseload influenced data results. These explanations, however, tended to contradict one another. According to one respondent from a large hospital, ‘The small hospital numbers were good because they had smaller number vs. us; they are not comparing apples to apples’. In contrast, a physician from a smaller hospital complained that, ‘they do more cases than we do but they found a way to exclude patients to make their numbers look better – its gamesmanship’. A nurse at another small hospital opined that ‘smaller hospitals are practicing good medicine, but the data does not always reflect that’.

For staff in 10 of the organizations interviewed, the timeliness with which performance data were publicly reported was an additional concern. Administrators, physicians and staff considered publicly reported data to be ‘too old’ to reliably provide an accurate reflection of current practices. Some hospitals developed and produced internal data reports for staff and patients because, ‘Public data lags behind our internally reported data’; ‘health professionals don't ask about publically reported data because they receive more current (data) from other sources’. One administrator, however, did acknowledge that despite ‘the significant lag time in availability, it (public reporting) does help develop buy-in (with performance improvement activities) by the staff’.

Consumer understanding of the data

Interviewees from 18 of the 29 hospitals questioned their patients' ability to understand and interpret the data. The availability of multiple reports, each with different reporting criteria and display formats, was often identified as a contributing factor to the confusion and difficulty with data interpretability. As one nurse, whose job responsibility was to collect the data for analysis, declared, ‘The reports are confusing for us, and we are supposed to be in the know; the consumer doesn't understand the report - it's confusing to them and of little value’.

Staff at 11 hospitals identified educational opportunities for staff and patients with regard to publicly reported data. The reports provided ‘an opportunity to educate the public’ and staff. As one administrator remarked, ‘some of the reports are a little misleading; however, it's important that medical staff, who assist the population in where they get their care, understand the results’. Assisting physicians and staff with data interpretation was perceived by this administrator as a useful patient-education tool since physicians and staff often assist patients in their healthcare decision-making process.

Interviewees in 10 hospitals suggested that the data are ‘too complex for the media to explain’, and ‘news reports don't help consumers interpret numbers correctly’. One administrator expressed the concern that patients could be ‘misled’ and reach ‘inappropriate conclusions’ about a particular hospital based on publicly reported performance data.


Discussion
It was anticipated that staff from hospitals with higher and lower performance measure rates would report different perceptions about public reporting, voice distinctly different opinions, or focus on unique sets of issues. For example, it was hypothesized that lower performing hospitals would be more likely to question the validity of data or express concerns that public reporting portrays an unfair image of their hospital to consumers. As expected, a few of these concerns were raised. Interviewees from hospitals with lower measure rates were more concerned about the timeliness of the data, and they expressed concerns regarding the methodological accuracy of data collection, whereas interviewees from higher performing hospitals tended to view data quality challenges as learning opportunities. While such reactions to performance measure data have been previously reported in the literature [16, 26–29], the most striking finding of this study was that these differences were voiced so infrequently. For the most part, perceptions about performance measures and public reporting appeared to be generally unrelated to actual performance.

Staff at both high- and low-performing hospitals expressed concerns about data quality, and both groups raised concerns about consumer understanding of data. Such concerns are not new: these themes have been reported in the recent literature, even as researchers report the positive impact that public reporting appears to have on quality improvement [7, 30–34]. In our study, most of the differences observed between the high- and low-performing hospitals tended to be related to how the organizations ‘responded’ to the publicly reported data. For example, while both high and low performers expressed concerns about data quality, the high performers tended to view this as an educational opportunity for physicians and hospital staff. In general, however, the issues and themes that emerged from these interviews (i.e. the sense of accountability to consumers, the heightened awareness and subsequent refocusing of an organization's priorities, leadership's role and the concern for data validity and reliability) did not differ discernibly among study hospitals based upon their performance.

Our study hospitals confirmed and reinforced the integral and critical role senior leadership plays in an organization's quality improvement culture [29, 30]. Despite concerns about data quality and consumer understanding of data, participants consistently identified the important role that public reporting played in capturing the attention of hospital leaders, which resulted in an increased focus on quality improvement. Many of the early perceptions identified in our study about the potential usefulness and impact of performance measurement remain very applicable today, even as performance and quality measures, and the manner in which they are reported, continue to change and evolve.


Limitations
This review of the perceived impact of public reporting should be interpreted with several limitations in mind. While the interviewed hospitals varied in size and locality, they should not be considered a random sample of hospitals. Study hospitals were selected to represent a variety of performance measure results, not hospital characteristics such as size, ownership type or location. Furthermore, the number of hospitals in each of the performance groups was small, which limits the power to detect significant differences between the high- and low-performance groups. Hospitals were also free to choose the individuals who participated in the focus-group interviews. As a result, although 201 individuals participated in the interviews, the number of participants and the role each held (nurse, physician or administrator) varied by organization, and findings specific to any one role type could not be quantified. On a related note, the group format of the interviews did not isolate hospital leaders from hospital staff. It is impossible to know whether participant responses would have been different had the study used an individual interview approach and/or solicited anonymous responses. Based upon the content of the actual comments, however, participants at all levels appeared to be comfortable openly sharing their opinions.

It is also important to note that the interviews were conducted by Joint Commission staff. While the participants were assured prior to (and at multiple points throughout) the interview that their responses had no bearing on their accreditation status, the potential for bias must be considered. Anecdotally, such a response bias was not directly observed, as participants did not seem reluctant to offer critical feedback, and often engaged in seemingly frank discussions with their fellow participants during the interviews. It may also be worth noting that interviewers were not blinded with respect to the performance of the hospital. Although structured interview guides were employed by interviewers, it is possible that knowledge of hospital performance may have biased the interviewers and their interpretation of responses. Also, as previously mentioned, the presence of senior or mid-level managers in the session may have influenced some front-line staff responses.

Finally, sites were selected based on 2003–05 performance data and the interviews were conducted in 2006. Given the ever-evolving landscape of hospital performance measurement initiatives, the perceptions related to public reporting of hospital data conveyed during the course of these interviews may no longer reflect current perceptions.


Conclusions
Our study suggests that public reporting motivates and energizes organizations to improve or maintain performance. Despite commonly cited concerns over the limitations, validity and interpretability of publicly reported data, the heightened awareness of the data intensified the focus on performance improvement activities. As one administrator succinctly stated, ‘public reporting has had a greater impact than pay-for-performance’.

The public reporting of hospital data is representative of a more substantial environmental change as the healthcare industry moves toward greater transparency and accountability. It seems unlikely that this trend will be reversed, and results of the current research suggest that healthcare professionals have taken this shift in the landscape in stride—re-prioritizing hospital quality improvement efforts to address newly exposed gaps in care. Of course, re-prioritization implies that as certain areas receive more attention, other areas will receive less. As the environment continues to change, and public reporting expands to include new focus areas, it will be important to explore the potential impact of re-prioritization.


Funding
This work was supported by the Agency for Healthcare Research and Quality, Department of Health and Human Services (grant #5 U18 HS013728).


The study protocol was reviewed and approved for an IRB exemption from signed consent by Independent Review Consulting (IRC), Inc. 100 Tamal Plaza, Suite 158, Corte Madera, CA 94925 (www.irb-irc.net).


Acknowledgements
Our thanks to the hospital staff who participated in the interviews; to Irma Mebane-Sims, PhD for interview instrument development, project management and participation in the onsite interviews; and to Elizabeth Devan, RN BSN, graduate nursing student for assistance with interview coding and contributions to preliminary interview data analysis.

