
A qualitative examination of primary care providers’ and physician managers’ uses and views of research evidence

Karl A. Lorenz, Gery W. Ryan, Sally C. Morton, Kitty S. Chan, Steven Wang, Paul G. Shekelle
DOI: http://dx.doi.org/10.1093/intqhc/mzi054. Pages 409–414. First published online: 27 May 2005

Abstract

Objectives. To examine the reasons physicians search for evidence and the search strategies they use, and to compare clinician and physician manager approaches.

Design. Qualitative analysis of verbatim transcripts of four focus groups in 2002.

Study setting. Clinicians and managers in community practices in Southern California.

Participants. Pediatricians, family practitioners, and general internists (i.e. child and adult primary care providers) in non-academic practice and physician managers whose primary responsibility involved making management decisions within a moderate- to large-sized health care delivery system (e.g. health plan, community hospital, large group practice).

Main outcome measures. Themes related to clinician and manager reasons for using evidence and approach to selecting among evidence sources.

Results. Clinicians and managers differed substantially in their reasons for using evidence. Whereas clinicians consistently invoked clinical intuition as a guide to most routine clinical decisions, managers articulated both motivation and interest in using medical research to guide decision-making, most commonly prompted by cost. Both clinicians and managers rated trustworthiness as a paramount consideration in arbitrating between evidence sources, because neither group evinced comfort with the complexity of primary literature. Both groups expressed a preference for tested, convenient, and respected evidence sources such as expert colleagues and professional societies.

Conclusions. Because clinicians invoke intuition in confronting the challenges of daily practice, evidence-based medicine interventions that target managers are likely to have larger effects on health outcomes than those that target primary care providers and individual patient treatment. Ensuring trustworthiness of evidence is of the utmost importance. Because both groups express discomfort with the format of primary evidence sources, strategies should probably not rely on individual appraisal.

  • evidence-based medicine
  • focus groups

A goal of evidence-based medicine (EBM) is to empower clinicians with skills to implement evidence-based changes in routine practice. EBM has been defined as ‘the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients’ [1]. Extensive effort has produced materials to assist clinicians in integrating sophisticated knowledge from the research literature into patient encounters [2]. This end-user approach balances the need to address deficiencies in implementing evidence with clinicians’ autonomy.

Research has identified barriers to implementing EBM, and proponents advocate identifying more effective strategies for translating research into practice [3–9]. One consideration is whether clinicians are the most appropriate end-users. Among practitioners, factors affecting EBM implementation include its technical complexity and concerns that EBM may undervalue experience, constrain independence, be subject to biases, and be insufficiently responsive to patients’ preferences [10–15]. Different incentives operate at the organizational level, and comparing these perspectives might enlighten deployment strategies [16,17].

For that reason, we compared clinician and manager perspectives by exploring the reasons they initiated evidence searches, the search mechanisms they employed, and how they assessed and utilized evidence. We were particularly interested in how these parties might differ in their need for evidence and in how they acquire and use it. Given the different roles of clinicians and managers, we suspected that they might have different evidence-seeking and utilization strategies that could inform more effective approaches to evidence-based management.

Methods

As part of an assessment of a physician guide to evidence-based practice, in April and May 2002 we convened four focus groups—two of primary care providers and two of physician managers. With regard to inclusion and exclusion criteria, participants were required to work in non-academic settings in the Los Angeles area. We recruited managers whose primary responsibility involved management within moderate to large health care delivery systems (e.g. health plan, community hospital, large group practice). All primary care providers were board certified and in practice at least 3 years.

One primary care provider group comprised internists and family practitioners, and a second group included pediatricians. Many managers were also primary care providers; others were medical and surgical specialists, many of whom practiced clinically part time. A total of 19 primary care providers and 16 managers participated. The groups included both men and women and reflected the ethnic diversity of managers and clinicians in practice in Southern California. Participants received an incentive of US$350, consistent with a 5-hour time commitment. The RAND institutional review board approved the study. No member of the research team had any conflict of interest related to the current study.

Following focus group methods [18,19], participants first completed a written questionnaire including open-ended questions about recent experiences with evidence seeking, which allowed them to express themselves independently. We initiated discussion by asking about recent cases that prompted an evidence search. Using participant examples, the moderator (K.L.) prompted discussion of: (i) why participants would initiate a search; and (ii) strategies participants would use when seeking or evaluating evidence. With regard to search strategies, the moderator probed about the sources of evidence they used, how they distinguished between sources, and the usefulness of different sources. Each member of the research team observed one or more of the 2-hour sessions. All sessions were tape-recorded and transcribed verbatim.

Although there are many ways of discovering themes in text [20], we chose to analyze transcripts using a multi-stage, cutting-and-sorting technique [21]. We employed a rigorous inductive approach similar in spirit to grounded theory analysis [22–24]. Firstly, two coders (K.L., S.W.) reviewed the transcripts to identify and extract text segments relevant to: (i) reasons participants search for evidence; or (ii) strategies participants use for seeking and evaluating evidence. Each segment was pasted onto a separate card, and two coders (K.L., K.C.) independently sorted the cards into piles based on the similarity of ideas, words, and processes. Each pile was then labeled with a thematic category, and a final theme list was agreed upon by consensus.

Results

Factors prompting a search for evidence

Participants first addressed the motivations for evidence searches. Across all groups, rationales included unfamiliarity, treatment uncertainty, cost, the need to justify decisions, a desire to make programmatic changes, and personal interest. The most common were unfamiliarity with a problem, cost, and the need to justify a decision. Though participants mentioned the full range of rationales, clinicians emphasized seeking evidence for individual cases and rarely mentioned cost, whereas managers emphasized aggregate-level problems and frequently invoked cost.

Clinicians

One of the central themes mentioned by clinicians was that most care is routine and does not require evidence. Participants described internalized repertoires for clinical problems that one primary care provider called ‘non-evidence based common sense’. A pediatrician described the routine nature of the problems that typify daily practice:

I think in 98% of what we see everyday, maybe even more, in my general pediatric practice, is very routine and I’ve seen it an awful lot. And I’m very confident that I know what to do.

All clinicians consistently referenced this principle. The routine character of most cases establishes intuitive expectations about the scope of practice. As one pediatrician explained: ‘So you know what the boundaries are and you know when a patient steps out of that boundary’.

Clinicians reported seeking evidence when a patient’s problem extends beyond this familiar threshold. Their examples included extreme symptoms or functional impairment, family concern, or unusual presentations because of the patient’s age, travel history, or a problem’s recurrent nature. One internist described a travel-related illness:

Being a primary care physician, internal medicine, I don’t routinely deal with this and I don’t know worms . . . So it’s a novel area to me, and I needed to know what this was, or potentially what this was, before I send the worm to the laboratory to be analyzed.

Other clinicians described cases where they were familiar with the problem, but unfamiliar with a new medication:

A woman patient of mine was on [Drug A], and although her one-year bone density scan was worse, she wanted to stay on it for other reasons. The family was asking for [Drug B] to be added. I don’t recall seeing [Drug A] and [Drug B] data, so they were stretching the boundaries of my clinical knowledge. That was something I looked up.

Often evidence searches are stimulated by patients’ queries or demands. As a clinician noted:

People hear things on radio or T.V., even before we get any journals, or even before we get a chance to read about it. I think a lot of times the public jumps on these types of things.

Following a local hospital’s advertisements about MRIs for back pain, a clinician related that:

Everyone with low back pain wanted an MRI . . . These people didn’t need it, but they’d heard the ads . . . Then when you try to bring in the evidence, you’ve got an angry patient.

In such cases, clinicians seek evidence to defend their recommendations.

Physician managers

Unlike clinicians, whose fundamental responsibility is to individuals, managers oversee populations and ensure that care is delivered cost-effectively. The impact of their decisions on many individuals heightens managers’ sense of the importance of applying evidence:

As a doctor for an individual patient, I can go to my colleagues and consultants and trust because of experience how to make decisions for that particular patient, but as a manager it is a little bit different . . . Everything has to be verifiable in the literature to make broad decisions for a whole group of people. You can’t use your consultants for a whole group decision, only for individual patients, and that is where the art of medicine comes in because I can’t make a broad decision [for a group] based on one neurologist’s recommendation.

Managers, unlike clinicians, commonly cited cost and described a complex financial calculus including revenue, profitability, and competitiveness. Financial concerns surfaced in the context of denying care, but also in the context of ensuring quality. One manager described his response to a patient who preferred virtual to regular colonoscopy:

Really the clinical question is whether virtual colonoscopy is comparable to regular colonoscopy or are you going to miss something? And for patient convenience, are the two procedures equivalent? If they’re equivalent, I’m probably going to be stuck at some point in the future of paying for it—proving and paying for it. But until that is proven to me that they are equally in facto, I’m not going to.

For managers, evidence searches are also driven by concerns over quality, legality, and group decisions. A manager described a complex, but typical, case where a patient said:

I have a family history of cerebral aneurysm and I want an MRI.

The manager described taking her to the Medical Director:

For me it was an individual clinical decision related to this patient, but now we’re making a policy decision about the IPA [independent practice association], with not very much guidance. There is a little bit of literature on it, kind of hard to get to. But then we’re oftentimes pushing the envelope on this one . . . In the clinical trench, I’m on a medical, legal slippery slope.

Next, the manager initiated his own unsuccessful literature search. He then contacted the medical director of a health plan he’d used in the past—a person who ‘really liked doing literature searches on these kinds of issues’. After finding three different articles, they concluded that the patient did not meet high-risk criteria. When asked whether he would have done the literature search had she offered to pay out-of-pocket, the manager responded: ‘Well, I wouldn’t have gotten the question.’

Managers described a greater interest in seeking evidence than clinicians, partly because of the frequency with which they encounter non-routine problems. Only in extraordinary cases did clinicians describe a need to seek additional evidence. Managers, however, expected to be called upon regularly to make non-routine decisions (e.g. practice guidelines, organizational precedents). Even the individual cases that managers adjudicate occur sporadically and differ substantially from one to another.

Search strategies and sources of evidence

Participants also addressed the evidence sources they consider and their search strategies. Participants mentioned seeking advice from colleagues, medical journals, textbooks, medical conferences, Internet-based sources, health plans, major medical organizations (e.g. NIH), and commercial sources (e.g. pharmaceutical and medical supply representatives). Top sources included specialists/colleagues, searches for individual studies, reputable journals, and meta-analyses. With one exception, clinicians and managers reported the same sources, although clinicians more commonly reported using textbooks. All participants repeatedly emphasized trustworthiness, but also cited statistical simplicity, consistency, accessibility, brevity, and practicality in searching for and evaluating evidence.

All discussions made apparent the critical role of trustworthiness in the search process. Trustworthiness is required because clinicians and managers lack time to assess evidence or do not trust themselves to make informed choices. As one clinician put it when confronted by hundreds of articles generated by an Internet search on poison oak treatment: ‘If I want to read all of this probably I’d have to take a week off.’ Another clinician summed up his fears: ‘The last thing you want me [to do] as a general internist [is] to have to review every little article on every little thing. I can’t do that. I totally don’t trust myself to do it.’ Time constraints and inexperience with empirical data force both clinicians and managers to evaluate evidence based on trustworthiness.

Participants indicated that trustworthiness plays an important role at two stages: (i) determining which evidence sources to use; and (ii) evaluating the evidence itself. As the most commonly mentioned evidence source, peers and other experts were more trusted than text-based sources. Participants assessed colleagues’ trustworthiness on multiple dimensions. For clinicians, the most important peer attributes are track record and personal familiarity. One clinician described a relationship with a colleague he sought out:

I sent up ten cases who were neurology cases and he took good care of them, my own private patients and things like that. I see his work and his results with my patients, then that is going to translate into other patients I am going to send [to him].

Managers emphasized personal connections less than a person’s recognized expertise:

I will go to the specialist who has done the research. I pick the one I feel is most academic, more likely to have read the literature lately.

The search for someone with more global stature is not surprising given the aggregate problems managers face and their access to such sources.

Participants also described the role of trustworthiness in assessing text-based sources such as articles, journals, web pages, and professional organizations. Participants viewed trustworthiness in terms of both reputation and objectivity. In describing the selection process for journal articles, a clinician asked:

Is this in a reputable journal, a journal that I trust, a journal with editors who I trust rather than the Journal of Irreproducible Results? . . . And if the article is in a prominent journal, I’m not going to necessarily sit there and question whether it’s good or not.

A manager similarly indicated:

I do what everyone else does. You look at the sources it came from. If you trust the source (because you’ve got to trust somebody) . . . then you hope that’s as good as you can get.

Comments about trustworthiness also reflected a positive regard for objective sources. Although managers were more vocal than clinicians about commercial enterprises’ vested interests, this did not stop managers from using commercial sources. As one manager described: ‘You have to realize that it is a very biased source, but it is an easy place to get information’. Interestingly, participants were less likely to associate biases with experts than with text-based sources.

Participants also commented on the ambiguity of evidence about many topics and on the lack of publication of negative results, which can make evidence-based conclusions impossible. Inconsistency renders evidence suspect and reinforces the tendency to take refuge in clinical expertise:

There’s all kinds of evidence. The problem is the whole topic of evidence-based medicine has been prostituted, because every article has evidence. Well, the evidence may be crap, but it’s still evidence . . . when it gets down to expert opinion, your opinion’s as good as any other expert. And most of what you make decisions on your whole life is expert opinion.

In addition to trustworthiness, participants cited methodological simplicity, consistency with accepted practice, accessibility, and practicality as guides to evidence selection. Participants described the typical language of research journals as too complex and impractical. As a clinician explained:

I think I really hated statistics in medical school and college, and so I would always kind of skip that mumbo-jumbo and just kind of go down to the comments section . . . I have a friend who actually is at NIH and she was asking about some article and whether I thought it was good, or whatever. And I’m like, ‘I don’t know. I just wanted to see what the bottom line was.’ But in her terms of thinking, how the experiment was set up and all those things were important. And I think it is because sometimes I guess you can make anything turn out the way you want it.

Closely related to complexity was the notion that evidence has to be accessible as readily digestible summaries. Participants described several components of accessibility, most importantly time. When the group of internists was asked, ‘Where do you get information that helps you on a practical level?’ one respondent summed up the group’s experience: ‘From textbooks, colleagues, your referrals to specialists. We get them quick, we get them fast. We get them when we need them.’ Another clinician emphasized the time advantages that come with asking colleagues:

Everybody’s looking up stuff. I go over to my friend the specialist. He’s looked it up before; he’s on top of it; he’s going to tell me what to do. So I think it’s a time factor. A lot of times we just don’t have time, and we’re seeing upwards of 20–30 patients a day.

Another characteristic of a desirable source is practicality: evidence should be targeted directly at the problem at hand. Both groups endorsed this principle. As a manager asserted:

You get to the point where you say, ‘Let’s just see what some local folks have done. Has it worked for them? Let’s just take that and go with that.’ We end up with that. It is never a massive research analysis. You just kind of call around and check studies around the area and we just adopt their protocols and go for it. We kind of bypass a lot of literature stuff and just go for what is practical.

Managers mentioned a final aspect of practicality: the impact of the pharmaceutical industry on expedient decision-making. In contrast to wading through a meta-analysis or conducting research, physicians dispense or prescribe drugs that have recently entered the market:

The problem with most of these studies is you get a lot of information with very little knowledge. They throw a bunch of statistics at you. But the bottom line is, does it work or doesn’t it? From the practical point of view of the doctor in the trenches, if a patient had hyper-acidicity and he has GERD, the physician reaches into the sample closet. You don’t even have samples of the cheap drugs anymore. Eighty percent of all doctors’ decision-making is induced by the pharmaceutical industry.

Participants also reported discounting evidence because of inconsistency with accepted knowledge, or when sources differed. Participants often raised this problem in connection with the published literature, but also generalized it to other sources. One manager explained:

We are dealing with information that you get from any source. Somehow it has to fit and make sense. When something seems aberrant in its conclusion you question it. You go looking for other sources to verify [it].

Discussion

Clinicians and managers confront different daily problems, and both are prompted to seek evidence by unfamiliarity. Although only extraordinary cases drive clinicians to seek evidence, managers are continually confronting problem cases, setting cost and quality standards, and establishing precedents. Unfamiliarity plus the medical, legal, and moral consequences of group decisions prompt managers to seek evidence. However, seeking evidence forces both groups into an activity in which they are not the experts. They therefore rely heavily on the familiarity and reputation of the source, and their lack of familiarity with research methods requires evidence to be packaged in clear, practical formats. Limited time exacerbates these constraints.

These findings have important implications for the promotion and success of EBM. Firstly, EBM interventions directed at managers and administered at the system level are most likely to improve health. EBM has the greatest effects on outcomes when applied to frequent problems, but there is little point in trying to teach clinicians to evaluate the evidence underlying routine practice if they are unlikely to seek evidence for common problems. Health system performance is inadequate despite knowledge about what constitutes best practice for many common problems [25,26]. However, the quest for efficiency and quality focuses managers on common problems and those with the largest potential for health improvement.

Secondly, because trustworthiness is an important attribute in promoting the acceptability of evidence, widely respected organizations should participate in appraisal. Within local organizations such as hospitals that develop guidelines and other evidence resources, this underscores the importance of ensuring that locally respected leaders endorse the process [27]. However, physicians’ embrace of trustworthiness may be self-referential: sources physicians deem trustworthy (e.g. professional societies) can also have their own agendas, which simply happen to be congruent with the doctor’s.

Thirdly, because individual physicians are unlikely to seek evidence for improving routine care, providing high-quality evidence to the physician directly will be more successful than strategies that depend on coaxing physicians to search for and appraise evidence. As an example, systems such as the Veterans Administration (VA) use approaches (e.g. clinical reminders, an evidence-based formulary) that rely on an electronic medical record. Similar examples may be informative for other health systems [28,29].

With regard to limitations, we conducted only four focus groups, although the written questionnaires allowed us to compare independent responses, which strengthens our confidence that the group differences represent real differences rather than group process. It was not feasible for us to conduct respondent validation, and because the data are qualitative, the generalizability of the themes remains to be evaluated. We conducted groups only in Southern California; nonetheless, although our comparison of clinicians and managers yielded unique insights, the major themes articulated by primary care providers have been described previously [10–15]. Additional research should confirm the differences between clinician and physician manager perspectives.

In summary, managers are more amenable than clinicians to using evidence and may represent the most effective target for improving routine care. Future research should confirm these differences and describe them quantitatively. Given physician reluctance to use EBM, misperceptions about what constitutes quality evidence, and the lack of knowledge about best practices in promoting EBM, it seems prudent to develop, implement, and test a variety of models. However, system-wide strategies that rely on managers rather than on individual evidence appraisal may be more successful.

Acknowledgements

The authors thank Robert Brook, MD, ScD, for initiating this project. Robin P. Hertz, PhD, senior director of outcomes research/population studies at Pfizer Inc., provided valuable support. This study was funded by a grant from Pfizer Inc. to RAND. Dr Karl Lorenz is the recipient of a VA HSR&D Career Development Award.

References
