
Pseudoinnovation: the development and spread of healthcare quality improvement methodologies

Kieran Walshe
DOI: http://dx.doi.org/10.1093/intqhc/mzp012. Pages 153-159. First published online: 21 April 2009


Background Over the last two decades, we have seen the successive rise and fall of a number of concepts, ideas or methods in healthcare quality improvement (QI). Paradoxically, the content of many of these QI methodologies is very similar, though their presentation often seeks to differentiate or distinguish them.

Methods This paper sets out to explore the processes by which new QI methodologies are developed and disseminated and the impact this has on the effectiveness of QI programmes in healthcare organizations. It draws on both a bibliometric analysis of the QI literature over the period from 1988 to 2007 and a review of the literature on the effectiveness of QI programmes and their evaluation.

Results The repeated presentation of an essentially similar set of QI ideas and methods under different names and terminologies is a process of ‘pseudoinnovation’, which may be driven by both the incentives for QI methodology developers and the demands and expectations of those responsible for QI in healthcare organizations. We argue that this process has important disbenefits because QI programmes need sustained and long-term investment and support in order to bring about significant improvements. The repeated redesign of QI programmes may have damaged or limited their effectiveness in many healthcare organizations.

Conclusions A more sceptical and scientifically rigorous approach to the development, evaluation and dissemination of QI methodologies is needed, in which a combination of theoretical, empirical and experiential evidence is used to guide and plan their uptake. Our expectations of the evidence base for QI methodologies should be on a par with our expectations in relation to other forms of healthcare interventions.

  • quality management
  • measurement of quality
  • general methodology
  • healthcare system


The last two decades have seen the rise and fall of a number of concepts, ideas or methods in healthcare quality improvement (QI). We have progressed from medical audit to clinical audit and to clinical governance; from total quality management to continuous QI and to business process re-engineering; from statistical process control to six sigma and to lean thinking. At times, keeping abreast of the latest ‘new thing’ in healthcare QI can seem to require almost constant attention to the journals, conferences, books and training events in this field. Paradoxically, given this appearance of constant change, the content of many of these QI methodologies is broadly very similar, though their presentation often seeks to differentiate or distinguish them.

The purpose of this paper is to explore how new QI methodologies (a term used very broadly to encompass concepts, ideas and empirical tools and techniques) are developed, diffused and adopted or taken up by healthcare organizations [1]; to discuss the impact this may have on the effectiveness of QI programmes in healthcare organizations; and to suggest how future innovations in this field might be better assessed.

The spread of QI methodologies: a bibliometric analysis

One way to measure the spread or uptake of ideas is through bibliometric statistics [2], charting the frequency with which particular words or terms are used in citation databases like Medline. It is an imperfect tool, but perhaps the most consistent measure available for studying the dissemination or uptake of new QI methodologies over time. For this paper, Medline and the Health Management Information Consortium (HMIC) databases from 1988 to 2007 were accessed in May 2008. A number of commonly used QI terms were identified through broad searches on QI, and these QI terms were then searched for in the title and abstract text only. Some searches used multiple formulations of the QI term (e.g. ‘lean thinking’ or ‘lean production’). Full search details are available from the author.
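The term-counting method described above can be sketched in code. This is a minimal illustration, not the author's actual search procedure: the record format and field names are assumptions, and real Medline/HMIC searches would be run through the databases' own query syntax rather than over exported records.

```python
from collections import defaultdict

# Hypothetical citation records; in the paper these came from Medline/HMIC
# (1988-2007). The field names here are illustrative assumptions.
records = [
    {"year": 1998, "title": "Clinical governance in the NHS", "abstract": "..."},
    {"year": 1998, "title": "Quality on the board",
     "abstract": "Clinical governance and accountability."},
    {"year": 2007, "title": "Applying lean thinking to pathology", "abstract": "..."},
]

# Some QI terms have several formulations searched as alternatives
# (e.g. 'lean thinking' OR 'lean production'), as in the paper's method.
terms = {
    "clinical governance": ["clinical governance"],
    "lean": ["lean thinking", "lean production"],
}

def tally(records, terms):
    """Count, per term and per year, the citations whose title or abstract
    contains any formulation of the term (case-insensitive)."""
    counts = defaultdict(lambda: defaultdict(int))
    for rec in records:
        text = (rec["title"] + " " + rec["abstract"]).lower()
        for term, variants in terms.items():
            if any(v in text for v in variants):
                counts[term][rec["year"]] += 1
    return counts

counts = tally(records, terms)
print(counts["clinical governance"][1998])  # 2 matching citations
print(counts["lean"][2007])                 # 1 matching citation
```

A citation is counted once per term however many formulations it matches, which mirrors searching for alternative phrasings combined with OR.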

Table 1 shows how the total number of citations on Medline and HMIC for each of the 10 common QI terms (used in either the title or abstract text) was distributed over the two decades from 1988 to 2007 (Fig. 1). Three conclusions can be drawn from the graph. First, ideas (or terms) often rise in popularity, are used for 3 or 4 years and then fall out of use or fashion again. It can be seen that ‘medical audit’ and ‘clinical audit’ did so in the early and mid-1990s and that a similar burst of interest in total quality management (TQM) was followed by a somewhat more sustained interest in continuous quality improvement (CQI). Perhaps the most pronounced example is that of clinical governance—a term that did not appear until 1998, when Scally and Donaldson [3] coined and popularized it, which was then used widely for around 5 or 6 years, but now appears to be fading from use.

Figure 1

The distribution by year of the total use of each of the 10 common QI terms in citation titles/abstracts on Medline/HMIC 1988–2007 (see online supplementary material for a colour version of this figure).

Table 1

The distribution by year of the total use of each of the 10 common QI terms in citation titles/abstracts on Medline/HMIC 1988 to 2007

Year | Clinical governance | TQM | CQI | Medical audit | Clinical audit | Lean | Patient safety | Six sigma | Process redesign | Accreditation

Second, the graph suggests that today's ‘hot topics’ are lean thinking, six sigma and patient safety, in all of which interest is waxing, though past experience suggests that each is likely to wane again over time. Third, it is worth noting that only one term—accreditation—seems relatively immune to fashion, with a consistent level of use over the whole period studied.

Although Fig. 1 shows how total citations for a given QI term were distributed over a 20-year period, it says nothing about the relative frequency or popularity of different terms. This is illustrated in Fig. 2, which shows how the 20 193 citations using these 10 common QI terms between 1988 and 2007 were distributed between them. It is immediately apparent that some terms have been much more widely used than others. Two terms—accreditation and patient safety—make up 58% of all citations. Though six sigma and lean thinking are terms showing recent and growing interest, they each represent less than 1% of all citations. Even in 2007, there were 933 citations using the term ‘patient safety’ and 468 on accreditation, compared with 33 on six sigma and 24 on lean thinking.

Figure 2

The relative frequency of 10 common QI terms in citation titles/abstracts on Medline/HMIC during the period 1988–2007 (total citations to all terms = 20 193).

This bibliometric analysis has some limitations. The use of QI terms in academic literature is an uncertain proxy for the actual use of the methodologies in practice in healthcare organizations, and the QI terms themselves are a heterogeneous set, some representing quite narrow and specific approaches (like lean and six sigma) and others representing broad areas of interest (like patient safety) or used in a variety of contexts (like accreditation). It also tells us nothing about the content of these QI ideas—what they mean or what they contribute to advancing the science of healthcare QI.

The spread of QI methodologies: content and form

While new QI methodologies are often superficially different, particularly in the language or terminology they employ and the way in which ideas and methods are described and presented, there is a high degree of underlying commonality of approach, in at least four main areas. First, almost all make use of the idea of a cycle of improvement, which involves a series of steps involving data collection, problem description and diagnosis, the generation and selection of potential changes and then the implementation and evaluation of those changes likely to bring about improvement [4]. Second, most make use of a common set of QI tools and techniques in each stage of this improvement cycle—such as cause/effect or fishbone diagrams, process mapping or flowcharting, brainstorming, quantitative indicator construction and comparative data analysis and so on [5]. Third, most acknowledge the corporate or organizational dimension of improvement, the need for supportive leadership from senior managers and clinicians and clear organizational commitment to the aims of QI [6]. Fourth, most recognize the importance of the engagement or involvement of frontline clinical staff in QI, and the need for improvement processes to be grounded in their knowledge of service delivery and ideas on improvement [7].

While there are some significant differences between QI methodologies, they often relate to the emphasis placed on particular ideas or the way they are presented. For example, devotees of total quality management place great importance on the need for organizational leadership and a corporate approach to improvement [8]. Advocates of continuous QI stress the engagement of staff, the opportunities for improvement presented when errors or defects are found, and the idea that many small improvements lead to a significant and continuing improvement [9]. In contrast, supporters of business process re-engineering or process redesign focus on conceptualizing the organization as a process, on the application of some basic principles in process flow analysis and design, and on the opportunity to make quantum leaps in improvement through radical process redesign [10]. Proponents of lean thinking emphasize the importance of understanding process functioning and reducing or eliminating waste (or unproductive effort) [11], while adherents of six sigma focus on the use of statistical process control techniques and the reduction of variation [12]. To use a linguistic simile, these QI methodologies are more like dialect forms of a common language than they are like different languages. They share a basic grammar and vocabulary, and differ mainly in areas like pronunciation and accent.

QI: innovation or pseudoinnovation

Given that there is so much common ground to be found among QI methodologies, the constant process by which new QI ideas come into fashion, and then after a few years are replaced by something else, must be more a phenomenon of reinvention than true innovation [13]—what might be called ‘pseudoinnovation’. It could be argued here that most so-called ‘new’ QI ideas are largely a repackaging of the existing intellectual content, given a different spin or a fresh presentational gloss by their promoters. Differences in terminology serve to accentuate an appearance of innovation, and to conceal the essential continuity of content.

This may happen for two main reasons. First, the economic and social interests of the developers of QI methodologies are served by having something new to offer. Bluntly, each supposed wave of innovation brings with it fresh opportunities to sell consultancy and training to healthcare organizations, to produce books and journal articles about the methodology, to research, write and present on it, to organize conferences and seminars and so on. Second, the consumers or users of these ‘innovations’—senior managers and clinicians, and QI professionals and the like in healthcare organizations—are perhaps too credulous and willing to accept the often overstated claims and enthusiastic rhetoric of the innovators. It may be tempting to believe that the latest panacea for QI will be quick, simple and dramatically effective, even though our experience teaches us that in reality worthwhile healthcare QIs are often hard-fought, slow and painstaking achievements [14].

It is instructive to compare the development and dissemination of QI methodologies to the way in which other forms of innovation spread, and especially to the introduction of new clinical technologies [15]. These are subject to tough and independent scientific scrutiny, and we expect to have robust evidence to demonstrate their clinical and cost effectiveness before healthcare organizations adopt them or healthcare funders are willing to pay for them. In contrast, new QI methodologies are often promoted on the basis of fairly superficial descriptive accounts of their successful application, often in single or small numbers of selectively chosen organizations, with little or no quantitative evidence either of their benefits or their costs [16]. These accounts rarely provide sufficient information to allow the methods to be adequately replicated elsewhere, and they are often produced by the originators of the method themselves, whose conflicts of interest in researching their own ‘product’ go unremarked and unchallenged.

In fact, research suggests that the impact or effectiveness of healthcare QI programmes is often rather variable and limited [17, 18]. Furthermore, one thing the different QI methodologies have in common is that researchers often find that their results in widespread implementation are disappointing compared with the early experience of their use in pilot or experimental programmes with a small number of selected healthcare organizations, when they often appear more successful. This could be because the self-selected organizations which participate in the initial pilot or experimental programmes are more receptive to or committed to QI, more oriented towards innovation and change, or better resourced to undertake QI [19]. It could be because the organizations that adopt the QI methodology in widespread implementation do so less willingly or for reasons of institutional conformity [20]. It could also be because the implementation of the QI methodology itself is different. Pilot or experimental programme sites may get more support and advice from the developers of the methodology, and have a better understanding of how it is meant to be used, while those involved in later widespread implementation may have too little information about the actual content of the QI methodology, or how it is meant to be used.

The widely variable effectiveness of individual QI methodologies, and the likely causes of that variation, suggest that there is probably more to be gained by adopting a given QI methodology and sticking with it (developing skills and experience in its use, and building up engagement, commitment and organizational capacity in its application) than by switching from one methodology to another. Another reason, as we have established, is that the different QI methodologies have much in common and are often only superficially different in terminology or presentation, and so it seems unlikely that there is much to be gained from switching from one to another anyway. Moreover, there are good reasons to believe that such switching could have several important disadvantages. First, we know that when any service is reorganized or restructured, a drop in performance is often observed, as the organization's resources and attention are consumed by internal change processes instead of being used in service provision or production [21]. Repeated sequential changes in QI programmes are, therefore, likely to produce sequential or even cumulative deterioration in their performance, a phenomenon which has been called ‘redisorganization’ [22]. Second, much of the investment in any QI programme is in social and intellectual capital [23]—the capacity, capability and engagement of staff in both QI and in clinical teams—which is likely to be lost or at least diminished when QI methods are changed and those involved have to learn and apply a new QI methodology.

We could go further and hypothesize that at least part of the explanation for the disappointing impact of healthcare QI at a system level noted above may lie in this process of repeated pseudoinnovation or reinvention. If it takes time to establish a QI programme, to secure engagement and involvement, to embed it within the organization's structure and systems and to develop capacity and capability in improvement, then the tendency to chop and change direction, successively adopting and then discarding different QI methodologies every 2 or 3 years, may mean that no programme stands much chance of success before it is revised or replaced.

Assessing future QI innovations

The arguments presented in this paper suggest that QI professionals, clinicians, managers and policymakers in healthcare systems should be more sceptical about supposed innovations in QI methodologies and should seek more or better evidence before using them. To do so, they might apply emerging thinking in the field of evidence-based management about the nature and form of evidence and its use in making managerial decisions [24, 25]. Table 2 suggests that three types of evidence about any QI methodology—theoretical, empirical and experiential—are needed to make an informed assessment of whether it should be adopted [26]. These three types of evidence help us to answer different questions about the methodology, and evidence in all three areas is needed to make an informed decision and to then go on to implement that QI methodology.

Table 2

Assessing QI methodologies

Types of evidence | Key questions about the QI methodology | Sources of evidence to answer those questions
Theoretical evidence | How and why does it work? What is the underlying ‘programme theory’? | Descriptions of the methodology's intended mechanism of action, setting out the programme logic or intended causal sequence and drawing on appropriate social science theory
Empirical evidence | When, for whom and how well does it work? What effects does it have? What does it cost? | Qualitative and quantitative evaluations of the methodology's implementation, using rigorous and robust comparative methods to quantify effects, and undertaken independently
Experiential evidence | What is it like to use? What has been learned about its application in a wide variety of settings or contexts? | Descriptive accounts of the methodology in use, synthesis of practitioner experience and feedback, collation of learning and interchange among networks of users

Theoretical evidence underpins the ‘programme theory’ of the improvement methodology, and explains how and why it is expected to work [27]. It may draw on established social science theory in areas such as organizational behaviour, psychology, economics, public administration or sociology. Crucially, it should spell out in sufficient detail the programme logic or intended causal sequence of events. This makes the intended internal working of the improvement methodology clear to those trying to use it, and allows that programme logic to be questioned or challenged.

Empirical evidence tackles the question of whether the improvement methodology works, and if so in what circumstances, settings or organizational contexts it works best. It uses the framework of the established programme theory to shape hypotheses and guide data collection and sets out to quantify the impact of the methodology, and to measure its costs [28]. It should be undertaken comparatively (so there is some opportunity to test the counterfactual—what would have happened without the improvement methodology) and independently, so that those testing the improvement methodology have no vested interest in its success. It can be challenging, in fast-moving and turbulent organizational environments, to ensure that comparisons are valid and meaningful.
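The comparative logic called for above can be made concrete with a small sketch: estimating a QI methodology's effect against a counterfactual by comparing the before-to-after change at intervention sites with the change at comparison sites (a difference-in-differences style estimate). All figures are invented for illustration.

```python
def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical performance scores at three sites per group
# (e.g. % of patients treated within a target time)
intervention_before = [62.0, 58.0, 60.0]
intervention_after  = [71.0, 69.0, 70.0]
comparison_before   = [61.0, 59.0, 63.0]
comparison_after    = [64.0, 62.0, 66.0]

change_intervention = mean(intervention_after) - mean(intervention_before)
change_comparison   = mean(comparison_after) - mean(comparison_before)

# The naive before/after change at intervention sites overstates the effect;
# subtracting the comparison sites' change approximates what would have
# happened anyway (the counterfactual).
effect = change_intervention - change_comparison
print(effect)  # 7.0
```

Here the intervention sites improved by 10 points, but the comparison sites improved by 3 without the methodology, so the estimated effect attributable to the methodology is 7 points rather than 10. A real evaluation would of course also need matched sites, uncertainty estimates and attention to the validity threats the paragraph above notes.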

Experiential evidence provides a synthesis of the experience of other individuals and organizations in using or applying the improvement methodology—how have they found it, what practical lessons have they learned from its application and what advice would they offer to others about its use. The aim here should be to capture and précis the key learning points about the process of implementation especially [29].

Often, we have at best partial sets of evidence on both new and existing QI methodologies. We may have some empirical evidence (though often only on benefits, not on costs, and lacking any comparative component); and some experiential evidence (though often culled from enthusiastic but atypical early adopters and pioneers rather than from the experience of more typical individuals and organizations); and we rarely have much theoretical evidence—the underlying programme theories and intended mechanisms of QI methodologies are usually implied, but are not explicitly stated.


We all know and can recognize in ourselves the allure of the new, the untried or the excitingly different idea. There is a universal human tendency to follow fads and fashions, and healthcare organizations and their leaders are no exception to this rule [30, 31]. QI professionals, clinicians and managers are prone to embrace apparently promising QI methodologies with too much enthusiasm, and to show too little healthy scepticism about two key characteristics of such innovations. First, are they really new? And second, are they really an improvement? Such behaviour runs counter to Deming's first principle for management—the need for constancy of purpose [32].

It can be argued that the serial ‘pumping and dumping’ of a host of different QI methodologies in healthcare over the last 20 years has led not to sustained and continuing improvement but to some waste of effort and resources, and a failure to achieve in all healthcare organizations the benefits that sustained and consistent investment in QI could have brought.

The evidence about QI methodologies needs to be better organized, clearly synthesized and made more available to those who manage and lead QI programmes in healthcare organizations. There is also a need for more research in some areas, where the necessary theoretical, empirical and experiential evidence is not currently available; but perhaps the greatest need is for those who lead QI programmes and activities in healthcare organizations to exercise greater scepticism and demand more robust evidence for the methods and approaches they use.

