
Towards evidence-based clinical practice: an international survey of 18 clinical guideline programs

DOI: http://dx.doi.org/10.1093/intqhc/15.1.31 (pp. 31–45). First published online: 1 February 2003


Objective. To describe systematically the structures and working methods of guideline programs.

Design. Descriptive survey using a questionnaire with 32 items based on a framework derived from the literature. Answers were tabulated and checked by participants.

Study participants. Key informants of 18 prominent guideline organizations in the United States, Canada, Australia, New Zealand, and nine European countries.

Main outcome measures. History, aims, methodology, products and deliveries, implementation, evaluation, procedure for updating guidelines, and future plans.

Results. Most guideline programs were established to improve the quality and effectiveness of health care. Most use electronic databases to collect evidence and systematic reviews to analyze the evidence. Consensus procedures are used when evidence is lacking. All guidelines are reviewed before publication. Authorization is commonly used to endorse guidelines. All guidelines are furnished with tools for application and the Internet is widely used for dissemination. Implementation strategies vary among different organizations, with larger organizations leaving this to local organizations. Almost all have a quality assurance system for their programs. Half of the programs do not have formal update procedures.

Conclusions. Principles of evidence-based medicine dominate current guideline programs. Recent programs are benefiting from the methodology created by long-standing programs. Differences are found in the emphasis on dissemination and implementation, probably due to differences in health care systems and political and cultural factors. International collaboration should be encouraged to improve guideline methodology and to globalize the collection and analysis of evidence needed for guideline development.

  • clinical guidelines
  • evidence-based medicine
  • program
  • survey

Clinical practice guidelines are developed throughout the world to improve the quality of health care. The methods used to develop guidelines vary among organizations [1,2] and the quality of the methods has long given cause for concern [3,4]. With the growth of evidence-based medicine in the 1990s, there has been a shift from professional consensus to scientific rigor, employing systematic reviews and meta-analyses as the basis for developing valid guidelines [5]. In addition, Eddy introduced the ‘explicit approach’, in which the recommendations are linked to the supporting scientific evidence and the benefits, harms, and costs of the interventions are transparently presented (e.g. by using balance sheets) [6]. Ideally, the recommendations should be accompanied by a statement of the strength of the underlying evidence and expert judgment, as well as by projections of the relevant health consequences of alternative courses of care [7]. The effect of clinical guidelines on medical practice and their impact on patient care are, however, often limited [8,9]. Hence, guideline development needs to be complemented by evidence-based implementation [10]. All efforts should ultimately lead to ‘evidence-based clinical practice’ in which the clinician ‘uses the best evidence available in consultation with the patient, to decide upon the option which suits that patient best’ [11,12].

Guides for the development, implementation, and evaluation of clinical guidelines have been developed in different countries, such as Australia [13] and the UK [14], but it is not known whether the recommended approaches are actually used in current guideline programs. Recent studies of international guideline activities were not conducted systematically, nor did they describe the content of existing guideline programs [2,15–17].

The aim of this survey was to describe systematically the structures and working methods of current guideline programs in different countries throughout the world, covering the entire scope of guideline development, dissemination, implementation, and evaluation. Our study was conducted within the context of an international research project, the AGREE (Appraisal of Guidelines Research and Evaluation) project, which aimed to harmonize guideline development methods in order to reduce duplication of effort and to ensure efficient use of resources [18].


Methods

For this study we adopted the Institute of Medicine (IOM) definition of clinical practice guidelines as ‘systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances’ [19]. Furthermore, we defined a guideline program as ‘a structured and coordinated program designed with the specific aim of producing several clinical practice guidelines’.

Selection of guideline programs

We aimed to study a wide range of programs from different countries. We included only national programs or programs with a high impact at the national level. The sample consisted of programs from countries involved in the AGREE project, with a maximum of two programs per country. To widen the scope, we also included the well-known technology assessment program from Sweden and the national guideline program in Australia, which serves as an example for other guideline organizations in that country. In total, 18 guideline programs were selected.

Design of the questionnaire

We produced a conceptual framework covering the relevant aspects of guideline programs, using criteria for guideline programs from different authors. Starting points were the ‘criteria for good guideline programs’ formulated by Lohr, and the framework for guideline implementation studies produced by Mäkelä and Thorsen [20,21]. We also used the IOM provisional instrument and the Cluzeau instrument, both of which provide criteria for assessing clinical guidelines, some of which can also be applied to guideline programs [7,22]. The first version of the framework was tested by describing a few programs. Valid information was difficult to obtain for some criteria, such as ‘credibility of agency responsible for guideline development’ and ‘process for selection of panel members for the guideline development group’; these criteria were therefore discarded. Based on the final framework (Figure 1) we designed a questionnaire that covered all of the items (available as supplementary data at IJQHC Online). The answer categories for some of the items were derived from the National Guideline Classification Scheme of the US National Guideline Clearinghouse [23].

Figure 1

Framework for description of clinical guideline programs. 1For these items, answer categories were derived from the US National Guideline Clearinghouse Classification Scheme [23].

Data collection and analysis

The questionnaire was sent to key informants of the guideline programs: persons in a leading role in the guideline development organization or with lengthy experience of the guideline program. Their answers were tabulated in simple linear classes (as shown in the tables below), except answers to open questions, which were summarized in short statements. When responses were unclear, we sent between four and eight additional specific questions to the key informant. For validation, we sent the first draft of the results back to the informants and asked them to check our interpretations; this also enabled them to compare their responses with those of others. All gave their approval of our interpretations.


Results

All key informants responded to the original questionnaire and the validation procedures.

Basic characteristics of guideline organizations

Nine organizations were professional societies, six were governmental agencies, two were national (or central) but not governmental, and one was an academic institution (Table 1). In the late 1970s, the National Institutes of Health Consensus Development Program led the development of consensus statements. The Dutch and the Swedish organizations started guideline development in the 1980s, with the others starting in the 1990s or in 2000. The common reasons given for establishing a guideline program were to improve the quality of health care, to support evidence-based care, to improve cost-effectiveness of care, and to contribute to more effective care. Some programs were intended to increase equity or to strengthen the medical profession, or were part of a research effort.

Table 1

Basic characteristics of guideline organizations

| Country | Name of organization (acronym/short name) | Type of organization | Website | Year of first guideline | Reason for guideline development |
| --- | --- | --- | --- | --- | --- |
| Australia | National Health and Medical Research Council (NHMRC) | National government | www.health.gov.au/hfs/nhmrc | 1995 | Pilot test development of national standards for others to follow |
| Canada | Cancer Care Ontario Practice Guidelines Initiative (CCOPGI) | Provincial government | www.cancercare.on.ca/ccopg | 1994 | Facilitate evidence-based decision making in cancer care |
| Denmark | Danish College of General Practitioners (DSAM) | Professional | www.dsam.dk | 1998 | Quality improvement |
| England | Centre for Health Services Research, University of Newcastle-upon-Tyne (North of England) | Academic institution | www.ncl.ac.uk/chsr | 1995 | Develop guidelines for research purposes |
| England | Royal College of Physicians London (RCP London) | Professional umbrella | www.rcplondon.ac.uk | 1990 | Provide guidance for multidisciplinary management of patient problems after stroke |
| Finland | Finnish Medical Society Duodecim (Duodecim) | Professional umbrella | www.duodecim.fi | 1997 | Provide necessary instruments for evidence-based, equitable, cost-effective health care |
| France | Agence Nationale d’Accréditation et d’Évaluation en Santé (ANAES) | National government | www.anaes.fr | 1993 | Improvement of quality of care |
| France | Fédération Nationale des Centres de Lutte Contre le Cancer (FNCLCC) | Professional umbrella | www.fnclcc.fr/sor.htm | 1993 | Professional initiative to improve quality of cancer care following evidence for practice variation at local, regional, and national level |
| Germany | Association of the Scientific Medical Societies in Germany (AWMF) | Professional umbrella | www.awmf.de | 1992 | Quality improvement and improvement of clinical research; now recommended by the High Advisory Board of the Federal Ministry of Health |
| Italy | Agency for Regional Health Services (ASSR) | Central but not governmental | www.assr.it | 2000 | Provide tools to be used at regional level in the promotion of effective and appropriate use of health care |
| The Netherlands | Dutch Institute for Healthcare Improvement CBO | Professional | www.cbo.nl | 1980 | Support medical audit in hospitals |
| The Netherlands | Dutch College of General Practitioners (NHG) | Professional | www.nhg.artsennet.nl | 1989 | Professionalization of GPs by formulating the state of the art, in order to give other parties (specialists, government) a clear view of their competence |
| New Zealand | New Zealand Guidelines Group (NZGG) | National but not governmental | www.nzgg.org.nz | 1998 | Reduce gap between current and appropriate care |
| Scotland | Scottish Intercollegiate Guidelines Network (SIGN) | Professional | www.sign.ac.uk | 1995 | Support health care quality improvement by promoting effective clinical care to reduce variation in clinical practice |
| Sweden | Swedish Council on Technology Assessment in Health Care (SBU) | National government | www.sbu.se | 1989 | Not applicable |
| Switzerland | Swiss Medical Association (FMH) | Professional | www.fmh.ch | 2000 | One component of a global programme to promote quality of care in Switzerland |
| United States | US Preventive Services Task Force (USPSTF) | National government | www.ahrq.gov/clinic/uspstfix.htm | 1989 | Confusion over preventive care, reluctance of insurers to cover preventive care, and reluctance of providers to provide preventive care |
| United States | National Institutes of Health Consensus Development Program (NIHCDP) | National government | consensus.nih.gov | 1977 | Facilitate translating medical scientific findings into practice |

All guideline programs except the Swiss program receive governmental support. Some agencies are funded exclusively by the government, but usually professional organizations also fund guideline development. The average budget for developing a single guideline varies from US $10 000–25 000 in New Zealand to $200 000 in the United States. The differences in the budget for dissemination are even larger, varying from nothing to $200 000 per guideline.

Purpose and topics

All of these guideline programs aim to ensure appropriate clinical care and six of the programs also attempt to contain health care costs (Table 2). Most programs target primary as well as secondary care, and their guidelines have a broad scope that covers prevention, diagnosis, and management of a wide range of clinical topics. Two of the programs focus on prevention and two are limited to cancer care. Four programs target primary care exclusively and two are for hospital specialists. Eleven programs also consider patients and policymakers as target users of their guidelines.

Table 2

Purpose and topics

| Organization | Objectives | Level of care | Target users | Scope of guidelines | Who selects topics? |
| --- | --- | --- | --- | --- | --- |
| NHMRC (Australia) | Appropriate care | Primary, secondary, and tertiary care | Physicians, nurses, patients, health care organizations, hospitals | Diagnosis, treatment/management | Initially NHMRC, more recently specialist colleges |
| CCOPGI (Canada) | Appropriate care, cost containment, equal access to health care | Primary, secondary, and tertiary care | Physicians, patients, policy makers, regional cancer systems | Screening, prevention, diagnosis, treatment/management | Guideline development committees; policy advisory committee may request a guideline on a new, expensive drug |
| DSAM (Denmark) | Appropriate care, cost containment | Primary care | Family physicians | Prevention | Danish College of GPs |
| North of England | Appropriate care, cost containment | Primary care | Physicians, nurses | Treatment/management | Program leader |
| RCP London (England) | Appropriate care | Secondary and tertiary care | Physicians, paramedics, nurses, patients, health care organizations, hospitals, policymakers | Prevention, diagnosis, treatment/management | Department of Health and National Institute for Clinical Excellence (NICE) |
| Duodecim (Finland) | Appropriate care, equitability | Public health, primary, secondary, and tertiary care | Physicians, paramedics, nurses, patients, health care organizations, hospitals, policymakers | Screening, prevention, diagnosis, treatment/management | Current Care board (representatives of major stakeholders) |
| ANAES (France) | Appropriate care, cost containment | Primary, secondary, and tertiary care | Physicians, paramedics (in private and public settings) | Screening, prevention, diagnosis, treatment/management | ANAES, medical speciality societies, health insurance |
| FNCLCC (France) | Appropriate care, cost containment | Public health, secondary care | Physicians, paramedics, nurses, patients, health care organizations, hospitals, policymakers | Diagnosis, treatment/management | Professionals, scientific committee, commissioned by national government agency |
| AWMF (Germany) | Appropriate care | Primary, secondary, and tertiary care | Health care providers, self-governmental bodies in health care | Screening, prevention, diagnosis, treatment/management | Medical specialty organizations |
| ASSR (Italy) | Appropriate care and organization of health services | Public health, primary, secondary, and tertiary care | Physicians, nurses, patients, health care organizations, hospitals | Screening, prevention, diagnosis, treatment/management | National health plan issued by Ministry of Health |
| CBO (Netherlands) | Appropriate care, effective health care | Secondary care | Physicians, paramedics, nurses | Screening, prevention, diagnosis, treatment/management | Committee of independent medical specialists and hospitals |
| NHG (Netherlands) | Appropriate care | Primary care | Family physicians | Screening, prevention, diagnosis, treatment/management | Independent advisory board of family physicians |
| NZGG (New Zealand) | Appropriate care, cost-effectiveness | Primary, secondary, and tertiary care | Physicians, paramedics, nurses, patients, health care organizations, hospitals, policymakers | Screening, prevention, diagnosis, treatment/management | Practitioners, using suitability screen developed by NZGG |
| SIGN (Scotland) | Appropriate care, reduce variation in clinical practice | Public health, primary, secondary, and tertiary care | Physicians, paramedics, nurses, patients, health care organizations, hospitals, policymakers | Screening, prevention, diagnosis, treatment/management | SIGN council (representatives from medical specialist societies, medical colleges, and funding organization) |
| SBU (Sweden) | Appropriate care, cost-effectiveness | Public health, primary, secondary, and tertiary care | Physicians, paramedics, nurses, patients, health care organizations, hospitals, policymakers | Screening, prevention, diagnosis, treatment/management | SBU board |
| FMH (Switzerland) | Appropriate care | Public health, primary, secondary, and tertiary care | Physicians | Screening, prevention, diagnosis, treatment/management | Swiss Medical Association, specialist societies |
| USPSTF (USA) | Appropriate care | Primary care | Physicians, nurses, health care organizations, hospitals, policymakers | Screening, prevention, diagnosis | USPSTF members, with input from outside groups including primary care professional societies, prevention experts, government health experts |
| NIHCDP (USA) | Appropriate care | Public health, primary, secondary, and tertiary care | Physicians, paramedics, nurses, patients, health care organizations, hospitals, policymakers | Screening, prevention, diagnosis, treatment/management | The NIH Office of Medical Applications of Research, with input from NIH institute directors |
  • Organizations are as defined in Table 1.

In most programs the organization responsible for guideline development coordinates the selection of topics for guidelines. People outside the organization—in some cases policymakers or health authorities—can propose topics for guideline development. In Italy, the topics are identified by the national health plan.

People involved in guideline development

Guideline development groups are typically fairly large, consisting of 10–20 members (Table 3). In four programs, smaller groups are preferred, and two programs have groups of >20 persons. The number of disciplines per group is most often three to five. Most programs invite methodological experts to participate, typically epidemiologists (15 programs), library scientists (12), statisticians (four), communication experts (four), health economists (three), and clinical or social psychologists (two). The remaining three programs include experts if they are necessary. Patient representatives participate in guideline groups in 11 programs and are involved during the pre-release review in two. Editorial support is given by permanent guideline staff in 14 programs (four of which also employ temporary staff per guideline) and by temporary committees only in four programs, while one program has no systematic arrangement for editorial support.

Table 3

People involved in guideline development

| Organization | Average number of members | Average number of disciplines | Experts always involved (beyond clinical experts) | Involvement of patients | Editorial support |
| --- | --- | --- | --- | --- | --- |
| NHMRC (Australia) | 10–15 | 3–5 | Epidemiologists, health economists | Yes | Standing staff |
| CCOPGI (Canada) | 15–20 | 3–5 | Library scientists, epidemiologists, statisticians, communication experts | Yes | Standing staff |
| DSAM (Denmark) | 5–10 | 3 | Only if necessary | No | Standing staff and hearing staff |
| North of England | 10–15 | 3–5 | Epidemiologists, health economists | Yes | Standing staff |
| RCP London (England) | >20 | >5 | Library scientists, epidemiologists, clinical psychologists | Yes | Committee that varies for different guidelines |
| Duodecim (Finland) | 5–10 | 3–5 | Library scientists, epidemiologists | No1 | Standing staff and appointed ‘group editor’ |
| ANAES (France) | >20 | 3–5 | Library scientists, epidemiologists | No | Standing staff and committee that varies for different guidelines |
| FNCLCC (France) | 10–15 | 3–5 | Library scientists, epidemiologists, statisticians | Yes | Standing staff |
| AWMF (Germany) | 5–10 | 1–20 | Library scientists, epidemiologists, social psychologists | No2 | Committee that varies for different guidelines |
| ASSR (Italy) | 10–15 | 0–3 | Library scientists, epidemiologists, statisticians, communication experts | No | Committee that varies for different guidelines |
| CBO (Netherlands) | 15–20 | >5 | Library scientists, epidemiologists | Yes | Standing staff |
| NHG (Netherlands) | 5–10 | 0–3 | Only if necessary | No | Standing staff |
| NZGG (New Zealand) | 10–15 | 3–5 | Epidemiologists | Yes2 | Standing staff and budgeted for each guideline |
| SIGN (Scotland) | 15–20 | >5 | Library scientists, epidemiologists | Yes | Standing staff and committee that varies for different guidelines |
| SBU (Sweden) | 10–15 | 3–5 | Library scientists, epidemiologists, communication experts, health economists | Yes | Chairman and project coordinator |
| FMH (Switzerland) | 10–15 | 3–>5 | Only if necessary | No | No usual support; varies for different guidelines |
| USPSTF (USA) | 10–15 | >5 | Library scientists, epidemiologists | No | Standing staff |
| NIHCDP (USA) | 15–20 | 5 | Library scientists, epidemiologists, statisticians, communication experts | Yes | Standing staff |
  • 1Patients are not members of the guideline development group, but representatives of patient organizations are involved through review.

  • 2Usually consumer group representatives rather than patients.

Methodology of guideline development

Training in the methodology of guideline development is offered to the members of the guideline development group in almost all programs. In seven programs the training is obligatory for all group members. All guideline programs use electronic database searches to collect evidence and most also use searches by hand (Table 4). Most evidence is analyzed by systematic reviews, supported in two programs by decision analyses. All but one program link recommendations to evidence, and seven programs use formal consensus methods to formulate recommendations. External review is used in all but one program, and the majority ask for formal authorization from outside. Guideline comparison is used in seven programs, with pilot testing before release in two. Authorization by the professional organization of target users is generally employed to endorse the guidelines.

Table 4

Methodology of guideline development

| Organization | Methods used to collect the evidence1 | Methods used to analyze the evidence2 | Methods used to formulate recommendations3 | Method of review4 |
| --- | --- | --- | --- | --- |
| NHMRC (Australia) | By hand, electronic | Meta, systematic | Evidence, informal | Comparison, external |
| CCOPGI (Canada) | By hand, electronic, unpublished data | Meta, systematic | Evidence, formal | External, internal |
| DSAM (Denmark) | By hand, electronic | Systematic | Informal | Comparison, internal |
| North of England | Electronic | Meta, systematic | Evidence, informal | External |
| RCP London (England) | Electronic | Meta, systematic, non-systematic | Evidence, formal, informal | Comparison, external |
| Duodecim (Finland) | Electronic | Meta, systematic | Evidence, informal | External, internal |
| ANAES (France) | By hand, electronic, unpublished data | Systematic, experience | Evidence, formal, informal | Pilot, external, internal |
| FNCLCC (France) | By hand, electronic | Systematic, non-systematic | Evidence, informal | External, internal |
| AWMF (Germany) | By hand, electronic | Meta, systematic, non-systematic | Evidence, formal, informal, subjective | Pilot, comparison, external, internal |
| ASSR (Italy) | Electronic | Meta, systematic | Evidence, formal | Pilot, comparison, external, internal |
| CBO (Netherlands) | By hand, electronic | Systematic, non-systematic | Evidence, formal, informal | External |
| NHG (Netherlands) | By hand, electronic | Non-systematic, experience | Evidence, informal | External, internal |
| NZGG (New Zealand) | By hand, electronic, patient data, unpublished data | Decision, meta, systematic, non-systematic, experience | Evidence | Comparison, external, internal |
| SIGN (Scotland) | By hand, electronic | Systematic, experience | Evidence, informal | External, internal |
| SBU (Sweden) | By hand, electronic, patient data, unpublished data | Decision, meta, systematic | Evidence | External |
| FMH (Switzerland) | By hand, electronic | Systematic, non-systematic | Evidence, informal | External, internal |
| USPSTF (USA) | Electronic | Meta, systematic | Evidence | Comparison, external, internal |
| NIHCDP (USA) | By hand, electronic, unpublished data | Meta, systematic | Evidence, formal | External |
  • 1By hand = hand searches of published literature, electronic = searches of electronic databases, patient data = searches of patient registry data, unpublished data = searches of unpublished data.

  • 2Decision = decision analysis, experience = experience-based, meta = meta-analysis, non-systematic = non-systematic review, systematic = systematic review.

  • 3Evidence = evidence-linked, formal = formal expert consensus, informal = informal expert consensus, subjective = subjective review.

  • 4Comparison = comparison with guidelines from other groups, external = external peer review, internal = internal peer review, pilot = pilot testing.

Products and deliveries

Long-standing programs have produced more guidelines than those started recently (Table 5). Average guideline length varies among programs, but guidelines tend to consist of more than 15 pages. Guidelines are usually presented in both a summary or short version and an extended version with notes or references or both. Eleven programs also produce patient versions. Almost all programs develop tools for application, such as flow charts or algorithms. Balance sheets are produced in three programs and risk tables in four. All but one program provide their guidelines on the Internet.

Table 5

Products and deliveries

| Organization | Total number of guidelines | Average number of pages | Products1 | Media used |
| --- | --- | --- | --- | --- |
| NHMRC (Australia) | 10–20 | >50 | Extensive, short, summary, patient, flowcharts | Paper, Internet |
| CCOPGI (Canada) | 30–50 | 15–25 | Extensive, summary, patient, flowcharts | Paper, CD-ROM, Internet |
| DSAM (Denmark) | 0–10 | 15–25 | Extensive, summary, flowcharts, risk tables | Paper, CD-ROM, Internet |
| North of England | 0–10 | >50 | Extensive, summary, balance sheets | Paper, Internet |
| RCP London (England) | 0–10 | >50 | Extensive, summary, patient | Paper, Internet |
| Duodecim (Finland) | 20–30 | 15–50 | Extensive, short, patient, flowcharts | Paper, CD-ROM, Internet |
| ANAES (France) | >50 | >50 | Extensive, short, summary2, patient2, flowcharts | Paper, Internet |
| FNCLCC (France) | >50 | 25–50 | Extensive, short, patient, flowcharts | Paper, CD-ROM, Internet |
| AWMF (Germany) | >50 | 15–25 | Extensive, short, summary, patient, flowcharts | Paper, Internet |
| ASSR (Italy) | 0–10 | 25–50 | Extensive, summary, patient2, flowcharts2 | Paper, Internet |
| CBO (The Netherlands) | >50 | 25–50 | Extensive, short, summary, risk tables | Paper, Internet |
| NHG (The Netherlands) | >50 | 10–15 | Extensive, summary, patient, risk tables | Paper |
| NZGG (New Zealand) | 5–10 | 25–50 | Extensive, short, summary, patient, flowcharts, balance sheets, risk tables | Paper, Internet |
| SIGN (Scotland) | 30–50 | 25–50 | Extensive, summary, flowcharts | Paper, CD-ROM, Internet |
| SBU (Sweden) | 30–50 | >50 | Extensive, short, patient | Paper, Internet |
| FMH (Switzerland) | 0–10 | 10–15 | Extensive, flowcharts | Paper, Internet |
| USPSTF (USA) | >50 | 10–15 | Extensive, short, summary, balance sheets | Paper, Internet |
| NIHCDP (USA) | >50 | 15–25 | Extensive, short | Paper, Internet |
  • 1Extensive = extensive version with notes/references, flow charts = flow charts/algorithms, patient = patient version, short = short version, summary = one- or two-page summary.

  • 2Planned products, therefore not yet available.

Implementation, evaluation, and update procedure

A wide range of strategies is used to implement guidelines and the strategies vary according to guideline topics (Table 6). Those used most often are educational materials and conferences. A specific group of ANAES guidelines is implemented by health insurers using financial disincentives. Some agencies do not take responsibility for implementing their guidelines but leave this to regional or local organizations. More than half of the programs monitor or evaluate the effects of at least some guidelines. Almost all programs use some type of quality system for good guideline development. Five organizations submit their guidelines to a guideline clearing house. All programs report that they update their guidelines at least occasionally. Half of the programs do not have formal update procedures.

Table 6

Implementation, evaluation, and update procedure

| Organization | Implementation strategies1 | Use of monitoring | Quality system2 | Update procedure |
| --- | --- | --- | --- | --- |
| NHMRC (Australia) | Educational, conferences, leaders, visits, audit, organizational | Yes | Criteria, comments, appraisal | Not formal |
| CCOPGI (Canada) | Educational, conferences, leaders | Yes | Comments, clearing house | Formal, regular |
| DSAM (Denmark) | Educational, conferences, leaders, visits, audit, organizational, financial3 | Yes | Criteria, comments | Formal, every 2–3 years |
| North of England | Educational, conferences, visits4, reminders4 | No | Criteria, comments | Not formal, irregular |
| RCP London (England) | Educational, conferences, leaders, visits, audit, patient, organizational | Yes | Comments, clearing house | Formal, regular |
| Duodecim (Finland) | Educational, conferences, visits, audit, organizational | Yes, for some | Criteria, comments | Formal, every 2 years |
| ANAES (France) | Educational, leaders, audit, organizational, financial5 | No | Criteria | Formal, irregular |
| FNCLCC (France) | Educational, conferences, leaders, audit, reminders, organizational | No | Criteria, comments | Formal, irregular |
| AWMF (Germany) | Educational, conferences, leaders, audit, patient, organizational, financial6 | Yes, for some | Criteria, comments, appraisal, clearing house | Not formal, regular |
| ASSR (Italy) | Educational, audit, reminders | Yes | Criteria, appraisal | Not yet, but planned |
| CBO (Netherlands) | Conferences, audit | No | Criteria | Formal, every 5 years |
| NHG (Netherlands) | Educational, conferences, visits, reminders, organizational, financial3 | Yes | Comments | Formal, every 3 years |
| NZGG (New Zealand) | Educational, conferences, leaders, audit, organizational | Yes | Criteria, comments, appraisal | Formal, irregular |
| SIGN (Scotland) | Conferences, leaders, organizational | Yes | Criteria, comments, clearing house | Formal, every 2 years |
| SBU (Sweden) | Educational, conferences, leaders, visits, organizational3 | Yes | Not applicable | Formal, every 2–3 years |
| FMH (Switzerland) | Conferences | Yes | Criteria | Formal, regular |
| USPSTF (USA) | Conferences, reminders7 | No | Criteria, comments, clearing house | Not formal, regular |
| NIHCDP (USA) | Educational, conferences | Yes | Not available | Not formal, irregular |
  • 1Audit = audit and feedback, educational = educational materials, financial = financial incentives, leaders = local opinion leaders, organizational = organizational interventions, patient = patient mediated interventions, reminders = (computer) reminders.

  • 2Appraisal = appraising existing guidelines, comments = revising guidelines based on comments from the professional community, criteria = developing and publishing criteria for good guideline development (‘guidelines for guidelines’), clearing house = submitting guidelines to guideline clearing house.

  • 3For one guideline.

  • 4Used in implementation trials.

  • 5Used by health insurers.

  • 6Strategies vary between different medical societies.

  • 7Computerized systems are developed by others.

Future plans

The future plans of guideline programs reflect active development. Nine programs consider themselves to be in a transitional phase, so their plans are evolving very rapidly. Plans for better management of the quality of the guideline process were mentioned by half of the respondents. Many of them plan to increase the amount of training for guideline development groups. Four programs intend to create a strategy for more active implementation or dissemination. The remaining plans are divided evenly among creation of a better evidence base, more patient involvement, better updating procedures, increased attention to cost-effectiveness or economic issues in guidelines, and more international collaboration. The only program that does not yet present its guidelines on the Internet has plans to do so, and one program plans to translate its native-language guidelines into English.


Discussion

All of the guideline programs included in this study intend to develop clinical guidelines rigorously. While their integration in health care systems varies, there appears to be a trend in guideline development methodology toward the increasing use of similar procedures. Various organizations seem to be using the available information on good guideline development methods, and newcomers are modeling their programs on existing programs. In particular, the evidence-based approach (i.e. using electronic database searches, systematic review, and evidence linkage) is being adopted with greater consistency by all organizations. Long-standing programs do not necessarily have stricter procedures for development than more recent programs, but the governmental agencies in our sample tend to utilize more quality assurance measures than professional societies.

Most guideline programs combine an evidence-based approach with formal or informal consensus procedures. When evidence is contradictory, controversial, or lacking, consensus procedures are needed to arrive at workable recommendations. Consensus could be considered an additional source of evidence when it is obtained from formal surveys of experts and the broader population of practitioners, or from feasibility studies [24]. Exploring and comparing existing guidelines could provide additional insight into how evidence and consensus can be combined.

While the programs share basic principles, we found some important differences in the details. Patients are not involved in all programs, and pilot testing and guideline comparison are used in only a few. National agencies take less responsibility for implementation of guidelines than do professional organizations. Larger organizations seem to prefer leaving implementation to regional and local organizations, while guideline development organizations in smaller countries are more involved in implementing their guidelines. Finally, professional organizations use more formal update procedures than other organizations.

Differences among guideline programs could be due in part to differences in resources. For instance, governmental agencies have larger budgets for guideline development, which could explain why their guideline development groups include more members and more disciplines than those of the professional organizations. Differences in scope and purpose due to different health care systems and political and cultural factors could explain differences in dissemination and implementation strategies.

Even with small budgets, professional organizations can develop high quality guidelines if they work within a structured program, adopting the quality criteria of other programs and using evidence collected elsewhere. In some countries, guideline development is facilitated by large government-funded organizations, such as the Agency for Healthcare Research and Quality in the United States, the National Health and Medical Research Council in Australia, and the National Institute for Clinical Excellence (NICE) in England and Wales.

These future plans show that guideline organizations are aiming for active international collaboration. There is growing awareness that cooperative partnerships such as the AGREE Collaboration (see Appendix) may help improve methods of guideline development, implementation, and evaluation, and avoid duplication of effort. International databases such as the Cochrane Library are useful sources of evidence, but they provide only part of the evidence needed for guidelines; pragmatic approaches are needed for subjects not covered by existing reviews [25]. Browman suggested establishing a registry of clinical guidelines under development [26], through which guideline organizations could benefit from the evidence collected and work done by others. Exchanging guidelines that fulfill agreed quality criteria [18], for example via guideline clearing houses [23,27], and sharing the monitoring of emerging literature to keep guidelines up to date will likewise prevent duplication of effort [28]. In this way the collection and analysis of evidence can become a worldwide effort. However, the formulation of recommendations will still depend on country-specific or local decisions, influenced by professional and cultural values and by the cost of applying the evidence. Aiming for international guidelines will therefore probably be ‘a step too far’ [29].

Our study is the most recent survey of clinical guideline programs throughout the world. McGlynn et al. did similar work in 1990 but included only consensus development conference programs, and the studies of Audet et al. and the Institute of Medicine covered only American programs. In contrast, we collected structured information on 18 organizations responsible for guideline development programs in the United States, Canada, Australia, New Zealand, and nine European countries. We did not aim to conduct a comprehensive review of guideline programs; many programs, in particular those of professional organizations in Canada and the United States, were not included in our sample. Nevertheless, because each program provides a model of good guideline development in its country, our sample can be considered representative of large national programs with high impact.


Conclusions

Principles of evidence-based medicine have strongly influenced the methodology of guideline development, and consensus on the essential features of guideline programs is growing. Newer programs are benefiting from the more advanced methodology created by experienced, long-standing programs. However, programs still differ with respect to ownership (i.e. governmental agencies versus professional organizations) and the emphasis placed on dissemination and implementation. International collaboration should be encouraged, both to improve guideline methodology and to promote worldwide collection and analysis of the evidence needed for guideline development. Patient involvement could be increased to enhance the use of guidelines in practice. Ultimately, we may then anticipate that evidence-based guidelines will lead to evidence-based clinical practice.


Appendix: The AGREE Collaboration

The following individuals participated in the AGREE Collaboration:

José Asua, MD, PhD (Basque Office for Health Technology Assessment, Spain)

Anne Bataillard, MD (Fédération Nationale des Centres de Lutte Contre le Cancer, Paris, France)

Melissa Brouwers, PhD (McMaster University and Cancer Care Ontario, Hamilton, Ontario, Canada)

George Browman, MD (Hamilton Regional Cancer Centre, Hamilton, Canada)

Jako Burgers, MD [Centre for Quality of Care Research (WOK), University Medical Centre Nijmegen, Netherlands]

Bernard Burnand, MD, MPH (Institut Universitaire de Médecine Sociale et Préventive, Lausanne, Switzerland)

Françoise Cluzeau, MSc, PhD (St George’s Hospital Medical School, London, UK)

Isabelle Durand-Zaleski, PhD (Hôpital Henri Mondor, Créteil, France)

Pierre Durieux, MD (Hôpital Européen Georges Pompidou, Paris, France)

Cindy Farquhar, MD, PhD (New Zealand Guidelines Group, Auckland, New Zealand)

Gene Feder, MD, PhD (Barts and the London Queen Mary’s School of Medicine and Dentistry, University of London, UK)

Béatrice Fervers, MD (Fédération Nationale des Centres de Lutte Contre le Cancer, Paris, France)

Roberto Grilli, MD (Agenzia Sanitaria Regionale, Bologna, Italy)

Jeremy Grimshaw, MB, PhD (Ottawa Health Services Research Institute, Ottawa, Canada)

Richard Grol, PhD [Centre for Quality of Care Research (WOK), University Medical Centre Nijmegen, Netherlands]

Steven Hanna, PhD (McMaster University, Hamilton, Ontario, Canada)

Pieter ten Have, MD (Dutch Institute for Healthcare Improvement CBO, Utrecht, The Netherlands)

Rod Jackson, PhD (Effective Practice Institute, University of Auckland, New Zealand)

Albert Jovell, MD, PhD (Fundacio Biblioteca Josep Laporte, Barcelona, Spain)

Niek Klazinga, MD, PhD (Academic Medical Centre, University of Amsterdam, Netherlands)

Finn Kristensen, MD, PhD (Danish Institute for Health Technology Assessment, Copenhagen, Denmark)

Peter Littlejohns, MBBS, MD (National Institute for Clinical Excellence, London, UK)

Pia Bruun Madsen (Danish Institute for Health Technology Assessment, Copenhagen, Denmark)

Marjukka Mäkelä, MD, PhD, MSc (Finnish Office for Health Care Technology Assessment, Helsinki, Finland)

Juliet Miller, MA, MBA [Scottish Intercollegiate Guidelines Network (SIGN), Edinburgh, UK]

Günter Ollenschläger, MD, PhD (Agency for Quality in Medicine, Cologne, Germany)

Camilla Palmhøj-Nielsen (Danish Institute for Health Technology Assessment, Copenhagen, Denmark)

Loes Pijnenborg, MD, PhD (Dutch College of General Practitioners, Utrecht, The Netherlands)

Safia Qureshi, PhD [Scottish Intercollegiate Guidelines Network (SIGN), Edinburgh, UK]

Rosa Rico-Iturrioz, MD, MSc (Basque Office for Health Technology Assessment, Spain)

Kitty Rosenbrand, MD (Dutch Institute for Healthcare Improvement CBO, Utrecht, The Netherlands)

Jean Slutsky (Agency for Healthcare Research and Quality, Rockville, USA)

John-Paul Vader, MD, MPH (Institut Universitaire de Médecine Sociale et Préventive, Lausanne, Switzerland)

Joost Zaat, MD, PhD [Centre for Quality of Care Research (WOK), University Medical Centre Nijmegen, Netherlands]


Acknowledgements

We thank Akke van der Bij for her administrative and technical support, Françoise Cluzeau for her valuable comments on drafts of the manuscript, and the following individuals for providing data on their respective guideline programs included in this study: Camilla Palmhøj-Nielsen and Pia Bruun Madsen (Danish College of General Practitioners), Anne Bataillard and Béatrice Fervers (Fédération Nationale des Centres de Lutte Contre le Cancer), Pierre Durieux and Patrice Dosquet (Agence Nationale d’Accréditation et d’Évaluation en Santé), Günter Ollenschläger (Association of the Scientific Medical Societies in Germany), Helena Dahlgren (Swedish Council on Technology Assessment in Health Care), Bernard Burnand (Swiss Medical Association), Martin Eccles (Centre for Health Services Research, University of Newcastle-upon-Tyne), Penny Irwin (Royal College of Physicians London), Jeremy Grimshaw and Juliet Miller (Scottish Intercollegiate Guidelines Network), Cindy Farquhar and Rod Jackson (New Zealand Guidelines Group), Roberto Grilli (Agency for Regional Health Services in Italy), Pieter ten Have (Dutch Institute for Healthcare Improvement CBO), Frans Meulenberg (Dutch College of General Practitioners), the late Chris Silagy (National Health and Medical Research Council), Melissa Brouwers (Cancer Care Ontario Practice Guidelines Initiative), David Atkins (US Preventive Services Task Force), and John Bowersox (National Institutes of Health Consensus Development Program). Finally, we are grateful to all individuals participating in the AGREE Collaboration (see Appendix) who contributed to this paper by commenting on earlier drafts during the AGREE workshops. The AGREE Collaboration is funded by the EU-BIOMED2 Program (BMH4-98-3669).


  • Address reprint requests to Jako S. Burgers, Centre for Quality of Care Research, University Medical Centre Nijmegen, PO Box 9101, 6500 HB Nijmegen, The Netherlands. E-mail: j.burgers@hsv.kun.nl

