
EBM: Principles of Applying Users' Guides to Patient Care

Gordon H. Guyatt, MD; Brian Haynes; Roman Z. Jaeschke; Deborah J. Cook; Lee Green; C. David Naylor; Mark C. Wilson; W. Scott Richardson; for the Evidence-Based Medicine Working Group

Based on the Users' Guides to Evidence-based Medicine and reproduced with permission from JAMA (2000;284(10):1290-1296). Copyright 2000, American Medical Association.

Clinical Scenario

A senior resident, a junior attending, a senior attending, and an emeritus professor were discussing evidence-based medicine (EBM) over lunch in the hospital cafeteria.

"EBM", announced the resident with some passion, "is a revolutionary development in medical practice." She went on to describe EBM's fundamental innovations in solving patient problems.

"A compelling exposition," remarked the emeritus professor.

"Wait a minute," the junior attending exclaimed, also with some heat, and presented an alternative position: that EBM merely provided a set of additional tools for traditional approaches to patient care.

"You make a strong and convincing case," the emeritus professor commented.

"Wait a minute," the senior attending exclaimed to her older colleague, "their positions are diametrically opposed. They can't both be right."

The emeritus professor looked thoughtfully at the puzzled doctor and, with the barest hint of a smile, replied, "Come to think of it, you're right too."



Evidence-based Medicine (EBM), the approach to clinical care that underlies the 24 Users' Guides to the medical literature that JAMA has published over the last 8 years [1], is about solving clinical problems. The Users' Guides provide clinicians with strategies and tools to interpret and integrate evidence from published research in their patient care. As we developed the Guides, our understanding of EBM has evolved. In this article, since we are addressing physicians, we use the term EBM, but what we report applies to all who provide clinical care, and the rubric "evidence-based health care" is equally appropriate.

In 1992, in an article that provided a background to the Users' Guides, we described EBM as a shift in medical paradigms [2]. EBM, in contrast to the traditional paradigm, acknowledges that intuition, unsystematic clinical experience, and pathophysiologic rationale are insufficient grounds for clinical decision-making, and stresses the examination of evidence from clinical research. EBM suggests that a formal set of rules must complement medical training and common sense for clinicians to effectively interpret the results of clinical research. Finally, EBM places a lower value on authority than the traditional paradigm of medical practice.

While we continue to find the paradigm shift a valid way of conceptualizing EBM, as the scenario suggests, the world is often complex enough to invite more than one useful way of thinking about an idea or a phenomenon. In this article, we describe the two key principles that clinicians must grasp to be effective practitioners of EBM. One of these relates to the value-laden nature of clinical decisions; the other to the hierarchy of evidence postulated by EBM. The article continues with a comment on additional skills necessary for optimal clinical practice, and concludes with a discussion of the challenges facing EBM in the new millennium.


Two Fundamental Principles of EBM

An evidence-based practitioner must be able to understand the patient's circumstances or predicament (including issues such as social supports and financial resources); identify knowledge gaps and frame questions to fill those gaps; conduct an efficient literature search; critically appraise the research evidence; and apply that evidence to patient care [3]. The Users' Guides have dealt with the framing of the question in the scenarios with which each guide has begun, with searching the literature [4], with appraising the literature in the "validity" section of each guide, and with applying the evidence in the "results" and "applicability" sections of each guide. Underlying these steps are two fundamental principles. One, relating primarily to the assessment of validity, posits a hierarchy of evidence to guide clinical decision making. The other, relating primarily to the application of evidence, suggests that decision-makers must always trade off the benefits against the risks, inconvenience, and costs associated with alternative management strategies, and in doing so consider the patient's values [5]. In the sections that follow, we discuss these two principles in detail.

Clinical Decision-Making: Evidence is Never Enough

Picture a patient with chronic pain due to terminal cancer who has come to terms with her condition, has resolved her affairs and said her good-byes, and wishes only palliative therapy. The patient develops pneumococcal pneumonia. The evidence that antibiotic therapy reduces morbidity and mortality from pneumococcal pneumonia is strong. Almost all clinicians would agree that this strong evidence does not dictate that this patient receive antibiotics. Despite the fact that antibiotics might reduce symptoms and prolong the patient's life, her values are such that she would prefer a rapid and natural passing.

Picture a second patient, an 85 year old severely demented man, incontinent, contracted and mute, without family or friends, who spends his day in apparent discomfort. This man develops pneumococcal pneumonia. While many clinicians would argue that those responsible for this patient's care should not administer antibiotic therapy because of his circumstances, others would suggest they should. Once again, evidence of treatment effectiveness does not automatically imply that treatment be administered. The management decision requires a judgement about the trade-off between risks and benefits, and because values or preferences differ, the best course of action will vary between patients and between clinicians.

Picture a third patient, a healthy 30-year old mother of two children who develops pneumococcal pneumonia. No clinician would have any doubt about the wisdom of administering antibiotic therapy to this patient. This does not mean that an underlying value judgement has been unnecessary. Rather, our values are sufficiently concordant, and the benefits so overwhelm the risks that the underlying value judgement is unapparent.

In current health care practice, judgements often reflect clinician or societal values concerning whether intervention benefits are worth the cost. Consider the decisions regarding administration of tissue plasminogen activator (tPA) versus streptokinase to patients with acute myocardial infarction, or clopidogrel versus aspirin to patients with a transient ischemic attack. In both cases, evidence from large randomized trials suggests the more expensive agents are, for many patients, more effective. In both cases, many authoritative bodies recommend first-line treatment with the less effective drug, presumably because they believe society's resources would be better used in other ways. Implicitly, they are making a value or preference judgement about the trade-off between deaths and strokes prevented, and resources spent.

By values and preferences, we mean the underlying processes we bring to bear in weighing what our patients and our society will gain, or lose, when we make a management decision. A number of the Users' Guides focus on how clinicians can use research results to clearly understand the magnitude of potential benefits and risks associated with alternative management strategies.[6] [7] [8] [9] [10] Three guides focus on the process of balancing those benefits and risks when using treatment recommendations [11] [12] and in making individual treatment decisions.[13] The explicit enumeration and balancing of benefits and risks brings the underlying value judgements involved in making management decisions into bold relief.

Acknowledging that values play a role in every important patient care decision highlights our limited understanding of how to elicit and incorporate societal and individual values. Health economists have played a major role in developing a science of measuring patient preferences.[14] [15] Some decision aids incorporate patient values indirectly: if patients truly understand the potential risks and benefits, their decisions will likely reflect their preferences.[16] These developments constitute a promising start. Nevertheless, many unanswered questions remain concerning how to elicit preferences, and how to incorporate them in clinical encounters already subject to crushing time pressures. Addressing these issues constitutes an enormously challenging frontier for EBM.

A Hierarchy of Evidence

What is the nature of the "evidence" in EBM? We suggest a broad definition: any empirical observation about the apparent relation between events constitutes potential evidence. Thus, the unsystematic observations of the individual clinician constitute one source of evidence, and physiologic experiments another. Unsystematic clinical observations are limited by small sample size and, more importantly, by limitations in the human processes of making inferences. [17] Predictions about intervention effects on clinically important outcomes from physiologic experiments are usually right, but occasionally disastrously wrong. Recent examples include the mortality-increasing effects of growth hormone in critically ill patients [18], of the combined vasodilators and inotropes ibopamine [19] and epoprostenol [20] in patients with congestive heart failure (CHF), and of beta-carotene in patients with previous myocardial infarction [21], as well as the mortality-reducing effect of beta blockers [22] despite long-held beliefs that their negative inotropic action would harm CHF patients. Observational studies are inevitably limited by the possibility that apparent differences in treatment effect are really due to differences in prognosis between the treatment and control groups.

Given the limitations of unsystematic clinical observations and physiologic rationale, EBM suggests a hierarchy of evidence. Table 1 presents a hierarchy of study designs for issues of treatment; very different hierarchies are necessary for issues of diagnosis or prognosis. Clinical research goes beyond unsystematic clinical observation by providing strategies that avoid or attenuate spurious results. Because few if any interventions are effective in all patients, we would ideally test a treatment in the patient to whom we would like to apply it. Numerous factors can lead clinicians astray as they try to interpret the results of conventional open trials of therapy: natural history, placebo effects, patient and health worker expectations, and the patient's desire to please.


Table 1

A hierarchy of strength of evidence for treatment decisions
  • N of 1 randomized trial
  • Systematic reviews of randomized trials
  • Single randomized trial
  • Systematic review of observational studies addressing patient-important outcomes
  • Single observational study addressing patient-important outcomes
  • Physiologic studies
  • Unsystematic clinical observations


The same strategies that minimize bias in conventional trials of therapy involving multiple patients can guard against misleading results in studies involving single patients. [23] In the "N of 1" randomized controlled trial (RCT), patients undertake pairs of treatment periods in which they receive a target treatment in one period of each pair, and a placebo or alternative in the other. Patients and clinicians are blind to allocation, the order of the target and control periods is randomized, and patients make quantitative ratings of their symptoms during each period. The N of 1 RCT continues until both the patient and clinician conclude that the patient is, or is not, obtaining benefit from the target intervention. N of 1 RCTs are unsuitable for short-term problems; for therapies that cure (such as surgical procedures); for therapies that act over long periods of time or prevent rare or unique events (such as stroke, myocardial infarction, or death); and are possible only when patients and clinicians have the interest and time required. However, when the conditions are right, N of 1 randomized trials are feasible [24] [25], can provide definitive evidence of treatment effectiveness in individual patients, and may lead to long-term differences in treatment administration. [26]
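The pairing and randomization logic just described can be sketched in a few lines of Python. This is a hypothetical illustration of the design (the function name and data layout are our own, not part of any trial software):

```python
import random

def n_of_1_schedule(n_pairs=3, seed=None):
    """Generate a schedule for an N of 1 randomized trial.

    Each pair contains one active-treatment period and one
    placebo (or alternative) period, in random order, mirroring
    the design described above.
    """
    rng = random.Random(seed)
    schedule = []
    for pair in range(1, n_pairs + 1):
        arms = ["active", "placebo"]
        rng.shuffle(arms)  # randomize treatment order within this pair
        for period, arm in enumerate(arms, start=1):
            schedule.append({"pair": pair, "period": period, "arm": arm})
    return schedule
```

In practice a pharmacist or other third party would hold the schedule, so that both patient and clinician remain blind to allocation until they agree the question is answered.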

When considering any other source of evidence about treatment, clinicians are generalizing from results in other people to their patients, inevitably weakening inferences about treatment impact and introducing complex issues of how trial results apply to individuals. Inferences may nevertheless be very strong if results come from a systematic review of methodologically strong RCTs with consistent results; they are generally somewhat weaker if we are dealing with only a single RCT, unless it is very large and has enrolled a diverse patient population (Table 1). Because observational studies may under-estimate or, more typically, over-estimate treatment effects in an unpredictable fashion [27] [28], their results are far less trustworthy than those of RCTs. Physiologic studies and unsystematic clinical observations provide the weakest inferences about treatment effects. The Users' Guides have summarized how clinicians can fully evaluate each of these types of studies. [29] [30] [31]

This hierarchy is not absolute. If treatment effects are sufficiently large and consistent, for instance, observational studies may provide more compelling evidence than most RCTs. Observational studies have allowed extremely strong inferences about the efficacy of insulin in diabetic ketoacidosis or hip replacement in patients with debilitating hip osteoarthritis. At the same time, instances in which RCT results contradict consistent results from observational studies reinforce the need for caution. A recent striking example comes from a large, well-conducted randomized trial of hormone replacement therapy as secondary prevention of coronary artery disease in postmenopausal women. While the dramatically positive results of a number of observational studies had suggested the investigators would find a large reduction in risk of coronary events with hormone replacement therapy, the treated patients did no better than the control group. [32] Defining the extent to which clinicians should temper the strength of their inferences when only observational studies are available remains one of the important challenges for EBM. The challenge is particularly important given that much of the evidence regarding the harmful effects of our therapies comes from observational studies.

The hierarchy implies a clear course of action for physicians addressing patient problems: they should seek the highest available evidence from the hierarchy. The hierarchy makes it clear that any statement to the effect that there is no evidence addressing the effect of a particular treatment is a non sequitur. The evidence may be extremely weak (the unsystematic observation of a single clinician, or generalization from only indirectly related physiologic studies), but there is always evidence. Having described the fundamental principles of EBM, we will briefly comment on additional skills that clinicians must master for optimal patient care, and their relation to EBM.


Clinical Skills, Humanism, Social Responsibility and EBM

The evidence-based process of resolving a clinical question will be fruitful only if the problem is appropriately formulated. One of us, a secondary care internist, developed a lesion on his lip shortly before an important presentation. He was quite concerned and, wondering if he should take acyclovir, he immediately spent two hours searching for the highest quality evidence and reviewing the available RCTs. When he began to discuss his remaining uncertainty with his partner, an experienced dentist, she quickly cut short the discussion by exclaiming, "But, my dear, that isn't herpes!"

This story illustrates the necessity of obtaining the correct diagnosis before seeking and applying research evidence in practice, the value of extensive clinical experience, and the fallibility of clinical judgement. The essential skills of obtaining a history and conducting a physical examination and the astute formulation of the clinical problem come only with thorough background training and extensive clinical experience. The clinician makes use of evidence-based reasoning -- applying the likelihood ratios associated with positive or negative physical findings, for instance -- to interpret the results of the history and physical examination. [33] Clinical expertise is further required to define the relevant treatment options before examining the evidence regarding their expected benefits and risks.
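The likelihood-ratio arithmetic mentioned above can be made explicit. A minimal sketch (our illustration, not part of the original article) converts the pre-test probability to odds, multiplies by the likelihood ratio of the finding, and converts back:

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Apply a likelihood ratio to a pre-test probability
    (Bayes' theorem in odds form)."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Example: a positive finding with a likelihood ratio of 8
# raises a 25% pre-test probability to about 73%.
```

The same function handles negative findings: a likelihood ratio below 1 lowers the probability of disease.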

Finally, clinicians rely on their expertise to define features that impact on the generalizability of the results to the individual patient. We have noted that, except when clinicians have conducted N of 1 RCTs, they are attempting to generalize (or, one might say, particularize) results obtained in other patients to the individual before them. The clinician must judge the extent to which differences in the treatment (local surgical expertise, or the possibility of patient non-compliance, for instance), the availability of monitoring, or patient characteristics such as age, comorbidity, or concomitant treatment may impact on estimates of benefit and risk that come from the published literature. The clinician must further consider if the available studies have measured all important outcomes, followed patients for sufficiently long, and compared experimental treatment to the most compelling alternatives. While our Users Guide on treatment applicability will help clinicians define the general issues that they need to consider when advising the individual patient [34], nothing can substitute for clinical expertise in determining the specific considerations relevant to that person.

Thus, knowing the tools of evidence-based practice is necessary but not sufficient for delivering the highest quality patient care. In addition to clinical expertise, the clinician requires compassion, sensitive listening skills, and broad perspectives from the humanities and social sciences. These attributes allow understanding of patients' illnesses in the context of their experience, personalities, and cultures. The sensitive understanding of the patient links to evidence-based practice in a number of ways. For some patients, incorporation of patient values for major decisions will mean a full enumeration of the possible benefits, risks, and inconvenience associated with alternative management strategies that are relevant to the particular patient. For some of these patients and problems, this discussion should involve the patients' family. For other problems - the discussion of screening with prostate-specific antigen with older male patients, for instance - attempts to involve other family members might violate strong cultural norms.

Many patients would be uncomfortable with an explicit discussion of benefits and risks, and object to having what they experience as excessive responsibility for decision-making placed on their shoulders. [35] In such patients, who would tell us they want the doctor to make the decision on their behalf, the physician's responsibility is to develop insight to ensure that choices will be consistent with patients' values and preferences. Understanding and implementing the sort of decision-making process patients desire and effectively communicating the information they need requires skills in understanding the patient's narrative, and the person behind that narrative. [36] [37]

Ideally, evidence-based physicians' technical skills and humane perspective will lead them to become effective advocates for their patients, both in the direct context of the health system in which they work and in broader health policy issues. This advocacy may involve changing the system to facilitate evidence-based practice; for example, improving the infrastructure for access to high quality information to guide clinicians at the bedside. A continuing challenge for EBM, and for medicine in general, will be to better integrate the new science of clinical medicine with the time-honored craft of caring for the sick.


Additional Challenges for EBM

In 1992, we identified the skills necessary for evidence-based practice: the ability to precisely define a patient problem and the information required to resolve it; to conduct an efficient search of the literature; to select the best of the relevant studies; to apply rules of evidence to determine their validity; to extract the clinical message; and to apply it to the patient problem. [1] To these we would now add an understanding of how the patient's values impact on the balance between advantages and disadvantages of the available management options, and the ability to appropriately involve the patient in the decision. Studying the process of eliciting and understanding patient values, and the best ways of incorporating them in the clinical decision-making process, constitutes one important challenge for EBM.

The biggest obstacle to evidence-based practice remains limited time. Fortunately, new resources to assist clinicians are available, and the pace of innovation is rapid. One can classify information sources with the mnemonic 4S: the individual study, the systematic review of all the available studies on a given problem, a synopsis of individual studies or reviews, and systems of information. By systems we mean summaries that link a number of synopses related to the care of a particular patient problem (acute upper gastrointestinal bleeding) or type of patient (the diabetic outpatient) (Table 2).


Table 2

A hierarchy of pre-processed evidence
  • Primary studies: pre-processing involves selecting only studies that are both highly relevant and have study designs that minimize bias, thus permitting a high strength of inference
  • Summaries: systematic reviews provide clinicians with an overview of all the evidence addressing a focussed clinical question
  • Synopses: synopses of individual studies or of systematic reviews encapsulate the key methodologic details and results required to apply the evidence to individual patient care
  • Systems: practice guidelines, clinical pathways, or evidence-based textbook summaries of a clinical area provide the clinician with much of the information needed to guide the care of individual patients


Evidence-based selection and summarization is becoming increasingly available at each level. Secondary journals such as ACP Journal Club and Evidence-based Medicine review a large number of primary journals and include only articles that are both relevant and have passed a methodological filter. Clinicians can therefore be confident that any evidence they gather from these sources is already high on the hierarchy of evidence in Table 1. These secondary journals not only restrict themselves to studies of superior design, but present the information as structured abstracts that provide a synopsis of the individual studies and systematic reviews from the primary journals. The structure of the abstract is crucial: evidence-based synopses provide the critical information about a study that is necessary for determining validity and for applying results to individual patients. While not always the case, these synopses often provide most of the information clinicians need to incorporate the results of a new study into their clinical practice.

Clinicians whose priority is efficient evidence-based practice should, whenever one may be available, seek a high quality systematic review rather than the primary studies addressing their clinical question. For issues of therapy, published systematic reviews, including the Cochrane Collaboration database, provide a rapidly growing repository of clinically useful summaries.

Clinicians often seek answers to questions about a whole process of care rather than a focussed clinical question. Rather than "What is the impact of digoxin on my CHF patient's longevity?" the clinician may ask "Can I prolong my CHF patient's life?" or even "How can I optimize the management of my CHF patient?" Increasingly, clinicians asking these sorts of questions can look to high quality evidence-based practice guidelines or clinical pathways to provide, in effect, a series of synopses that summarize the available evidence. The best systems use computer technology to match the patient or problem characteristics with an evidence-based knowledge repository and provide patient-specific recommendations. Evidence suggests that these computerized decision support systems may change clinician behavior and improve patient outcomes. [38] At the same time, we must remember that recommendations can only be made for "average" patients, and the circumstances and values of the patient before us may differ. One way of dealing with this might be to bring the tools of decision analysis to the bedside. Whatever the ultimate solution, this exploration remains a frontier for EBM.

These developments emphasize that evidence-based practice requires the ability to distinguish high from low quality not only in primary studies, but also in systematic reviews, practice guidelines, and other integrative research focussed on management recommendations. That is why the Users' Guides have included articles that show clinicians how to use systematic reviews [26], decision analyses [4] [39], practice guidelines [5] [40], economic analyses [6] [10], and other articles that make treatment recommendations. [8] The summary table from each Users' Guide provides a checklist that clinicians can use to ensure that synopses of each type of study include the key information required to assess both validity and applicability to their practice.

The last decade has seen publication of a plethora of high quality systematic reviews and there is no slowing in sight. Most practice guidelines, however, remain methodologically weak. [41] Evidence-based systems have great potential, and are beginning to appear. Efficient production of evidence-based systems of information, increasingly user-friendly synopses, and further advances in easy electronic access to all levels of evidence-based resources should dramatically increase the feasibility of evidence-based practice in the next decade.

This article, and indeed the Users' Guides as a whole, have dealt primarily with decision-making at the level of the individual patient. Evidence-based approaches can also inform health policy-making [42], day-to-day decisions in public health, and systems-level decisions such as those facing managers at the hospital level. In each of these arenas, EBM can support the appropriate goal of gaining the greatest health benefit from limited resources. On the other hand, 'evidence' -- as an ideology, rather than a focus for reasoned debate -- has been used as a justification for many agendas in health care, ranging from crude cost-cutting to the promotion of extremely expensive technologies with minimal marginal returns. In the policy arena, dealing with differing values poses even more challenges than in the arena of individual patient care. Should we restrict ourselves to reallocating resources within a fixed pool of health care resources, or trade off health care services against, for instance, lower tax rates for individuals or lower health care costs for corporations? How should we deal with the large body of observational studies suggesting that social and economic factors may have a larger impact on the health of populations than health care delivery? How should we deal with the tension between what may be best for an individual and what may be best for the society to which that individual belongs? The debate about such issues is at the heart of evidence-based health policy-making, but inevitably has implications for decision-making at the individual patient level.



The Users' Guides to the medical literature provide clinicians with the tools to distinguish stronger from weaker evidence, stronger from weaker syntheses, and stronger from weaker recommendations for moving from evidence to action. Much of the Guides' content is devoted to helping clinicians understand study results and enumerate the benefits, side effects, toxicity, inconvenience and costs of treatment options, both for patients in general and for individual patients under their care. A clear understanding of the principles underlying evidence-based practice will aid clinicians in applying the Users' Guides to facilitate their patient care. Foremost among these principles are that value judgments underlie every clinical decision, that clinicians should seek evidence from as high in the appropriate hierarchy as possible, and that every clinical decision demands attention to the particular circumstances of the patient. Clinicians facile in using the Guides will complete a review of the evidence regarding a clinical problem with the best estimate of benefits and risks of management options and a good sense of the strength of inference concerning those benefits and risks. This leaves clinicians in an excellent position for the final -- and still inadequately explored -- steps in providing evidence-based care: the consideration of the individual patient's circumstances and values.



1. Guyatt GH, Rennie D. Users' Guides to the Medical Literature: Editorial. JAMA 1993;270:2096-2097.

2. Evidence-based Medicine Working Group. Evidence-based medicine: a new approach to teaching the practice of medicine. JAMA 1992;268:2420-2425.

3. Guyatt GH, Meade MO, Jaeschke RZ, Cook DJ, Haynes RB. Practitioners of evidence based care. Not all clinicians need to appraise evidence from scratch but all need some skills. BMJ. 2000;320:954-5.

4. Hunt DL, Jaeschke R, McKibbon KA. Users' guides to the medical literature: XXI. Using electronic health information resources in evidence-based practice. Evidence-Based Medicine Working Group. JAMA. 2000;283:1875-9.

5. Haynes RB, Sackett DL, Gray JMA, Cook DJ, Guyatt GH. Transferring evidence from research into practice: 1. The role of clinical care research evidence in clinical decisions. ACP Journal Club. 1996 Nov-Dec;125:A-14-15.

6. Guyatt GH, Sackett DL, Cook DJ for the Evidence-based Medicine Working Group. Users' guides to the medical literature. II - How to use an article about therapy or prevention. Part B. What were the results and will they help me in caring for my patients? JAMA 1994;271:59-63.

7. Jaeschke R, Guyatt GH, Sackett DL for the Evidence-based Medicine Working Group. Users' guides to the medical literature. III - How to use an article about a diagnostic test. Part B. What are the results and will they help me in caring for my patients? JAMA 1994;271:703-707.

8. Richardson WS, Detsky AS, for the Evidence-based Medicine Working Group. Users' guides to the medical literature. VII. How to use a clinical decision analysis. Part B. What are the results and will they help me in caring for my patients? JAMA 1995;273:1610-1613.

9. Wilson MC, Hayward R, Tunis SR, Bass EB, Guyatt GH for the Evidence-Based Medicine Working Group. Users' Guides to the Medical Literature: VIII. How to use Clinical Practice Guidelines. B. What are the Recommendations and will they help me in caring for my patients? JAMA 1995;274:1630-1632.

10. O'Brien BJ, Heyland DK, Richardson WS, Levine M, Drummond MF, for the Evidence-Based Medicine Working Group. Users' Guides to the Medical Literature XIII. How to use an article on economic analysis of clinical practice. B. What are the results and will they help me in caring for my patients? JAMA 1997;277:1802-1806.

11. Guyatt GH, Sackett DL, Sinclair J, Hayward RS, Cook DJ, Cook RJ for the Evidence-Based Medicine Working Group. Users' Guides to the Medical Literature. IX. A Method for Grading Health Care Recommendations. JAMA. 1995;274:1800-1804.

12. Guyatt G, Sinclair J, Cook D, Glasziou P. Users' guides to the medical literature XVI. How to use a treatment recommendation. JAMA 1999;281(19):1836-1843.

13. McAlister F, Stone S, Guyatt GH, Haynes RB, Sackett DL. Users' Guides to the Medical Literature XX. Applying results to an individual patient.

14. Drummond MF, Richardson WS, O'Brien B, Levine M, Heyland DK, for the Evidence-Based Medicine Working Group. Users' Guides to the Medical Literature XIII. How to use an article on economic analysis of clinical practice. A. Are the results of the study valid? JAMA 1997;277:1552-1557.

15. Feeny DH, Furlong W, Boyle M, Torrance GW. Multi-attribute health status classification systems: health utilities index. Pharmacoeconomics 1995; 7: 490-502.

16. O'Connor AM, Rostom A, Fiset V, Tetroe J, Entwhistle V, Llewellyn-Thomas H, Holmes-Rovner M, Barry M, Jones J. Decision aids for patients facing health treatment or screening decisions: systematic review. BMJ 1999;319:731-734.

17. Nisbett R, Ross L. Human inference. Prentice-Hall Inc, 1980.

18. Takala J, Ruokonen E, Webster NR, Nielsen MS, Zandstra DF, Vundelinckx G, Hinds CJ. Increased mortality associated with growth hormone treatment in critically ill adults. N Engl J Med 1999;341:785-792.

19. Hampton JR, van Veldhuisen DJ, Kleber FX, et al. for the Second Prospective Randomized Study of Ibopamine on Mortality and Efficacy (PRIME II) Investigators. Randomised study of effect of ibopamine on survival in patients with advanced severe heart failure. Lancet 1997;349:971-7.

20. Califf RM, Adams KF, McKenna WJ, Gheorghiade M, Uretsky B, McNulty SE, Darius H, Shulman K, Zannad F, Thurmond HE, Harrell F, Wheeler W, Soler-Soler J, Swedberg K. A randomized controlled trial of epoprostenol therapy for severe congestive heart failure: the Flolan International Randomized Survival Trial (FIRST). Am Heart J 1997;134:44-54.

21. Rapola JM, Virtamo J, Ripatti S, et al. Randomised trial of α-tocopherol and β-carotene supplements on incidence of major coronary events in men with previous myocardial infarction. Lancet. 1997 Jun 14;349:1715-20.

22. CIBIS-II Investigators and Committees. The Cardiac Insufficiency Bisoprolol Study II (CIBIS- II): a randomised trial. Lancet 1999;353:9-13.

23. Guyatt GH, Sackett DL, Taylor DW, et al. Determining optimal therapy - randomized trials in individual patients. N Engl J Med 1986;314:889-892.

24. Guyatt GH, Keller JL, Jaeschke R, et al. Clinical usefulness of N of 1 randomized control trials: three year experience. Ann Intern Med. 1990;112:293-299.

25. Larson EB, Ellsworth AJ, Oas J. Randomized clinical trials in single patients during a 2 year period. JAMA 1993;270:2708-12.

26. Mahon J, Laupacis A, Donner A, Wood T. Randomised study of n of 1 trials versus standard practice. BMJ 1996;312:1069-74.

27. Guyatt GH, DiCenso A, Farewell V, Willan A, Griffith L. Randomized Trials versus Observational Studies in Adolescent Pregnancy Prevention. J Clin Epidemiol. In press.

28. Kunz R, Oxman AD. The unpredictability paradox: review of empirical comparisons of randomised and non-randomised clinical trials. BMJ. 1998;317:1185-90.

29. Guyatt GH, Sackett DL, Cook DJ for the Evidence-based Medicine Working Group. Users' guides to the medical literature. II - How to use an article about therapy or prevention. Part A. Are the results of the study valid? JAMA 1993;270:2598-2601.

30. Levine M, Walter S, Lee H, Haines T, Holbrook A, Moyer V for the Evidence-based Medicine Working Group. Users' guides to the medical literature. IV. How to use an article about harm. JAMA 1994;271:1615-1619.

31. Oxman AD, Cook DJ, Guyatt GH for the Evidence-based medicine working group. Users' guides to the medical literature. VI: How to use an overview. JAMA 1994;272:1367-1371.

32. Hulley S, Grady D, Bush T, Furberg C, Herrington D, Riggs B, Vittinghoff E. Randomized trial of estrogen plus progestin for secondary prevention of coronary heart disease in postmenopausal women. Heart and Estrogen/progestin Replacement Study (HERS) Research Group. JAMA. 1998;280:605-13.

33. Sackett DL. A primer on the precision and accuracy of the clinical examination. JAMA 1992;267:2645-2648.

34. Dans AL, Dans LF, Guyatt GH, Richardson S, for the Evidence-based Medicine Working Group. Users' guides to the medical literature XIV. How to decide on the applicability of clinical trial results to your patient. JAMA. 1998;279(7):545-549.

35. Sutherland HJ, Llewellyn-Thomas HA, Lockwood GA, Tritchler DL, Till JE. Cancer patients: their desire for information and participation in treatment decisions. J Royal Soc Med 1989;82:260-263.

36. Greenhalgh T. Narrative based medicine: narrative based medicine in an evidence based world. BMJ 1999;318:323-5.

37. Greenhalgh T, Hurwitz B. Narrative based medicine: why study narrative? BMJ 1999;318:48-50.

38. Hunt DL, Haynes RB, Hanna SE, Smith K. Effects of computer-based clinical decision support systems on physician performance and patient outcomes: a systematic review. JAMA 1998;280:1339-46.

39. Richardson WS, Detsky AS, for the Evidence-based Medicine Working Group. Users' guides to the medical literature. VII. How to use a clinical decision analysis. Part A. Are the results of the study valid? JAMA 1995;273:1292-1295.

40. Hayward R, Wilson MC, Tunis SR, Bass EB, Guyatt GH, and the Evidence-Based Medicine Working Group. Users' guides to the medical literature. VIII. How to use clinical practice guidelines. Part A. Are the Recommendations Valid? JAMA 1995;274:570-574.

41. Shaneyfelt TM, Mayo-Smith MF, Rothwangl J. Are guidelines following guidelines? The methodological quality of clinical practice guidelines in the peer-reviewed medical literature. JAMA. 1999;281:1900-5.

42. Muir Gray JA, Haynes RB, Sackett DL, Cook DJ, Guyatt GH. Transferring evidence from research into practice: III. Developing evidence-based clinical policy. ACP Journal Club. 1997;A14.


© 2001 Evidence-Based Medicine Informatics Project

© 2004 Centre for Health Evidence. Site last updated: July 11, 2005.