
How to Use a Clinical Decision Analysis

W. Scott Richardson, Allan S. Detsky, for the Evidence-Based Medicine Working Group

Based on the Users' Guides to Evidence-based Medicine and reproduced with permission from JAMA (1995 Apr 26;273(16):1292-1295 and 1995 May 24;273(20):1610-1613). Copyright 1995, American Medical Association.

Clinical Scenario

You are the attending physician on an inpatient service where a 51-year-old man is admitted with congestive heart failure of recent onset. You find he has a dilated cardiomyopathy, the etiology of which remains unknown after a thorough evaluation. He is in sinus rhythm. The team's resident asks you whether the patient should be anticoagulated with warfarin, enough to keep his INR between 2.0 and 3.0, in order to prevent systemic emboli, even though his echocardiogram does not show left ventricular thrombus. You are not sure about the evidence concerning this issue, so you admit your shared knowledge gap and resolve to search together for the relevant information.


The Search

In the hospital library, the two of you search the MEDLINE system using several search terms, such as cardiomyopathy, dilated; cardiomyopathy, congestive; and heart failure, congestive; crossed with warfarin, anticoagulation and thromboembolism. Despite several attempts, you retrieve no randomized trials of warfarin used for this purpose. Even after enlisting the help of the librarian, you are unable to locate any clinical trials about this question. You do come across an editorial calling for a clinical trial of your question [1]. You also retrieve two review articles, one that recommends anticoagulation for such patients [2], and the other that recommends no anticoagulation [3]. The latter review cites a decision analysis on this issue [4], which you retrieve, hoping to find further guidance for your decision.



Decision making involves choosing an action after weighing the risks and benefits of the alternatives. While all clinical decisions are made under conditions of uncertainty, the degree of uncertainty decreases when the medical literature includes directly relevant, valid evidence. When the published evidence is scant, or less valid, uncertainty increases.

Decision analysis is the application of explicit, quantitative methods to analyzing decisions under conditions of uncertainty. Decision analysis allows clinicians to compare the expected consequences of pursuing different strategies. The process of decision analysis makes fully explicit all of the elements of the decision, so that they are open for debate and modification. While a decision analysis will not solve your clinical problems, it can help you explore the decision [5] [6] [7].

We will use the term clinical decision analyses to include studies that analyze decisions that clinicians face in the course of patient care, such as deciding whether to screen for a condition, choosing a testing strategy or selecting a treatment. While such analyses can be undertaken to inform a decision for an individual patient (Should I recommend warfarin to this 51-year-old man with idiopathic dilated cardiomyopathy?), they are more widely undertaken to help inform a decision about clinical policy [8] (Should I routinely recommend warfarin to patients in my practice with dilated cardiomyopathy?). The study retrieved by the search for our scenario is an example of this latter type, while an example of the former is the analysis by Wong et al of whether to recommend cardiac surgery for an elderly woman with aortic stenosis [9].

Decision analysis can also be applied to more global questions of health care policy, analyzed from the perspective of society or a national health authority. Examples include analyses of whether or not to screen for prostate cancer [10] and comparing different policies for cholesterol screening and treatment [11]. While decision analyses in health services research share many attributes with clinical analyses [12], they are sufficiently different that they are beyond the scope of these articles.

In helping you understand decision analysis, we will review some of the anatomy and physiology of decision models. This is not meant to be an article on how to perform decision analysis; if you wish to read about that, you should look elsewhere [13] [14].


The Framework for the Users' Guides

We will approach articles on clinical decision analysis using the same framework introduced in earlier articles in this series:

  • Are the results valid?
    This question addresses whether the strategy recommended by the analysis is truly likely to be the better one for patients. Just as with other types of studies, the validity of a decision analysis is largely determined by the strength of the methods used.
  • What are the results?
    The users' guides under this second question consider the size of the expected net benefit from the recommended strategy, and our confidence in this estimate of net benefit.
  • Will the results help me in caring for my patients?
    If the decision analysis yields valid and important results, you should examine whether these results can be generalized to the patients in your practice.

Table 1 summarizes the specific guides you should use when addressing these three questions. We will explore the guides by applying them to the study we found in our search. This article will deal with the validity guides, while the next in the series will address the results and applicability.


Table 1: A Decision Analysis

I. Are the results of the study valid?

II. What are the results?

III. Will the results help me in caring for my patients?


I. Are the results valid?

1. Were all important strategies and outcomes included?

At issue here is how well the structure of the model fits the clinical decision you face. Most clinical decision analyses are built as decision trees, and the articles will usually include one or more diagrams showing the structure of the decision tree used for the analysis. Reviewing these diagrams will help you understand the model. You must then judge whether the model fits the clinical problem well enough to be valid.

Figure 1 shows a diagram of a much simplified version of the decision tree for the anticoagulation problem. The clinician has two options for patients with cardiomyopathy, either to offer no prophylaxis or to prescribe warfarin. Either way, patients may or may not develop embolic events. Prophylaxis lowers the chance of embolism but can cause bleeding in some patients. As seen in Figure 1, decision trees are displayed graphically, oriented from left to right, with the decision to be analyzed on the left, the compared strategies in the center and the clinical outcomes on the right. The decision is diagrammed by a square, termed a decision node. The lines emanating from the decision node represent the clinical strategies being compared. Chance events are diagrammed with circles, called chance nodes, and outcome states are shown as triangles or as rectangles.


Figure 1
The CHE regrets that we are unable to supply this graphic image. Please refer to the printed version.


To explore more fully how the model's structure affects its validity, we will highlight two aspects here.

a. Were all of the realistic clinical strategies compared?

In a decision analysis, a strategy is defined as a sequence of actions and decisions that are contingent upon each other. For instance, the strategy of anticoagulating someone includes not only the prescription and the monitoring, but also the adjustment of the warfarin dose for changes in prothrombin time. The authors should specify which decision strategies are being compared (at least two, otherwise there is no decision). Further, the clinical strategies included should be described in enough detail to recognize them as separate and realistic choices. You should satisfy yourself that the clinical strategies you consider important are included in the analysis.

For example, in a decision analysis of the management of suspected herpes encephalitis, the authors included the three strategies available to clinicians then: brain biopsy, empirical vidarabine, or neither [15]. At that time, this model represented the clinical decision well. Since then, however, acyclovir has become available and has been widely used for this disorder. Because the original model did not include an acyclovir strategy, it would no longer accurately portray the decision.

In the anticoagulation example, the analysts studied two clinical strategies, warfarin and no warfarin. This fits quite well the clinical decision you face in the scenario. Note that the decision model does not include a third strategy of using aspirin instead of warfarin. If, when considering the treatment options for this patient, you would seriously consider the use of aspirin instead of warfarin, then you would judge this model as incomplete.

b. Were all clinically relevant outcomes considered?

To be useful to clinicians and patients, the decision model should include the outcomes of the disease that matter to patients. Generally speaking, these include not only the quantity of life but also its quality, in measures of disease and disability. Obviously, the specific disorder in question determines which outcomes are clinically relevant. For an analysis of an acute, life-threatening condition, life expectancy might be appropriate as the main outcome measure. But in an analysis of diagnostic strategies for a nonfatal disorder, more relevant outcomes would be discomfort from testing or days of disability avoided. By examining the outcomes used in the analysis, you can discover the viewpoint from which the analyst built the decision model. Clinical decision analyses should be built from the perspective of the patient, that is, should include all the clinical benefits and risks of importance to patients (they can include other considerations as well).

Also, by comparing the outcomes between strategies, you can discover the trade-offs built into the model. Most clinical dilemmas are dilemmas because they include trade-offs between competing benefits and competing risks. For instance, when deciding how best to manage small abdominal aortic aneurysms, one must weigh reducing the risk of aneurysm rupture against the chance of unnecessary surgery in patients who would have died from other causes before rupture [16].

For a decision analysis to be worth doing, i.e. for the clinical decision to be difficult enough, the choice of strategies should be balanced on one or more of such trade-offs. You should satisfy yourself that these important trade-offs are represented well in the model's structure.

For the anticoagulation example, the authors' decision model includes all of the clinical events of interest to patients (stroke, other emboli, hemorrhage, etc.). The outcomes are measured as quality-adjusted life expectancy, a scale that combines information about both the quantity and the quality of life. This metric fits your clinical decision well, for you can expect warfarin might affect both the quantity and quality of life. By reviewing the tree diagram, you can see that the authors have included the principal trade-off in the decision: the warfarin strategy offers the benefits of preventing systemic arterial embolism causing stroke and preventing pulmonary embolism, while it could cause the harm of bleeding.

2. Was an explicit and sensible process used to identify, select and combine the evidence into probabilities?

To assemble the large amount of information necessary for a decision analysis, the analyst searches the published literature and interviews experts and patients. Just as with other integrative studies like overviews [17], authors of clinical decision analyses should search and select the literature in an explicit and unbiased way, and then appraise the validity, effect size and homogeneity of the studies in a reproducible fashion. Ideally, they would judge study quality by applying criteria akin to those in the other articles in this series, whether for primary studies of therapy [18] [19], diagnosis [20] [21], harm [22], prognosis [23], or for other integrative studies, such as overviews [17]. In other words, the authors should perform as comprehensive a literature review as is required for a meta-analysis.

Once gathered, the information must be transformed into quantitative estimates of the likelihood of events, or probabilities. The scale for probability estimates ranges from 0 (impossible) to 1.0 (absolutely certain). Probabilities must be assigned to each branch emanating from a chance node, and for each chance node, the sum of probabilities must add to 1.0.

For example, looking at Figure 1, note that the no anticoagulation strategy (the upper branch coming from the decision node) has one chance node, at which two possible events could occur, either an embolism or no embolism (labelled no embolism). To assign a probability to these two branches from the chance node, the analyst tracks down all relevant evidence about the rates of systemic emboli in patients with cardiomyopathy. If the best estimate of the rate were found to be 5%, then the analyst would assign 0.05 to the embolism branch and 0.95 to the no embolism branch.

Usually, rates from clinical studies can be directly translated into probabilities, as in this example. In other instances, the data must be transformed first, such as when analysts must adjust 5 year survival data to fit an analysis concerned with only the first 3 years. Analysts should report which data were used and how the data were transformed.
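One common transformation of this kind can be sketched in a few lines of Python. The function below rescales a survival probability from one time horizon to another under the simplifying assumption of a constant annual hazard; the function name and the 70% five-year figure are ours, chosen purely for illustration.

```python
import math

def survival_over(years, s_ref, ref_years):
    """Rescale a survival probability to a different time horizon,
    assuming a constant annual hazard (a common simplification)."""
    hazard = -math.log(s_ref) / ref_years   # annual hazard implied by s_ref
    return math.exp(-hazard * years)

# e.g. a reported 70% five-year survival, rescaled to a 3-year analysis
s3 = survival_over(3, s_ref=0.70, ref_years=5)
print(round(s3, 3))  # 0.807
```

Whatever transformation is used, the point of the guide stands: the analysts should state which data were used and exactly how they were rescaled, so the reader can check the assumption.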

In the anticoagulation example, the authors describe vigorous efforts to obtain the correct values for probabilities from the published literature and from experts, although they don't provide the search terms they used. The authors do highlight the limited available data and their methodological limits. Also, they tabulate the evidence they use and mention the transformations needed for the model.

3. Were the utilities obtained in an explicit and sensible way from credible sources?

Utilities represent quantitative measurements of the value to the decision maker of the various outcomes of the decision. Several methods are available to measure these values directly [5] [7] [24] [25], and which method is best remains controversial. Different methods use different scales; a commonly used utility scale ranges from 0 (worst outcome, usually death) to 1.0 (excellent health). Whatever the measurement method used, the authors should report the source of the ratings. In a decision analysis built for an individual patient, the most (and probably only) credible ratings are those measured directly from that patient. For analyses built to inform clinical policy, credible ratings could come from three sources: a. direct measurements from a large group of patients with the disorder in question and to whom the results of the decision analysis could be applied; b. from published studies of quality of life ratings by such patients, as was done in a recent analysis of strategies for chronic atrial fibrillation [26]; or, c. from an equally large group of people representing the general public. Whoever provides the rating must understand the outcomes they are asked to rate; the more the raters know about the condition, the more credible are their utility ratings.

The authors of the anticoagulation example obtained values from several internists familiar with the clinical disorder and with the treatments. While physician raters were undoubtedly familiar with the outcomes of systemic emboli and major hemorrhage, only a small number of physicians made ratings, and their values may not represent those of either patients or the general public.

4. Was the potential impact of any uncertainty in the evidence determined?

Much of the uncertainty in clinical decision making arises from the lack of valid evidence in the literature. This lack of data hampers both clinical decision making and formal decision analysis. Even when it is present, published evidence is often imprecise, with wide confidence intervals around estimates for important variables. For instance, in a decision analysis concerning the management of polymyalgia rheumatica, the analysts searched the literature for the test sensitivity of temporal artery biopsy for giant cell arteritis [27]. The reported test sensitivity ranged from about 60% to 100%. In the decision analysis, these analysts set the baseline value equal to 83%, but repeated the analysis for values between 60 and 100%.

Decision analysts use this systematic exploration of the uncertainty in the data, known as sensitivity analysis, to see what effect varying estimates for risks, benefits and values have on the expected clinical outcomes, and therefore on the choice of clinical strategies. Sensitivity analysis asks the question: is the conclusion generated by the decision analysis affected by the uncertainties in our estimates of the likelihood or value of the outcomes? Estimates can be varied one at a time, termed one-way sensitivity analyses, or two or three at a time, known as multi-way sensitivity analyses. You should look for a table listing which variables were included in the sensitivity analyses, what range of values were used for each variable and which variables, if any, altered the choice of strategies. Satisfy yourself that all of the clinically important variables were examined.

Generally, all of the probability estimates should be tested using sensitivity analyses. The range over which they should be tested will depend on the source of the data. If the estimates come from large, high quality randomized trials with narrow confidence limits, the range of estimates tested can be narrow. The less valid the methods, or the less precise the estimates, the wider the range that must be included in the sensitivity analyses.

Utility values should also be tested with sensitivity analyses, with the range of values again determined by the source of the data. If large numbers of patients or knowledgeable and representative members of the general public gave very similar ratings to the outcome states, a narrow range of utility values can be used in the sensitivity analyses. If the ratings came from a small group of raters, or if individuals varied widely in their values, then investigators should use a wider range of utility values in the sensitivity analyses.
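A one-way sensitivity analysis of the kind described above can be sketched as a simple sweep. All of the probabilities and utilities below are toy numbers invented for illustration; they are not taken from the published anticoagulation analysis.

```python
# One-way sensitivity analysis on a hypothetical two-strategy model.
# Every number here is illustrative only, NOT from the published analysis.

U_WELL, U_EMBOLISM, U_BLEED = 1.0, 0.6, 0.7
P_EMBOLISM_NO_RX = 0.05   # assumed annual embolism risk without warfarin
P_EMBOLISM_RX = 0.02      # assumed annual embolism risk on warfarin

def eu_no_warfarin():
    p = P_EMBOLISM_NO_RX
    return p * U_EMBOLISM + (1 - p) * U_WELL

def eu_warfarin(p_bleed):
    # treat embolism and major bleeding as the competing chance events
    p_emb = P_EMBOLISM_RX
    return p_emb * U_EMBOLISM + p_bleed * U_BLEED + (1 - p_emb - p_bleed) * U_WELL

# Sweep the bleeding risk and report the preferred strategy at each value;
# with these toy numbers the preference flips at a 4% annual bleed risk.
for pct in range(0, 16):
    p_bleed = pct / 100
    better = "warfarin" if eu_warfarin(p_bleed) > eu_no_warfarin() else "no warfarin"
    print(f"bleed risk {pct:2d}%: prefer {better}")
```

The value at which the preferred strategy changes is exactly the threshold discussed later under guide II.3.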

In the anticoagulation example, the authors responded to the poor quality of their evidence by varying all of the important variables over wide ranges. They report the results from several, although not all, of these sensitivity analyses, including the effect of higher bleeding risk while on warfarin.

You recall that your patient is a middle-aged man with heart failure from an idiopathic dilated cardiomyopathy. You are trying to decide whether to recommend anticoagulation with warfarin to prevent systemic or pulmonary thromboembolism. Your literature search showed that no randomized clinical trials of warfarin for this use have been published. The search did discover a clinical decision analysis [4], and in the first article, we showed you how to evaluate its validity. In this article, we will show you how to interpret the results and generalizability of a clinical decision analysis (See Table 1).

As shown in Figure 1, decision trees are displayed graphically, oriented from left to right, with the decision to be analyzed on the left, the compared strategies in the center and the clinical outcomes on the right. The square box, termed a decision node, represents the decision to be made, and the lines emanating from this decision node represent the clinical strategies being compared. Circles, or chance nodes, represent chance events and outcome states are shown as triangles on the far right. Numbers beside the strategies (if they were present) would be probabilities, the likelihood of events, while the numbers by the outcome states would be utilities, or the value of these events [13] [14].


II. What are the results?

1. In the baseline analysis, does one strategy result in a clinically important gain for patients? If not, is the result a toss-up?

For a clinical decision analysis that compares two clinical strategies, there are three possible results: the first strategy is better than the second, the second strategy is better than the first, or both strategies are equally good (or equally bad), a result known as a toss-up or a close call [28]. For instance, in an analysis of the management of solitary pulmonary nodules, the analysts found the choice of strategies to be a close call in terms of expected gains in life expectancy [29]. The larger the number of strategies compared in an analysis, the larger the number of possible results, but always with the same idea: any one strategy can win or two or more strategies could tie. The terms baseline or base case refer to the set of probability estimates that the analyst believes are closest to the actual state of affairs.

One chooses between strategies in a decision tree by comparing the overall benefits expected from pursuing each strategy, termed its expected utility, and then selecting the strategy with the highest value of expected utility. Some controversy remains as to when exceptions to this rule are legitimate or desirable [30]. To calculate expected utility, one starts at the right-most branches of the tree, multiplies the probability for each by its utility, and sums these products for each chance node. One repeats this calculation moving leftward, a process known as folding back, until one has calculated the expected utility value for each strategy.

For example, consider the topmost chance node in Figure 1, with its two branches. Imagine that the no embolism and embolism branches have probabilities of .95 and .05 and utilities of 1.0 and .9, respectively. The expected utility for this chance node would be the sum of the product of each of the probabilities times the utilities, in this case (.95 x 1.0) + (.05 x .9), which equals .995.
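The folding-back procedure is mechanical enough to sketch in code. The no-warfarin chance node below uses the probabilities and utilities quoted in the text; the warfarin branch numbers are invented for illustration and are not the published model's values.

```python
# Fold back a simplified decision tree. Nodes are tuples:
#   ("outcome", utility)
#   ("chance", [(probability, subtree), ...])

def expected_utility(node):
    kind = node[0]
    if kind == "outcome":
        return node[1]
    branches = node[1]                                    # chance node
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9  # probabilities sum to 1
    return sum(p * expected_utility(sub) for p, sub in branches)

def best_strategy(options):
    """Choose the strategy with the highest expected utility."""
    return max(options, key=lambda name: expected_utility(options[name]))

no_warfarin = ("chance", [
    (0.95, ("outcome", 1.0)),   # no embolism (numbers from the text)
    (0.05, ("outcome", 0.9)),   # embolism
])
warfarin = ("chance", [         # hypothetical numbers, for illustration only
    (0.01, ("outcome", 0.9)),   # embolism despite prophylaxis
    (0.02, ("outcome", 0.85)),  # major bleed
    (0.97, ("outcome", 1.0)),   # no event
])

print(round(expected_utility(no_warfarin), 3))  # 0.995, as in the text
print(best_strategy({"no warfarin": no_warfarin, "warfarin": warfarin}))
```

Note how close the two expected utilities are with these toy numbers; that is the shape of the "toss-up" result discussed above.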

The decision analyst chooses the scale on which these expected utilities are measured to fit the clinical problem. For instance, in an analysis of strategies that could reduce death, the analyst might choose to measure utility as the number of lives saved or the average gain in remaining life expectancy, both measures of the quantity of life. Other utility scales can be used to report on the quality of life. Both quantity and quality can be combined into a single measure, such as quality adjusted life years [31] or healthy-years equivalence [32]. For instance, suppose one strategy in a decision analysis yielded an average remaining life expectancy of 5 years, but that all five years were lived in a state of health rated by patients to have a utility value of 0.8. The quality adjusted life expectancy would be 5 x 0.8 or 4 years.
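The quality adjustment in that worked example is a single multiplication:

```python
# Quality-adjusted life expectancy: remaining life expectancy weighted
# by the utility of the health state in which it is lived
# (the worked example from the text).
life_expectancy_years = 5.0   # average remaining life expectancy
utility = 0.8                 # patient-rated utility of that health state
qale_years = life_expectancy_years * utility
print(qale_years)  # 4.0
```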

Now that you understand where the results of the decision analysis come from, you must decide if any difference between strategies is clinically important. In making this judgment, consider that the differences presented will be average differences rather than differences that you can expect for every patient. Some patients will gain considerably more, while others will gain considerably less. This is no different from interpreting average differences between groups in randomized trials. You may not, however, be familiar with differences in life expectancy, the output of many decision analyses. Keep in mind that a gain in life expectancy does not occur just at the end of a person's life; it may occur at the beginning or be spread over the course of time. [33]

How large must a gain in remaining life expectancy be to be important? Probably smaller than you might think, although the answer to this question depends on judgments about several variables, and this controversial area has not yet been fully addressed by empirical research. In some recent studies, decision analysts have translated the results of clinical trials into life expectancy gains, for various widely accepted clinical interventions. [33] [34] These studies suggest that a gain in life expectancy or quality-adjusted life expectancy of two or more months ought to be considered an important gain, while a gain of a few days would represent a toss-up.

In the anticoagulation for dilated cardiomyopathy example, the decision analysis finds warfarin to be the preferred strategy for all patients 35 to 75 years of age. The average gain in quality-adjusted life expectancy for 55-year-olds (similar to your 51-year-old patient) is 115 days, or almost 3 months. From the above, you can see that this gain in life expectancy is probably important. Since the analysts explicitly considered both the reduction of emboli and the risk of bleeding, this 3 month gain in life expectancy represents the net clinical benefit you could expect from recommending anticoagulation to your patient.

2. How strong is the evidence used in the analysis?

The probabilities used in clinical decision analyses are estimates, taken mostly from the published literature, and while they may represent the best available evidence, they are nonetheless subject to potential error. The best defense against such error is for the analysts to base probability estimates on studies of high methodological quality, after a thorough and unbiased search for all relevant studies. The analysts should explain how they judged the quality of these primary studies. One way to do this would be to judge study quality by applying criteria akin to those in the other articles in this series, whether for primary studies of therapy [18] [19], diagnosis [20] [21], harm [22], prognosis [23], or for integrative studies, such as overviews [17].

As with other integrative studies, the overall strength of the result of a clinical decision analysis depends on the strength of inference possible from the primary studies. Ideally, every probability estimate at every node in the tree is supported by precise estimates from primary and integrative studies of high methodological quality, but such idealized analyses are rare. Good decision analyses can still be performed with some imprecise or ambiguous data, as long as most of the data are of good quality and the analysts explain any limitations and plan their sensitivity analyses accordingly. The fewer the probabilities that can be precisely estimated from high quality primary studies, i.e. the weaker the evidence used in the analysis, the weaker the overall inference one can make from the results.

In the anticoagulation example, the authors describe vigorous efforts to obtain the correct values for probabilities from the published literature and from experts. They highlight the limited methodological quality of the primary literature and acknowledge the weakened inference. In particular, there are no randomized trials to tell you whether patients with cardiomyopathy will live longer, or have fewer morbid events, if given anticoagulants.

3. Could the uncertainty in the evidence change the result?

For any clinical variable such as the probability of bleeding, or the value that patients place on avoiding a stroke, the decision analyst can calculate the value, or threshold, above which the results favour one strategy, and below which the results favour another strategy. For multi-way sensitivity analyses the analyst can show two-dimensional graphs of the variables, with the thresholds displayed as a line (two-way analyses) or a series of lines (three-way analyses) separating zones of strategy preference. While perhaps at first daunting, these tables and graphs provide the most clinically useful information from a decision analysis.

If the result of the analysis (one strategy is preferred or a toss-up is found) would change by choosing different values for one of the variables, the result is said to be sensitive to that variable. On the other hand, if changing the variable throughout its plausible range of values doesn't change the result, the analysis result is said to be robust to the sensitivity analysis. As you might guess, the more robust the result is, the more confident you can be that the recommended strategy should indeed be preferred. If the result was a toss-up and that indifference proves robust to sensitivity analyses, you can be confident that the strategies are equivalent.

The analysts of the anticoagulation example found the preference for the warfarin strategy to be robust to the sensitivity analyses they completed, with two exceptions (we will return to one of these, the bleeding risk on warfarin).

For the other exception, the analysts assumed in the base case that patients' quality of life was not impaired by the inconvenience and anxiety associated with taking warfarin (i.e., a utility value of 1.0 on a 0 to 1.0 scale). When testing this assumption, by adjusting downward the utility rating for quality of life while on warfarin, the analysts discovered that the choice of strategies would change substantially. For 55-year-olds, the threshold utility value was 0.92. In other words, if patients rated their quality of remaining life on warfarin as 0.93 or greater, then anticoagulation would be preferred. For a utility rating of exactly 0.92, the two strategies would be equally preferred, while for utility ratings below 0.92 no anticoagulation would be preferred.

To put this result in perspective, recall that utility represents the value to the patient of remaining expected life, and that a rating of 0.92 is 8% less than normal. In other words, a utility threshold of 0.92 means that your patient feels he would be willing to sacrifice 8% of his remaining life to avoid taking warfarin. On a time scale, this means that a year on warfarin would have to be worth only approximately eleven months of life off warfarin, in order for him to choose not to take it.
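The time-scale translation is again simple arithmetic: a year on warfarin valued at the threshold utility of 0.92 is equivalent to about eleven months of full-quality life.

```python
# Translate the 0.92 threshold utility onto a time scale.
threshold_utility = 0.92
months_equivalent = 12 * threshold_utility   # one year on warfarin, quality-weighted
print(round(months_equivalent, 1))  # 11.0
```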


III. Will the results help me in caring for my patients?

1. Do the probability estimates fit my patients' clinical features?

This first issue of applicability concerns whether the clinical characteristics of patients for whom the analysis was intended are similar to your patients. For a decision analysis built for an individual patient, look for the description of that patient's condition; if the patient is well described, you should be readily able to judge how closely your patient resembles her or him. An article reporting a decision analysis built for a group of patients should have an analogous portion of the text, detailing the clinical characteristics of patients to whom the results of the analysis are to be applied. You should satisfy yourself that your patient would be included in this group.

You could be confident that the probabilities fit your patients if the estimates were taken from one or more rigorous clinical studies whose patient samples included patients similar to yours. If the authors don't describe the samples, you could track down the references and review the inclusion and exclusion criteria to see whether your patient would fit.

If the analysis was intended for patients different from yours, review the results of the sensitivity analyses. The clinical variables used for these analyses should be detailed enough for you to locate where your patient would fit, and thus what net benefit your patient might expect from the clinical strategies. If you still can't tell, ask yourself whether the clinical characteristics of the intended patients are so different from yours that you should discard the results. If not, you can proceed, with some caution, to use them.

In the anticoagulation example, most of the probabilities fit your dilated cardiomyopathy patient, including the rates of systemic and pulmonary emboli and the estimated mortality. The baseline average annual risk of major hemorrhage on warfarin was estimated to be 4.5%. If you worried that your patient's risk of bleeding on warfarin could be higher than average, you should examine the sensitivity analyses for this variable. These sensitivity analyses show that anticoagulation with warfarin remains the preferred strategy until the annual bleeding risk reaches 15%, more than triple the baseline estimate. Above this value, no anticoagulation becomes the preferred strategy.

When a clinical decision analysis shows that the preferred strategy is sensitive to a given variable, you will need to gauge where your patient fits on the scale of that variable. Thus, when deciding how to use the results of the anticoagulation decision analysis for your particular patient, you will need to estimate his annual risk of bleeding on warfarin therapy. While a full discussion of estimating the bleeding risk is beyond the scope of this article, we offer a few suggestions.

First, look in the text for the authors' description of their systematic review of the literature. Ideally, they will have found one or more original articles or systematic reviews of high methodological quality from which they obtained their baseline estimate, and from which you could obtain an individualized estimate for your patient. Alternatively, you could do your own search for this information, using the tactics introduced in the first article in this series [35].

If you did so, you would find a systematic review of this topic [36], wherein the authors cite the average annual frequencies of fatal and major hemorrhage on warfarin as 0.6% and 3.0%, respectively. You might also find a study of warfarin use in atrial fibrillation [37], wherein the incidence of major or fatal bleeding was 2.5%. If these numbers are close to the truth, then by using somewhat higher figures in the anticoagulation decision analysis, the analysts would have overestimated the risk of harm and might have obscured a net benefit. Despite this, the warfarin strategy still resulted in a clinically important expected gain in life expectancy, suggesting that the true net benefit might be somewhat larger than reported. Note also that these published estimates are substantially lower than the 15% threshold value for annual bleeding risk, above which the no warfarin strategy would be preferred.
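These published estimates can be lined up against the figures in the decision analysis. In the sketch below, treating fatal and major hemorrhage as additive categories is an assumption made purely for illustration:

```python
# Published annual bleeding risks on warfarin (assumption: fatal and major
# hemorrhage from the systematic review [36] are additive categories)
published = {
    "systematic review [36] (fatal + major)": 0.006 + 0.030,
    "atrial fibrillation trial [37] (major or fatal)": 0.025,
}
baseline = 0.045   # baseline annual estimate used in the decision analysis
threshold = 0.15   # sensitivity-analysis threshold above which warfarin loses

for source, risk in published.items():
    # Both published figures sit below the baseline, and far below the threshold
    print(f"{source}: {risk:.1%}/year (baseline {baseline:.1%}, threshold {threshold:.0%})")
```

Both published figures fall below the 4.5% baseline and well below the 15% threshold, which is the basis for the claim that the analysts, if anything, overestimated the harm of warfarin.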

Your search would also turn up a retrospective analysis of thromboembolism rates in two randomized trials of other (non-anticoagulant) treatments for heart failure [38]. During the average follow-up of approximately two and a half years, thromboembolic events occurred in 4.7% and 5.2% of patients in the two trials. After transformation to comparable annual event rates, these results may be a little over half of the values used in the anticoagulation decision analysis. By using somewhat higher estimates, the analysts could have overestimated the benefit of warfarin.
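The "transformation to comparable event rates" mentioned above can be sketched by annualizing the cumulative trial risks. The constant-rate (exponential) model used here is an assumption for illustration and not necessarily the method the analysts used:

```python
def annualize(cumulative_risk: float, years: float) -> float:
    """Convert a cumulative event risk observed over `years` of follow-up
    into an annual risk, assuming a constant event rate (exponential model)."""
    return 1 - (1 - cumulative_risk) ** (1 / years)

# Cumulative thromboembolism risks from the two heart-failure trials [38],
# observed over roughly 2.5 years of average follow-up
for cumulative in (0.047, 0.052):
    print(f"{cumulative:.1%} over 2.5 years ~ {annualize(cumulative, 2.5):.1%} per year")
```

Under this assumption the annual rates come out near 2% per year, consistent with the statement that they are a little over half of the values used in the decision analysis.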

2. Do the utilities reflect how my patients would value the outcomes of the decision?

Since the utility ratings for the value of outcomes have a strong influence on the choice of strategies, you must consider whether your patients' values are similar to those used in the decision analysis. In a decision analysis built for an individual patient, the utilities are usually measured directly from that patient, so while those values should be quite believable for that patient, they may not necessarily fit your patient. Alternatively, utilities measured from a large group of patients or members of the general public would probably include a set of values similar to those of your patient, but the range of values might be so broad that you are left uncertain as to which values to use. If you encounter such difficulties, you should examine the one-way and multi-way sensitivity analyses that use a wide range of utility estimates, to see how your patients' values would affect the final decision.

If you were to ask your patient to rate the outcome states using the rating instrument in the article, you would know exactly what utility values to use. However, most clinicians won't have the time or inclination to do this. Fortunately, you can still make some judgment about this question, by asking your patient about values in non-quantitative terms. For instance, one patient may be extremely averse to regular monitoring, while another may not mind. Disabling stroke might devastate one patient, whereas another might be more resilient.

As mentioned above, in the anticoagulation example the utility rating for life while taking warfarin had a substantial influence on the preference of strategies. The authors highlight the importance of this variable and urge that investigators examine patients' reactions to taking warfarin and undergoing monitoring, so that subsequent recommendations about anticoagulation can be better informed.


Resolution of the Scenario

Without a randomized trial of anticoagulation in patients with dilated cardiomyopathy in sinus rhythm, your overall confidence in a decision to anticoagulate your patient will be limited. In the absence of trial data, experts have recommended that the decision to use warfarin in this setting be made on an individual basis [2] [3] [39]. How are you to individualize the treatment decision for your middle-aged man with dilated cardiomyopathy? The anticoagulation decision analysis suggests that if he has a low or moderate bleeding risk, and a ready acceptance of anticoagulation monitoring, he is likely to be better off taking warfarin. Thus, the decision analysis identifies the few clinical variables on which the decision depends, and estimates the size and likelihood of net clinical benefit you could expect from the alternative courses of action. While the better therapy may still be unproved, you should now be much more informed about the choice and better prepared to decide with the patient what is to be done.



1. Falk RH. A plea for a clinical trial of anticoagulation in dilated cardiomyopathy. Am J Cardiol. 1990;65:914-5.

2. Dec GW, Fuster V. Idiopathic dilated cardiomyopathy. N Engl J Med. 1994;331:1564-75.

3. Baker DW, Wright RF. Management of heart failure: IV. Anticoagulation for patients with heart failure due to left ventricular systolic dysfunction. JAMA. 1994;272:1614-8.

4. Tsevat J, Eckman MH, McNutt RA, Pauker SG. Warfarin for dilated cardiomyopathy: a bloody tough pill to swallow? Med Decis Making. 1989;9:162-9.

5. Keeney RL. Decision analysis: an overview. Operations Research. 1982;30:803-38.

6. Eckman MH, Levine HJ, Pauker SG. Decision analytic and cost-effectiveness issues concerning anticoagulant prophylaxis in heart disease. Chest. 1992;102:538S-49S.

7. Kassirer JP, Moskowitz AJ, Lau J, Pauker SG. Decision analysis: a progress report. Ann Intern Med. 1987;106:275-91.

8. Eddy DM. Clinical decision making: from theory to practice. Designing a practice policy. Standards, guidelines and options. JAMA. 1990;263:3077,3081,3084.

9. Wong JB, Salem DN, Pauker SG. You're never too old. N Engl J Med. 1993;328:971-5.

10. Krahn MD, Mahoney JE, Eckman MH, Trachtenberg J, Pauker SG, Detsky AS. Screening for prostate cancer: a decision analytic view. JAMA. 1994;272:773-80.

11. Krahn M, Naylor CD, Basinski AS, et al. Comparison of an aggressive (U.S.) and a less aggressive (Canadian) policy for cholesterol screening and treatment. Ann Intern Med. 1991;115:248-55.

12. Goel V. Decision analysis: applications and limitations. The Health Services Research Group. Can Med Assoc J. 1992;147:413-7.

13. Weinstein MC, Fineberg HV, Elstein AS, Frazier HS, Neuhauser D, Neutra RR, McNeil BJ. Clinical Decision Analysis. Philadelphia, PA: W.B. Saunders Company. 1980.

14. Sox HC, Blatt MA, Higgins MC, Marton KI. Medical Decision Making. Boston, MA: Butterworths. 1988.

15. Barza M, Pauker SG. The decision to biopsy, treat, or wait in suspected herpes encephalitis. Ann Intern Med. 1980;92:641-9.

16. Katz DA, Littenberg B, Cronenwett JL. Management of small abdominal aortic aneurysms: early surgery vs. watchful waiting. JAMA. 1992;268:2678-86.

17. Oxman AD, Cook DJ, Guyatt GH. Users' guides to the medical literature: VI. How to use an overview. Evidence-Based Medicine Working Group. JAMA. 1994;272:1367-71.

18. Guyatt GH, Sackett DL, Cook DJ. Users' guides to the medical literature: II. How to use an article about therapy or prevention: A. Are the results of the study valid? Evidence-Based Medicine Working Group. JAMA. 1993;270:2598-601.

19. Guyatt GH, Sackett DL, Cook DJ. Users' guides to the medical literature: II. How to use an article about therapy or prevention: B. What are the results and will they help me in caring for my patients? Evidence-Based Medicine Working Group. JAMA. 1994;271:59-63.

20. Jaeschke R, Guyatt G, Sackett DL. Users' guides to the medical literature. III. How to use an article about a diagnostic test. A. Are the results of the study valid? Evidence-Based Medicine Working Group. JAMA. 1994;271:389-91.

21. Jaeschke R, Guyatt GH, Sackett DL. Users' guides to the medical literature. III. How to use an article about a diagnostic test. B. What are the results and will they help me in caring for my patients? Evidence-Based Medicine Working Group. JAMA. 1994;271:703-7.

22. Levine M, Walter S, Lee H, Haines T, Holbrook A, Moyer V. Users' guides to the medical literature: IV. How to use an article about harm. Evidence-Based Medicine Working Group. JAMA. 1994;271:1615-9.

23. Laupacis A, Wells G, Richardson WS, Tugwell P. Users' guides to the medical literature: V. How to use an article about prognosis. Evidence-Based Medicine Working Group. JAMA. 1994;272:234-7.

24. Llewellyn-Thomas H, Sutherland HJ, Tibshirani R, Ciampi A, Till JE, Boyd NF. The measurement of patients' values in medicine. Med Decis Making. 1982;2:449-62.

25. Dolan JG, Isselhardt BJ, Jr., Cappuccio JD. The analytic hierarchy process in medical decision making: a tutorial. Med Decis Making. 1989;9:40-50.

26. Disch DL, Greenberg ML, Holzberger PT, et al. Managing chronic atrial fibrillation: a Markov decision analysis comparing warfarin, quinidine, and low-dose amiodarone. Ann Intern Med. 1994;120:449-57.

27. Buchbinder R, Detsky AS. Management of suspected giant cell arteritis: a decision analysis. J Rheumatol. 1992;19:1220-8.

28. Kassirer JP, Pauker SG. The toss-up. N Engl J Med. 1981;305:1467-9.

29. Cummings SR, Lillington GA, Richard RJ. Managing solitary pulmonary nodules: the choice of strategy is a 'close call'. Am Rev Respir Dis. 1986;134:453-60.

30. Deber RB, Goel V. Using explicit decision rules to manage issues of justice, risk, and ethics in decision analysis: when is it not rational to maximize expected utility? Med Decis Making. 1990;10:181-94.

31. Torrance GW, Feeny D. Utilities and quality-adjusted life years. Internat J Technol Assess Health Care. 1989;5:559-75.

32. Mehrez A, Gafni A. Quality-adjusted life years, utility theory and healthy-years equivalence. Med Decis Making 1989;9:142-9.

33. Naimark D, Naglie G, Detsky AS. The meaning of life expectancy: what is a clinically significant gain? J Gen Intern Med. 1994;9:702-7.

34. Tsevat J, Weinstein MC, Williams LW, Tosteson AN, Goldman L. Expected gains in life expectancy for various coronary heart disease risk factor modifications. Circulation. 1991;83:1194-201.

35. Oxman AD, Sackett DL, Guyatt GH. Users' guides to the medical literature. I. How to get started. The Evidence-Based Medicine Working Group. JAMA. 1993;270:2093-5.

36. Landefeld CS, Beyth RJ. Anticoagulant-related bleeding: clinical epidemiology, prediction and prevention. Am J Med. 1993;95:315-28.

37. Connolly SJ, Laupacis A, Gent M, Roberts RS, Cairns JA, Joyner C, CAFA Study Coinvestigators. Canadian atrial fibrillation anticoagulation (CAFA) study. J Am Coll Cardiol. 1991;18:349-55.

38. Dunkman WB, Johnson GR, Carson PE, Bhat G, Farrell L, Cohn JN, et al. Incidence of thromboembolic events in congestive heart failure. The V-HeFT VA Cooperative Studies Group. Circulation. 1993;87:VI94-101.

39. Kubo SH, Cohn JN. Approach to treatment of the patient with heart failure in 1994. Adv Intern Med. 1994;39:485-515.


© 2001 Evidence-Based Medicine Informatics Project

© 2004 Centre for Health Evidence.