Module 6. Practicing Evidence-Based Medicine

The following evidence-based medicine topics are discussed within this module:

  • Clinical trial design
  • Literature evaluation
  • Clinical practice guidelines
  • Evidence-based clinical decision making

Introduction

The most commonly cited definition of evidence-based medicine (EBM) was published in 1996.1 According to this definition, EBM is “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.” According to a slightly more recent definition, EBM is “a systematic approach to clinical problem-solving which allows the integration of the best available research evidence with clinical expertise and patient values.”2 Therefore, the practice of EBM involves finding and considering the highest quality evidence available when making patient-care decisions. Critics of the EBM approach believe that EBM reduces medical decision making to “cookbook”-like adherence to standard treatment protocols, which may not be appropriate for all patients and may decrease the freedom of individual practitioners. In reality, EBM relies on the ability of each provider to obtain, analyze, and integrate clinical evidence with clinical expertise and judgment. Without clinical expertise and consideration of the patient’s specific situation, it is unlikely that evidence will be appropriately applied to individual patients.

In addition to the widespread acceptance of EBM strategies among the medical community, there are specific economic, regulatory, and legal implications of evidence-based pharmacy and nursing practice.3,4 Payer groups, such as the Centers for Medicare and Medicaid Services (CMS), are increasingly adopting pay-for-performance strategies in which reimbursement for care depends on the provision of quality evidence-based care.3 Accrediting bodies, including The Joint Commission, emphasize evidence-based performance standards. For example, the 2019 National Patient Safety Goals for ambulatory health care accreditation require evidence-based practices in several therapeutic areas.4 Lack of adherence to these standards can compromise the ability of health care organizations to maintain accreditation, which also affects reimbursement potential and financial viability. Providers may also take on increased liability when deviating from widely accepted standards of care unless their alternative treatment strategies are also evidence-based.

Clinical Trial Design

The practice of EBM involves the identification and analysis of pertinent, high-quality clinical evidence. For pharmacists and APRNs, this typically requires a strong understanding of all types of primary literature, including clinical trials of drug efficacy and safety.

There are many potential approaches to the design of medical studies. For example, studies vary in their perspective (prospective vs retrospective), time frame (longitudinal vs a single point in time), and whether an intervention was made (experimental vs observational). Observational studies, in which patient outcomes are described without the provision of a specific intervention, generally fall into 3 categories: cohort, case-control, and cross-sectional studies.5 Case series are also usually considered observational reports. Cohort (prospective or historical) and case-control (retrospective) studies seek to establish relationships between risk factors and outcomes (eg, occurrence of disease) by looking forward and backward in time, respectively. Historical cohort studies examine data that existed prior to initiation of the study and look forward in time within the existing data. In contrast, cross-sectional studies evaluate a sample of patients at a single point in time. All of these observational study designs can be used to report data related to a study exposure, including drug exposure (eg, prevalence of use, doses used, adverse effects reported), but observational studies cannot lead to definitive conclusions regarding causality of any drug-related effects.

Experimental studies, often called clinical trials, can be either controlled or uncontrolled.5 Inclusion of a control group strengthens the study design because the presence of the control group allows the investigator to determine with greater certainty whether an observed difference is due to the study treatment or due to other factors. Studies that provide treatment and simultaneously collect data on both the intervention and control groups are called parallel; most clinical trials involving medications are conducted in a parallel manner. Other options for control groups include crossover studies, in which patients serve as their own controls by switching treatment groups during the study after an appropriate washout period, and use of historical or external controls (eg, findings from another trial). Factorial trials compare individual treatments to each other, placebo, and a combination of the treatments. Factorial trial designs allow for multiple treatment comparisons with a smaller number of patients than would be required for multiple individual trials, but they also have a greater potential for statistical error than trials with fewer comparisons.

Randomized controlled trials

Randomized controlled trials (RCTs) are widely accepted as the preferred study design for evaluating drug efficacy because this study design provides the strongest evidence that the study drug caused the observed effect.5 In fact, the strengths of RCTs have led to the US Food and Drug Administration (FDA) requirement that manufacturers conduct this type of study prior to submission of a New Drug Application.6 The FDA new drug approval process is summarized in Figure 1.

Figure 1. FDA New Drug Approval Process7

Randomized trials are strongly preferred for Phase II and Phase III drug efficacy trials because they reduce the potential for both types of study design error: bias and confounding.5 With randomization, patients are assigned to treatment groups in a random manner. Since each patient has an equal chance of being assigned to either the treatment or control groups, the potential for selection bias is minimized. Also, randomization ensures that any between-group differences in patient characteristics that could potentially affect the study results (confounders) are only present due to chance, thus minimizing the potential effect of differences between the groups.

In addition to their strengths, RCTs have some limitations.5 The rigorous nature of their conduct, including patient selection methods, emphasis on patient adherence, and prospective data collection for both efficacy and safety outcomes, may limit the applicability of RCT results to real-world practice where patient care is less systematic. The high cost and time required to conduct RCTs can prevent or delay results from being disseminated and implemented in practice. Another concern is that the length of follow-up and small sample sizes may not be sufficient for detection of long-term treatment effects, particularly those related to medication safety.

Randomization

Accurate interpretation of published RCTs requires a basic familiarity with the major elements of this type of trial design, including randomization methods, choice of control groups, and blinding techniques.8 Randomization occurs after patients meet the inclusion and exclusion criteria and have consented to participation in the study. Simple randomization uses basic probability to ensure that all patients have an equal chance of being assigned to any treatment group (eg, coin toss). One disadvantage of simple randomization is the potential for unequal patient numbers in each group. Block randomization overcomes this issue by randomizing an equal number of patients to each group within a specified block of patients. For example, within each set (block) of 20 consecutively randomized patients, 10 are allocated into each patient group using random methods. Prior to randomization, patients can also be stratified according to specific baseline characteristics (eg, age, race, disease severity, use of certain medications). Stratified randomization can provide further assurance that potentially confounding patient factors are equally distributed among groups. Regardless of the method used, randomization should occur in an unbiased manner. Examples of unbiased randomization techniques include having blinded, nonstudy personnel perform the randomization, use of computer-generated randomization schedules, or use of an online or telephone randomization service.
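
To make the distinction between simple and block randomization concrete, the brief Python sketch below (illustrative only, not drawn from the cited references) generates both types of allocation schedules; the block size of 20 and the group labels are hypothetical choices for this example.

```python
import random


def simple_randomization(n_patients, groups=("treatment", "control")):
    """Simple randomization: each patient has an equal, independent chance
    of each group (like a coin toss), so group sizes may end up unequal."""
    return [random.choice(groups) for _ in range(n_patients)]


def block_randomization(n_patients, block_size=20, groups=("treatment", "control")):
    """Block randomization: within each block of patients, an equal number
    is allocated to each group, keeping the groups balanced over time."""
    assert block_size % len(groups) == 0, "block size must divide evenly among groups"
    assignments = []
    while len(assignments) < n_patients:
        block = list(groups) * (block_size // len(groups))
        random.shuffle(block)  # random order within the block
        assignments.extend(block)
    return assignments[:n_patients]


if __name__ == "__main__":
    random.seed(1)  # fixed seed so the example is reproducible
    simple = simple_randomization(100)
    blocked = block_randomization(100, block_size=20)
    print("Simple :", simple.count("treatment"), "vs", simple.count("control"))
    print("Blocked:", blocked.count("treatment"), "vs", blocked.count("control"))
```

Stratified randomization would simply run a separate block schedule of this kind within each stratum (eg, within each disease-severity category).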

Controls

As previously discussed, use of a control group is a major contributor to strong clinical trial design; however, the control group must be carefully chosen.8 An inappropriate control group can bring into question the quality and applicability of the trial’s results. Options for control groups in interventional studies include placebo, another active treatment, or a historical control. Historical controls involve a comparison with existing data from similar patients, usually those who meet the same inclusion and exclusion criteria as the patients enrolled in the trial. Control patients may be matched to study patients according to certain baseline characteristics, which can increase similarity between the groups. Placebo controls are only used when it is ethically acceptable to provide no active treatment, which is often the case in settings with no currently available treatment or standard of care. Active controls are typically preferred for trials in disease states with an existing standard of care since it would not be ethical to deny treatment to any patients in this case. The most clinically useful trials include an active control that corresponds to the drug, dose, and frequency that is most commonly used in clinical practice; however, some investigators, particularly those conducting studies that are sponsored by pharmaceutical manufacturers, may be hesitant to use appropriate active controls out of a concern that the study drug may show only marginal (or no) benefit.

Blinding

The last major element of clinical trial design is blinding.8 Blinding is the standard technique used to minimize bias in clinical trials. In the context of clinical trial design, bias is a systematic error within a study that skews the results in a certain direction. Many types of bias exist in clinical trials, but blinding specifically reduces detection bias. When individuals involved in the study are unaware of treatment assignments (ie, blinded), knowledge of the assigned treatment is much less likely to influence how outcomes are detected or assessed, and all participants are more likely to be treated the same. Clinical trials can be single-blind (only one party, either the patient or the investigator, is unaware of treatment assignment), double-blind (both the patient and investigator are unaware of treatment assignment), or triple-blind (the patient, investigator, and study personnel who are involved in patient assessment are all unaware of treatment assignment). Results of unblinded trials, in which treatment allocation is fully known, are less robust because there is a greater potential for bias. Effectively blinded trials ensure that treatment assignment is not detected at the time of randomization, during study drug administration, or in the course of patient monitoring and assessment. Blinding during administration requires that all patients receive identical-appearing treatments at the same dosing frequency, which often requires use of matched placebos or techniques to make active treatments appear the same (such as encapsulating tablets in identical capsule shells). When two treatments cannot be made to look identical, each patient can instead receive one active treatment plus a placebo that matches the other treatment; this technique is called double-dummy blinding.

Statistical Trial Design

Prior to initiating a clinical trial, investigators must design the study’s statistical analysis plan. There are two main approaches to analyzing clinical trial results: descriptive statistics or inferential statistics.9 Descriptive statistics are limited to organizing and summarizing the data and do not involve any statistical calculations beyond percentages or other basic measures of frequency. Inferential statistics are used when the investigator desires to generalize the results from the study sample to the entire population of interest. Because inferential statistics go beyond simply describing the results, more complex mathematical calculations are needed to generate the statistical results.

Hypothesis testing in clinical trials can use 3 potential approaches: superiority, equivalence, and noninferiority.10 Superiority testing uses 2-sided statistical tests to identify whether one treatment is superior to the comparator group(s). Equivalence testing seeks to establish that two treatments do not differ by more than a prespecified margin in either direction. In contrast, noninferiority testing uses 1-sided statistical comparisons to demonstrate that one treatment is not unacceptably worse than another. Superiority testing is historically the most popular approach, but noninferiority comparisons have become more common in recent years. In general, noninferiority trials are less straightforward to apply in clinical practice, but they may be appropriate when differences in efficacy are expected to be minimal and the study treatment is more convenient, safer, or less costly than the comparator.11
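
A minimal sketch of how a noninferiority comparison might be evaluated is shown below (illustrative only; the response rates, the 10% margin, and the function name are hypothetical). The new treatment is declared noninferior when the lower confidence bound for the difference in response rates (new minus comparator) lies above the negative of the prespecified margin.

```python
from math import sqrt

from scipy.stats import norm


def noninferiority_check(x_new, n_new, x_std, n_std, margin, alpha=0.025):
    """One-sided noninferiority comparison of two response rates (higher = better).
    Returns the observed difference, its lower confidence bound, and whether
    the new treatment meets the noninferiority criterion."""
    p_new, p_std = x_new / n_new, x_std / n_std
    diff = p_new - p_std
    se = sqrt(p_new * (1 - p_new) / n_new + p_std * (1 - p_std) / n_std)
    lower_bound = diff - norm.ppf(1 - alpha) * se  # one-sided lower bound
    return diff, lower_bound, lower_bound > -margin


# Hypothetical example: 170/200 vs 174/200 responders, 10% noninferiority margin
diff, lb, noninferior = noninferiority_check(170, 200, 174, 200, margin=0.10)
print(f"difference = {diff:.3f}, lower bound = {lb:.3f}, noninferior = {noninferior}")
```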

Another key decision in statistical trial design is the population that will be analyzed. The most basic options are to analyze data from all study patients (intention-to-treat analysis) or only patients who successfully meet a predetermined standard for study completion (per protocol analysis).10 In general, an intention-to-treat analysis is preferred because it maintains the original randomization scheme and mimics the imperfect patient compliance and follow-up that occur in actual clinical practice. However, per protocol analysis might be preferred for studies in which a high degree of compliance is desirable or necessary (eg, treatment of infection).

Investigators must also choose appropriate limits for Type I and II error prior to study commencement. Type I error is a false positive, in which a difference between groups is found when there actually is no difference.12 The alpha value corresponds to the acceptable potential for a Type I error, which is generally 5% or alpha = 0.05. The alpha value is the threshold for interpreting P-values, which is why P < 0.05 is commonly considered a significant difference. Type II error is a false negative, in which no difference between groups is found when there actually is a difference. Because Type II errors are less likely to inappropriately affect clinical practice, a higher threshold is allowed (eg, beta value of 20%). Since Type II error is related to study power (ie, the ability to find a statistical difference if one exists) according to the equation Power = 1-beta, an acceptable degree of power is usually 80%.
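
Because alpha, beta, and the expected difference between groups together determine how many patients must be enrolled, sample size calculations are driven directly by these error thresholds. The sketch below is an illustration using the standard normal-approximation formula for comparing two proportions; the 5% vs 3% event rates are hypothetical.

```python
from math import ceil

from scipy.stats import norm


def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-group sample size to detect a difference between two
    proportions with a two-sided Type I error rate of alpha and the requested
    power (power = 1 - beta), using the normal-approximation formula."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # corresponds to 1 - beta
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n_per_group = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return ceil(n_per_group)


# Hypothetical example: detect a reduction in event rate from 5% to 3%
print(sample_size_two_proportions(0.05, 0.03))  # roughly 1500 patients per group
```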

Finally, investigators must choose the actual statistical tests that will be used to analyze the data.9 The test used for a given variable is determined by the level of data that will be used to express the results for that variable. A description of common levels of data is given in Table 1, and common statistical tests encountered in the clinical literature are described in Table 2.

Table 1. Levels of Data9
Level of Data | Description | Measure of Central Tendency | Measure of Variability
Continuous | Can take on any value within a defined range (eg, age) | Mean | Standard deviation
Nominal | Category with no implied rank or order (eg, gender) | Mode | None
Ordinal | Category with an implied rank or order (eg, pain scores) | Median | Range or interquartile range
Table 2. Common Statistical Tests Encountered in Clinical Trials9,13,14
Statistical Test | Comments
Parametric Tests (normally distributed continuous data)
Unpaired t test | Compares the means of 2 independent samples
t test for matched or paired data | Compares the means of 2 matched or paired samples
ANOVA | Compares the means of ≥ 3 independent samples
Repeated-measures ANOVA | Compares the means of ≥ 3 matched or paired samples
Nonparametric Tests (ordinal, nominal, or non-normally distributed data)
Mann-Whitney U test (also called Wilcoxon rank-sum test)a | Compares 2 independent samples
Wilcoxon signed rank testa | Similar to the paired t test above; ranks the differences between 2 matched or paired samples
Chi-square or Fisher’s exact testb | Compares 2 independent samples
McNemar’s testb | Compares 2 matched or paired samples
Mantel-Haenszel testb | Compares 2 independent samples that have been stratified prior to analysis
Kruskal-Wallis test | Compares ≥ 3 independent samples
Friedman test | Compares ≥ 3 matched or paired samples
Other Common Statistical Tests
Kaplan-Meier method | Survival analysis
Cox proportional hazards model | Survival analysis that considers differences in baseline or other characteristics between groups
Correlation analysis | Determines whether there is an association between 2 variables and the strength of the association
Regression analysis | Mathematically describes the association between 2 variables, allowing for predictions of the occurrence of one variable based on the presence of another variable
a Used for ordinal or non-normally distributed data only.
b Used for nominal data only.
Abbreviation: ANOVA, analysis of variance.
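
To illustrate how the level of data determines the statistical test, the short Python sketch below applies a few of the tests from Table 2 to made-up data using the SciPy library (the outcome values, sample sizes, and 2 × 2 table are hypothetical).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical continuous outcome (eg, change in blood pressure) in 2 groups
treatment = rng.normal(loc=-8.0, scale=5.0, size=50)
control = rng.normal(loc=-3.0, scale=5.0, size=50)

# Parametric: unpaired t test compares the means of 2 independent samples
t_stat, t_p = stats.ttest_ind(treatment, control)

# Nonparametric: Mann-Whitney U test compares 2 independent samples when the
# data are ordinal or not normally distributed
u_stat, u_p = stats.mannwhitneyu(treatment, control)

# Nominal data: chi-square test on a 2x2 table of responders vs nonresponders
table = np.array([[30, 20],   # treatment group: responders, nonresponders
                  [18, 32]])  # control group:   responders, nonresponders
chi2, chi_p, dof, expected = stats.chi2_contingency(table)

print(f"t test P = {t_p:.4f}; Mann-Whitney P = {u_p:.4f}; chi-square P = {chi_p:.4f}")
```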

Literature Evaluation

In clinical practice, pharmacists should be able to use and evaluate a variety of literature types including tertiary (drug information resources, textbooks), secondary (newsletters, MEDLINE database), and primary literature (published clinical studies). Skill in evaluating RCTs is important since RCTs are common within the primary literature. The overall quality of published RCTs has increased in recent years due to widespread adoption of clinical trial reporting standards such as the Consolidated Standards of Reporting Trials (CONSORT) statement and the International Committee of Medical Journal Editors (ICMJE) recommendations (previously known as the Uniform Requirements for Manuscripts Submitted to Biomedical Journals).15,16 Both of these reporting standards establish minimum elements that should be included in clinical trial manuscripts, such as author conflicts of interest; methods for patient selection, randomization, and blinding; study interventions; statistical methods used; patient disposition at the end of the study; all relevant efficacy and safety outcomes; and study limitations.

How to Evaluate a Randomized Controlled Trial

All elements of an RCT can be subject to reader evaluation.8 In addition to the information below, use of a checklist or standard set of assessment questions can help the reader perform an efficient, systematic, and complete review of the article.8,10,12,17,18

The abstract may be structured (with headings) or unstructured (without headings) depending on the journal’s requirements. A brief and accurate summary of the article’s main points should be provided, including the study purpose, design, population, interventions, major results, and conclusion. Other information that may be present at the beginning of the article includes author names and affiliations, funding source and study sponsor involvement, and author conflicts of interest. Each of these elements should be carefully considered by the reader to identify potential sources of bias.8,19 For example, a study that was funded by a pharmaceutical manufacturer but had no other evidence of sponsor involvement likely has a low potential for bias.

Clear statements regarding the purpose, objective, and rationale of the study should be provided in the introduction.8 These statements should include a description of prior studies/literature on the topic. The reader may benefit from looking at the results of prior studies and performing their own literature search to confirm whether additional pertinent articles exist. The introduction section usually ends with the study purpose or main objective, which also serves as a transition statement into the study design and methods.

Based on the study’s stated objective, the reader should determine whether the most appropriate study design was used.8 For example, a study comparing the efficacy of 2 medications that is not randomized, controlled, or blinded may be subject to both bias and confounding. This type of study design, in which bias and confounding are not minimized, has low internal validity. Conversely, studies that employ strong randomization and blinding techniques have higher internal validity. The reader can make an overall assessment of study validity by considering both study design and other specific elements of the study methods as described below.

The methods section should provide an explicit description of the exact procedures used in conducting the trial.8,10 Enough detail should be provided that another investigator could duplicate the study. Patient screening and recruitment procedures should be evaluated for potential sources of bias or unethical practices. The reader should evaluate inclusion and exclusion criteria so that the study’s external validity can be determined. External validity is an assessment of whether the study results can be generalized to the entire population. Stringent inclusion and exclusion criteria limit application of the study results outside of the exact population studied and likely decrease the external validity of the study. By considering the severity of disease and prior treatment regimens used by the study population, the reader can also assess whether the study drug and controls (including placebo) were ethically chosen and given at appropriate doses. For example, it is likely inappropriate to compare the efficacy of a new medication to placebo in a study of first-line therapy for a disease state with an established standard of care. The reader should also evaluate how potential confounders were addressed by the study procedures. Confounders may include concurrent disease states, medication changes during the study (eg, concurrent or rescue therapies), lifestyle factors, adherence, or lack of a washout phase.

Study outcomes should be carefully assessed for clinical relevance.8,10 Ideally, primary and secondary outcomes (also referred to as endpoints) should be useful and compelling (also known as “hard” outcomes). In contrast, “soft” or “surrogate” outcomes have less clinical meaning but may be preferred by investigators because they are easier and less costly to assess. For example, it is easier to find a significant difference in blood pressure (a surrogate outcome) than in the incidence of acute stroke (a hard outcome). Surrogate outcomes may be appropriate if prior studies have sufficiently correlated the outcome to a related hard outcome (eg, data linking changes in hemoglobin A1c with progression to end-stage renal disease among patients with diabetes). Multiple outcomes can also be combined into a predetermined composite outcome, which can demonstrate the overall effect of the intervention rather than focusing on a single measure of efficacy. In this case, results for each component of the composite outcome should also be provided. Study duration should be evaluated to ensure that patients were followed for a long enough time period to observe changes in the primary and secondary outcomes.8,10,19 Most RCTs are not long enough to assess long-term safety outcomes, but the methods section should contain explicit information about how adverse events and patient safety were measured. Use of data safety monitoring boards and/or interim safety analyses are effective ways to proactively monitor study results and protect patient safety.

The statistics portion of the methods section should be evaluated to determine whether appropriate or optimal statistical analysis methods were used.8 The reader should determine whether the study enrolled enough patients to detect a difference in the primary outcome if a difference truly existed (ie, whether the study had adequate power) by examining the authors’ description of their sample size calculation. This is especially important if no difference in the primary outcome was found since lack of an observed difference could be due to inadequate power (Type II error). The major approach to hypothesis testing (ie, superiority, noninferiority, equivalence) should be stated with the criteria for significance explicitly described (eg, alpha value, noninferiority criteria).8,10 The reader should determine whether the intention-to-treat or per protocol population (or both) was analyzed and evaluate which is the most appropriate population to analyze based on the study outcomes. Finally, the information in Tables 1 and 2 can be used to assess whether the statistical tests used match the type of data and number of comparison groups for each outcome of interest.

Per the CONSORT statement, the results section of clinical trial reports should describe the disposition of all patients at the end of the study, including those who completed, dropped out, were lost to follow-up, and died.15 Information about patient disposition should be carefully considered by the reader.8,10 A large number of dropouts could affect the efficacy results if only a per protocol analysis is performed or could falsely represent the safety results (especially if many patients dropped out due to adverse effects). Baseline demographics of the study population are also important to consider since this affects external validity. Differences between the groups at baseline can be noted but may not be a major concern if the groups are appropriately randomized, since any differences in this case are present due to random chance. Of greater concern is whether the study population matches the real-world population. Specific baseline characteristics that should be considered include age, race, gender, duration of disease, disease severity, concurrent disease states, prior medications used, concurrent medications, and lifestyle factors (eg, obesity, smoking).

Results for the primary, secondary, and safety outcomes should be completely and clearly provided, with supporting statistical results.8 P-values or confidence intervals, as appropriate, should be compared to the significance criteria provided in the methods section so that the reader can confirm whether the results are statistically significant. Confidence intervals describe the degree of certainty (confidence) in the accuracy of the result in addition to the result’s significance. Some results are expressed as measures of risk such as odds ratios, risk ratios, or hazard ratios.8,9,20 The reader should carefully evaluate whether risk differences are expressed as relative (ie, difference in rates between groups divided by the rate in the control group) or absolute risk reductions (ie, raw difference in rates between groups). The relative risk reduction (RRR) calculation typically yields larger numbers than absolute risk reduction (ARR), which can be misleading. For example, if the treatment group had a mortality rate of 3% and the control group had 5% mortality, the RRR would be (5% - 3%)/5% = 40% relative risk reduction. However, the ARR between treatments would be 5% - 3% = 2% absolute risk difference.

In addition to statistical significance, clinical significance of the results should be considered.8,20 In particular, very large studies may report statistically significant, but very small (and thus, clinically unimportant) results. While there is no standard method for determining clinical significance, one useful tool is the number needed to treat or harm. The number needed to treat (NNT) is calculated as 1/ARR, which corresponds to the number of patients who would need to receive treatment in order to observe or avoid one outcome of interest. The NNT can be compared to the number needed to harm (NNH; calculated the same way, using the absolute risk increase for an adverse event), which expresses the number of patients who would need to receive treatment before observing one adverse event of interest. Numerically large values for NNT or NNH suggest limited clinical impact of the treatment, while small values for NNT or NNH suggest a larger clinical impact.
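
The following short Python sketch simply applies the calculations described above to the hypothetical 3% vs 5% mortality example, confirming the 2% ARR, the 40% RRR, and the corresponding NNT of 50.

```python
def risk_measures(rate_treatment, rate_control):
    """Absolute risk reduction (ARR), relative risk reduction (RRR), and
    number needed to treat (NNT) from two event rates given as proportions."""
    arr = rate_control - rate_treatment  # absolute risk reduction
    rrr = arr / rate_control             # relative risk reduction
    nnt = 1 / arr                        # number needed to treat
    return arr, rrr, nnt


# Example from the text: 3% mortality with treatment vs 5% with control
arr, rrr, nnt = risk_measures(0.03, 0.05)
print(f"ARR = {arr:.0%}, RRR = {rrr:.0%}, NNT = {nnt:.0f}")  # ARR = 2%, RRR = 40%, NNT = 50
```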

Other results that should be evaluated include actual treatment doses received (if not fixed), adherence rates, and subgroup analyses.8 Ideally, subgroup analyses should be preplanned and fully described in the statistics portion of the methods section.8,19 Subgroup or post hoc analyses that the investigators decide to perform after the study is in progress or complete are more likely to be biased than those that are preplanned.

The authors’ conclusion should be unbiased and clearly supported by the study results, particularly the primary outcome and safety results.8 Subgroup or post hoc analyses should be hypothesis-generating and should not be a major focus of the conclusion.8,19 The authors may also comment on study limitations; these comments reflect the authors’ own opinions about the study.8

Drawing your own conclusion

After considering the study design, methods, results, and the authors’ conclusion, the reader should develop their own conclusion about the study’s overall quality and the applicability of the results.8 This conclusion should incorporate an assessment of study strengths, limitations, and internal and external validity. The clinical context of the study also may be relevant, particularly for disease states with existing clinical practice guidelines or previously published studies that address similar clinical questions. Ongoing clinical trials can be accessed on the Clinicaltrials.gov website.8,21

Clinical Practice Guidelines

Many practitioners, including pharmacists, rely heavily on clinical practice guidelines as tools for incorporating therapeutic recommendations into clinical practice.22 Clinical practice guidelines have been defined as “statements that include recommendations intended to optimize patient care that are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options.”23 Therefore, the purpose of clinical practice guidelines is to provide clinicians with information that will lead to evidence-based patient care decisions.

A basic understanding of the guideline development process allows pharmacists and APRNs to evaluate the overall quality of individual guidelines.22 Whether the developing organization is a payer group, professional organization, or government entity, there are two main approaches to guideline development: evidence-based or expert consensus. Expert consensus guidelines are developed by expert panels, provide little evidence to support their recommendations, and often do not disclose details of how the recommendations were developed. This type of guideline has generally fallen out of favor due to its inherent limitations. In contrast, evidence-based guidelines are developed with an explicit and strict process to ensure full consideration of the highest quality evidence available (see Table 3).

Table 3. Summary of Evidence-Based Clinical Guideline Development Process22
  1. Choose a guideline topic
  2. Recruit a multidisciplinary guideline development panel
  3. Define the clinical questions to be addressed by the guideline
  4. Establish criteria for the evidence that will be considered (eg, studies, meta-analyses)
  5. Conduct a systematic search for the qualifying evidence
  6. Perform a systematic evaluation and grading of the evidence
  7. Synthesize the evidence
  8. If there is insufficient evidence for decision-making, develop a consensus process for making recommendations
  9. Formulate and grade recommendations based on the grade of evidence
  10. Write the guideline document
  11. Peer review and pilot test the guideline document
  12. Revise the guideline as appropriate
  13. Create tools for implementation of the guideline
  14. Establish a plan for follow-up and periodic updating of the guideline

Grading of evidence quality is a key feature of robust guideline development.22 Many grading schemes exist, but one of the most common is the Grades of Recommendation, Assessment, Development, and Evaluation (GRADE).24 This schema initially rates evidence from randomized trials as high quality, evidence from observational studies as low quality, and all other types of evidence as very low quality. Other specific strengths and weaknesses of the individual studies are then considered before final categorization as high, moderate, low, or very low-quality evidence. Grading of recommendations is another important guideline feature that should be explicitly described.22 For example, the GRADE schema suggests strength of recommendation ratings ranging from “do it” (ie, strong recommendation for the intervention) to “do not do it” (ie, strong recommendation against the intervention).22,24,25

Other aspects of clinical practice guideline development that should be evaluated by the reader include whether the development methods are clearly and fully described, the potential for bias including funding source and conflicts of interest of the authors/panel members, quality of the guideline peer review process, and guideline currency/frequency of update.22,23 Checklists for evaluating guideline quality, such as the Appraisal of Guidelines Research and Evaluation (AGREE) II instrument and Conference on Guideline Standardization (COGS) checklist, are available.26,27

Clinical practice guidelines are often freely available on the developing organization’s website.22 Many guidelines are also published in the medical literature and can be found by searching the National Library of Medicine MEDLINE/PubMed database and limiting the search to “practice guidelines.”28 The ECRI Guidelines Trust and Trip websites are additional sources for finding guidelines.29,30

Evidence-Based Clinical Decision-Making

As stated earlier, EBM is the process of using high-quality evidence when making patient care decisions rather than decision making based on clinical experience alone. A systematic process for making evidence-based decisions has been widely accepted since the 1990s and is summarized in Table 4.31 This straightforward process can be implemented by individual practitioners in the daily practice of making patient care decisions, but can also be used on a larger scale when making policy decisions for groups of patients.

Table 4. The Evidence-based Medicine Decision-Making Process24,31,32
Process Step | Comments
1. Define the question | Should be clear, directly applicable to the current situation, and focused
2. Perform a systematic search | Use high-quality tertiary sources (eg, textbooks, review articles) and databases (eg, MEDLINE/PubMed, The Cochrane Library) to identify primary literature sources
3. Evaluate the literature | As described for RCTs earlier in this module
4. Determine the quality of the evidence | Use GRADE (or similar) criteria
5. Synthesize evidence from all quality sources to make a recommendation | The recommendation should account for the patient’s current clinical situation, patient preferences and actions, and the provider’s clinical expertise in addition to the evidence
6. Follow up | Evaluate whether the desired clinical outcome is achieved
Abbreviations: GRADE, Grades of Recommendation, Assessment, Development, and Evaluation; RCTs, randomized controlled trials.

Barriers to implementing the EBM process in clinical practice include lack of knowledge about EBM, lack of literature searching and evaluation skills, lack of access to resources (eg, primary literature) upon which to base clinical decisions, and insufficient time for keeping current with the literature.22,32,34 Most of these barriers can be minimized or overcome with education and training. Practitioners can also develop skills in keeping current with the literature. Pharmacists and APRNs can subscribe to summary services such as the New England Journal of Medicine (NEJM) Journal Watch, Pharmacist’s Letter/Prescriber’s Letter, or PNN Pharmacotherapy News Network.35-37 Other online resources for keeping current include journal table of content updates, FDA MedWatch safety alerts, or professional organization email updates. Attending professional meetings, completing effective continuing education activities, and appropriately using mobile applications (“apps”) can also help pharmacists and APRNs stay up to date with the highest quality literature.

Case:

Upon review of a new patient’s medical record for medication therapy management (MTM), the pharmacist learns that the patient has rheumatoid arthritis (RA). The pharmacist would like to increase their familiarity with the management of this disease state by reviewing RA clinical practice guidelines.

How can the pharmacist find the most recent RA clinical practice guideline?

The pharmacist can use the search term “rheumatoid arthritis” and the “practice guideline” limit within the MEDLINE/PubMed database to find guidelines published in the medical literature. If the pharmacist already knows that the standard for US practice is to follow the American College of Rheumatology (ACR) guideline, then the ACR website can be checked. Websites with freely available guideline collections (such as ECRI Guidelines Trust or Trip) are other potential sources for finding guidelines.
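
For pharmacists comfortable with scripting, a similar PubMed search can also be run programmatically through the National Library of Medicine’s E-utilities interface. The sketch below (a minimal example assuming the Python requests package is installed; the function name and result limit are arbitrary) combines the condition with the practice guideline publication-type tag and returns a list of PubMed IDs.

```python
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"


def find_guidelines(condition, max_results=10):
    """Search PubMed for practice guidelines on a condition via the NCBI
    E-utilities esearch endpoint and return a list of PubMed IDs."""
    params = {
        "db": "pubmed",
        "term": f'"{condition}" AND practice guideline[pt]',  # [pt] = publication type
        "retmax": max_results,
        "retmode": "json",
    }
    response = requests.get(ESEARCH_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["esearchresult"]["idlist"]


print(find_guidelines("rheumatoid arthritis"))
```

The returned IDs can then be pasted into PubMed (or passed to the esummary endpoint) to retrieve the full citations.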

What elements should the pharmacist evaluate to determine the quality of the RA guideline?

The overall development process should be considered, including whether a systematic search was performed to identify pertinent literature, how the available evidence was evaluated, how recommendations were graded, and how potential conflicts of interest were disclosed and managed. The pharmacist could consider using a checklist such as the AGREE II or COGS instruments to simplify the guideline evaluation process.

If the patient’s current medication regimen for RA is not consistent with the guideline recommendations, what could the pharmacist do to investigate further?

It may be appropriate for the pharmacist to search for recent literature, especially if the guideline is older. New RCTs that may have changed practice would not be reflected in older guideline recommendations. If new studies are found, the pharmacist should carefully evaluate each study’s strengths and limitations. If no studies are found, the pharmacist should consider patient-specific factors prior to recommending a switch in therapy. Some of these factors include the degree of symptom control, financial constraints, drug interactions, adherence, contraindications to the therapies recommended by the guidelines, prior therapies that the patient has tried, and patient preferences.

Conclusion

The effective practice of EBM requires skills such as literature searching and evaluation, use of clinical practice guidelines, and clinical decision making that considers patient needs and preferences in addition to the evidence.38,39 Since practicing in an evidence-based manner is an expectation for all pharmacists and APRNs, development of these skills allows these practitioners to provide the highest quality evidence-based care possible.

References

  1. Sackett DL, Rosenberg WM, Gray JA, et al. Evidence based medicine: what it is and what it isn’t. BMJ. 1996;312(7023):71-72.
  2. Sackett DL, et al. Evidence-based Medicine: How to Practice and Teach EBM. Edinburgh, Scotland: Churchill Livingstone; 2000.
  3. Hospital Value-Based Purchasing. Medicare.gov website. https://www.medicare.gov/hospitalcompare/data/hospital-vbp.html. Accessed May 3, 2019.
  4. Ambulatory Health Care: 2019 National Patient Safety Goals. The Joint Commission. https://www.jointcommission.org/ahc_2017_npsgs/. Accessed May 3, 2019.
  5. Study Designs in Medical Research. In: Dawson B, Trapp RG, eds. Basic & Clinical Biostatistics. 4th ed. New York, NY: McGraw-Hill; 2004. https://accesspharmacy.mhmedical.com/book.aspx?bookid=356. Accessed May 3, 2019.
  6. Guidance for Industry: Providing Clinical Evidence of effectiveness for human drug and biological products. US Food and Drug Administration website. https://www.fda.gov/media/71655/download. Accessed May 3, 2019.
  7. West-Strum D. Introduction to pharmacoepidemiology. In: Yang Y, West-Strum D, eds. Understanding Pharmacoepidemiology. New York, NY: McGraw-Hill; 2011. https://accesspharmacy.mhmedical.com/book.aspx?bookid=515. Accessed May 3, 2019.
  8. Freeman MK, Kendrach M, Hughes PJ, Slaton RM. Drug literature evaluation I: controlled clinical trial evaluation. In: Malone PM, Malone MJ, Park SK, eds. Drug Information: A Guide for Pharmacists. 6th ed. New York, NY: McGraw-Hill; 2018. https://accesspharmacy.mhmedical.com/book.aspx?bookid=2275. Accessed May 3, 2019.
  9. Kier KL. Biostatistical applications in epidemiology. Pharmacotherapy. 2011;31(1):9-22.
  10. Glasser SP, Howard G. Clinical trial design issues: at least 10 things you should look for in clinical trials. J Clin Pharmacol. 2006;46(10):1106-1115.
  11. Walker E, Nowacki AS. Understanding equivalence and noninferiority testing. J Gen Intern Med. 2011;26(2):192-196.
  12. Reading the medical literature. In: Dawson B, Trapp RG. Basic and Clinical Biostatistics. 4th ed. New York, NY: Lange Medical Books/McGraw-Hill; 2004:332-361.
  13. Appendix I: Flow chart to determine appropriate statistical test to analyze data. In: Waning B, Montagne M, eds. Pharmacoepidemiology: Principles and Practice. New York, NY: McGraw-Hill; 2001. https://accesspharmacy.mhmedical.com/book.aspx?bookid=438. Accessed May 3, 2019.
  14. Gaddis GM, Gaddis ML. Introduction to biostatistics: Part 5, statistical inference techniques for hypothesis testing with nonparametric data. Ann Emerg Med. 1990;19(9):1054-1059.
  15. CONSORT: Transparent Reporting of Trials. http://www.consort-statement.org/. Accessed May 3, 2019.
  16. Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. International Committee of Medical Journal Editors. http://www.icmje.org/recommendations/. Updated December 2018. Accessed May 3, 2019.
  17. Urschel JD. How to analyze an article. World J Surg. 2005;29(5):557-560.
  18. Worksheet to evaluate drug studies. Pharmacist’s Letter website. https://pharmacist.therapeuticresearch.com. June 2005. Accessed May 3, 2019.
  19. Mansi IA, Thompson JC, Banks DE. Final tips in interpreting evidence-based medicine. South Med J. 2012;105(3):173-180.
  20. Applying study results to patient care: relative risk, absolute risk, and number needed to treat. Pharmacist’s Letter/Prescriber’s Letter website. https://pharmacist.therapeuticresearch.com. October 2017. Accessed May 3, 2019.
  21. U.S. National Institutes of Health. Clinicaltrials.gov website. https://clinicaltrials.gov/. Accessed May 3, 2019.
  22. Moores KG, Abrons JP. Evidence-based clinical practice guidelines. In: Malone PM, Malone MJ, Park SK, eds. Drug Information: A Guide for Pharmacists. 6th ed. New York, NY: McGraw-Hill; 2018. https://accesspharmacy.mhmedical.com/book.aspx?bookid=2275. Accessed May 3, 2019.
  23. Clinical Practice Guidelines We Can Trust. Institute of Medicine. https://www.ncbi.nlm.nih.gov/books/NBK209539/. Accessed May 3, 2019.
  24. Atkins D, Best D, Briss PA, et al. GRADE Working Group. Grading quality of evidence and strength of recommendations. BMJ. 2004;328(7454):1490.
  25. Guyatt GH, Oxman AD, Kunz R, et al. GRADE Working Group. Going from evidence to recommendations. BMJ. 2008;336(7652):1049-1051.
  26. AGREE II Instrument. Appraisal of Guidelines Research and Evaluation (AGREE) Enterprise website. https://www.agreetrust.org/wp-content/uploads/2017/12/AGREE-II-Users-Manual-and-23-item-Instrument-2009-Update-2017.pdf. Updated December 2017. Accessed May 3, 2019.
  27. Shiffman RN, Shekelle P, Overhage JM, et al. Standardized reporting of clinical practice guidelines: a proposal from the Conference on Guideline Standardization. Ann Intern Med. 2003;139(6):493-498.
  28. U.S. National Library of Medicine. PubMed website. http://www.ncbi.nlm.nih.gov/pubmed. Accessed May 3, 2019.
  29. ECRI Institute. Guidelines Trust. https://guidelines.ecri.org/. Accessed May 24, 2019.
  30. Trip medical database. https://www.tripdatabase.com/. Accessed May 24, 2019.
  31. Medical Student Resources. Centre for Evidence Based Medicine. http://www.cebm.net/medical-student-resources/. Accessed May 3, 2019.
  32. Tilburt JC. Evidence-based medicine beyond the bedside: keeping an eye on context. J Eval Clin Pract. 2008;14(5):721-725.
  33. Sadeghi-Bazargani H, Tabrizi JS, Azami-Aghdash S. Barriers to evidence-based medicine: a systematic review. J Eval Clin Pract. 2014;20(6):793-802.
  34. Zwolsman S, te Pas E, Hooft L, et al. Barriers to GPs’ use of evidence-based medicine: a systematic review. Br J Gen Pract. 2012;62(600):e511-e521.
  35. NEJM Group. NEJM Journal Watch website. http://www.jwatch.org/. Accessed May 3, 2019.
  36. Pharmacist’s Letter/Prescriber’s Letter. https://pharmacist.therapeuticresearch.com/Home/PL/. Accessed May 3, 2019.
  37. PNN Pharmacotherapy News Network. http://pharmacotherapynewsnetwork.com/. Accessed May 3, 2019.
  38. Weaver RR. Reconciling evidence-based medicine and patient-centred care: defining evidence-based inputs to patient-centred decisions. J Eval Clin Pract. 2015;21(6):1076-1080.
  39. Chang S, Lee TH. Beyond evidence-based medicine. N Engl J Med. 2018;379(21):1983-1985.
