Professional Issues
Evidence-based Dentistry: Part V. Critical Appraisal of the Dental Literature: Papers About Therapy
Abstract
Evidence-based dentistry involves defining a question focused on a patient-related problem and searching for reliable evidence to provide an answer. Once potential evidence has been found, it is necessary to determine whether the information is credible and whether it is useful in your practice by using the techniques of critical appraisal. In this paper, the fifth in a 6-part series on evidence-based dentistry, a framework is described which provides a series of questions to help the reader assess both the validity and applicability of an article related to questions of therapy or prevention.
MeSH Key Words: dental research/methods; evidence-based medicine; human research design
J Can Dent Assoc 2001; 67(8):442-5 This article has been peer reviewed.
The need for valid and current information for answering everyday clinical questions is growing. Ironically, the time available to seek the answers seems to be shrinking. In addition, a surprising amount of published research belongs in the bin.1 Critical appraisal can be used to rapidly assess and discard reports of research studies that are irrelevant or of poor quality. The purpose of the next 2 papers in this series is to introduce the tools used to critically appraise papers according to the type of clinical question addressed by the study. These concepts and tools were developed by the evidence-based medicine group at McMaster University2,3 and are used worldwide in the practice of evidence-based care in many of the health sciences professions. In this paper, techniques to evaluate research studies related to questions of therapy will be discussed. In the final paper in the series, critical appraisal techniques will be presented for the evaluation of papers about diagnostic tests, causation and predicting prognosis.
Randomized Controlled Trials

The randomized controlled trial (RCT) is the standard design for evaluating treatments. The RCT is the strongest design for a clinical study because randomization of patients to the comparison groups minimizes bias by ensuring that the patients in each group are as similar as possible in all respects, except for the treatment under investigation. As more RCTs studying a particular question become available, it is more difficult for the reader to process and synthesize all of the information to find the answer to a clinical question. Systematic reviews (sometimes called secondary publications or integrative research) summarize, analyze and report the combined results of a number of RCTs. They are done with the same rigour that is expected of primary studies, but the unit of analysis is the individual study rather than the individual patient.
Was the assignment of each patient to a treatment or control group decided completely by chance?

In other words, was each patient's group decided by the flip of a coin or by some other similar method? This assignment helps to ensure that people in the treatment and the control groups are similar at the outset and that differences at the end of the trial are due to the intervention and not to some selection factor. Look for words like random allocation, randomly assigned or randomized trial in the title or abstract. If absent, go on to the next title. In the methods section, look for a description of the way randomization was done. If this was done by the flip of a coin, coded and sealed envelopes, random number tables or a computer-generated sequence, randomization was appropriate. Any method of allocation where the sequence could be guessed by anyone is inappropriate. Unfortunately, randomization methods are not often described and you are left to wonder about the details. When reading these papers, you might want to remember that research has shown that inadequately concealed randomization can exaggerate the estimate of treatment effect by 41%, and that even if the paper states that the study is randomized, but the description of the randomization methods is unclear, the estimate of the effect is exaggerated, on average, by 30%.6
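As an illustration of the appropriate methods listed above, a computer-generated allocation sequence can be sketched in a few lines of Python. The function name and the fixed seed are my own for the sketch; a real trial would also conceal the resulting sequence from the clinicians recruiting patients:

```python
import random

def allocation_sequence(n_patients, seed=None):
    """Computer-generated random allocation of patients to two
    equal-sized arms, one of the appropriate methods noted above."""
    rng = random.Random(seed)
    arms = ["treatment", "control"] * (n_patients // 2)
    rng.shuffle(arms)  # the resulting order cannot be guessed in advance
    return arms

sequence = allocation_sequence(20, seed=42)
```

Because the arms are shuffled rather than assigned deterministically, no one involved in recruitment can predict the next assignment, which is exactly the property that distinguishes appropriate from inappropriate allocation.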
Were the groups similar at the outset and treated equally throughout the study?
Randomization does not always create groups that are balanced for known prognostic factors, especially in small studies. The investigators should present baseline data on all patients in each group and, if there are significant differences, assure the reader that these differences were adjusted for in the statistical analysis. Co-interventions are additional treatments, other than those being investigated, that are used by or given to patients. Co-interventions are problematic if they are given differentially to the treatment or the control group; they are much less of a problem in double-blind studies. It is helpful to the reader if allowed co-interventions are described in the methods section and if the extent of use of non-permissible co-interventions is documented in the results. The success of blinding can be assessed by the investigators by asking both clinicians and patients, after completion of the study, what group they thought the patient was in and comparing the answers with the actual allocation. If more patients or clinicians guessed correctly than one would expect by chance (that is, if the excess of correct guesses is statistically significant, p < 0.05), then the methods used for blinding did not really work.
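One way to formalize this comparison with chance is a one-sided exact binomial test: compute the probability of seeing at least the observed number of correct guesses if everyone were guessing at random, and treat a small value (conventionally below 0.05) as evidence that blinding failed. A minimal standard-library sketch (the function name is my own):

```python
from math import comb

def blinding_pvalue(correct, total, chance=0.5):
    """P(X >= correct) when each of `total` guesses is right with
    probability `chance`; a small value suggests blinding did not hold."""
    return sum(comb(total, k) * chance**k * (1 - chance)**(total - k)
               for k in range(correct, total + 1))

# 15 of 20 clinicians guessing their patients' allocation correctly:
p = blinding_pvalue(15, 20)  # about 0.021, so blinding is suspect
```

With 11 of 20 correct guesses, by contrast, the probability is about 0.41, well within what random guessing would produce.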
Were all the patients who entered the trial accounted for and analyzed at the end of the study?
It is not uncommon to read a study that began with a certain number of patients and ended with fewer, with a mere statement that a particular number of patients were not available for follow-up. The reasons for loss to follow-up may be extremely important. In fact, patients who do not complete trials may provide more information about the intervention than those who do. Patients may have dropped out because of side effects (even to the placebo), or perhaps because they benefited from the intervention and, with the resolution of their problem or condition, chose not to return for follow-up. Even when loss to follow-up is accounted for and explained in the paper, follow-up of less than 80% of the patients enrolled at the beginning is generally considered unacceptable.3 It is also important that patients be analyzed in the group to which they were originally randomly allocated, even if they switched groups or were noncompliant with either the experimental or the control treatment. This is the intention-to-treat principle, and it serves to preserve the powerful function of randomization: factors we cannot know about will remain reasonably equally distributed between the 2 groups. This consistency prevents the intervention from appearing to be effective when it is not, and makes the results of the study more conservative and more believable.
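The intention-to-treat principle can be made concrete with a small hypothetical sketch (the record field names are my own): one patient below received the control treatment despite being randomized to the experimental arm, yet is still analyzed with the treatment group.

```python
def intention_to_treat(patients):
    """Group outcomes by the arm each patient was randomized to,
    regardless of whether the patient switched arms or complied."""
    groups = {}
    for p in patients:
        groups.setdefault(p["randomized_to"], []).append(p["improved"])
    # proportion of patients who improved, per originally assigned arm
    return {arm: sum(outcomes) / len(outcomes) for arm, outcomes in groups.items()}

trial = [
    {"randomized_to": "treatment", "received": "treatment", "improved": True},
    {"randomized_to": "treatment", "received": "control",   "improved": False},  # switched arms
    {"randomized_to": "control",   "received": "control",   "improved": False},
    {"randomized_to": "control",   "received": "control",   "improved": True},
]
rates = intention_to_treat(trial)  # {'treatment': 0.5, 'control': 0.5}
```

Note that the `received` field is deliberately ignored in the analysis: a per-protocol analysis that regrouped the switching patient would break the balance that randomization created.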
Systematic Reviews
Systematic reviews (also known as overviews or, if results of the primary studies can be combined mathematically, as meta-analyses) differ from traditional journal or textbook reviews.8 Systematic reviews have most often been done for questions relating to therapy, although they can be and have been done for all types of questions. While widely accepted standards have been developed9 for the conduct of systematic reviews for issues related to therapeutic questions, agreed-upon standards and critical appraisal techniques for reviews that synthesize the results of observational studies remain undeveloped at this time. The following guidelines will enable you to judge the validity and usefulness of a systematic review10,11 of RCTs addressing issues of therapy.
Conclusion

A well-designed randomized controlled trial is the strongest research design for clinical studies. The systematic review is a powerful way to assemble multiple studies and synthesize their findings. In both cases, however, the credibility of the research needs to be determined through the use of critical appraisal techniques. In the final paper in this series, critical appraisal methods and their application to studies related to the other types of clinical questions commonly encountered in dental practice (questions related to diagnostic tests; to etiology, causation or harm; and to prognosis) will be discussed.

Dr. Sutherland is a full-time active staff member of the department of dentistry at the Sunnybrook and Women's College Health Sciences Centre, University of Toronto. Correspondence to: Dr. Susan E. Sutherland, Department of Dentistry, Sunnybrook and Women's College Health Sciences Centre, 2075 Bayview Ave., Toronto, ON M4N 3M5. E-mail: [email protected] The views expressed are those of the author and do not necessarily reflect the opinion or official policies of the Canadian Dental Association.

References

1. Greenhalgh T. How to read a paper: getting your bearings (deciding what the paper is about). BMJ 1997; 315(7102):243-6.
2. Sackett DL, Haynes RB, Guyatt GH, Tugwell P. Clinical epidemiology: a basic science for clinical medicine. 2nd ed. Boston: Little, Brown and Company; 1991.
3. Sackett DL, Richardson WS, Rosenberg W, Haynes RB. Evidence-based medicine: how to practice and teach EBM. London: Churchill Livingstone; 1997.
4. Guyatt GH, Sackett DL, Cook DJ. Users' guides to the medical literature. II. How to use an article about therapy or prevention. A. Are the results of the study valid? Evidence-Based Medicine Working Group. JAMA 1993; 270(21):2598-601.
5. Guyatt GH, Sackett DL, Cook DJ. Users' guides to the medical literature. II. How to use an article about therapy or prevention. B. What were the results and will they help me in caring for my patients? Evidence-Based Medicine Working Group. JAMA 1994; 271(1):59-63.
6. Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias. Dimensions of methodologic quality associated with estimates of treatment effects in controlled trials. JAMA 1995; 273(5):408-12.
7. Fleming TR, DeMets DL. Surrogate end points in clinical trials: are we being misled? Ann Intern Med 1996; 125(7):605-13.
8. Cook DJ, Mulrow CD, Haynes RB. Systematic reviews: synthesis of best evidence for clinical decisions. Ann Intern Med 1997; 126(5):376-80.
9. Cook DJ, Sackett DL, Spitzer WO. Methodologic guidelines for systematic reviews of randomized controlled trials in health care from the Potsdam Consultation on meta-analysis. J Clin Epidemiol 1995; 48(1):167-71.
10. Oxman AD, Guyatt GH. Guidelines for reading literature reviews. CMAJ 1988; 138(8):697-703.
11. Oxman AD, Cook DJ, Guyatt GH. Users' guides to the medical literature. VI. How to use an overview. Evidence-Based Medicine Working Group. JAMA 1994; 272(17):1367-71.
12. Felson DT. Bias in meta-analytic research. J Clin Epidemiol 1992; 45(8):885-92.
Journal of the Canadian Dental Association September 2001, Vol. 67, No. 8