The role that research evidence should play in day-to-day practice has been one of the most formidable issues the O&P profession has addressed over the past few decades. The increased focus on outcome measurement, pressures from reimbursement organizations, increased funding for research in response to military conflicts, and public interest in high-tech rehabilitation have all contributed to greater attention to and improvements in the level of research evidence supporting daily practice. Principles of the evidence-based medicine movement have been applied to O&P since they were first articulated in the early 1990s, with corresponding calls for more and higher-level evidence to support our clinical practices. Despite improvements in research education over that same period, many clinicians struggle with understanding how research knowledge fits into day-to-day clinical care. An emphasis on specific research strategies that require a high level of expertise and methodological rigor can leave clinicians with the impression that research is not relevant to their clinical decision making. Is it possible that the rules of evidence-based practice (EBP) do not apply to us?
By individual clinical expertise we mean the proficiency and judgment that individual clinicians acquire through clinical experience and clinical practice. Increased expertise is reflected in many ways, but especially in more effective and efficient diagnosis and in the more thoughtful identification and compassionate use of individual patients’ predicaments, rights, and preferences in making clinical decisions about their care.1
Integrating All Three Elements of EBP
One of the most quoted descriptions of evidence-based medicine was written in 1996 by David Sackett: “Evidence-based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence-based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research.”1 Sackett’s definition included an emphasis on the “patients’ predicaments, rights, and preferences” as an element of clinical expertise.1 (See text above.)
While the integration of formal, peer-reviewed research evidence into practice is the clear emphasis of EBP, the prominent mention of clinical experience and the perspectives of patients in more recent publications brings focus to an important tension within daily practice.2,3 How, exactly, can practitioners integrate these three aspects of EBP when making decisions, particularly when established practices are not evidence based? More importantly, what role does new evidence play in clinical decision making when high-quality research is often inadequate to support a specific decision or course of treatment? Barriers to implementing EBP in O&P relate to limited availability of, access to, and expertise in evaluating research. These and other issues related to EBP have been addressed by O&P researchers in peer-reviewed journals, and many of these articles are freely available online.4-9
High levels of evidence exist in some areas of clinical practice in O&P. For example, the value of orthotic management of idiopathic scoliosis and of infantile positional cranial deformities has been clearly demonstrated, and the evidence provides valid guidance on specific treatment decisions, such as when orthotic management should begin and end. However, that high level of evidence is not available in many other practice areas. The way research is reviewed and reported presents a barrier to the implementation of EBP because some evidence that may be relevant to clinicians is not considered sufficiently rigorous by EBP purists to form the basis for practice.
How Research Evidence Is Reported
One of the more respected methods for reviewing evidence is to use a formal, structured process known as a systematic review. When performing a systematic review, “evidence is searched for, evaluated, and synthesized in clearly defined steps, following a protocol that has been written before the review begins” using “a hierarchy of research designs to sort stronger evidence from weaker….”10 Unfortunately, the rigorous methodology inherent in this process, combined with a scope restricted to high-level evidence, often means that these reviews have limited value for guiding clinical practice. Clinical practice involves complexities that are not replicated in studies considered to have high-level evidence (which requires, among other things, that variables be limited as much as possible). When discussing how EBP can be implemented in prosthetic practice, van Twillert et al. point out that “exclusions made by researchers to prevent bias entering the research setting in order to produce methodological sound and generalizable results, do not resolve the complexity of the clinical decision process in prosthetic rehabilitation….”11 Systematic reviews often conclude by recommending more research rather than a specific clinical decision or course of action, and this is of little value to a practitioner who must make a decision regarding a specific case. O&P is not a theoretical discipline—practitioners must make decisions and implement treatments even when no options are supported by high-level evidence.
Many topics of interest to practicing O&P clinicians lack sufficient published evidence to form the basis of a structured systematic review. Narrative (also called qualitative or nonsystematic) reviews involve a less rigorous methodology than structured reviews. Narrative reviews “may lack a focused question, rarely develop a methodology that is peer reviewed, seldom use forms for abstracting data or have independent abstraction of evidence by two or more reviewers, and may go well beyond the evidence in the literature in making recommendations.”12 However, narrative reviews should not be dismissed, as they often are by proponents of EBP, since “reviews play a number of roles in scientific research and professional practice…. For some of these purposes, systematic reviews are better; for others, a narrative review is more suitable.”12
It is important to consider all types of evidence when making clinical decisions, even evidence that is not based on the most rigorous methodology or, conversely, does not provide immediate, definitive answers to clinical problems. An awareness of research results is an important part of a clinician’s professional responsibility, and that knowledge can inform daily practice in ways less tangible than an exact blueprint for clinical decisions.10 According to Dijkers, “The real question is not, ‘What is the most rigorous research design?’ but ‘At this time, what is the best research design for the research question or practical problem at issue?’ ‘Rigorous’ and ‘best’ are not the same.”10 Dijkers also comments that “there may be benefit to a review article in which an experienced clinician offers a conceptual understanding of the problem, makes suggestions for treatment based on analogies with other, better understood problems, and offers guidance for assessment and management.”12 Narrative reviewers should make their values, preferences, and assumptions explicit to address the increased risk of bias that comes with selecting and synthesizing evidence using a less rigorous methodology.