<img style="float: right;" src="https://opedge.com/Content/OldArticles/images/2008-12_03/3-1.jpg" hspace="4" vspace="4" /> To help researchers conduct studies that advance evidence-based practice (EBP) in O&P, the American Academy of Orthotists and Prosthetists (the Academy) has developed State-of-the-Science Evidence Report Guidelines. These guidelines propose a series of steps for evaluating the quality of research studies conducted on a particular topic, and they can also help O&P researchers design better experiments. Central to assessing the quality of research is identifying threats to the internal and external validity of a study design. Internal validity concerns whether the design has properly demonstrated a cause-and-effect relationship. External validity concerns whether the demonstrated cause-and-effect relationship can be generalized to larger populations. I recently used the guidelines to complete a state-of-the-science evidence report on the alignment of transtibial prostheses. The guidelines helped me identify a number of problems among the articles. The questions below reflect the lessons I learned. <h4>Internal Validity</h4> <b><i>Can the experiment be replicated?</i></b> Frequently found threats include the failure to measure key variables and to describe the experimental protocol completely. In experimental studies of the alignment of transtibial prostheses, the initial "acceptable" alignment was rarely quantified, and the manner in which subjective "acceptability" of an alignment was determined was rarely part of the reported protocol. Without quantification and operational definitions of key variables, the experiments cannot be repeated. Similarly, an inadequate description of key aspects of the experimental protocol, such as the number of trials, prevents repeatability. Scientific studies should be designed and reported in a manner that allows for replication. 
<b><i>Can tests of statistical significance be carried out and reported?</i></b> A second threat was the failure to conduct and report tests of statistical significance, especially in studies in which this was feasible. An effort should be made to determine whether observed trends are statistically significant; this piece of information is crucial to the development of scientific knowledge. <b><i>Have inclusion and exclusion criteria for subjects been thought out carefully and applied?</i></b> Failure to apply criteria to subject recruitment and selection opens the possibility that the results are due not to the variable being manipulated experimentally but to some characteristic of the subjects. For example, poorly fitting sockets might invalidate any findings about the effects of alignment perturbation on gait. It is important to formalize inclusion and exclusion criteria in a protocol and then report them in the article. <h4>External Validity</h4> <b><i>Does the research study use accepted clinical terminology?</i></b> In the review of articles on transtibial alignment, failure to use commonly accepted clinical terminology, combined with poor writing and editing, made it difficult to determine how alignments had been perturbed in several of the studies. For example, the clinician should not have to mentally juxtapose graphs and tables to determine that the effect of a perturbation reported in the experiment as socket flexion was actually socket extension. <b><i>Does the research reflect clinical practice?</i></b> Most of the research studies I reviewed began with an "acceptable" alignment and then examined the effect of perturbations. In the clinic, the procedure is reversed: the clinician begins with a bench alignment and attempts to produce an "acceptable" alignment. 
No evidence has been developed that the two procedures are symmetric, that is, that the direction of change does not matter; greater fidelity to clinical practice would be achieved by studies that begin with bench alignments and progress toward acceptable alignments. The researcher should be aware of how procedures are carried out in the clinic. <b><i>Does the research employ subjects who are representative of relevant clinical populations?</i></b> In the reviewed articles, subjects typically were experienced amputees, some of whom were reported to exhibit skill at adapting to alignment perturbations, rather than new amputees who might be in search of an acceptable alignment. No studies were found concerning new amputees, for whom alignment might be crucial to the rehabilitation process. Care should be exercised to recruit subjects who are truly relevant to the hypotheses of the study and the problems faced by clinicians. Many of the other deficiencies I noted could have been avoided with more careful experimental design. Reviews of other topics might produce a different list of experimental issues, and the guidelines cover a large number of potential validity concerns. Large samples and randomization may be difficult to achieve in O&P research; however, the quality of many O&P studies could be improved by closer scrutiny of other issues related to internal and external validity. Any researcher contemplating an O&P study would benefit from reviewing the research design as if it were being scrutinized within the framework provided by the Academy. The chances are good that eventually it will be. <i>Edward S. 
Neumann, PhD, PE, CP, is director of the Center for Disability and Applied Biomechanics, professor of civil and environmental engineering, and adjunct professor of kinesiology at the University of Nevada, Las Vegas.</i> <i> The State-of-the-Science Evidence Report Guidelines are available at <a href="https://opedge.dev/3172" target="_blank" rel="noopener noreferrer">www.oandp.org/grants/masteragenda/aaop_evidencereportguidelines.pdf</a></i>