Going to PROM? Reviewing Patient-Reported Outcome Measures (PROMs) in the Management of Neurologic Hemiplegia
January 2016 Issue
When working with multiple individuals from a single patient population, it can be easy to overlook that rehabilitation is a customized experience that should be focused on the unique abilities, challenges, and goals of the person being treated. In today's healthcare climate of accountability, there is an increasing need to quantify the value of the interventions we provide to such individuals. However, while most performance measures, such as the Berg Balance Scale (BBS) or the ten-meter walk test, are both objective and responsive, they are also quite artificial. They are assessed in a structured clinical environment and fail to represent function in the individual's day-to-day routines.
By contrast, rehabilitation can be more customized with the use of patient-reported outcome measures (PROMs).1 Like performance measures, these are standardized, validated instruments. However, in place of physical performance across a series of highly controlled activities, PROMs are generally questionnaires that invite patients to report, and ultimately measure, their own perceptions of their functionality and well-being. By capturing patients' experiences in their daily lives, these measures facilitate a more patient-centered focus on the outcomes obtained with a given intervention.
Choosing Your "Date"
A nearly universal challenge in administering performance measures and PROMs begins with selecting the most appropriate measure. A number of them have been developed and described, which can make the selection process feel overwhelming. Fortunately, in a recent systematic review, Ashford et al. report on a collection of PROMs that have been validated for use in the assessment of lower-limb function following stroke or brain injury.1 In doing so, they present a number of considerations including a given measure's practicality (i.e., time to complete, burden, and readability), its validity and reliability (i.e., content and construct validity, test-retest reliability, and floor or ceiling effects), and its responsiveness to change (the extent to which changes in function are represented by changes in measure scores).1
Administrative Burden: One of the advantages of PROMs is that, as questionnaires, they do not impose a significant time burden on clinical staff. However, patients value their time as well. Thus, the time required to complete them should generally be kept modest. In addition, once they are completed, PROMs need to be scored. Scoring metrics can be variable, covering a broad spectrum from summing responses to measuring across visual analogue scales to more complicated formula-based scoring techniques. PROMs that are quick to complete and score reduce the burden on patient and practitioner alike and are generally preferred.1
Validity: There are multiple types of validity that can be considered for a given measure. Chief among these are content and construct validity. Content validity represents the extent to which the content of the PROM canvasses concepts and domains that are relevant to the population in question. PROMs that have been influenced by patients, their caregivers, or experienced clinicians tend to have higher levels of content validity. Construct validity refers to the extent to which a measure evaluates what it purports to evaluate. This is usually verified by confirming correlation with other outcomes known to measure change in the population in question.
Floor and Ceiling Effects: The patients we treat have a spectrum of abilities and limitations. However, the ability of a given PROM to reflect that spectrum is variable. If the items on the PROM are too easy, and too many subjects obtain the maximum score, a ceiling effect has been observed. If they are too difficult and too many subjects obtain the minimum score, a floor effect exists. Ashford et al. ascribe these effects to a measurement tool any time more than 15 percent of the sampled population achieves the highest or lowest possible scores, respectively.1
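The 15 percent rule described above is easy to operationalize. The sketch below is a hypothetical illustration; the function name and the sample scores are made up for demonstration and are not drawn from the Ashford review:

```python
# Hypothetical check for floor and ceiling effects using the 15 percent
# threshold described by Ashford et al. `scores` is an illustrative sample
# of PROM totals on a 0-15 scale.
def ceiling_floor_effects(scores, min_score, max_score, threshold=0.15):
    """Return (has_floor, has_ceiling) flags for a sample of PROM scores."""
    n = len(scores)
    floor_pct = sum(1 for s in scores if s == min_score) / n
    ceiling_pct = sum(1 for s in scores if s == max_score) / n
    return floor_pct > threshold, ceiling_pct > threshold

sample = [15, 15, 15, 15, 14, 12, 11, 9, 7, 15, 15, 13, 10, 8, 15]
floor, ceiling = ceiling_floor_effects(sample, 0, 15)
print(floor, ceiling)  # 7 of 15 subjects (47%) hit the maximum, so a ceiling effect is flagged
```

In this invented sample, nearly half of the subjects score the maximum, well beyond the 15 percent cutoff, so the instrument would be considered too easy for that population.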
Reproducibility: For a PROM score to be meaningful, it should be similar when administered on two separate occasions. This consideration is often referred to as test-retest reliability and reported as the intraclass correlation coefficient (ICC). The higher the ICC, the more likely scores will be consistent as the PROM is readministered, assuming the patient's condition has not changed.
Responsiveness: A corollary to a measure that does not change when it should not is one that does change when it should. This property is also referred to as sensitivity. A responsive PROM is sensitive enough to capture differences in functionality or performance as they develop.
Interpretability: Ashford et al. describe this as "the degree to which qualitative meaning can be assigned to quantitative scores." PROMs are interpretable when they have underlying metrics. Scores are more meaningful if we know the mean and standard deviation values for a given PROM within a given patient population. This interpretability also increases if we know how a given PROM score is likely to be affected by certain comorbidities or therapeutic treatments. More advanced metrics, like minimal detectable change, inform us of the amount of change that needs to be observed with a given PROM to know that the change was real rather than a product of normal variability.
Ashford et al. ultimately report on eight PROMs that are useful in gauging active functionality in patients with unilateral, neurologic lower-limb hemiplegia (i.e., stroke and brain injury). The results constitute a veritable alphabet soup when referred to by their abbreviations: the BICRO, CSQ, HAP, LEFS, N-ADL, RMI, SIP, and SIS. For the purposes of this article, two leading PROMs are discussed in greater detail, the Rivermead Mobility Index (RMI) and the Lower Extremity Functional Scale (LEFS), as these are the two measures that focus specifically on lower-limb function across a range of activities.1
The PROM Queen: RMI
The authors are not shy with their recommendations of the RMI. "The RMI is a practical and clinically applicable measure of mobility in neurologically impaired...patients. The RMI has robust psychometric measurement properties, some of which have been replicated in a number of studies."1 The RMI was originally described in 1991, and is one of the more commonly used tools in the rehabilitation community to assess function during the acute and subacute phases of stroke recovery.2 However, even a cursory review of its 15 items begins to suggest that despite its general popularity, it may not be the best measure to assess changes in function associated with lower-limb orthotic management. Several items are unlikely to be affected by the presence or absence of a lower-limb orthosis; some items, such as bathing, require the removal of an orthosis; and one item penalizes a patient for the use of an orthosis.
The measure is openly accessible through the website www.rehabmeasures.org. Patients rate each of its 15 items as yes (1) or no (0), with a maximum score of 15.
The rehabilitation community knows a lot about the RMI. They know that when patients begin their inpatient rehabilitation admission, their RMI score is about 4, and when they leave, their score is about 9. They know that the standard error of measurement is 0.8 points, and the minimal detectable change is 2.2 points. They know that the instrument is both reliable and valid in the population of patients who have suffered strokes and brain injuries.3
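The reported minimal detectable change follows from the standard error of measurement. As a minimal sketch, assuming the conventional 95 percent confidence formula MDC95 = 1.96 × √2 × SEM, the RMI's published SEM of 0.8 points reproduces the 2.2-point figure:

```python
import math

# MDC95 = 1.96 * sqrt(2) * SEM: the smallest score change that exceeds
# measurement noise with 95% confidence, given a known SEM.
def mdc95(sem):
    return 1.96 * math.sqrt(2) * sem

# The RMI's reported SEM of 0.8 points yields its published MDC:
print(round(mdc95(0.8), 1))  # 2.2
```

In practical terms, an RMI change smaller than about 2.2 points could simply be normal test-retest variability rather than a real change in mobility.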
However, they also know that "[t]he RMI does not assess mobility gained through environmental modifications, such as the use of assistive devices."3 In other words, it tells you about the individual, but it does not tell you much about the impact an assistive device, like an orthosis, can have on that individual. In addition, it has a ceiling effect, such that higher-functioning patients may require a different PROM.1 Thus, for all of its established credentials, its value to the O&P community may be limited.
PROM Princess: LEFS
Of the eight measures Ashford et al. report on, seven were developed within the neurorehabilitation community. The eighth, the LEFS, was originally developed for patients with musculoskeletal problems,4 and has since been used in populations of patients who have neurological compromises. The LEFS can be accessed at www.mccreadyfoundation.org/
While new to the neurorehabilitation community, including clinical orthotists, this latecomer may represent the most appropriate PROM for tracking changes in functionality associated with use of a lower-limb orthosis across a spectrum of abilities. Indeed, the only criticism of the LEFS that Ashford et al. express is that while it had apparently been used in neurologic patient populations, there were no published reports of such utilization at the time the Ashford review was conducted.1
However, this concern is allayed by the aptly titled publication, "Reliability, Validity, and Sensitivity to Change of the Lower Extremity Functional Scale in Individuals Affected by Stroke."5 While the first two considerations of reliability and validity are fundamentally important, the third consideration of sensitivity to change may be what gives this PROM its edge.
In the study in question, the authors assembled an intentionally heterogeneous convenience sample of individuals recovering from stroke. The 43 individuals presented with an average age of 70 but ranged from 32 to 95 years old. The duration of impairment ranged from three days to more than a year. Individuals in both inpatient and outpatient settings were included who had functional dependence ranging from slight to significant. All of the participants underwent structured physical therapy for eight weeks. A host of outcome measures were administered at baseline, four weeks, and eight weeks. These included the LEFS, the Short Form-36 (SF-36) physical function scale, the BBS, the six-minute walk test (6MWT), the five-meter walk test (5MWT), and the Timed Up and Go (TUG) test. The baseline assessment of the LEFS was repeated two to three days later to provide reliability data.5
The data was encouraging. The ICC of the LEFS was an impressive 0.97, indicating that the test was reliable, with little variation between the two baseline administrations. The correlations between the LEFS scores and other outcome measures were generally high, indicating good validity.5 These correlations were unsurprising. As patients participated in physical therapy, their functionality improved across all of the outcome assessments. It was the relative sensitivity of the LEFS that appeared to set it apart from the other measures.
The authors examined the mean scores of each outcome at baseline and at eight weeks postintervention. They then expressed the change between the two scores as a number of standard deviations. This data is summarized in Table 1.
Table 1: Sensitivity to measuring improvements in standard deviations.5
All six outcome measures reflected the improvements experienced by individuals in the study cohort. What differed was the extent to which each outcome measure could quantify that change. For the TUG test, the change was 0.62 standard deviations. The 6MWT was more sensitive, measuring a change of 0.86 standard deviations. The 5MWT was slightly better, at 0.90 standard deviations, and the BBS was more sensitive still, measuring an improvement of 1 standard deviation. However, the greatest sensitivity was observed with the LEFS, which measured an improvement of 1.2 standard deviations. To the extent that patients experienced improvements, the LEFS was the most likely instrument to capture those changes.
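Expressing change in standard deviation units is a simple standardized-response calculation. The sketch below illustrates the arithmetic only; the means and standard deviation shown are hypothetical placeholders, not values from the study:

```python
# Change expressed in standard deviation units: a standardized measure of
# responsiveness that allows comparison across instruments with different
# scales, as in Table 1. The numbers below are illustrative, not study data.
def change_in_sd_units(baseline_mean, followup_mean, baseline_sd):
    return (followup_mean - baseline_mean) / baseline_sd

# e.g., a hypothetical 18-point gain against a baseline SD of 15 points:
print(change_in_sd_units(30.0, 48.0, 15.0))  # 1.2
```

Because the result is unitless, a 1.2 on the LEFS can be compared directly with a 0.62 on the TUG test even though the two instruments score on entirely different scales.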
Going to PROM?
As the culture of healthcare accountability continues to grow, clinicians will increasingly find themselves needing to select and administer outcome measures to quantify the effects of their interventions. In doing so, a number of performance measures and PROMs will be available. These will vary in their practicality, reliability, validity, and responsiveness. By inquiring about functionality in each individual's daily routines and environments, PROMs appear to facilitate a more patient-centric approach to outcomes collection. While the RMI is well established in the rehabilitation community, the newly validated LEFS may prove to better capture differences in patients' perceptions of their functionality associated with lower-limb orthotic management.
Phil Stevens, MEd, CPO, FAAOP, is in clinical practice with Hanger Clinic, Salt Lake City. He can be reached at .
- Ashford, S. A., S. Brown, and L. Turner-Stokes. 2014. Systematic review of patient-reported outcome measures (PROMS) for functional performance in the lower limb. Journal of Rehabilitation Medicine 47 (1):9-17.
- Collen, F. M., D. T. Wade, G. F. Robb, and C. M. Bradshaw. 1991. The Rivermead Mobility Index: A further development of the Rivermead Motor Assessment. International Disability Studies 13 (2):50-4.
- Rehabilitation Measures Database. Rehab Measures: Rivermead Mobility Index. www.rehabmeasures.org/lists/rehabmeasures/dispform.aspx?id=926
- Binkley, J. M., P. W. Stratford, S. A. Lott, and D. L. Riddle. 1999. The Lower Extremity Functional Scale (LEFS): Scale development, measurement properties, and clinical application. North American Orthopaedic Rehabilitation Research Network. Physical Therapy 79 (4):371-83.
- Verheijde, J. L., F. White, J. Tompkins, P. Dahl, J. G. Hentz, M. T. Lebec, and M. Cornwall. 2013. Reliability, Validity, and Sensitivity to Change of the Lower Extremity Functional Scale in Individuals Affected by Stroke. PM & R: The Journal of Injury, Function, and Rehabilitation 5 (12):1019-25.