Author manuscript; available in PMC: 2020 Sep 17.
Published in final edited form as: Arch Phys Med Rehabil. 2016 Nov 25;98(4):659–664.e1. doi: 10.1016/j.apmr.2016.10.019

Tele-Assessment of the Berg Balance Scale: Effects of Transmission Characteristics

Kavita Venkataraman a, Michelle Morgan b, Kristopher A Amis c, Lawrence R Landerman b, Gerald C Koh a, Kevin Caves d, Helen Hoenig c,e
PMCID: PMC7498033  NIHMSID: NIHMS1627696  PMID: 27894732

Abstract

Objective:

To compare Berg Balance Scale (BBS) ratings made from videos with differing transmission characteristics against direct in-person ratings.

Design:

Repeated-measures study for the assessment of the BBS in 8 configurations: in person; high-definition video with slow motion review; standard-definition videos with varying bandwidths and frame rates (768 kilobytes per second [kbps] videos at 8, 15, and 30 frames per second [fps]; 30 fps videos at 128, 384, and 768 kbps).

Setting:

Medical center.

Participants:

Patients with limitations (N=45) in ≥1 of 3 specific aspects of motor function: fine motor coordination, gross motor coordination, and gait and balance.

Interventions:

Not applicable.

Main Outcome Measures:

Ability to rate the BBS in person and using videos with differing bandwidths and frame rates in frontal and lateral views.

Results:

Compared with in-person rating (7%), 18% (P=.29) of high-definition videos and 37% (P=.03) of standard-definition videos could not be rated. Interrater reliability for the high-definition videos was .96 (95% confidence interval, .94–.97). Rating failure proportions increased from 20% in videos with the highest bandwidth to 60% (P<.001) in videos with the lowest bandwidth, with no significant differences in proportions across frame rate categories. Both frontal and lateral views were critical for successful rating using videos, with 60% to 70% (P<.001) of videos unable to be rated on a single view.

Conclusions:

Although there is some loss of information when using videos to rate the BBS compared to in-person ratings, it is feasible to reliably rate the BBS remotely in standard clinical spaces. However, optimal video rating requires frontal and lateral views for each assessment, high-definition video with high bandwidth, and the ability to carry out slow motion review.

Keywords: Postural balance, Rehabilitation, Telerehabilitation


The need for physical rehabilitation services is increasing rapidly across countries because of aging populations and the growing burden of chronic diseases.1 Workforce limitations currently constrain the ability of rehabilitation services to reach the wider pool of people who require them.2 In this context, the emergence of telerehabilitation offers a promising way to increase access to needed services with a relatively small pool of health professionals and therapists, especially after discharge to home and in high-risk groups.3-5 Modalities for providing remote rehabilitation include phone support, videoconferencing, and wearable devices, with or without face-to-face contact.

Unlike other telehealth services, telerehabilitation has unique needs that make its effective design challenging. Therapists typically need to accurately assess patient performance of specific tasks, which requires detailed observation of the patient and their interaction with the surroundings while engaged in the task.

Varying Internet connection speeds can influence the quality of video transmission and the ability to accurately assess patient performance, thereby affecting the utility of telerehabilitation. Research in telerehabilitation has tended to focus on high-bandwidth Internet connections for service delivery,6,7 or has used multimodal interventions to overcome limitations of Internet bandwidth.8,9 Such approaches limit the scope of telerehabilitation to communities and regions with high-bandwidth broadband connectivity or in relative proximity to health care facilities. A little less than half of the world's population had access to the Internet and Internet-based services in 2015, a 7-fold increase from 2000, with a significant proportion of this increase due to a 12-fold increase in the availability of low-speed 2G mobile broadband connections.10 Even where fixed-line broadband connections are available, broadband speeds vary massively, from 256 kbits/s (≈32 kilobytes per second [kbps]) to >10 Mbits/s (≈1250 kbps).10 Hence, telerehabilitation and the video technology used therein need to be tested and validated under low-bandwidth connectivity conditions to allow for the expansion and effective use of such services.
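The unit convention above is easy to trip over: this article uses kbps for kilobytes per second, so connection speeds quoted in kilobits per second are divided by 8. A minimal sanity check of the quoted conversions:

```python
def kbit_to_kbyte(kbit_per_s: float) -> float:
    """Convert kilobits per second to kilobytes per second (8 bits per byte)."""
    return kbit_per_s / 8

# The two endpoints quoted in the text
print(kbit_to_kbyte(256))     # 32.0 kilobytes/s, the low end of fixed-line speeds
print(kbit_to_kbyte(10_000))  # 1250.0 kilobytes/s for a 10 Mbit/s connection
```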

In low-bandwidth settings, standard compression algorithms used by Internet providers can reduce the resolution of the image in each frame and/or the number of frames transmitted per second to reduce the amount of data transferred. Evaluation of the patient-environment interaction and fine deviations may be compromised by low resolution, while lower frame rates could make detection of intermittent difficulties challenging. In addition, most telehealth settings are equipped with a single camera, which can limit the assessment of 3-dimensional movements. Indeed, our preliminary work indicates that the reliability and validity of both fine and gross motor movement measures are affected by Internet bandwidth.11
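The resolution-versus-frame-rate tradeoff described above can be made concrete with back-of-the-envelope arithmetic: at a fixed bandwidth, each added frame per second shrinks the data budget available to every individual frame. The figures below are illustrative arithmetic over the configurations studied here, not measured video quality:

```python
def kbytes_per_frame(bandwidth_kbyte_s: float, fps: int) -> float:
    """Approximate data budget per frame (kilobytes) at a given bandwidth and frame rate."""
    return bandwidth_kbyte_s / fps

# Fixed bandwidth, varying frame rate: fewer frames leave more detail per frame
for fps in (8, 15, 30):
    print(f"768 kbps at {fps:2d} fps -> {kbytes_per_frame(768, fps):.1f} KB/frame")

# Fixed frame rate, varying bandwidth: lower bandwidth starves every frame
for bw in (128, 384, 768):
    print(f"{bw} kbps at 30 fps -> {kbytes_per_frame(bw, 30):.1f} KB/frame")
```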

Validation of tele-assessment should prioritize tests that are criterion standard for specific assessments. We therefore chose to examine the Berg Balance Scale (BBS), which is one of the most common tests used to assess balance and postural stability in rehabilitation12-16 and is a core outcome measure recommended for the assessment of standing balance.17 It is a 14-item test, with each item covering a different movement task rated on the extent to which balance is maintained when performing the task. Although the BBS has excellent intra- and interrater reliability when administered face to face,18 few studies19,20 have evaluated the BBS when administered remotely, and no study has considered connectivity and camera characteristics in relation to remote assessment of the BBS. Unlike other timed balance tests such as the timed Up and Go test19,21 and the single leg stance test,6 the BBS measures movement quality and is therefore a little more complex to administer remotely. This increases the likelihood of erroneous test interpretation and variability in assessment due to issues such as field and angle of view and video and audio quality. Hence, we compared BBS ratings using videos with differing connectivity and video settings with direct in-person ratings, with the ultimate aim of establishing the validity of BBS assessment in real-life telerehabilitation situations.

Methods

The study was designed as a repeated-measures study to compare physical performance as measured under different technological configurations. During the enrollment period, veterans were screened from the occupational/physical therapy, neurology, and rheumatology clinics of the Durham VA Medical Center. We deliberately selected participants who were identified clinically as having limitations in ≥1 of 3 specific aspects of motor function: fine motor coordination, gross motor coordination, and gait and balance.

Study inclusion criteria were (1) use of a cane/orthosis/prosthesis for ambulation ≥1 time/wk and (2) a score of ≥6 out of 10 on the Short Portable Mental Status Questionnaire.22 We excluded participants who were (1) unable to ambulate ≥10ft (≥3m) without human help; (2) unable to rise from sit to stand without human help; (3) unable to stand for ≥10 seconds without human help; or (4) unable to provide informed consent. Of the 360 veterans screened, 76 (21%) met the eligibility criteria and agreed to participate, and 45 (59%) with complete data sets were included in this analysis (fig 1). Ethics review and approval was provided by the institutional review board at the Durham VA Medical Center.

Fig 1.

Fig 1

Study flow diagram.

Rater selection and training

Raters were 8 physical therapists, 4 occupational therapists, 1 physical therapist assistant, and 1 experienced rehabilitation research assistant in the Physical Medicine & Rehabilitation Service of the Durham VA Medical Center.

Formal training was provided to ensure consistency in rating. Each rater was provided with a telerater training guide with the following instructions: (1) watch a training video to familiarize oneself with the 14 items of the BBS; (2) review definitions of the BBS’s rating system (ie, independent, moderately independent, supervision, minimal assistance, moderate assistance, maximal assistance, and total assistance); (3) review the BBS scoresheet and familiarize oneself with its formatting; and (4) start interactive training.

The interactive training consisted of (1) watching a demonstration subject in both frontal and lateral views to become familiar with subject videos; (2) assessing the balance of 2 practice participants as many times as needed in preparation for a final test; and (3) completing the final test. Only 1-time viewing was permitted for the final test, and the video could be paused only between views (not during the task itself). Two raters were deemed expert raters a priori. The others were deemed certified raters if their final test score fell within an acceptable range centered on the mean score of the 2 expert raters (the range was calculated as this mean plus or minus the square root of the known interrater reliability times the squared SD, yielding an acceptable score of 39–51 for the final test patient).
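The certification band described above can be sketched as follows. Only the resulting 39–51 band is reported in the text; the expert mean, SD, and reliability values below are illustrative assumptions chosen to reproduce it:

```python
import math

def certification_range(expert_mean: float, sd: float, reliability: float):
    """Acceptable band: expert mean +/- sqrt(reliability * SD^2) = SD * sqrt(reliability)."""
    half_width = math.sqrt(reliability * sd ** 2)
    return expert_mean - half_width, expert_mean + half_width

# Hypothetical inputs (not reported in the study) that yield the published 39-51 band
low, high = certification_range(expert_mean=45.0, sd=6.2, reliability=0.94)
print(round(low), round(high))  # 39 51 under these assumed inputs
```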

Video recording and editing

Participants were instructed by the research assistant to perform each task of the BBS assessment. Video was captured both frontally and laterally, with participants positioned 12ft away from the camera to capture their entire body. Distance from the subject was based on an approximate height of 5ft 11in and an HDC-SD40K cameraᵃ set at maximum zoom out atop a standard tripod of 42.6-in height. Video recording was conducted in the department in designated clinical spaces of varying sizes (110–441sq ft).

Videos were downloaded and copied onto the VA Network Server and onto a DVD. Irrelevant footage (eg, demonstration of tests of participants or setup for individual tasks) was deleted using Final Cut Pro software on an iMac desktop.ᵇ This software was also used to reorder the sequence of the tests such that every video rated by a specific rater was in the same sequence. Compressor softwareᵇ was used to manipulate the number of frames per second (fps) and/or bandwidths according to the configurations described below. The size and aspect ratio of the video were the same under all conditions.

The following 8 configurations were used to assess participant performance on the BBS:

  1. In person
  2. High-definition video with an optional slow motion review
  3–5. Fixed bandwidth (768 kbps) standard-definition video with variable frame rates of 8, 15, and 30 fps
  6–8. Fixed frame rate (30 fps) standard-definition video with variable bandwidths of 128, 384, and 768 kbps

Each configuration, except in person, was rated by 2 raters, with raters randomized across participants and conditions.

The 6 configurations under items 3 to 8 were to be viewed only at real-time speed.

Rating protocol

Raters were asked to rate all items of the BBS under the conditions listed above by using the following protocol:

  1. View the video a single time without pausing between view 1 and view 2 unless a rater was identified a priori as a “slow motion” rater

  2. Rate the video alone and without input from another person

If the audio portion of the video recording was poor (eg, noisy environment) or muted during processing, the rater was informed whether verbal cueing was provided for specific items.

Video recordings were rated separately in the frontal and lateral views (with the order randomized across raters); the rater then provided a combined rating based on observations from both views.

Statistical methods

To examine differences in the proportion unable to rate a given video, we presented the proportion unable to rate by treatment group (in-person, high-definition video, standard-definition video) and by camera characteristics (view, bandwidth, frame rate). To test for significant differences across levels of our predictor on a dichotomous outcome (unable to rate vs able to rate), we used SAS PROC GLIMMIX (SAS/STAT software, version 9.4ᶜ) with a logistic link to estimate generalized linear mixed models.23 As ratings are clustered within 14 raters, each rater was treated as a random component in a model of the following form: Y = α + β1X1 + γZ + e, where Y is a dichotomy, X1 is a classification factor (typically 3 categories), γ is a random component, Z is a categorical variable representing rater, and e is an error term. This model provides an overall F test for any significant differences across categories as well as tests for substantively relevant contrasts. Next, we used SAS PROC VARCOMP to estimate reliability using the methods described by Streiner and Norman,24 treating rater as a random component. Seven and a half percent of observations were missing. Differences in the proportion missing across raters and camera characteristics were small and nonsignificant. We therefore used listwise deletion in our analyses.
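For readers without SAS, the variance-components reliability can be sketched as a two-way random-effects intraclass correlation in the spirit of Streiner and Norman, the quantity PROC VARCOMP is used to estimate. The ratings below are simulated purely for illustration, not study data:

```python
import numpy as np

# Simulated subject x rater score matrix: large between-subject variance,
# small rater/error variance (values are illustrative, not from the study)
rng = np.random.default_rng(1)
n_subjects, n_raters = 45, 2
true_score = rng.normal(45, 8, size=(n_subjects, 1))
ratings = true_score + rng.normal(0, 1.5, size=(n_subjects, n_raters))

# Two-way ANOVA mean squares: subjects (rows), raters (columns), residual
grand = ratings.mean()
msr = n_raters * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n_subjects - 1)
msc = n_subjects * ((ratings.mean(axis=0) - grand) ** 2).sum() / (n_raters - 1)
sse = ((ratings - ratings.mean(axis=1, keepdims=True)
               - ratings.mean(axis=0, keepdims=True) + grand) ** 2).sum()
mse = sse / ((n_subjects - 1) * (n_raters - 1))

# ICC(2,1): subject variance relative to subject + rater + error variance
icc = (msr - mse) / (msr + (n_raters - 1) * mse
                     + n_raters * (msc - mse) / n_subjects)
print(f"ICC = {icc:.2f}")
```

With between-subject spread dominating rater noise, the estimate lands near the high reliability reported for the high-definition videos.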

Results

The mean age of the participants included in this analysis was 61.7±11.3 years (table 1). Participants were overwhelmingly men, with a slightly higher proportion of whites than African Americans. Most had multiple medical conditions.

Table 1.

Subject characteristics (N=45)

| Characteristic | Value |
|---|---|
| Age (y) | 61.7±11.3 |
| Race | |
| White | 26 (57.8) |
| African American | 19 (42.2) |
| Sex | |
| Male | 36 (80) |
| Female | 9 (20) |
| No. of conditions | 3.2±1.9 |
| Total no. of diagnoses ≥3 | 27 (60) |
| Cardiac diagnosis | 2 (4.4) |
| Neurologic diagnosis | 9 (20) |
| Arthritis diagnosis | 30 (66.6) |
| Diabetes diagnosis | 1 (2.2) |
| Cancer diagnosis | 3 (6.8) |

NOTE. Values are mean ± SD or n (%).

Table 2 presents the proportions of tests in each treatment group (in-person rater vs high-definition [with optional slow motion] video vs standard-definition video) that raters were unable to rate. In multilevel regression analysis, the proportion unable to rate was nonsignificantly higher in the high-definition group (18%; P=.29) and significantly higher in the standard-definition video group (37%; P=.03) as compared with in-person rating (7%). Proportions on the individual items suggest that these overall differences were largely due to 5 items: transfer, stand with eyes closed, stand with feet together, reach forward, and place alternate feet on stool. When a subscale composed of all BBS items except these items was regressed on treatment group, there were no significant differences, with proportion unable to rate reduced to 6% in the high-definition group and 12% in the standard-definition video group.

Table 2.

Proportion of all ratings scored as “unable to rate” by treatment group

| Individual BBS Item | In-Person (n=45) | High-Definition Video (With Optional Slow Motion Review) (n=91) | Standard-Definition Video (Low Frame Rate and/or Bandwidth) (n=455) |
|---|---|---|---|
| Total score (>1 subtest "unable to rate") | 7% | 18% (P=.29)* | 37% (P=.03)* |
| Sit to stand | 0% | 0% | 0% |
| Stand unsupported | 2% | 1% | 3% |
| Sit unsupported | 0% | 0% | 1% |
| Stand to sit | 0% | 0% | 1% |
| Transfer | 0% | 9% | 5% |
| Stand eyes closed | 0% | 2% | 13% |
| Stand feet together | 4% | 3% | 4% |
| Reach forward | 0% | 2% | 14% |
| Pick up object | 0% | 1% | 2% |
| Look over shoulders | 0% | 1% | 2% |
| Turn 360° | 0% | 0% | 1% |
| Place alternate feet on stool | 0% | 3% | 4% |
| Tandem stance | 2% | 0% | 3% |
| Stand on 1 leg | 2% | 1% | 2% |
| Berg frontal view | 7% | 62% (P<.001) | 73% (P<.001) |
| Berg lateral view | 7% | 60% (P<.01) | 71% (P<.001) |
* P from generalized linear mixed models with in-person rating as the reference category.

When only 1 view was allowed, there was a marked increase in the proportion unable to rate in both the video groups and no change in the in-person tests. An examination of the individual items indicated that 4 items (transfer, stand with eyes closed, stand with feet together, reach forward) accounted for most of the inability to rate (supplemental table S1, available online only at http://www.archives-pmr.org/).

We next examined the effect of bandwidth and frame rate on the inability to rate among those viewing the standard-definition video (table 3). Only 20% of the videos could not be rated at maximal bandwidth, increasing to 60% at the lowest bandwidth. An examination of the individual items indicated that decreases in bandwidth increased the inability to rate on transfer, stand with eyes closed, and reach forward. However, changes in frame rate resulted in nonsignificant changes in proportions across frame rate categories.

Table 3.

Proportion of all ratings scored as “unable to rate” by bandwidth and frame rate

| Varied Bandwidth With Frame Rate 30fps | Bandwidth 768kbps (n=81) | Bandwidth 384kbps (n=83) | Bandwidth 128kbps (n=85) |
|---|---|---|---|
| Unable to rate | 20% | 51% (P<.001)* | 60% (P<.001)* |

| Varied Frame Rate With Bandwidth 768kbps | Frame Rate 30fps (n=81) | Frame Rate 15fps (n=86) | Frame Rate 8fps (n=81) |
|---|---|---|---|
| Unable to rate | 20% | 31% (P=.11)* | 20% (P=.91)* |

* P from generalized linear mixed models with maximal bandwidth/frame rate as the reference category.

Based on 69 high-definition videos with optional slow motion review observed, interrater reliability was .96 (95% confidence interval, .94–.97).

Discussion

We found that using videos to rate the BBS led to some loss of information compared to in-person ratings, irrespective of the video quality and characteristics. This loss was minimal when high-definition videos with slow motion review were used, but substantial when using low-bandwidth/frame rate videos. Bandwidth was more critical for successful rating compared to frame rate, with only 40% of videos successfully rated at the lowest bandwidth. Notably, across all transmission modes, successful rating was less likely with a single view (ie, frontal or lateral) compared to ratings based on both a frontal and a lateral view. Indeed, more than half of the participants could not be rated with a single view.

Only 1 previous study has compared remote versus in-person rating of the full BBS. Cabana et al20 found some degree of agreement between the 2 settings for the BBS in patients with total knee arthroplasty, with a Krippendorff α of .76. However, these results were obtained using the best possible Internet connectivity and high-end videoconferencing infrastructure, including pan-tilt-zoom cameras.20 The interobserver agreement between the 2 high-definition video raters in our study was also high. Therefore, our results underscore the importance of video capture and transmission characteristics.

Our findings highlight the importance of having both frontal and lateral views to rate participants remotely. This poses challenges when the telehealth setup available is a single camera with a fixed view, as participants will need to perform the tasks twice to ensure both views are captured.

Four individual BBS items were mainly responsible for the failure rate with videos, namely, transfer, stand with eyes closed, stand with feet together, and reach forward. Transferring from one seat to another is a complex movement, and successfully rating this item requires assessing the extent of hand use and any verbal cueing or supervision. Missing audio in some videos and the angle of views may have affected raters' ability to assess this item on video. Thus, remote raters may need to request additional views and/or ensure optimal audio communication to enhance rating accuracy.

The high failure rates for eyes closed, feet together, and reach forward were due to the distance at which the cameras were placed relative to the participant to obtain a whole-body view. Finer details, such as whether eyes were closed or feet were in the correct position, could not be evaluated at this distance. In addition, comments from raters and review of videos showed that light reflecting off the ruler made it difficult for raters to assess functional reach distance. Possible solutions for these issues include having an onsite assistant who can verify these details, using nonglare rulers with large fonts, and using back-end software to enhance clarity and allow zooming in of videos to examine these details. Palsbo et al25 tested the agreement between in-person rating and video rating of functional reach using an enlarged paper yardstick that the remote assessor could easily see and found no significant differences between in-person and video-rated results. Similarly, no significant differences were found between remote and in-person ratings of the BBS functional reach test in a study of volunteers without balance problems with simulated difficulties by Durfee et al,19 where the remote assessor could pan and zoom the camera to view the readings.

The BBS has been found to have excellent intra- and interrater reliability in various settings and populations.13,18,26-33 However, reliability statistics for individual items suggest that interrater reliability for some items may be lower than for others. Conradsson et al27 reported low agreement between raters for standing on 1 leg, turning behind/looking over shoulders, and standing with eyes closed, whereas de Figueiredo et al33 reported low agreement for turning behind/looking over shoulders and retrieving objects from floor. Our study adds to the literature by highlighting particular items that may be challenging to assess remotely and the particular real-world circumstances that may affect remote assessment.

Study strengths

The key strength of our study is the assessment of real-world settings, in terms of clinical space, video views, and transmission characteristics, and their effect on the ability to rate a criterion standard balance assessment tool—the BBS.

Study limitations

Limitations include the inability to compare the interrater reliability of video assessment with in-person rating under different conditions, because of the high missing value rates on several items in the BBS. Although we identified a way of optimizing remote assessment (ie, using high-definition videos with slow motion review and frontal and lateral views), this study did not examine diverse methods (eg, use of Kinect technology34) to compensate for the suboptimal circumstances (ie, low bandwidth or frame rate, single view, etc). In addition, to ensure internal validity, the in-person rater was restricted to the same views and distance from the patient as were the video cameras. We also required that items be tested in a specific order without exception and without repetition. Raters were not allowed to use “best judgment” if unable to assess a specific item well enough to rate it. These conditions do not exist in standard clinical practice. Nevertheless, the restrictions due to video technology are the same when used in clinical situations, making our findings relevant to clinical care using video technology.

Conclusions

The present study found that it is feasible to reliably rate the BBS remotely in standard clinical spaces, but that optimal video rating requires frontal and lateral views for each assessment, high-definition video with high bandwidth, and the ability to carry out slow motion review.

Supplementary Material

Supplemental Table S1. Proportions “unable to rate” by video view—individual BBS items

Acknowledgments

We thank Carol Edmonds, BA, and Robert Kerns, BA, for their assistance with video editing.

Supported by Veterans Health Administration Rehabilitation Research and Development Service (grant no. RR&D F0900-R) and the Duke Older Americans Independence Center (grant no. NIA AG028716). Funders had no role in study design; in the collection, analysis, and interpretation of data; in the writing of the report; and in the decision to submit the manuscript for publication.

List of abbreviations:

BBS: Berg Balance Scale
fps: frames per second
kbps: kilobytes per second

Footnotes

Suppliers

a. HDC-SD40K; Panasonic.
b. Final Cut Pro, iMac, and Compressor; Apple.
c. SAS/STAT version 9.4; SAS Institute Inc.

Disclosures: none.

References

  1. World Health Organization. World report on disability. Geneva: World Health Organization; 2011.
  2. Gupta N, Castillo-Laborde C, Landry MD. Health-related rehabilitation services: assessing the global supply of and need for human resources. BMC Health Serv Res 2011;11:276.
  3. Laver KE, Schoene D, Crotty M, George S, Lannin NA, Sherrington C. Telerehabilitation services for stroke. Cochrane Database Syst Rev 2013;(12):CD010255.
  4. Khan F, Amatya B, Kesselring J, Galea M. Telerehabilitation for persons with multiple sclerosis. Cochrane Database Syst Rev 2015;(4):CD010508.
  5. Rogante M, Kairy D, Giacomozzi C, Grigioni M. A quality assessment of systematic reviews on telerehabilitation: what does the evidence tell us? Ann Ist Super Sanita 2015;51:11–8.
  6. Russell TG, Hoffmann TC, Nelson M, Thompson L, Vincent A. Internet-based physical assessment of people with Parkinson disease is accurate and reliable: a pilot study. J Rehabil Res Dev 2013;50:643.
  7. Schein RM, Schmeler MR, Holm MB, Pramuka M, Saptono A, Brienza DM. Telerehabilitation assessment using the Functioning Everyday with a Wheelchair-Capacity instrument. J Rehabil Res Dev 2011;48:115–24.
  8. Chumbler NR, Quigley P, Li X, et al. Effects of telerehabilitation on physical function and disability for stroke patients: a randomized, controlled trial. Stroke 2012;43:2168–74.
  9. Russell TG, Buttrum P, Wootton R, Jull GA. Internet-based outpatient telerehabilitation for patients following total knee arthroplasty: a randomized controlled trial. J Bone Joint Surg Am 2011;93:113–20.
  10. International Telecommunication Union. ICT facts and figures 2015. Geneva: International Telecommunication Union; 2015.
  11. Hoenig H, Tate L, Dumbleton S, et al. A quality assurance study on the accuracy of measuring physical function under current conditions for use of clinical video telehealth. Arch Phys Med Rehabil 2013;94:998–1002.
  12. Yelnik A, Bonan I. Clinical tools for assessing balance disorders. Neurophysiol Clin 2008;38:439–45.
  13. Blum L, Korner-Bitensky N. Usefulness of the Berg Balance Scale in stroke rehabilitation: a systematic review. Phys Ther 2008;88:559–66.
  14. Neuls PD, Clark TL, Van Heuklon NC, et al. Usefulness of the Berg Balance Scale to predict falls in the elderly. J Geriatr Phys Ther 2011;34(1):3–10.
  15. Brotherton SS, Williams HG, Gossard JL, Hussey JR, McClenaghan BA, Eleazer P. Are measures employed in the assessment of balance useful for detecting differences among groups that vary by age and disease state? J Geriatr Phys Ther 2005;28:14–9.
  16. Tan D, Danoudis M, McGinley J, Morris ME. Relationships between motor aspects of gait impairments and activity limitations in people with Parkinson's disease: a systematic review. Parkinsonism Relat Disord 2012;18:117–24.
  17. Sibley KM, Howe T, Lamb SE, et al. Recommendations for a core outcome set for measuring standing balance in adult populations: a consensus-based approach. PLoS One 2015;10:e0120568.
  18. Downs S, Marquez J, Chiarelli P. The Berg Balance Scale has high intra- and inter-rater reliability but absolute reliability varies across the scale: a systematic review. J Physiother 2013;59:93–9.
  19. Durfee WK, Savard L, Weinstein S. Technical feasibility of tele-assessments for rehabilitation. IEEE Trans Neural Syst Rehabil Eng 2007;15:23–9.
  20. Cabana F, Boissy P, Tousignant M, Moffet H, Corriveau H, Dumais R. Interrater agreement between telerehabilitation and face-to-face clinical outcome measurements for total knee arthroplasty. Telemed J E Health 2010;16:293–8.
  21. Sprint G, Cook DJ, Weeks DL. Toward automating clinical assessments: a survey of the timed Up and Go. IEEE Rev Biomed Eng 2015;8:64–77.
  22. Fillenbaum GG. Comparison of two brief tests of organic brain impairment, the MSQ and the short portable MSQ. J Am Geriatr Soc 1980;28:381–4.
  23. McCullagh P, Nelder JA. Generalized linear models. 2nd ed. London: Chapman & Hall/CRC Pr; 1989.
  24. Streiner D, Norman G. Health measurement scales: a practical guide to their development and use. 3rd ed. Oxford: Oxford Univ Pr; 2003.
  25. Palsbo SE, Dawson SJ, Savard L, Goldstein M, Heuser A. Televideo assessment using Functional Reach Test and European Stroke Scale. J Rehabil Res Dev 2007;44:659–64.
  26. Newstead AH, Hinman MR, Tomberlin JA. Reliability of the Berg Balance Scale and balance master limits of stability tests for individuals with brain injury. J Neurol Phys Ther 2005;29:18–23.
  27. Conradsson M, Lundin-Olsson L, Lindelof N, et al. Berg Balance Scale: intrarater test-retest reliability among older people dependent in activities of daily living and living in residential care facilities. Phys Ther 2007;87:1155–63.
  28. Wirz M, Muller R, Bastiaenen C. Falls in persons with spinal cord injury: validity and reliability of the Berg Balance Scale. Neurorehabil Neural Repair 2010;24:70–7.
  29. Major MJ, Fatone S, Roth EJ. Validity and reliability of the Berg Balance Scale for community-dwelling persons with lower-limb amputation. Arch Phys Med Rehabil 2013;94:2194–202.
  30. Toomey E, Coote S. Between-rater reliability of the 6-minute walk test, Berg Balance Scale, and handheld dynamometry in people with multiple sclerosis. Int J MS Care 2013;15:1–6.
  31. Muir-Hunter SW, Graham L, Montero Odasso M. Reliability of the Berg Balance Scale as a clinical measure of balance in community-dwelling older adults with mild to moderate Alzheimer disease: a pilot study. Physiother Can 2015;67:255–62.
  32. Telenius EW, Engedal K, Bergland A. Inter-rater reliability of the Berg Balance Scale, 30 s chair stand test and 6 m walking test, and construct validity of the Berg Balance Scale in nursing home residents with mild-to-moderate dementia. BMJ Open 2015;5:e008321.
  33. de Figueiredo KM, de Lima KC, Cavalcanti Maciel AC, Guerra RO. Interobserver reproducibility of the Berg Balance Scale by novice and experienced physiotherapists. Physiother Theor Pract 2009;25:30–6.
  34. Tan KK, Narayanan AS, Koh CH, Caves K, Hoenig H. Extraction of spatial information for low-bandwidth telerehabilitation applications. J Rehabil Res Dev 2014;51:825–40.
