Journal of Visualized Experiments (JoVE)
2014 Feb 11;(84):51205. doi: 10.3791/51205

A Standardized Obstacle Course for Assessment of Visual Function in Ultra Low Vision and Artificial Vision

Amy Catherine Nau 1, Christine Pintar 1, Christopher Fisher 1, Jong-Hyeon Jeong 2, KwonHo Jeong 2
PMCID: PMC4122199  PMID: 24561717

Abstract

We describe an indoor, portable, standardized course that can be used to evaluate obstacle avoidance in persons who have ultralow vision. Forty-two adult subjects (6 sighted controls and 36 completely blind but otherwise healthy adults; 29 male, 13 female; age range 19-85 years) were enrolled in one of three studies involving testing of the BrainPort sensory substitution device. Subjects were asked to navigate the course prior to, and after, BrainPort training. They completed a total of 837 course runs in two different locations. Means and standard deviations were calculated across control types, courses, lights, and visits. We used a linear mixed effects model to compare categories in the percent preferred walking speed (PPWS) and percent error data and to show that the course iterations were properly designed. The course is relatively inexpensive, simple to administer, and has been shown to be a feasible way to test mobility function. Data analysis demonstrates that, for the outcomes of percent error and percentage preferred walking speed, each of the three course levels is distinct, while within each level the three iterations are equivalent. This allows for randomization of the courses during administration.

Abbreviations: PWS, preferred walking speed; CS, course speed; PPWS, percentage preferred walking speed

Keywords: Medicine, Issue 84, Obstacle course, navigation assessment, BrainPort, wayfinding, low vision



Introduction

Low vision rehabilitation assessments must determine whether an intervention results in improved function. Performance metrics typically involve computer-based reading or functional assessments1-9 as well as quality of life questionnaires10-15. Assessing the low vision patient's ability to navigate around obstacles might also provide clues to functional improvement18, particularly in the case of artificial vision devices. Geruschat et al. recently published navigation outcomes with a retinal implant chip, highlighting the need for a standard metric in this area17. Currently there are no widely accepted, objective, validated, and comprehensive standards for determining capacity for obstacle avoidance.

Development of a functional test that correlates with navigation performance for persons with low vision, or the "ultra low vision" produced by artificial vision, would be desirable but has remained an elusive goal. The burgeoning field of artificial vision devices, such as retinal implant chips18-24 or sensory substitution devices such as the BrainPort25 and The vOICe26, necessitates a test of obstacle avoidance that might correlate with the increased navigational abilities conferred by these devices. Such an assessment would not only allow subjects to understand their own limitations as they traverse their surroundings, but might also provide a means for measuring improvement with orientation and mobility training or between iterations of vision enhancement prototype devices. Ideally, it might also allow some assessment of an individual's risk of fall accidents27.

Our goal was to create an obstacle course that would be useful for evaluating navigation ability in patients using artificial vision devices and transferable to the field of low vision in general. A review of the published literature on obstacle courses and visual impairment was undertaken using the PubMed database. There have been numerous attempts at creating standardized obstacle courses16,17,28-31,34. Most of these are not portable in the sense that it would be difficult to reproduce the setting exactly, particularly for outdoor courses. Maguire et al. describe an obstacle course used to show mobility performance in patients with Leber's congenital amaurosis. This course has the benefit of being portable and small, but it is not clear whether different iterations have been made available to prevent memorization effects, nor are there any provisions for obstacles that are not on the floor, texture changes, or stepovers. Leat provides an incisive description of potential pitfalls in designing a course and puts forth a description of an outdoor course that unfortunately could not be reproduced exactly in an alternative location30. Velikay-Parel et al. described a mobility test for use with retinal implant chips. This design had the benefit of being portable and simple to execute. While this course could be reproduced at an alternative site, no specific details on course construction are provided. More concerning, the authors showed that the learning effect reached asymptotic levels due to course familiarity; preventing course memorization altogether might therefore eliminate the concern for loss of learning effect over time18. None of the courses described so far have been widely adopted by the low-vision or rehabilitation communities.

The authors subsequently consulted with a team of six low vision occupational therapists and orientation and mobility specialists from the Western Pennsylvania School for Blind Children (Pittsburgh, PA) and Blind and Vision Rehabilitation Services of Pittsburgh (Homestead, PA) regarding the proposed course design. Desirable attributes of a functional obstacle course included: portability for easy assembly/disassembly and storage; flexibility to test under both dim and bright lighting conditions; and mirroring of "real life" situations by including obstacles that represent objects in a patient's home environment, sturdy enough to withstand repeated collision yet ductile enough to prevent patient injury. In addition, it was deemed necessary to have several types of environments designed in such a way that administering them in randomized order prevents course memorization. The course should also demonstrate reproducible results in multiple settings, have strong inter- and intra-rater reliability, and be an objective measure of spatial awareness.

The culmination of this effort was the development of an obstacle course that could reasonably be expected to be reproduced in a standard institutional hallway. The course is designed to test different skill sets, all important for navigation. Each level of the course focuses on several particular types of obstacles encountered in everyday navigation activities. The first course evaluates the ability to navigate through relatively high contrast targets that are all placed on the floor but require a large number of turns. The second course evaluates the ability to navigate through obstacles that are both high and low contrast, floor texture changes, and objects suspended in air. The final course evaluates the ability to navigate Styrofoam obstacles that are low contrast, surface glare changes on the floor, the addition of non-Styrofoam obstacles (fabric), floor tile color changes, obstacles that must be stepped over, and obstacles that are not on the floor. The courses are labeled 1, 2, and 3 for ease of reference, but this designation should not be construed as an increasing level of difficulty. Within each level, there are three versions of the course, which can be randomized to prevent course memorization.

Protocol

1. Course Construction

  1. Install course floor. Course dimensions are 40 ft long by 7 ft wide consisting of 280 1 ft2 portable floor tiles (beige event floor tiles). Place with black trim around perimeter only (Figure 1).

  2. Paint the adjacent walls to match the floor tiles creating a somewhat monochromatic environment. Colors we used with greyscale values are provided (Table 5). If the specific colors are not available, we recommend taking a tile to a hardware store for color matching.

  3. Install lighting according to the lighting template (Figure 1). Connect lights to dimmer switch.

  4. Paint obstacles according to painting instructions (Figure 2).

2. Prepare Testing Area

  1. Adjust lighting to desired condition and check with light meter at the beginning, middle, and end of the hallway containing the course.

  2. Make sure that the video camera is set to record and that placement of the camera is appropriate to capture the subjects as they walk through the course. We recommend a ceiling mount; alternatively, the camera can be handheld.

3. Record Preferred Walking Speed (PWS)

  1. Position subject in center of walkway (course column "D"). Note: Toes should be behind the border of the walkway. Read instructions to the subject (Figure 3).

  2. Begin stopwatch once foot crosses black border and onto pathway. Stop time once foot crosses black border at other end of walkway. Record time as PWS1. Turn subject around and repeat procedure in opposite direction. Record time as PWS2. Average PWS1 and PWS2 and record as final PWS.
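The averaging in the step above can be sketched as a small helper. This is illustrative only; the function name and the use of seconds are our own assumptions, not part of the published protocol:

```python
def preferred_walking_speed(pws1_sec: float, pws2_sec: float) -> float:
    """Final PWS is the mean of the two timed runs, one in each direction."""
    return (pws1_sec + pws2_sec) / 2.0

# Example: a subject who takes 30 s one way and 34 s back has a PWS of 32 s.
print(preferred_walking_speed(30.0, 34.0))  # → 32.0
```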

4. Obstacle Course Navigation

  1. From the randomization scheme, set up the first course (Figure 4). Floor tiles should be used as the grid upon which the obstacles are mapped. Refer to the provided diagram for correct mapping of the obstacles. It is helpful to number the tiles along the vertical and horizontal axes with an indelible marker to permit easy placement of the obstacles. It is also helpful to label the obstacles according to the provided diagrams in an inconspicuous location.

  2. Guide subject to start of 40 ft walkway. Read instructions to the subject (Figure 3). Subject should be positioned in center of walkway (column "D") with toes behind border. Begin stopwatch once foot crosses black border and onto pathway. Stop time once foot crosses black border at other end of walkway. Record this time as Course Speed (CS).

  3. Record when obstacles are hit, grading the severity of the hit on a 3 point scale. The course run should be videotaped for later confirmation by an independent observer.

5. Obstacle Identification

  1. Upon completion of the course navigation task, turn the subject around to face the course, positioned at the center of the course (column "D"). Note: Make sure that any obstacles that require repositioning in order to show the correct color from the end of the course are rotated. Read instructions to the subject (Figure 3). At this time the first object identification task is administered: ask the subject to tell the research assistant the total number of objects they can discern within 30 sec. Record this number.

  2. Tell the subject to walk back through the course and point to each obstacle they can see. It does not matter if they collide with the obstacle. The number of obstacles they can see is recorded. It is helpful to record which obstacles they are able to detect. This is not timed.

Items 4 and 5 should be repeated for each course version that is run.
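A randomization scheme such as the one referenced in step 4.1 might be generated as follows. This is a minimal sketch under our own assumptions; the study's actual scheme is not specified in this document, and the full 3 levels × 3 versions × 2 lighting conditions design (18 runs per subject) is shown:

```python
import random

def randomize_runs(seed=None):
    """Return one subject's 18 runs (level, version, lighting) in random
    order, to prevent course memorization."""
    rng = random.Random(seed)  # seedable for a reproducible schedule
    runs = [(level, version, light)
            for level in (1, 2, 3)
            for version in ("a", "b", "c")
            for light in ("dim", "bright")]
    rng.shuffle(runs)
    return runs

schedule = randomize_runs(seed=1)
print(len(schedule))  # 18 runs per subject
```

Seeding per subject makes each schedule auditable after the fact while still varying order across subjects.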

Representative Results

Subjects

Six blindfolded sighted controls and 36 completely blind but otherwise healthy adult subjects (age range 19-85 years; 29 male, 13 female) were enrolled in one of three studies involving testing of the BrainPort sensory substitution device (Wicab, Madison, WI). All studies were approved by the University of Pittsburgh IRB, and all subjects signed an approved informed consent document. All studies used a within-subjects, repeated-measures design such that each subject acted as their own control. Sighted subjects were blindfolded to simulate a newly blind condition for all testing procedures. For subjects with blindness, visual acuity of light perception or worse was confirmed with the BaLM light perception test, a FrACT Snellen score of < 2/5,000, and an eye exam prior to enrollment. All subjects completed the entire obstacle course at baseline and then again after a 15-20 hr structured training protocol with the BrainPort device. This protocol is designed to confer basic proficiency with the device and includes approximately 2 hr of ambulation/mobility training within the office environment (locating doors, windows, chairs, etc.). The primary outcome for course navigation is percentage preferred walking speed (PPWS), a gold standard for mobility research, calculated by dividing CS by PWS (see instructions). Our secondary outcome is percent error, defined as the percentage of possible collisions with obstacles on the course.
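The two outcomes can be sketched directly from their definitions in the text (PPWS as CS divided by PWS; percent error as collisions over possible collisions, with 10 obstacles placed per course iteration). Variable names are our own:

```python
def ppws(course_speed: float, pws: float) -> float:
    """Percentage preferred walking speed: course speed (CS) divided by
    preferred walking speed (PWS), expressed as a percentage."""
    return 100.0 * course_speed / pws

def percent_error(collisions: int, possible: int = 10) -> float:
    """Percent error: collisions as a percentage of possible obstacle
    contacts (10 obstacles are used in any given course iteration)."""
    return 100.0 * collisions / possible

print(percent_error(3))  # → 30.0
```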

The first 16 subjects were sent through all 9 course iterations in both bright and dim lighting situations for a total of 18 runs through the obstacle course per subject. Means and standard deviations were calculated across control types, courses, lights, and visits. To adjust for random effects among repeated measurements in each nested cluster within a subject, the linear mixed effects model was used to compare different categories in the PPWS and percent error data. The nested clustering was in the order of subject identification number, visit (pretraining and post training), light (dim and bright), and course level (1, 2, and 3). A preliminary analysis of the first 16 subjects showed that there were no statistically significant differences between the individual versions of the course within each level of difficulty. Therefore, in order to minimize subject burden, the remaining subjects were randomized to one version of course 1, 2, and 3 in dim light and another version of course 1, 2, and 3 in bright light. This reduced the time to complete the course from 3 hr to just less than 1 hr. Both the order of the courses and lighting conditions were randomized to prevent the potentially negative effects of waning concentration and/or fatigue.
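The descriptive step above (means and standard deviations across courses, lights, and visits) can be sketched as follows. The record layout is our assumption, and this covers only the summary statistics; the mixed-effects comparison itself was run in Stata:

```python
from collections import defaultdict
from statistics import mean, stdev

def summarize(records):
    """Group PPWS values by (course, light, visit) and return (mean, SD)
    per cell; SD is 0.0 for singleton cells."""
    groups = defaultdict(list)
    for r in records:
        groups[(r["course"], r["light"], r["visit"])].append(r["ppws"])
    return {k: (mean(v), stdev(v) if len(v) > 1 else 0.0)
            for k, v in groups.items()}

demo = [
    {"course": 1, "light": "dim", "visit": "pre", "ppws": 2.0},
    {"course": 1, "light": "dim", "visit": "pre", "ppws": 4.0},
]
print(summarize(demo)[(1, "dim", "pre")])  # mean 3.0 with its SD
```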

Data for all subjects are presented in Table 1 for PPWS and Table 2 for percent error. The data are arranged as follows in each table: All (pre- and post-training data combined), Pretraining (no BrainPort), and Post-training (with BrainPort), respectively. Note that for pretraining values, subjects are without vision, which tended to result in larger standard errors for this condition. All reported p-values are two-sided, and statistical analyses were done using Stata/IC 12.1. We found that for the outcomes of PPWS and percent error, the three course levels (1, 2, and 3) were not equal. We also found that the levels of the nine subcourses were not equal. Our results showed that the three subcourse iterations (a, b, and c) for level 1 were equal, as were the three subcourse iterations (a, b, and c) for level 3, for all conditions. However, for level 2, the subcourses were shown to be equal when using the BrainPort, but not at baseline (no BrainPort/pretraining condition), which affected results for the combined condition.

Figure 6 is a histogram showing our results for PPWS, which demonstrates that subjects using the BrainPort walked more slowly than without it (PPWS 1.90% for the no-BrainPort condition vs. 3.92% for the BrainPort condition, p=0.001). Review of the video recordings was particularly helpful in explaining this result. When the blind subjects walk through the course at baseline, they walk at their normal pace but hit anything in their path, as they have no means of detecting obstacles. Subjects using the device, by contrast, engaged in visual scanning, a behavior that was absent without the BrainPort and is reflected in an increase in PPWS values (see video).

Figure 7 shows our baseline versus BrainPort condition results for the outcome of percent possible errors. Using the BrainPort, subjects showed a trend toward fewer collisions with obstacles compared to the no-BrainPort condition. A deficiency of current artificial vision devices is the lack of depth perception: although a subject might detect an obstacle, it is quite difficult to estimate its distance. This is due to the limited resolution of the BrainPort and its single-camera system. In order to provide additional insight into the obstacle avoidance capabilities of the BrainPort, two visual identification tasks are conducted during the performance trial. The first takes place at the completion of the course, when the subject is asked to turn around and tell the examiner the number of objects in the total course that he/she can discern. We found that the resolution of the BrainPort was not sufficient to perform this task, but it remains to be tested in a low vision cohort. The second identification task involves an untimed walk through one version of each level of the course, asking the subject to point to obstacles they can detect. This visual identification task is conducted separately from the timed course navigation tasks so as not to influence walking speed34. In addition, collisions are not recorded for the obstacle detection task. The "no BrainPort" or pretraining condition was not tested, as none of our blind subjects would have been able to complete this task. Table 4 shows our results for the obstacle detection task using the BrainPort in dim light and bright light. We were able to further analyze this by the color of the obstacle detected. This is important for artificial vision, which is heavily dependent on contrast. Overall, we found that subjects were able to detect the presence of an obstacle about 48% of the time, whether the obstacle was high or low contrast. Generally, high contrast obstacles were detected more easily than low contrast obstacles irrespective of lighting condition (56.25% versus 40%, respectively). Obstacle detection did not vary significantly between lighting conditions, likely due to the luminance averaging software on the BrainPort device.

Table 1. Representative summary of results comparing percent preferred walking speed at baseline to post-BrainPort training values. The Kruskal-Wallis test was used to compare baseline values (no BrainPort condition, or pretraining) to those obtained after one week of BrainPort training (BrainPort condition, or post-training).

Table 2. Representative summary of results comparing percent possible errors at baseline to post-BrainPort training values. The Kruskal-Wallis test was used to compare baseline values (no BrainPort condition, or pretraining) to those obtained after one week of BrainPort training (BrainPort condition, or post-training).

Table 3. Detailed description of obstacles used for the course.

Table 4. Percentage of light and dark objects identified in both dim and bright lighting during the obstacle detection task.

Table 5. Detail of the materials required for obstacle course construction.

Figure 1. Flooring and lighting set-up template.

Figure 2. Obstacle painting instructions.

Figure 3. Instructions for staff when administering the course.

Figure 4. Illustration of each of the 9 course iterations grouped by course level, including a description of number of turns and path width. Vertical path refers to the number of open tiles to be traversed in the forward direction; horizontal path refers to the number of open tiles to be traversed in the right or left direction. Turn refers to when the subject must change orientation or direction to avoid an obstacle.

Figure 5. Illustration of the idealized path trajectory through each course.

Figure 6. Percent preferred walking speed at baseline and post-training.

Figure 7. Percentage of possible errors made at baseline and post-training.

Discussion

We describe an indoor, portable, easily reproducible, and relatively inexpensive course that can be used to evaluate obstacle avoidance in persons who are blind or have low vision. Most current obstacle course designs and tests (e.g. TUGS) are difficult to compare across sites and observers, or are permanent installations that cannot readily be reproduced at alternate locations16,17,30. Our goal was to create a course that could be standardized for use at different locations and with different observers, and that would provide some predictive ability as to whether an intervention (i.e. artificial vision device or mobility training) had any effect.

We constructed a portable obstacle course measuring 40 ft long by 7 ft wide, consisting of 280 1 ft2 portable floor tiles. The floor tiles flanking the perimeter of the course are also beige but have a 1 in darker border at the outside edge, which serves to delineate the border of the course. The adjacent walls are painted to match the floor tiles, creating a somewhat monochromatic environment. This serves to reduce ambient contrast and render the obstacles more prominent. A total of 16 obstacles representing objects encountered in the day-to-day environment, such as chairs, desks, and trash cans, were identified by the orientation and mobility consultants. We recreated the objects in representative block shapes out of Styrofoam (see Table 3 for exact specifications), with 10 obstacles being used for any given course iteration. These obstacles are located either on the floor or hung from the ceiling at a height of 63 in from ground level, as the average height of American females is 63.8 in vs. 69.3 in for males33. Styrofoam obstacles were manufactured according to custom specifications. The sides of the obstacles are painted darker or lighter than the ambient color to vary contrast. Other obstacles include a dark pile of fabric, changes in floor color, and changes in floor texture, the latter created by placing a carpeted mat on the obstacle course. These were added at the suggestion of the occupational therapists, who noted that fall accidents often occur when glare or other texture changes are misconstrued as an obstacle. Ambient illumination is controlled and measured with a light meter. The total cost for all course-related materials, including all outcome measurement, is approximately $5,200 USD.

Obstacles are arranged in three prespecified levels, with 3 subcourses or iterations for each level. Each course level contains the same set of obstacles arranged in 1 of 3 configurations. Course levels are determined by number of turns and path width, as well as type and placement of obstacles. Each course is color coded and mapped onto a grid (floor tiles) for rapid and easy reproducibility (Figure 4). Each of the 3 course permutations within each level of difficulty is designed with a similar, if not identical, number of path widths and turns between obstacles (Figure 5). Courses can be run in both photopic (light) and mesopic (dim) illumination. All runs through the course are videotaped. Each course takes approximately 0.5-5 min to navigate, depending on baseline navigation skills, preferred walking speed, and course level. For timed assessments, subjects are instructed to find their way through the obstacle course as quickly as possible using normal gait while avoiding objects to the best of their ability.

The primary outcome for course navigation is measured by looking at percentage preferred walking speed (PPWS). PPWS is widely used in balance and gait research and is an ideal measure because it offers the advantage of allowing subjects to act as their own controls, thus normalizing results for physical factors such as height and weight as well as for sex and age32. Using this metric has the added advantage of negating any effect of previous mobility training between subjects.

While PPWS is a good primary outcome measure for determining a difference between baseline and post-intervention performance (i.e. low vision rehabilitation or artificial vision device use), it is only one of several assessments that we used. As the subject walks the course, the number of errors or "collisions" is also recorded. Errors are quantified on a 3-point scale as first described by Marron and Bailey31. Errors were scored as 1 point if the subject made contact with an obstacle but was able to correct in ≤5 sec, 2 points if the subject took 5-15 sec to correct, and 3 points if the subject took >15 sec to self-correct or required the assistance of one of the research assistants31. We also have two object identification tasks, both of which are untimed. The first requires the subject to view the course just completed and count the number of obstacles they can detect. The second requires the subject to navigate the course and point to objects they think are in their path.
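The 3-point scale above can be encoded directly. This is a sketch: the handling of the boundaries at exactly 5 and 15 sec follows the text's ≤5 / 5-15 / >15 wording, and the function name is ours:

```python
def collision_score(correction_time_sec: float,
                    needed_assistance: bool = False) -> int:
    """Marron and Bailey 3-point severity score for one obstacle contact:
    1 = corrected in <=5 s, 2 = corrected in 5-15 s,
    3 = >15 s to self-correct or research-assistant help required."""
    if needed_assistance or correction_time_sec > 15:
        return 3
    if correction_time_sec > 5:
        return 2
    return 1

print(collision_score(4), collision_score(10), collision_score(20))  # → 1 2 3
```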

We found PPWS to be an appropriate primary outcome measure for determining a difference between baseline and post-intervention performance. For our study, this metric reliably demonstrated that subjects slowed down significantly when using the BrainPort, a finding confirmed by the observation that subjects scanned their environments (see video). We are currently collecting data on whether PPWS scores can improve after prolonged use of the BrainPort with additional orientation and mobility training. Percent error data consistently suggested trends toward improved performance across every course level. A large gap in function for camera-based artificial vision devices is the lack of depth information. It is likely that percent error outcomes would improve if artificial vision devices were able to enable this percept. Indeed, we have conducted pilot studies comparing several vibrotactile canes to the BrainPort, as well as studies with multimodal input (BrainPort plus vibrotactile canes), using this obstacle course (data not shown). Preliminary results suggest that use of vibrotactile systems, which can convey depth cues, improves both PPWS and percent error performance. The two tertiary outcomes of obstacle detection can be used to add depth to the navigation analysis. For example, although percent error scores did not improve appreciably, subjects were able to detect whether an obstacle was present about half the time, when presumably this would be none of the time for a blind person without an assistive device.

Comment should be made regarding comparisons between levels of the course. As mentioned in the introduction, each "level" possesses its own unique conditions designed to test a specific combination of navigation skills. Therefore, it is important not to conclude that there is a progressive increase in difficulty from level 1 to level 3. For example, there are fewer obstacles to hit in level 3, but more floor and texture changes. We account for these factors in calculating percent possible error. For any given course, we count only the actual obstacles a subject can hit, not floor texture changes. For texture or color changes located on the floor, behavioral changes (i.e. hesitations, etc.) are recorded and are reflected in the PPWS calculation. In the obstacle detection tasks, floor texture and glare "obstacles" are included in the calculation. The specific details for recording are included in the instruction document.

Further studies need to be undertaken to verify whether the course iterations are equivalent within each level for patients with low vision. Several features of the perceptions enabled by the BrainPort may not transfer to patients with remaining sight. For example, when using the BrainPort, lighter high contrast objects are easier to detect than those with low contrast. The device does have an invert function, which can make darker objects stand out against a lighter background. Moreover, because of the luminance averaging software, the lighting condition (dim versus bright light) did not make a statistically significant difference in performance with the BrainPort, but we would expect ambient illumination to affect performance for persons suffering from diseases such as glaucoma or macular degeneration.

We feel that our course possesses several attributes that make it attractive for both research and clinical purposes compared to existing obstacle avoidance platforms. Most importantly, we found the course to be reproducible: we have two installations, and there was no difference in performance between sites. Furthermore, the setup is easy to arrange and administer, with average test time under 90 min. The fact that there are a total of 36 possible course permutations makes memorization unlikely even during repeated testing, provided randomization schemes are used. Having both dim and bright light conditions allows examination of whether ambient illumination is having a negative impact on mobility. Several outcome measures are possible, including PPWS, percent error, two untimed visual identification tasks, and the ability to analyze according to both the color and the type of obstacles.

Disadvantages of our course include the need for a hallway 40 ft in length that can be painted the same color as the floor tiles, and a storage closet to house the obstacles. It is also helpful if one can permanently install the floor tiles and keep the lights affixed to the ceiling. Once installed, these are both unobtrusive, but depending on the décor of the facility they could be noticeable.

In conclusion, we describe a portable, standardized obstacle course tool that has been used to assess some mobility functions for use with artificial vision devices and states of ultra-low vision. The course is relatively inexpensive, simple to administer, and has been shown to be reliable and reproducible. Future work should investigate its usefulness in low vision populations.

Disclosures

The authors have nothing to disclose.

Acknowledgments

DCED State of Pennsylvania

References

  1. Applegate WB, Miller ST, Elam JT, Freeman JM, Wood TO, Gettlefinger TC. Impact of cataract surgery with lens implantation on vision and physical function in elderly patients. JAMA. 1987;257(8):1064–1066. [PubMed] [Google Scholar]
  2. Ebert EM, Fine AM, Markowitz J, Maguire MG, Starr JS, Fine SL. Functional vision in patients with neovascular maculopathy and poor visual acuity. Arch. Ophthalmol. 1986;104(7):1009–1012. doi: 10.1001/archopht.1986.01050190067041. [DOI] [PubMed] [Google Scholar]
  3. Dougherty BE, Martin SR, Kelly CB, Jones LA, Raasch TW, Bullimore MA. Development of a battery of functional tests for low vision. Optom. Vis. Sci. 2009;86(8):955–963. doi: 10.1097/OPX.0b013e3181b180a6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Alexander MF, Maguire MG, Lietman TM, Snyder JR, Elman MJ, Fine SL. Assessment of visual function in patients with age-related macular degeneration and low visual acuity. Arch. Ophthalmol. 1988;106(11):1543–1547. doi: 10.1001/archopht.1988.01060140711040. [DOI] [PubMed] [Google Scholar]
  5. Ross CK, Stelmack JA, Stelmack TR, Fraim M. Preliminary examination of the reliability and relation to clinical state of a measure of low vision patient functional status. Optom. Vis. Sci. 1991;68(12):918–923. doi: 10.1097/00006324-199112000-00002. [DOI] [PubMed] [Google Scholar]
  6. Bullimore MA, Bailey IL, Wacker RT. Face recognition in age-related maculopathy. Invest. Ophthalmol. Vis. Sci. 1991;32(7):2020–2029. [PubMed] [Google Scholar]
  7. Turco PD, Connolly J, McCabe P, Glynn RJ. Assessment of functional vision performance: a new test for low vision patients. Ophthalmic. Epidemiol. 1994;1(1):15–25. doi: 10.3109/09286589409071441. [DOI] [PubMed] [Google Scholar]
  8. Bittner AK, Jeter P, Dagnelie G. Grating acuity and contrast tests for clinical trials of severe vision loss. Optom. Vis. Sci. 2011;88(10):1153–1163. doi: 10.1097/OPX.0b013e3182271638. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. West SK, Rubin GS, Munoz B, Abraham D, Fried LP. Assessing functional status: correlation between performance on tasks conducted in a clinic setting and performance on the same task conducted at home. J. Gerontol. A Biol. Sci. Med. Sci. 1997;52(4):209–217. doi: 10.1093/gerona/52a.4.m209. [DOI] [PubMed] [Google Scholar]
  10. Owsley C, McGwin G, Jr, Sloane ME, Stalvey BT, Wells J. Timed instrumental activities of daily living tasks: relationship to visual function in older adults. Optom. Vis. Sci. 2001;78(5):350–359. doi: 10.1097/00006324-200105000-00019. [DOI] [PubMed] [Google Scholar]
  11. Mangione CM, Lee PP, Gutierrez PR, Spritzer K, Berry S, Hays RD. National Eye Institute Visual Function Questionnaire Field Test Investigators. Development of the 25-item National Eye Institute Visual Function Questionnaire. Arch. Ophthalmol. 2001;119(7):1050–1058. doi: 10.1001/archopht.119.7.1050. [DOI] [PubMed] [Google Scholar]
  12. Massof RW, Rubin GS. Visual function assessment questionnaires. Surv. Ophthalmol. 2001;45(6):531–548. doi: 10.1016/s0039-6257(01)00194-1. [DOI] [PubMed] [Google Scholar]
  13. Massof RW, Fletcher DC. Evaluation of the NEI visual functioning questionnaire as an interval measure of visual ability in low vision. Vision Res. 2001;41:397–413. doi: 10.1016/s0042-6989(00)00249-2. [DOI] [PubMed] [Google Scholar]
  14. Stelmack JA, Stelmack TR, Massof RW. Measuring low-vision rehabilitation outcomes with the NEI VFQ-25. Invest. Ophthalmol. Vis. Sci. 2002;43(9):2859–2868. [PubMed] [Google Scholar]
  15. Stelmack JA, Szlyk JP, Stelmack TR, Demers-Turco P, Williams RT, Moran D, Massof RW. Psychometric properties of the Veterans Affairs Low-Vision Visual Functioning Questionnaire. Invest. Ophthalmol. Vis. Sci. 2004;45(11):3919–3928. doi: 10.1167/iovs.04-0208. [DOI] [PubMed] [Google Scholar]
  16. Velikay-Parel M, Ivastinovic D, Koch M, Hornig R, Dagnelie G, Richard G, Langmann A. Repeated mobility testing for later artificial visual function evaluation. J. Neural. Eng. 2007;4(1):102–107. doi: 10.1088/1741-2560/4/1/S12. [DOI] [PubMed] [Google Scholar]
  17. Geruschat DR, Bittner AK, Dagnelie G. Orientation and mobility assessment in retinal prosthetic clinical trials. Optom. Vis. Sci. 2012;89(9):1308–1315. doi: 10.1097/OPX.0b013e3182686251. [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Chader GJ, Weiland J, Humayun MS. Artificial vision: needs, functioning, and testing of a retinal electronic prosthesis. Prog. Brain Res. 2009;175:317–332. doi: 10.1016/S0079-6123(09)17522-2. [DOI] [PubMed] [Google Scholar]
  19. Sachs HG, Veit-Peter G. Retinal replacement--the development of microelectronic retinal prostheses--experience with subretinal implants and new aspects. Graefes Arch. Clin. Exp. Ophthalmol. 2004;242(8):717–723. doi: 10.1007/s00417-004-0979-7. [DOI] [PubMed] [Google Scholar]
  20. Alteheld N, Roessler G, Walter P. Towards the bionic eye--the retina implant: surgical, ophthalmological and histopathological perspectives. Acta Neurochir. Suppl. 2007;97(2):487–493. doi: 10.1007/978-3-211-33081-4_56. [DOI] [PubMed] [Google Scholar]
  21. Benav H, et al. Restoration of useful vision up to letter recognition capabilities using subretinal microphotodiodes. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2010:5919–5922. [DOI] [PubMed]
  22. Rizzo JF, 3rd, Wyatt J, Loewenstein J, Kelly S, Shire D. Perceptual efficacy of electrical stimulation of human retina with a microelectrode array during short-term surgical trials. Invest. Ophthalmol. Vis. Sci. 2003;44(12):5362–5369. doi: 10.1167/iovs.02-0817. [DOI] [PubMed] [Google Scholar]
  23. Rizzo JF, 3rd, Wyatt J, Loewenstein J, Kelly S, Shire D. Methods and perceptual thresholds for short-term electrical stimulation of human retina with microelectrode arrays. Invest. Ophthalmol. Vis. Sci. 2003;44(12):5355–5361. doi: 10.1167/iovs.02-0819. [DOI] [PubMed] [Google Scholar]
  24. Humayun MS, et al. Argus II Study Group. Interim results from the international trial of Second Sight's visual prosthesis. Ophthalmology. 2012;119(4):779–788. doi: 10.1016/j.ophtha.2011.09.028. [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Danilov Y, Tyler M. Brainport: an alternative input to the brain. J. Integr. Neurosci. 2005;4(4):537–550. doi: 10.1142/s0219635205000914. [DOI] [PubMed] [Google Scholar]
  26. Merabet LB, Battelli L, Obretenova S, Maguire S, Meijer P, Pascual-Leone A. Functional recruitment of visual cortex for sound encoded object identification in the blind. Neuroreport. 2009;20(2):132–138. doi: 10.1097/WNR.0b013e32832104dc. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Arfken CL, Lach HW, McGee S, Birge SJ, Miller JP. Visual Acuity, Visual Disabilities and Falling in the Elderly. J. Aging Health. 1994;6(38):38–50. [Google Scholar]
  28. Lovie-Kitchin J, Mainstone JC, Robinson J, Brown B. What areas of the visual field are most important for mobility in low vision patients. Clin. Vis. Sci. 1990;5(3):249–263. [Google Scholar]
  29. Hassan SE, Lovie-Kitchin J, Woods RL. Vision and mobility performance of subjects with age-related macular degeneration. Optom. Vis. Sci. 2002;79(11):697–707. doi: 10.1097/00006324-200211000-00007. [DOI] [PubMed] [Google Scholar]
  30. Leat S, Lovie-Kitchin JE. Measuring mobility performance: experience gained in designing a mobility course. Clin. Exp. Optom. 2006;89(4):215–228. doi: 10.1111/j.1444-0938.2006.00050.x. [DOI] [PubMed] [Google Scholar]
  31. Marron JA, Bailey I. Visual factors and orientation-mobility performance. Am. J. Optom. Physiol. Opt. 1982;59(5):413–426. doi: 10.1097/00006324-198205000-00009. [DOI] [PubMed] [Google Scholar]
  32. Clark-Carter DD, Heyes AD, Howarth CI. The efficiency and walking speed of visually impaired people. Ergonomics. 1986;29(6):779–789. doi: 10.1080/00140138608968314. [DOI] [PubMed] [Google Scholar]
  33. Fryar CD, Gu Q, Ogden CL. Division of Health and Nutrition Examination Surveys. Anthropometric Reference Data for Children and Adults: United States, 2007-2010. Vital and Health Statistics Series. 2012;11(252):20–22. [PubMed] [Google Scholar]
  34. Maguire AM, et al. Age-dependent effects of RPE65 gene therapy for Leber's congenital amaurosis: a phase 1 dose-escalation trial. Lancet. 2009;374(9701):1597–1605. doi: 10.1016/S0140-6736(09)61836-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
