Indian Journal of Ophthalmology. 2025 May 30;73(Suppl 3):S492–S497. doi: 10.4103/IJO.IJO_1621_24

Artificial intelligence-powered smart vision glasses for the visually impaired

Devi Udayakumar 1, Sarika Gopalakrishnan 1, Aparna Raghuram 2, Arathy Kartha 3, Arun Kumar Krishnan 4, Ramkumar Ramamirtham 5, Ramu Muthangi 6, Ramakrishna Raju 7
PMCID: PMC12178407  PMID: 40444311

Abstract

Purpose:

In India, 4.80 million people are blind, and 4.69 million have severe visual impairment. The digital era and the advent of artificial intelligence (AI) devices offer solutions to the daily challenges faced by the visually impaired, but such devices are often expensive. This study evaluates the acceptance and usability of a cost-effective, AI-powered, spectacle-mounted assistive device in a multicentric Indian cohort.

Methods:

The smart vision glass (SVG) is a lightweight tool mounted on a spectacle frame, with braille-coded keys and a real-time voice interface for guiding the visually impaired, and it connects to a smartphone Android app. The four major functions of the device are “Things Around You,” “Reading,” “Walking Assistance,” and “Face Recognition.” Ninety participants were recruited from five vision rehabilitation centers across India, and feedback was collected after a month’s usage.

Results:

The mean age of the participants was 23.5 years (range: 6 to 56 years); 58.9% were male, 51% were from rural areas, and 64.4% were students. As per the World Health Organization (WHO) classification, 90% of the participants were blind (best corrected visual acuity in the better eye less than 3/60). All participants were able to access all the functions of the SVG. Participants reported a positive experience with “Reading” (72.9%), “Things Around You” (44.7%), “Face Recognition” (36.5%), and “Walking Assistance” (22.4%). About two-thirds of the participants used the device for an hour or more per day.

Conclusion:

The SVG is a promising, cost-effective device that helps people who are blind or have severe visual impairment meet day-to-day challenges and become more self-reliant.

Keywords: Artificial intelligence, assistive technology, blindness, low vision, vision rehabilitation, visual impairment


Globally, 285 million people are reported to have visual impairment,[1,2] of whom one-fifth live in India.[3] In India, 4.95 million people are reported to be blind and 4.8 million to have severe visual impairment.[4] Southeast Asia has the highest number of blind individuals globally, with 11.7 million out of a total of 36 million. Of the 216.6 million people worldwide who have moderate to severe visual impairment, South Asia accounts for the largest share, with 61.2 million individuals.[5]

Sensory information acquired through vision is crucial in providing precise information about the environment. Vision loss impacts an individual’s ability to understand and communicate effectively with the world around them. Common daily living challenges include barriers in education, understanding the spatial environment, navigation, and interaction with peers and family.[6] These challenges often result in a loss of confidence and motivation, influencing various aspects of life, including education, employment, and overall livelihood.[7]

The economic burden of vision loss in the United States was 27.5 billion US dollars in 2012.[8] This calculation assumed that a visually disabled person requires 10% of a sighted adult’s time for care. The use of special assistive devices can reduce dependency on a caregiver and empower individuals with low vision.[9] The estimated net loss of Gross National Income due to blindness in India is Indian Rupees (INR) 845 billion.[10] The costs of low vision and blindness in India in 2019 were estimated at INR 1158 billion. Poor eye health imposes recurring costs on the Indian economy equivalent to 0.47% to 0.70% of gross domestic product (GDP), affecting economic growth.[11] Nearly 80% of visually impaired (VI) or blind individuals belong to middle- or low-socio-economic groups and either lack access to care or cannot afford expensive assistive devices.[5]

The main objective of any assistive device is to help an individual with visual impairment become independent and self-sufficient.[12] However, most currently available artificial intelligence-based sensory substitution, spectacle-mounted, and wearable devices are expensive and unaffordable.[13,14] Low affordability is one of the barriers to acceptance of such devices.[14] There are limited studies on the perception of cost-effective assistive technology devices that can help VI individuals in India, and the usability and acceptance of a cost-effective, spectacle-mounted, wearable assistive device have not been studied in the Indian population.

The objective of this study was to analyze the usage of a spectacle-mounted, artificial intelligence (AI)-powered assistive device (smart vision glasses [SVG]) by the visually impaired. The study aims to understand the perception of users across multicentric locations in India and to analyze the usability of the device during activities of daily living.

Methods

This prospective observational study was conducted in 2022 with 90 participants. The participants were recruited from five vision rehabilitation centers (Appendix A) across India, which were prioritized for access to this device because they are top-tier, high-volume eye care centers. The study was approved by the Institutional Review Board and ethics committee at each center. All participants gave signed informed consent before participating, and the study adhered to the tenets of the Declaration of Helsinki. According to the World Health Organization (WHO) definition, people with best corrected visual acuity ranging from severe visual impairment (visual acuity less than 6/60) to blindness were included in the study,[15] irrespective of their specific ocular diagnosis. Participants who were already smartphone users with a basic Android configuration were recruited. Individuals with cognitive or multiple disabilities were excluded to isolate the impact of device usage among VI individuals.

Smart vision glass (SVG; SHG Technologies, Bangalore, India) is a non-invasive, lightweight device mounted on the temple of a spectacle frame. The frame serves only to mount the device and does not provide vision enhancement. The device uses a front-facing camera with a 72-degree field of view and a 5-megapixel resolution. Weighing 53 grams including the frame, it is powered by AI and machine learning algorithms and features a camera with a flashlight, LiDAR sensors, and a braille-coded capacitive operating platform. It connects to a smartphone and operates via an Android App [Fig. 1], providing real-time voice feedback through the smartphone speaker or Bluetooth earphones.

Figure 1. (a) Components of smart vision glasses. (b) Architecture of smart vision glasses

The SVG device assists visually impaired individuals with four major functions: “Things Around You,” “Reading,” “Walking Assistance,” and “Face Recognition.” These functions broadly relate to the four functional domains of ultra-low vision: visuomotor, reading, wayfinding, and visual information gathering.[16] Each function is operated using the braille-coded touch keys “T,” “R,” “W,” and “F,” respectively, as shown in Fig. 1. An additional touch key provides a standby mode: when the “Stand By” key is touched, the device returns to the main menu, allowing users to switch between functions. The SVG is a sensory substitution device that offers audio descriptions in all functions; it has no visual display or electronic vision enhancement system.
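
As a rough illustration of this key-to-function mapping, the Python sketch below dispatches the braille-coded keys “T,” “R,” “W,” “F,” and “Stand By” to placeholder handlers whose output is delivered entirely as speech. This is not the vendor’s firmware or app code; the handler bodies and the speak() helper are hypothetical stand-ins.

```python
# Illustrative sketch only (not the SVG's actual software): mapping the
# braille-coded touch keys described above to handler functions, with all
# feedback delivered as speech. speak() is a hypothetical placeholder for
# the paired smartphone's text-to-speech output.

def speak(message: str) -> None:
    """Stand-in for the smartphone's text-to-speech engine."""
    print(f"[VOICE] {message}")

def things_around_you() -> None:
    speak("Describing objects in front of the camera.")

def reading() -> None:
    speak("Reading printed text aloud.")

def walking_assistance() -> None:
    speak("Announcing obstacles and estimated distances.")

def face_recognition_mode() -> None:
    speak("Looking for familiar faces.")

# Braille-coded keys mapped to functions, mirroring Fig. 1.
KEY_HANDLERS = {
    "T": things_around_you,
    "R": reading,
    "W": walking_assistance,
    "F": face_recognition_mode,
}

def on_key_press(key: str) -> None:
    """Dispatch a key press; the standby key returns to the main menu."""
    if key == "STANDBY":
        speak("Main menu. Choose a function.")
        return
    handler = KEY_HANDLERS.get(key)
    if handler is None:
        speak("Unknown key.")
    else:
        handler()

if __name__ == "__main__":
    for key in ["T", "STANDBY", "R"]:   # simulated key presses
        on_key_press(key)
```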

The “Things Around You” function assists users in orienting themselves within their environment. When activated, this mode uses the device’s camera to identify objects in the surroundings, such as laptops, desks, doors, and flowers. It goes beyond object recognition to identify individuals, offering details such as estimated age and gender when a person is present in the camera’s field of view.
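
To make the idea concrete, the sketch below shows how generic off-the-shelf object detection could be turned into a short spoken scene summary. It uses the open-source Ultralytics YOLO models purely as a stand-in; the SVG’s actual detection model, object classes, and phrasing are not described in the paper.

```python
# Illustrative sketch, not the device's proprietary pipeline: summarize the
# objects detected in a camera frame as a sentence to be spoken aloud.
from collections import Counter

from ultralytics import YOLO  # open-source detector used here as a stand-in

def describe_scene(image_path: str, conf_threshold: float = 0.5) -> str:
    model = YOLO("yolov8n.pt")            # small pretrained COCO model
    result = model(image_path)[0]         # single-image inference
    labels = [
        model.names[int(cls)]
        for cls, conf in zip(result.boxes.cls, result.boxes.conf)
        if float(conf) >= conf_threshold
    ]
    if not labels:
        return "No familiar objects detected."
    counts = Counter(labels)
    parts = [f"{n} {name}s" if n > 1 else f"a {name}" for name, n in counts.items()]
    return "I can see " + ", ".join(parts) + "."

# Example (hypothetical image file); the returned sentence would be passed
# to text-to-speech on the paired smartphone:
# print(describe_scene("desk_photo.jpg"))
```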

The “Reading” function helps users read any printed material, such as books, magazines, and newspapers. Given the significant role of the English language in education, empowerment, and media in the Indian population,[17] the optical character recognition (OCR) technology of the device was designed to support English extensively. It is also designed to read nine Indian regional languages: Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Punjabi, Tamil, and Telugu.
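
For illustration only, the sketch below performs multi-language OCR with the open-source Tesseract engine, which provides language packs for English and the nine regional languages listed above. The SVG’s own OCR pipeline may differ, and the language codes shown are Tesseract’s, not necessarily what the SVG app uses internally.

```python
# Illustrative sketch, assuming Tesseract OCR (with the relevant language
# packs installed) as a stand-in for the device's OCR component.
from PIL import Image
import pytesseract

# English plus the nine supported Indian languages (Tesseract traineddata codes).
SVG_LANGS = "eng+ben+guj+hin+kan+mal+mar+pan+tam+tel"

def read_page(image_path: str) -> str:
    """Return the recognized text, to be forwarded to text-to-speech."""
    page = Image.open(image_path)
    return pytesseract.image_to_string(page, lang=SVG_LANGS)

# Example (hypothetical file): print(read_page("newspaper_clip.png"))
```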

The “Walking Assistance” function employs LiDAR sensors to detect obstacles and provides audio navigation cues. Upon activation, users receive information about the distance and nature of obstacles, assisting in navigation. Participants used this function alongside their white cane for additional support in various environments. The feature announces obstacles ahead and estimates distances, allowing users to adjust their direction to avoid potential hazards.
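
The sketch below illustrates the general idea of turning successive range readings into spoken cues; the distance thresholds and wording are hypothetical choices for this example and are not the manufacturer’s values.

```python
# Minimal sketch, assuming a stream of forward range readings (in metres)
# from a distance sensor such as the LiDAR mentioned above. Thresholds and
# phrasing are hypothetical.
from typing import Iterable, List

def obstacle_announcements(distances_m: Iterable[float]) -> List[str]:
    """Convert successive range readings into spoken navigation cues."""
    cues = []
    for d in distances_m:
        if d < 0.5:
            cues.append("Stop. Obstacle very close, within half a metre.")
        elif d < 1.5:
            cues.append(f"Obstacle ahead at about {d:.1f} metres. Step aside.")
        elif d < 3.0:
            cues.append(f"Object detected roughly {d:.1f} metres ahead.")
        else:
            cues.append("Path ahead appears clear.")
    return cues

if __name__ == "__main__":
    # Simulated readings from a person walking toward an obstacle.
    for cue in obstacle_announcements([4.2, 2.4, 1.1, 0.4]):
        print("[VOICE]", cue)
```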

The “Face Recognition” function allows users to store the names of family and friends by capturing their picture and recording the name verbally through the function key. The images are stored securely on the paired smartphone. When a stored face approaches, the device identifies the person and announces their name using its face recognition capability.
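
A simplified sketch of this enroll-then-announce flow is shown below, using the open-source face_recognition library as a stand-in for the device’s face-matching component; the names, file paths, and in-memory storage are illustrative only.

```python
# Illustrative sketch of face enrollment and recognition; not the SVG's
# actual implementation. Encodings are kept in memory here, whereas the
# device stores images on the paired smartphone.
import face_recognition

known_encodings = []          # one 128-d encoding per enrolled person
known_names = []              # parallel list of spoken names

def enroll_face(image_path: str, name: str) -> bool:
    """Store one face encoding under the given name; False if no face found."""
    image = face_recognition.load_image_file(image_path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        return False          # ask the user to retake the photo
    known_encodings.append(encodings[0])
    known_names.append(name)
    return True

def announce_person(frame_path: str) -> str:
    """Return the spoken message for the first recognized face, if any."""
    frame = face_recognition.load_image_file(frame_path)
    for encoding in face_recognition.face_encodings(frame):
        matches = face_recognition.compare_faces(known_encodings, encoding)
        if True in matches:
            return f"{known_names[matches.index(True)]} is approaching."
    return "An unfamiliar person is nearby."

# Example (hypothetical files):
# enroll_face("mother.jpg", "Mother"); print(announce_person("doorway.jpg"))
```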

Each participant received a detailed training session, lasting an average of 1.5 hours, on using the device. The training covered device configuration with the smartphone, operating techniques, and troubleshooting. To keep the study neutral and unbiased, the trainer confirmed each participant’s ability to use the device, and the device was then loaned for one month based solely on demonstrated usefulness and competency. During the month, participants were followed up by telephone and given additional assistance with the device if needed. After one month of usage, user perception and feedback were captured through an online survey consisting of open-ended and multiple-choice questions (see Appendix B for details). The survey collected both quantitative and qualitative feedback on the functions and their usability. Descriptive statistics, including means, frequencies, and percentages, were used to analyze the quantitative data presented in this study.
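
As a worked example of the descriptive analysis described here (means, frequencies, and percentages), the short pandas sketch below computes these summaries on a toy table; the column names and values are hypothetical, since the raw survey dataset is not published with the article.

```python
# Toy example of the descriptive statistics used for the survey data:
# means, frequencies, and percentages. All column names and values are
# hypothetical placeholders, not the study's actual dataset.
import pandas as pd

responses = pd.DataFrame({
    "age": [21, 34, 17, 28, 45],
    "found_reading_useful": [True, True, False, True, True],
    "daily_usage_hours": [1, 2, 1, 4, 1],
})

print("Mean age:", responses["age"].mean())
print("Mean daily usage (hours):", responses["daily_usage_hours"].mean())

# Frequency and percentage of participants who found "Reading" useful.
counts = responses["found_reading_useful"].value_counts()
percentages = responses["found_reading_useful"].value_counts(normalize=True) * 100
print(pd.DataFrame({"n": counts, "%": percentages.round(1)}))
```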

Results

A total of 90 individuals with severe visual impairment or blindness participated in the study. The mean age of the 90 participants was 23.5 ± 8.2 years, ranging from 6 to 56 years. Of the participants, 58.9% (n = 53) were male. Based on their age, the participants were grouped as 0–9 years, 10–24 years, 25–40 years, and 41–65 years. More than half of the participants, 58.9% (n = 53), belonged to the 10–24 years age group, followed by 34.4% (n = 31) in the 25–40 years age group, as shown in Table 1.

Table 1. Demographic characteristics of study participants

Category Subcategory n (%)
Age 0–9 2 (2.2)
10–24 53 (58.9)
25–40 31 (34.4)
41–65 4 (4.4)
Gender Male 53 (58.9)
Female 37 (41.1)
Geographic Region South 71 (78.9)
North 10 (11.1)
West 9 (10)
Residence Rural 51 (56.7)
Urban 39 (43.3)
Educational qualification School student 21 (23.3)
Student–Undergraduate 24 (26.7)
Student–Postgraduate 13 (14.4)
Undergraduate 13 (14.4)
Postgraduate 10 (11.1)
Discontinued schooling 9 (10)
Occupation Student 58 (64.4)
Unemployed 26 (28.9)
Employed 6 (6.6)

When grouped based on geographic location, the majority, 78.9% (n = 71), were from South India, while 11.1% (n = 10) and 10% (n = 9) were from North and West India, respectively. When grouped as urban and rural, 56.7% (n = 51) belonged to rural regions and 43.3% (n = 39) to urban regions of India.

Among the participants, 64.4% (n = 58) were students, of whom 21 (23.3% of the cohort) were in secondary school and 37 (41.1%) were at the college level; of the latter, 24 (26.7%) were enrolled in undergraduate degrees and 13 (14.4%) were pursuing postgraduate degrees. Nine participants had discontinued their education at the school level. Among all the participants, 14.4% (n = 13) had completed an undergraduate degree and 11.1% (n = 10) had completed a postgraduate degree. Of the non-students, 26 (28.9% of the cohort) were unemployed, while 6 (6.6%) were employed.

The visual acuity of 87.7% (n = 79) of the participants was in the ultra-low vision range of 3–4 logMAR (6/4800 to no light perception), and 11.1% (n = 10) were in the range of 1–2 logMAR (6/60–6/600), measured using standardized logMAR charts. According to the WHO classification of visual impairment (2018),[15] 98.9% (n = 89) of the cohort belonged to the category of blindness, while 1.1% (n = 1) had severe visual impairment. Based on the categorization of visual impairment of the Ministry of Social Justice and Empowerment, Government of India, 90% were classified as blind and 10% had low vision. Table 2 gives more details on the visual impairment status of the participants.

Table 2. Category of visual impairment of study participants

Category Subcategory n (%)
Visual acuity (LogMAR) <1 1 (1.1)
1–2 10 (11.1)
3–4 79 (87.7)
WHO category of visual impairment (2018) Severe 1 (1.1)
Blindness 89 (98.9)
Indian categorization of visual impairment iii a – low vision 1 (1.1)
iii e – low vision 8 (8.8)
iv a – blindness 5 (5.5)
iv b – blindness 76 (84.4)

After one month of usage, 94.4% (n = 85) of the participants completed the feedback survey; five participants dropped out. All respondents were able to use the device functions. The “Things Around You” function allowed users to assess their immediate surroundings and aided communication and understanding; it was found useful by 44.7% (n = 38) of the participants. With the “Reading” function, participants were able to read printed or educational materials such as brochures, books, handwritten classroom notes, and similar materials. Students used the SVG to prepare for their academic needs. The support for reading in regional languages and for reading independently was appreciated by the participants and helped those with limited literacy. The survey results showed that the “Reading” function was found useful by 72.9% (n = 62) of the participants.

The “Walking Assistance” function was used in scenarios where help with wayfinding was needed, offering audio guidance to complement the tactile feedback of the white cane. This function was found useful by 22.4% (n = 19) of the participants, who used it primarily in indoor environments. The “Face Recognition” function was used to identify individuals who approached the participant. This feature enabled the device to recognize individuals by name and was found useful by 36.5% (n = 31) of the participants.

Participants rated the overall usefulness of the functions in the following order: “Reading” (72.9%), “Things Around You” (44.7%), “Face Recognition” (36.5%), and “Walking Assistance” (22.4%). When asked which function needed the most improvement for effective usage, “Walking Assistance” was cited most often, by 35.3% (n = 29), followed by “Face Recognition” (27.1%, n = 23), “Things Around You” (21.2%, n = 18), and “Reading” (16.5%, n = 14) [Fig. 2].

Figure 2. User experience about the functions of the device

When the frequency of usage was analyzed, 61% (n = 52) of the participants utilized the device for 1 hour or more per day, out of which 33% (n = 28) used the device for 1 hour daily, 18% (n = 15) for 2–3 hours, 8% (n = 7) for 4–5 hours, and 2% (n = 2) for up to 8 hours per day [Fig. 3].

Figure 3. Frequency of usage

Discussion

Visual rehabilitation aims to adapt to permanent visual loss, enhance psychosocial functioning, and foster independence and social engagement using assistive technology (AT).[18,19,20] The role of AT, particularly with smartphones, is to facilitate vision-free interactions, allowing for independent daily activities, social inclusion, education, and learning.[21,22] Our study analyzed the usability of a lightweight, non-invasive, AI-integrated, spectacle-mounted SVG device in the Indian population, noting its ease of use and strong acceptance.[23] This aligns with Gori et al. and Manjari et al., who noted the drawbacks of invasive AT solutions like electro-tactile tongue stimulation (Tongue Display Unit/BrainPort) and emphasized the need for lightweight, wearable devices that provide real-time, accurate responses.[7,12]

Although females have a higher prevalence of visual impairment and blindness, our study found a slight predominance of male participants, likely due to females’ limited access to eye care.[24] The data showed greater participation of young people,[25] mostly secondary school and college students, likely motivated to reduce reliance on sighted help for education and exams through the use of assistive devices. Geographically, there was a stronger presence from the southern zone, reflecting the higher involvement of centers there. Rural participants outnumbered urban ones, reflecting the role of multicentric rehabilitation centers in enhancing access and empowerment for rural populations.

Persons with ultra-low vision rely heavily on visual information for daily living.[16] The “Things Around You” function facilitated visual information gathering and promoted independence. Feedback focused on improving its accuracy to support better orientation for VI users.

The “Reading” function was preferred in addition to braille, screen readers, tactile graphics, and tactile books. It showed potential in aiding students through error-free reading, image processing, and translation of visual information, although there were suggestions for improving image description. Further research is necessary to assess reading speed and accuracy and to compare the function with similar text-to-speech apps.

Despite the “Walking Assistance” function enhancing wayfinding, our study emphasized the need for improved speed and accuracy in obstacle detection during navigation. While wayfinding is crucial for VI, SVG should be used alongside tactile aids like a white cane for safety, as relying solely on obstacle detection and distance estimation is insufficient, especially on roads or in unfamiliar environments. While auditory cues like traffic sounds are essential for outdoor navigation, the device may mask these cues. Feedback highlighted the need to improve accuracy in low-light conditions and suggested announcing distances to clear paths rather than obstacles. Further research is needed to test the device outdoors and include individuals with dual sensory impairments, i.e., those who have both visual and hearing impairments. Although hearing status was not recorded, no participants reported difficulty following the auditory cues provided by the SVG.

Even though participants were unable to use any visual information while using this device, they were able to identify people through auditory output. Participants positively acknowledged the usage of “Face Recognition” function to store the faces of familiar persons. However, the study highlights the need for improvements in this function due to difficulties in storing faces. Additionally, announcing facial emotions could enhance personal interactions, making the function more effective.[26]

While there are over 200 smartphone apps for the VI,[27] SVG could become a single solution aiding daily activities while reducing time and power consumption. Vision rehabilitation options for severe visual impairment or ultra-low vision (ULV) are limited.[28] Several vision restoration therapies, including visual prostheses, gene therapy, and stem cell therapy, are being developed but require invasive procedures.[29,30,31] Argus II provides rudimentary vision for ULV/blindness, but its cost is prohibitively high and it is unavailable in India. Finn et al.[32] found significant performance gains when Argus II was integrated with OrCam MyEye. Similarly, integrating SVG with visual prostheses could benefit those with native or artificial low vision. While there are many options to enhance the remaining vision of individuals with low vision, there are very few assistive devices available to aid individuals with blindness or ULV. In such cases, SVG would be a good option for exploring the environment, identifying objects, reading signs, recognizing faces, and navigating while walking.

Overall, the functional modes available in SVG cover three of the four functional domains identified in low vision, namely reading or detailed vision (Reading, Face Recognition), visual information gathering (Things Around You), and mobility (Walking Assistance). In the future, studying SVG’s impact on daily activities across each of these functional domains relevant to low vision will be useful.[33] Comparison studies between SVG and similar multifunctional apps, such as Seeing AI or Google Lookout, are also needed.

Granquist et al.[34] noted that price significantly influences the decision-making of AT users. In India, cost-effective devices are essential to meet high demand, as expensive devices that lack regular updates and user feedback risk becoming outdated. Consistent technological updates and user-responsive adjustments are crucial for maintaining the relevance and effectiveness of assistive devices.

There are some limitations to the present study. First, the participants were already smartphone users; second, the follow-up period on usage was short; and third, the survey recorded only user-perceived feedback, satisfaction, and performance rather than objective performance metrics. Strengths include the multi-centered approach and feedback that facilitates improvements. The survey feedback is crucial for the device’s evolution, ensuring its relevance for the VI, with a focus on reducing internet dependency and enabling the walking assistance feature to function independently. Our study identified usage barriers, including touch-sensitive switches and limited battery power, alongside feedback emphasizing improvements to under-used functions, internet independence, and a more user-friendly interface. These insights will be useful for developers in evolving the device. Further qualitative analysis of user feedback and of the impact of the device and its specific functions on the quality of life of VI individuals is currently under way. Future studies should include both new and existing smartphone users to assess the device’s impact and analyze correlations between ocular disease categories, device usage, and acceptance. Though SVG is compatible with the smartphones commonly used by VI individuals, minimizing the need for additional technological investment, a comprehensive cost analysis, including associated smartphone costs, is needed.

Ensuring equitable eye care and addressing the challenges of individuals with irreversible visual impairment are key priorities, making strategies free of socio-economic barriers crucial.[3,35] With broad compatibility, the SVG, priced at $360, is cost-effective and significantly more affordable than other devices available globally, enhancing accessibility.

Conclusion

Our study highlights the effectiveness of affordable AI-based assistive devices in addressing the daily challenges of visually impaired people in the Indian population. With consistent updates informed by study outcomes, the SVG emerges as a promising, cost-effective, non-invasive, and user-friendly wearable AI device. We are currently studying real-world task performance with the SVG in people with severe visual impairment and will compare self-reports with task performance in future studies using calibrated questionnaires and instruments.[36]

Conflicts of interest

There are no conflicts of interest.

Supplementary Material

Appendix A: List of Five Vision Rehabilitation Centers

  1. Aravind Eye Hospital, Madurai

  2. Voluntary Health Services, Chennai

  3. Sankara Nethralaya, Chennai

  4. Dr. Shroff’s Charitable Eye Hospital, Delhi

  5. Community Eye Care Foundation, Pune

Appendix B: Smart vision glasses – User experience survey

The purpose of this survey is to understand the usefulness of smart vision glasses and to help us improve the device further. The survey is a mix of open-ended questions and questions on a 1–4 scale from most useful to least useful.

  1. Is the “Things Around You” function useful to you?

  2. Please provide your comments about the usefulness of the “Things Around You” function in your daily activities

  3. Is the “Reading” function useful to you?

  4. Please provide your comments about the usefulness of the “Reading” function in your daily activities

  5. Is the “Walking Assistance” function useful to you?

  6. Please provide your comments about the usefulness of the “Walking Assistance” function in your daily activities

  7. Is the “Face Recognition” function useful to you?

  8. Please provide your comments about the usefulness of the “Face Recognition” function in your daily activities

  9. Which functions of the device need improvement? (Multiselect)

    1. Things Around You

    2. Reading

    3. Walking Assistance

    4. Face Recognition

    5. Nil

  10. Please mention how the functions can be improved

  11. Where do you use the device in terms of Location?

    1. Indoor

    2. Outdoor

    3. Both

  12. How many hours do you use the device in a day?

Funding Statement

Nil.

References

1. Dandona R, Dandona L. Socioeconomic status and blindness. Br J Ophthalmol. 2001;85:1484–8. doi: 10.1136/bjo.85.12.1484.
2. Pascolini D, Mariotti SP. Global estimates of visual impairment: 2010. Br J Ophthalmol. 2012;96:614–8. doi: 10.1136/bjophthalmol-2011-300539.
3. Khanna R, Raman U, Rao GN. Blindness and poverty in India: The way forward. Clin Exp Optom. 2007;90:406–14. doi: 10.1111/j.1444-0938.2007.00199.x.
4. Vashist P, Singh SS, Gupta V, Gupta N, Rajshekhar V, Shamanna BR. National blindness and visual impairment survey 2015-19: A summary report. 2019. Available from: https://npcbvi.mohfw.gov.in/writeReadData/mainlinkFile/File341.pdf. [Last accessed on 2024 Jun 24]
5. Bourne RRA, Flaxman SR, Braithwaite T, Cicinelli MV, Das A, Jonas JB, et al. Magnitude, temporal trends, and projections of the global prevalence of blindness and distance and near vision impairment: A systematic review and meta-analysis. Lancet Glob Health. 2017;5:e888–97. doi: 10.1016/S2214-109X(17)30293-0.
6. West SK, Rubin GS, Broman AT, Munõz B, Bandeen-Roche K, Turano T, et al. How does visual impairment affect performance on tasks of everyday life? The SEE Project. Evidence-Based Eye Care. 2002;3:218–9. doi: 10.1001/archopht.120.6.774.
7. Gori M, Cappagli G, Tonelli A, Baud-Bovy G, Finocchietti S. Devices for visually impaired people: High technological devices with low user acceptance and no adaptability for children. Neurosci Biobehav Rev. 2016;69:79–88. doi: 10.1016/j.neubiorev.2016.06.043.
8. Wittenborn JS, Zhang X, Feagan CW, Crouse WL, Shrestha S, Kemper AR, et al. The economic burden of vision loss and eye disorders among the United States population younger than 40 years. Ophthalmology. 2013;120:1728. doi: 10.1016/j.ophtha.2013.01.068.
9. Gordois A, Cutler H, Pezzullo L, Gordon K, Cruess A, Winyard S, et al. An estimation of the worldwide economic and health burden of visual impairment. Glob Public Health. 2012;7:465–81. doi: 10.1080/17441692.2011.634815.
10. Mannava S, Borah R, Shamanna B. Current estimates of the economic burden of blindness and visual impairment in India: A cost of illness study. Indian J Ophthalmol. 2022;70:2141. doi: 10.4103/ijo.IJO_2804_21.
11. Wong B, Singh K, Khanna RK, Ravilla T, Shalinder S, Sil A, et al. The economic and social costs of visual impairment and blindness in India. Indian J Ophthalmol. 2022;70:3470–5. doi: 10.4103/ijo.IJO_502_22.
12. Manjari K, Verma M, Singal G. A survey on assistive technology for visually impaired. Internet of Things. 2020;11:100188.
13. Granquist C, Sun SY, Montezuma SR, Tran TM, Gage R, Legge GE. Evaluation and comparison of artificial intelligence vision aids: OrCam MyEye 1 and Seeing AI. J Vis Impair Blind. 2021;115:277–85.
14. Sivakumar P, Vedachalam R, Kannusamy V, Odayappan A, Venkatesh R, Dhoble P, et al. Barriers in utilisation of low vision assistive products. Eye. 2020;34:344–51. doi: 10.1038/s41433-019-0545-5.
15. Pan American Health Organization. Visual Health - PAHO/WHO. Available from: https://www.paho.org/en/topics/visual-health. [Last accessed on 2024 Jun 05]
16. Adeyemo O, Jeter PE, Rozanski C, Arnold E, Dalvin LA, Swenor B, et al. Living with ultra-low vision: An inventory of self-reported visually guided activities by individuals with profound visual impairment. Transl Vis Sci Technol. 2017;6:10. doi: 10.1167/tvst.6.3.10.
17. Meganathan R. Language policy in education and the role of English in India: From library language to language of empowerment. In: Dreams and Realities. British Council; 2011. Available from: www.britishcouncil.org.
18. van Nispen RMA, Virgili G, Hoeben M, Langelaan M, Klevering J, Keunen JEE, et al. Low vision rehabilitation for better quality of life in visually impaired adults. Cochrane Database Syst Rev. 2020;1:CD006543. doi: 10.1002/14651858.CD006543.pub2.
19. Boey D, Tse T, Lim YH, Chan ML, Fitzmaurice K, Carey L. The impact of low vision on activities, participation, and goals among older adults: A scoping review. Disabil Rehabil. 2022;44:5683–707. doi: 10.1080/09638288.2021.1937340.
20. Gopalakrishnan S, Suwalal SC, Bhaskaran G, Raman R. Use of augmented reality technology for improving visual acuity of individuals with low vision. Indian J Ophthalmol. 2020;68:1136–42. doi: 10.4103/ijo.IJO_1524_19.
21. Senjam SS, Primo SA. Challenges and enablers for smartphone use by persons with vision loss during the COVID-19 pandemic: A report of two case studies. Front Public Health. 2022;10:1–7. doi: 10.3389/fpubh.2022.912460.
22. Hakobyan L, Lumsden J, O’Sullivan D, Bartlett H. Mobile assistive technologies for the visually impaired. Surv Ophthalmol. 2013;58:513–28. doi: 10.1016/j.survophthal.2012.10.004.
23. Wittich W, Lorenzini MC, Markowitz SN, Tolentino M, Gartner SA, Goldstein JE, et al. The effect of a head-mounted low vision device on visual function. Optom Vis Sci. 2018;95:774–84. doi: 10.1097/OPX.0000000000001262.
24. Murthy G, Pant HB, Bandyopadhyay S, John N. Trends in gender and blindness in India. Community Eye Health. 2016;29:S04–5.
25. World Health Organization. Adolescent health in the South-East Asia Region. 2023. pp. 1–8. Available from: https://who.int/southeastasia/health-topics/adolescent-health. [Last accessed on 2024 Apr 26]
26. Ashok A, John J. Facial expression recognition system for visually impaired. In: Hemanth J, Fernando X, Lafata P, Baig Z, editors. International Conference on Intelligent Data Communication Technologies and Internet of Things (ICICI) 2018. Cham: Springer; 2019. pp. 244–50.
27. Pundlik S, Shivshanker P, Luo G. Impact of apps as assistive devices for visually impaired persons. Annu Rev Vis Sci. 2023;9:111–30. doi: 10.1146/annurev-vision-111022-123837.
28. Geruschat DR, Bittner AK, Dagnelie G. Orientation and mobility assessment in retinal prosthetic clinical trials. Optom Vis Sci. 2012;89:1308–15. doi: 10.1097/OPX.0b013e3182686251.
29. Fernández E, Alfaro A, Soto-Sánchez C, Gonzalez-Lopez P, Lozano AM, Peña S, et al. Visual percepts evoked with an intracortical 96-channel microelectrode array inserted in human occipital cortex. J Clin Invest. 2021;131:e151331. doi: 10.1172/JCI151331.
30. Barnes N, Scott AF, Lieby P, Petoe MA, McCarthy C, Stacey A, et al. Vision function testing for a suprachoroidal retinal prosthesis: Effects of image filtering. J Neural Eng. 2016;13:36013. doi: 10.1088/1741-2560/13/3/036013.
31. Ho E, Boffa J, Palanker D. Performance of complex visual tasks using simulated prosthetic vision via augmented-reality glasses. J Vis. 2019;19:22. doi: 10.1167/19.13.22.
32. Finn AP, Tripp F, Whitaker D, Vajzovic L. Synergistic visual gains attained using Argus II retinal prosthesis with OrCam MyEye. Ophthalmol Retina. 2018;2:382–4. doi: 10.1016/j.oret.2017.08.008.
33. Massof RW, Ahmadian L, Grover LL, Deremeik JT, Goldstein JE, Rainey C, et al. The Activity Inventory: An adaptive visual function questionnaire. Optom Vis Sci. 2007;84:763–74. doi: 10.1097/OPX.0b013e3181339efd.
34. Granquist C, Sun SY, Montezuma SR, Tran TM, Gage R, Legge GE. Evaluation and comparison of artificial intelligence vision aids: OrCam MyEye 1 and Seeing AI. J Vis Impair Blind. 2021;115:277–85.
35. Webson A. Eye health and the decade of action for the sustainable development goals. Lancet Glob Health. 2021;9:e383–4. doi: 10.1016/S2214-109X(21)00035-8.
36. Kartha A, Singh RK, Bradley C, Dagnelie G. Self-reported visual ability versus task performance in individuals with ultra-low vision. Transl Vis Sci Technol. 2023;12:1–6. doi: 10.1167/tvst.12.10.14.
