IEEE Journal of Translational Engineering in Health and Medicine
2017 May 2;5:3800107. doi: 10.1109/JTEHM.2017.2679746

Wearable Improved Vision System for Color Vision Deficiency Correction

Paolo Melillo 1, Daniel Riccio 2,3, Luigi Di Perna 1, Gabriella Sanniti Di Baja 3, Maurizio De Nino 4, Settimio Rossi 1, Francesco Testa 1, Francesca Simonelli 1, Maria Frucci 3
PMCID: PMC5418066  PMID: 28507827

Abstract

Color vision deficiency (CVD) is an extremely frequent vision impairment that compromises the ability to recognize colors. In order to improve color vision in subjects with CVD, we designed and developed a wearable improved vision system based on an augmented reality device. The system was validated in a clinical pilot study on 24 subjects with CVD (18 males and 6 females, aged 37.4 ± 14.2 years). The primary outcome was the improvement in the Ishihara Vision Test score with the correction proposed by our system. The Ishihara test score significantly improved (p = 0.03) from 5.8 ± 3.0 without correction to 14.8 ± 5.0 with correction. Almost all patients showed an improvement in color vision, as shown by the increased test scores. Moreover, with our system, 12 subjects (50%) passed the color vision test as normal vision subjects. The development and preliminary validation of the proposed platform confirm that a wearable augmented-reality device could be an effective aid to improve color vision in subjects with CVD.

Keywords: Augmented reality, color vision deficiency, wearable device, medical device


Our wearable system, based on augmented reality devices, improves color vision in subjects with color vision deficiency (CVD). A subject with CVD looking at Fig. a) actually perceives what is shown in Fig. b), and hence reads SO instead of ISO. With our system he/she perceives the image in Fig. c) and does not miss any relevant information. The system performance was validated by using the clinical Ishihara test: Fig. d) shows the improvement of test score observed in almost all the enrolled subjects.


I. Introduction

Color vision deficiency (CVD), also referred to as color blindness, is defined as the “inability to distinguish certain shades of color or, in more severe cases, see colors at all”. Around 5%–8% of men and 0.8% of women have some form of CVD [1]. Unlike people with normal color vision, people with CVD report difficulties in discriminating certain color combinations and color differences.

Human color vision is based on the response of three different classes of photoreceptors, called cones, located in the retina [2]. Each class of cones is sensitive to photons in a different region of the visible spectrum: long-wavelength (L), middle-wavelength (M), and short-wavelength (S). CVD generally arises either from a complete lack of one of the three classes of cone pigments or from the modification of one of them. The former condition is called dichromacy and the latter anomalous trichromacy. The dichromacies are classified as protanopia, deuteranopia, and tritanopia according to the missing class of cones. Similarly, the anomalous trichromacies are classified as protanomaly, deuteranomaly, and tritanomaly.

In the last decades, several algorithms, applications, and commercial products have been developed for subjects with CVD. In particular, special lenses have been developed to improve color vision [3]; however, they are specific to particular deficiencies (deuteranopia and protanopia) and require adaptation of the eye to be effective. Moreover, several smartphone apps have been deployed to help people with CVD. Most of them are not designed for real-time use, and even those offering real-time functionality are impractical, since using a smartphone during a real-life activity is often troublesome [4].

The advent of wearable augmented reality devices, such as Google Glass, paves the way for the design and development of a wearable system to help people with CVD. Although several augmented reality solutions have been used in different health-related contexts [5]–[10], to the best of the authors’ knowledge, only one study has explored the application of this technology, based on an optical see-through device, to support subjects with CVD [4]. This may be related to the fact that most algorithms developed to improve color vision have been designed for asynchronous rather than real-time use [11].

The current paper describes the design, development and clinical validation of a wearable augmented reality system to improve color vision for subjects with CVD.

II. Methods and Procedures

A. System Design

We designed a wearable real-time augmented-reality system in order to improve color vision in subjects suffering from CVD.

Two approaches can be used for real-world manipulation: optical see-through, which combines computer-generated imagery with a direct view of the real world, and video see-through, which presents video feeds from cameras inside head-mounted devices.

Video see-through systems suffer from latency, i.e., the delay from sensing to display. Optical see-through has no such latency, but it introduces a mismatch, a lack of synchronization, between the real world seen through the glasses and the overlaid graphics. When graphics must be registered to the real world, this mismatch is annoying and distracting. Video see-through, in contrast, allows the video and the graphics to be delayed together so that they are always in sync.

For this reason, our system is based on video see-through. The real world is acquired by a stereoscopic camera, the left and right images are processed separately by applying the algorithms described below, and the processed scene is presented to the patient via the Liquid Crystal Display (LCD) of the Head-Mounted Display (HMD).

The architecture consists of three main modules:

  • Video Acquisition Module (VAM);

  • Video Processing Module (VPM);

  • Video Rendering Module (VRM).

The VAM is based on an OVRVISION PRO camera (OVR VISION, Osaka, Japan), mounted in the front part of HMD, represented in Figure 1, with a resolution of Inline graphic @ 60Hz with a horizontal field of view (FOV) of 100° and a vertical FOV of 98°.

Fig. 1. HMD with OVRVISION PRO camera mounted.

The VPM implements the three different algorithms (for protanopia, deuteranopia, and tritanopia) using the OpenCV library and customized software, ensuring an execution time shorter than the acquisition period (1/60 s) in order to avoid any loss of frames.
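This real-time constraint can be made concrete with a minimal sketch (ours, not the authors' code; the identity matrix below is a placeholder standing in for the actual correction matrices): each frame must be transformed within the 1/60 s acquisition period, otherwise frames would be dropped.

```python
import time
import numpy as np

# Placeholder 3x3 per-pixel color-correction matrix (identity here;
# the real system would use one of the daltonization matrices).
CORRECTION = np.eye(3, dtype=np.float32)

FRAME_BUDGET_S = 1.0 / 60.0  # acquisition period at 60 Hz

def process_frame(frame):
    """Apply a linear color transform to an HxWx3 frame and check the budget."""
    start = time.perf_counter()
    out = frame.astype(np.float32) @ CORRECTION.T  # per-pixel matrix product
    out = np.clip(out, 0, 255).astype(np.uint8)    # saturate to valid range
    elapsed = time.perf_counter() - start
    return out, elapsed <= FRAME_BUDGET_S          # True if no frame is lost
```

In the actual system the same budget check would run per eye, since the left and right images are processed separately.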

The VRM is based on an HMD display with a resolution of Inline graphic @ 75Hz, a diagonal FOV of 100°, a persistence <3 ms and a weight of 440 grams.

Subjects with CVD typically perceive a narrower color spectrum than people with normal color vision; the two spectra are compared in Figure 2. The proposed wearable system aims at improving the color vision of visually impaired people by re-mapping the colors in the observed scene so as to increase the subject's ability to distinguish colors.

Fig. 2. A comparison of the visible color spectrum in common types of color blindness.

To this end, we designed an algorithm that redistributes the colors in the scene over the spectrum actually perceived by the subject. Since the system must work in real time, the computational cost of the algorithm should be as low as possible, which suggests a linear transformation-based approach. The standard RGB color space is not the best choice for the desired re-mapping; a color space that better models human color perception is LMS, which represents the responses of the three types of cones to long, medium, and short wavelengths.

The color remapping process is inspired by the daltonization strategy suggested by Brettel et al. [12], which is mainly based on transformations between the RGB and LMS color spaces by means of linear transformation matrices. Actually, in [12] only protanopia was taken into account for daltonization. In this work, we adopted the transformation matrix reported in [12] for protanopia and, since we are also interested in correcting deuteranopia and tritanopia, we computed the corresponding transformation matrices by means of a regression approach.

The first step of our algorithm is the transformation from the RGB to the LMS color space by means of a linear transformation matrix $T$:

$$[L, M, S]^{\top} = T\,[R, G, B]^{\top}$$

The obtained LMS image undergoes a further linear transformation aimed at simulating the specific color blindness, according to one of the matrices $S_p$ (protanopia), $S_d$ (deuteranopia), or $S_t$ (tritanopia):

$$[L', M', S']^{\top} = S_x\,[L, M, S]^{\top}, \qquad x \in \{p, d, t\}$$

After this, the inverse transformation $T^{-1}$ is applied to obtain the simulated image in the RGB color space, i.e., the original image as perceived by a person with CVD:

$$[R_s, G_s, B_s]^{\top} = T^{-1}\,[L', M', S']^{\top}$$

Starting from the simulated RGB image, we compute the difference with respect to the original one:

$$(R', G', B') = (R - R_s,\; G - G_s,\; B - B_s)$$

The following step applies a shift to $(R', G', B')$ so that the resulting components fall within the visible spectrum. The shift depends on the dichromacy and is expressed by one of the matrices $D_p$ (protanopia), $D_d$ (deuteranopia), or $D_t$ (tritanopia):

$$(r, g, b)^{\top} = D_x\,(R', G', B')^{\top}, \qquad x \in \{p, d, t\}$$

The final step compensates the original components with r, g, b as follows: R″ = R + r, G″ = G + g, and B″ = B + b. Since R″, G″, and B″ could take values outside [0, 255], they undergo a saturation (clipping) process. Figure 3 shows an example for protanopia: the image on the left is the original, the one in the middle is what a subject affected by protanopia perceives, and the one on the right is the perceived image after correction.

Fig. 3. From left to right: the original image, the image perceived by a subject affected by protanopia, and the perceived image after correction.
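The whole pipeline (RGB → LMS, deficiency simulation, back-projection, error shift, compensation, saturation) can be sketched in Python with NumPy. Note the matrices below are NOT the regression-fitted matrices of the paper, which are shown only as images in the original; as an illustration we substitute the widely used Vienot-style RGB↔LMS and deuteranopia-simulation matrices together with a common error-distribution matrix from the daltonization literature.

```python
import numpy as np

# RGB -> LMS matrix T (Vienot et al., 1999) -- illustrative assumption.
T = np.array([[17.8824, 43.5161, 4.1194],
              [3.4557, 27.1554, 3.8671],
              [0.0300, 0.1843, 1.4671]])

# Simulation matrix S_d for deuteranopia in LMS space (same source).
S_D = np.array([[1.0, 0.0, 0.0],
                [0.494207, 0.0, 1.24827],
                [0.0, 0.0, 1.0]])

# Error-distribution ("shift") matrix commonly used in daltonization sketches.
D = np.array([[0.0, 0.0, 0.0],
              [0.7, 1.0, 0.0],
              [0.7, 0.0, 1.0]])

def daltonize(rgb):
    """Correct an HxWx3 RGB image (values in 0-255) for deuteranopia."""
    flat = rgb.reshape(-1, 3).T.astype(float)  # 3 x N column vectors
    lms_sim = S_D @ (T @ flat)                 # RGB -> LMS, simulate deficiency
    rgb_sim = np.linalg.inv(T) @ lms_sim       # back to RGB: perceived image
    error = flat - rgb_sim                     # (R', G', B'): lost information
    corrected = flat + D @ error               # R'' = R + r, G'' = G + g, ...
    return np.clip(corrected.T.reshape(rgb.shape), 0, 255)  # saturation step
```

With these matrices, achromatic (gray) pixels pass through almost unchanged, while colors confused along the deutan axis are shifted toward channels the subject can still discriminate.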

B. Clinical Study for Validation

A sample of 100 patients, examined at the Referral Centre for Inherited Retinal Diseases of the Academic Hospital “University of Campania Luigi Vanvitelli”, mainly affected by Retinitis Pigmentosa [13] and Stargardt Disease [14], and who had previously provided informed consent to participate in clinical experiments, was screened.

The following inclusion criteria were adopted for study eligibility:

  • CVD, as proven by a clinical test;

  • a best corrected visual acuity (BCVA) of at least 20/200;

  • living within a 2-hour drive of the Hospital;

  • willingness to be contacted for clinical experimentation.

A subset of 24 patients was found eligible and agreed to be enrolled. The patients were asked to complete the Farnsworth D15 test in order to evaluate the severity of CVD. Since the latter was adopted as an additional test, unwillingness to perform it did not affect subject inclusion; only two subjects refused to perform the Farnsworth D15. The test score sheets were analyzed according to Bowman [15] and to the moment of inertia method [16]; in particular, the following parameters were computed:

  • Total Color Difference Score (TCDS), i.e., a quantitative index of color confusion, which does not differentiate between different types of defects in individual subjects;

  • Color Confusion Index (CCI), i.e., the ratio between the TCDS of the subject and the reference value. It ranges from 1 (for normal vision people) to 2 or more (for people with CVD);

  • Total Error Score (TES), i.e., a combination of the major and minor radii used in the moment of inertia method to determine the severity of CVD (ranging from about 11 for normal vision to 40 and above for severe CVD);

  • Confusion Angle, i.e., the axis angle producing the minimum moment of inertia that measures the type of defect.

The above parameters were selected since they provide information about both the severity and the type of CVD. A detailed description of the Farnsworth D15 parameters can be found elsewhere [15], [16].

Moreover, since BCVA is a parameter universally used to evaluate visual function in clinical trials [17], in the current study it was measured for each eye with Landolt rings [18].

After diagnostic testing, a computerized and modified version of the Ishihara test was performed in order to establish which of the three matrices (i.e., for protanopia, deuteranopia, or tritanopia) best suited each subject. The Ishihara plates, corrected with each of the three algorithms, were randomly presented to the patient sitting in front of a monitor, and the algorithm enabling the patient to achieve the highest score (i.e., the number of Ishihara plates correctly read) was selected.

Once the best matrix for the patient was chosen, the subject was invited to use the wearable system in a room with natural lighting, whose luminance was measured with a luxmeter. The contrast and light intensity of the see-through system were adjusted, and the subject was invited to explore the room for about 5 minutes. After that, the Ishihara test was repeated for each subject in the usual clinical manner (i.e., with the Ishihara paper book), except for the use of the wearable system, both without correction and with the correction given by the most suitable matrix for that subject. All the study procedures required about one and a half hours per subject.

The outcome measure was the difference between the Ishihara test scores obtained with and without the correction for color vision deficiency.

The research followed the tenets of the Declaration of Helsinki, and informed consent was obtained before participant assessment. Ethics approval was obtained from the Institutional Review Board of the Second University of Naples.

Since this is a pilot study, no formal sample size computation was performed, and a sample of 24 patients was considered satisfactory to test the feasibility, usability, and performance of the system. Differences in the main outcome measurement (i.e., the number of correctly read plates) were evaluated by paired t-test. Moreover, the correlation between the outcome measurement and other features, such as BCVA, the Farnsworth D15 parameters, and light level, was explored by Pearson correlation. A p-value lower than 5% was considered statistically significant.
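As an illustrative sketch (our reconstruction, not the authors' analysis code), the paired t-test can be reproduced with SciPy on the per-subject Ishihara scores later reported in Table 1; the recomputation confirms significance at the 5% level.

```python
import numpy as np
from scipy import stats

# Per-subject Ishihara scores (correctly read plates) from Table 1.
baseline = np.array([5, 5, 7, 5, 5, 5, 5, 6, 4, 2, 10, 6,
                     5, 7, 1, 5, 5, 6, 5, 11, 2, 2, 13, 12])
with_correction = np.array([17, 10, 17, 7, 14, 14, 17, 5, 19, 5, 16, 12,
                            19, 14, 20, 14, 19, 12, 6, 19, 19, 19, 20, 21])

# Paired t-test on the within-subject difference between the two conditions.
t_stat, p_value = stats.ttest_rel(with_correction, baseline)
print(f"mean baseline = {baseline.mean():.1f}, "
      f"mean with correction = {with_correction.mean():.1f}, "
      f"p = {p_value:.3g}")
```

The sample means match the 5.8 ± 3.0 and 14.8 ± 5.0 reported in the Results.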

III. Results

A sample of 24 patients (18 males and 6 females), aged 37.4 ± 14.2 years, accepted to be enrolled in the clinical experimentation to validate the system. All the subjects completed the experiment without reporting any particular problems. Table 1 shows the main clinical and demographic features of the selected sample. BCVA was on average 0.6 ± 0.3 in both eyes. The Farnsworth D15 test, performed in 22 out of 24 patients, showed:

  • 8 (36.4%) patients with diffuse alteration;

  • 7 (31.8%) patients with prevalent CVD in protan axis;

  • 1 (4.5%) patient with prevalent CVD in deutan axis;

  • 5 (22.7%) patients with prevalent CVD in tritan axis;

  • 1 (4.5%) borderline patient.

TABLE 1. Main Clinical and Demographic Features.

ID Age Sex TCDS CCI TES Angle Alteration BCVA (RE) BCVA (LE) Best correction algorithm Ishihara score (baseline) Ishihara score (with correction) Light level
1 27 M 231.5 1.98 26.4 64.6 diffuse 0.9 0.9 Deuteranopia 5 (fail) 17 (pass) 53
2 32 M 381.3 3.26 39.9 52.5 diffuse 0.2 0.2 Deuteranopia 5 (fail) 10 (fail) 40
3 23 M 307.1 2.62 35.7 72.4 diffuse 0.8 0.8 Deuteranopia 7 (fail) 17 (pass) 88
4 36 F 315.1 2.69 33 −88.9 tritan 0.3 0.3 Deuteranopia 5 (fail) 7 (fail) 50
5 42 M 296.1 2.53 29.7 −86.3 tritan 0.2 0.2 Deuteranopia 5 (fail) 14 (borderline) 49
6 66 M 323.6 2.77 33.1 −77.7 tritan 0.8 0.8 Deuteranopia 5 (fail) 14 (borderline) 48
7 62 F 216.7 1.85 23.1 −88.2 tritan 0.9 0.9 Protanopia 5 (fail) 17 (pass) 69
8 67 M 246.5 2.11 27 −68.2 tritan 0.4 0.4 Protanopia 6 (fail) 5 (fail) 30
9 30 M 374.3 3.2 41 12.7 protan 1 1 Deuteranopia 4 (fail) 19 (pass) 100
10 21 F 318.4 2.72 34.6 55.2 diffuse 0.8 0.7 Protanopia 2 (fail) 5 (fail) 40
11 19 M 318.1 2.72 37.4 65 diffuse 0.7 0.7 Deuteranopia 10 (fail) 16 (borderline) 46
12 23 F 299.8 2.56 31.9 3.3 protan 0.4 0.4 Deuteranopia 6 (fail) 12 (fail) 25
13 33 M 415.4 3.55 43 6.7 protan 0.9 1 Deuteranopia 5 (fail) 19 (pass) 59
14 48 M 252.5 2.16 27.9 −9.8 deutan 1 1 Deuteranopia 7 (fail) 14 (borderline) 49
15 23 M 410.4 3.51 43.9 12.1 protan 1 1 Deuteranopia 1 (fail) 20 (pass) 48
16 38 M n/a n/a n/a n/a n/a 0.4 0.3 Deuteranopia 5 (fail) 14 (borderline) n/a
17 39 M n/a n/a n/a n/a n/a 0.4 0.1 Deuteranopia 5 (fail) 19 (pass) n/a
18 24 M 473 4.04 45.5 0.3 protan 0.3 0.2 Deuteranopia 6 (fail) 12 (fail) 41
19 37 M 204.9 1.75 21.6 85.6 diffuse 0.8 0.5 Deuteranopia 5 (fail) 6 (fail) 40
20 56 M 242.1 2.07 29.6 23 protan 0.1 0.1 Tritanopia 11 (fail) 19 (pass) 44
21 30 M 315.2 2.69 35.9 16.9 protan 1 1 Deuteranopia 2 (fail) 19 (pass) 115
22 34 M 145.9 1.25 16.6 76 borderline 1 1 Deuteranopia 2 (fail) 19 (pass) 40
23 49 F 205 1.75 21.8 73 diffuse 0.3 0.7 Protanopia 13 (fail) 20 (pass) 55
24 39 F 295.6 2.53 31.3 82.4 diffuse 0.5 0.7 Protanopia 12 (fail) 21 (pass) 56

TCDS: Total Colour Difference Score; CCI: Color Confusion Index; TES: Total Error Score; BCVA: Best Corrected Visual Acuity; RE: Right Eye; LE: Left Eye; n/a: not available

The Ishihara score (i.e., the number of correctly read plates) significantly improved (p = 0.03) from 5.8 ± 3.0 without correction to 14.8 ± 5.0 with the correction for CVD which best suited the subject. As shown in Figure 4, all but one subject showed an improvement in the number of correctly read plates. In particular, 12 subjects (50%) passed the Ishihara test (i.e., more than 16 correctly read plates) with the system on, while 5 (21%) achieved a borderline result (14-16 correctly read plates), and the remaining 6 patients (25%), in spite of the color vision improvement, continued to fail the Ishihara test.

Fig. 4. Comparison of results of the Ishihara test with and without the proposed correction for color vision deficiency.

Among the three matrices, the one for deuteranopia correction achieved the best results in most subjects (n = 18, 75%). The correlation analysis, reported in Table 2, showed that the improvement in the Ishihara test score was significantly (p < 0.01) correlated with BCVA and light level, but not with the Farnsworth D15 parameters.

TABLE 2. Correlation Between the Improvement in Ishihara Test Score and Clinical Parameters.

Parameter Pearson Correlation p-value
TCDS (Farnsworth D15) 0.122 0.589
CCI (Farnsworth D15) 0.122 0.587
TES (Farnsworth D15) 0.172 0.443
Angle (Farnsworth D15) 0.112 0.619
BCVA RE 0.524 0.009
BCVA LE 0.533 0.007
BCVA in the best-seeing eye 0.524 0.009
Light level 0.562 0.006

TCDS: Total Color Difference Score; CCI: Color Confusion Index; TES: Total Error Score; BCVA: Best Corrected Visual Acuity; RE: Right Eye; LE: Left Eye;

IV. Discussion

In the current study, the design, development, and clinical validation of a novel wearable system for improving color vision in subjects with CVD were presented. The results of the pilot experimental study are encouraging and confirm that a wearable augmented-reality platform can be a feasible and helpful aid for people suffering from CVD. In particular, 50% of the subjects passed the Ishihara color vision test, which is usually adopted in clinical practice to identify the most common CVDs and to disqualify those affected by color blindness from certain occupations [19].

As recently reported in a review on high-tech aids for the visually impaired, one of the challenges in the field is the choice and development of appropriate outcome measurements for the evaluation of the various techniques [20]. Previous studies evaluated the proposed algorithms by subjective assessments [21] or by error metrics designed to quantify differences between the original and corrected images [11]. Only one study adopted a computerized version of the Ishihara test, together with other ad hoc general or specialized tests [4]. We strongly believe that, for this kind of platform, which may be considered a medical device since its aim is to restore a compromised function, a well-designed clinical study, based on a clearly defined outcome, is required for CE marking (the conformity marking within the European Economic Area). We therefore chose the improvement in the Ishihara test score with and without the algorithmic correction as the primary outcome. Since the system is intended for real-life conditions, the experiments were set in a room with natural light and the light level was recorded. Since light level could influence performance, we verified that its correlation with the outcome was no longer significant after adjusting for BCVA. BCVA is therefore the main predictor of the expected improvement in the Ishihara test. Indeed, the six subjects failing the Ishihara test with correction showed a significantly (p = 0.04) lower BCVA (0.46 ± 0.27) than the subjects passing the test with correction (0.79 ± 0.28). In this regard, we underline that it is well known in ophthalmological clinical practice that a worse BCVA implies a lower discrimination ability (independently of CVD) and a less preserved cone function [22]. Consequently, subjects with lower BCVA are expected to benefit less from the proposed correction.

Unlike previous studies, which mainly focused on the most frequent CVDs (i.e., deuteranopia and protanopia), our system is designed to deal with all kinds of CVD. For that reason, we tested the three proposed correction matrices in all subjects and noticed that the matrix for deuteranopia correction achieved the best results in almost all patients, independently of the classification obtained with the Farnsworth D15 test. In particular, even patients classified by the Farnsworth D15 test as probably protanomalous/protanopic, or probably tritanomalous/tritanopic, achieved the best improvement with the correction for deuteranopia. This could be explained by the following considerations: alterations in the deutan axis may be present in almost all patients with CVD, consistently with the findings of epidemiological studies [23]; the classification provided by the Farnsworth D15 test should be confirmed by more complex tests for the assessment of CVD (e.g., the Farnsworth-Munsell 100 Hue Test); and the correction matrix should be adapted to deal with alterations in more than one axis. In this regard, the results of this pilot study are very useful for the design of further clinical experimentation: from the clinical point of view, they showed the need for a more accurate screening of CVD (e.g., the Farnsworth-Munsell 100 Hue Test); from the technical point of view, a future development could be the tuning of the correction matrix to customize it for the specific alteration of each subject.

Finally, the development of the proposed system uncovered some limitations of state-of-the-art devices, such as the limited resolution of the stereoscopic camera (Inline graphic @ 60 fps) and of the HMD (Inline graphic @ 75 Hz), and the unpleasant ghosting effect caused by the slow response time of the LCD mounted on the HMD. Nevertheless, new HMDs based on OLED technology are coming to the market with higher resolution, higher refresh rate, and lighter weight. The good results achieved with our prototype can easily be improved by upgrading the hardware and adding support for the new devices to the developed software.

V. Conclusions

The present study shows the design, development and clinical validation of a wearable system based on augmented reality devices to improve color vision in subjects with any type of CVD. Almost all subjects (96%) showed an improvement in color vision and 50%, with the system on, passed the Ishihara color vision test, which is usually adopted to identify CVD and to disqualify those affected by color blindness from certain occupations.

From the experimental results we observed that, in most cases, subjects show color vision alterations in more than one axis, suggesting that tuning the correction matrix may further improve color vision. In particular, designing new combination policies as well as weighting strategies would be of high interest.

Acknowledgment

The authors would like to thank Dr Carmela Acerra for English editing on the manuscript.

Biographies


Paolo Melillo (M’12) was born in Naples, Italy, in 1985. He received the M.Sc. degree (Hons.) in biomedical engineering and the Ph.D. degree in health management from the University of Naples Federico II, Naples, in 2008 and 2012, respectively, and the Ph.D. degree in bioengineering from the University of Bologne “Alma Mater”, Bologne, Italy, in 2015.

He is currently an Assistant Professor of Applied Medical Technology and Methodology with the Multidisciplinary Department of Medical, Surgical and Dental Sciences, University of Campania Luigi Vanvitelli, Naples. He has authored or co-authored about 50 journal and conference papers in the fields of medical technology, in particular, data mining applied to health information, translational engineering in health, telemedicine, and signal processing.

Dr. Melillo is a member of the IEEE Engineering in Medicine and Biology Society, the Italian Association of Medical and Biological Society, the International Federation of Medical and Biological Society, the Association for Research in Vision and Ophthalmology, and the Italian Mathematical Union.


Daniel Riccio (M’12) was born in Cambridge, U.K., in 1978. He received the Laurea (cum laude) degree and the Ph.D. degree in computer sciences from the University of Salerno, Salerno, Italy, in 2002 and 2006, respectively.

He is currently an Associate Professor with the University of Naples, Federico II. He is also an Associate Researcher with the Istituto di Calcolo e Reti ad Alte Prestazioni. His research interests include biometrics, medical imaging, image processing and indexing, and image and video analytics.

Dr. Riccio has been a member of the Italian Group of Researchers in Pattern Recognition since 2004.


Luigi Di Perna received the Laurea degree in medicine and surgery from the Second University of Naples, Italy, in 2010, and the Residency in Ophthalmology (cum laude) in 2016. From 2014 to 2015, he held an Observership at Cliniques Universitaires Saint Luc, Bruxelles, Belgium.

Mr. Di Perna is a current member of the Association for Research in Vision and Ophthalmology (ARVO). He received the travel grant at the 2016 ARVO Annual Meeting for the paper Activation of Melanocortin Receptors Mc1 and Mc5 Attenuates Retinal Damages in Experimental Diabetic Retinopathy.


Gabriella Sanniti Di Baja received the Laurea (cum laude) degree in physics from the Federico II University of Naples, Italy, in 1973, and the Ph.D. degree (Hons.) from Uppsala University, Sweden, in 2002.

She was with the Institute of Cybernetics E. Caianiello, Italian National Research Council (CNR), from 1973 to 2015, where she has been the Director of Research and is currently an Associate Researcher with the Institute for High Performance Computing and Networking. She has authored over 200 papers in international journals and conference proceedings. Her research activity is in the field of image processing, pattern recognition, and computer vision.

Dr. Sanniti Di Baja is an IAPR Fellow and a Foreign Member of the Royal Society of Sciences, Uppsala, Sweden. She is Co-Editor-in-Chief of Pattern Recognition Letters. She has been the President of the International Association for Pattern Recognition and of the Italian Group of Researchers in Pattern Recognition.


Maurizio De Nino was born in Naples, Italy, in 1972. He received the degree (Hons.) in physics with a specialization in artificial intelligence and data acquisition systems. He was a PlayStation game developer (Puma Street Soccer). He gained twenty years of experience working with scientific centers, such as CIRA, ASI, ESA, CNR, INAF, and DLR, and high-tech companies, such as Finmeccanica, Selex-ES, SSC, Thales, Astrium, Cosine, and TSD, with specific experience in research and development for real-time embedded systems, image processing, computer vision, and virtual reality. He was the Technical Director for over 50 projects, for which he received four awards for his technology contributions to Rosetta, Venus Express, Photon Capsule, and the Maser Sounding Rocket. He has written dozens of articles on image processing for specialized journals and presented many papers at international congresses.

In the last six years, he has been with Digitalcomoedia as the CTO for virtual and augmented reality system development, coordinating a team of developers in close collaboration with the main University of Naples, where he has mentored many theses in image processing and virtual reality applications.

He is the CTO of four projects, including StopEmotion, Virtual Training Environment, iEngine, and FCI Image SCOE.


Settimio Rossi was born in Pompei, Italy, in 1978. He received the degree (Hons.) in medicine and surgery from the Second University of Naples in 2002, and the Residency in Ophthalmology from the School of Medicine, Second University of Naples, in 2006.

He is currently an Associate Professor of Ophthalmology with the Multidisciplinary Department of Medical, Surgical and Dental Sciences, University of Campania Luigi Vanvitelli, Naples, Italy. He has authored or co-authored about 60 journal and conference papers in the field of Ophthalmology.

Prof. Rossi was a member of the Association for Research in Vision and Ophthalmology. He is a member of the Italian Society of Ophthalmology.


Francesco Testa was born in Caserta, Italy, in 1970. He received the degree (Hons.) in medicine and surgery from the Second University of Naples, Naples, Italy, in 1996, and the Residency in Ophthalmology from the School of Medicine, Second University of Naples, in 2000, and the Ph.D. degree in biochemistry and medical biotechnology from the University of Naples Federico II, Naples, in 2004.

He is currently an Associate Professor of Ophthalmology with the Multidisciplinary Department of Medical, Surgical and Dental Sciences, University of Campania Luigi Vanvitelli, Naples. He has authored or co-authored about 70 journal and conference papers in the field of Ophthalmology.

Prof. Testa is a member of the Association for Research in Vision and Ophthalmology and the Italian Society of Ophthalmology.


Francesca Simonelli was born in Nola, Italy, in 1959. She received the degree (Hons.) in medicine and surgery from the University of Naples Federico II, Naples, Italy, in 1983, and the Residency in Ophthalmology from the School of Medicine, University of Naples Federico II, in 1987.

She is currently a Full Professor of Ophthalmology with the Multidisciplinary Department of Medical, Surgical and Dental Sciences, University of Campania Luigi Vanvitelli, Naples. She has authored or coauthored about 100 journal and conference papers in the field of Ophthalmology.

Prof. Simonelli is a member of the Association for Research in Vision and Ophthalmology, the President of the Italian Society of Ophthalmologic Genetics, a Member of the Eye Working Group of the Telethon Institute of Genetics and Medicine, a Member of the National Fighting Blindness Committee of the Italian Ministry of Health, a Member of the Italian Society of Ophthalmology, and the President of the Scientific Committee of Retina Italia Onlus.


Maria Frucci received the Ph.D. (cum laude) degree in physics from the University Federico II of Naples, Italy, in 1983. From 1984 to 1987, she was with the Centre for Informatics and Industrial Automation Research, Portici, Naples, Italy, as a Research Fellow and then a Senior Researcher. She was active in the area of natural language, expert systems, and shape analysis. From 1988 to 2014, she was a Researcher with the Institute of Cybernetics E. Caianiello, Italian National Research Council (CNR). Since 2003, she has been a Senior Researcher. In 2014, she was with the Institute for High-Performance Computing and Networking, CNR. From 1999 to 2005, she was a Lecturer for the Computer Science degree course of the University Federico II of Naples, teaching algorithms and data structures. She has authored over 90 papers on different topics, such as natural language, perception, representation, image analysis, segmentation, biometrics, color image processing, and medical imaging. Her research activity is mainly in the field of image processing, computer vision, and pattern recognition.

Funding Statement

This work was supported by the Regione Campania under the research project Computer Improved VISus.

References

[1] Wong B., “Points of view: Color blindness,” Nature Methods, vol. 8, p. 441, May 2011.
[2] Wandell B. A., Foundations of Vision. Sunderland, MA, USA: Sinauer Associates, 1995.
[3] Chen X. and Lu Z., “Method and eyeglasses for rectifying color blindness,” U.S. Patent 5 369 453 A, Nov. 29, 1994.
[4] Tanuwidjaja E. et al., “Chroma: A wearable augmented-reality solution for color blindness,” in Proc. ACM Int. Joint Conf. Pervasive Ubiquitous Comput., Sep. 2014, pp. 799–810.
[5] Widmer A., Schaer R., Markonis D., and Müller H., “Facilitating medical information search using Google Glass connected to a content-based medical image retrieval system,” in Proc. 36th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., Aug. 2014, pp. 4507–4510.
[6] Abushakra A. and Faezipour M., “Augmenting breath regulation using a mobile driven virtual reality therapy framework,” IEEE J. Biomed. Health Informat., vol. 18, no. 3, pp. 746–752, May 2014.
[7] Dowling A. V., Barzilay O., Lombrozo Y., and Wolf A., “An adaptive home-use robotic rehabilitation system for the upper body,” IEEE J. Transl. Eng. Health Med., vol. 2, 2014, Art. no. 2100310.
[8] Subbian V., Ratcliff J. J., Meunier J. M., Korfhagen J. J., Beyette F. R., and Shaw G. J., “Integration of new technology for research in the emergency department: Feasibility of deploying a robotic assessment tool for mild traumatic brain injury evaluation,” IEEE J. Transl. Eng. Health Med., vol. 3, 2015, Art. no. 3200109.
[9] Schaer R. et al., “Live ECG readings using Google Glass in emergency situations,” in Proc. 37th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBC), Aug. 2015, pp. 315–318.
[10] Fortmeier D., Mastmeyer A., Schroder J., and Handels H., “A virtual reality system for PTCD simulation using direct visuo-haptic rendering of partially segmented image data,” IEEE J. Biomed. Health Informat., vol. 20, no. 1, pp. 355–366, Jan. 2016.
[11] Machado G. M. and Oliveira M. M., “Real-time temporal-coherent color contrast enhancement for dichromats,” Comput. Graph. Forum, vol. 29, no. 3, pp. 933–942, Jun. 2010.
[12] Brettel H., Viénot F., and Mollon J. D., “Computerized simulation of color appearance for dichromats,” J. Opt. Soc. Amer. A, vol. 14, no. 10, pp. 2647–2655, 1997.
[13] Testa F. et al., “Macular abnormalities in Italian patients with retinitis pigmentosa,” Brit. J. Ophthalmol., vol. 98, pp. 946–950, Feb. 2014.
[14] Rossi S. et al., “Subretinal fibrosis in Stargardt’s disease with fundus flavimaculatus and ABCA4 gene mutation,” Case Rep. Ophthalmol., vol. 3, pp. 410–417, Dec. 2012.
[15] Bowman K. J., “A method for quantitative scoring of the Farnsworth panel D-15,” Acta Ophthalmol., vol. 60, no. 6, pp. 907–916, Dec. 1982.
[16] Vingrys A. J. and King-Smith P. E., “A quantitative scoring technique for panel tests of color vision,” Invest. Ophthalmol. Vis. Sci., vol. 29, pp. 50–63, Jan. 1988.
[17] Fishman G. A. et al., “Outcome measures and their application in clinical trials for retinal degenerative diseases: Outline, review, and perspective,” Retina, vol. 25, pp. 772–777, Sep. 2005.
[18] Kniestedt C. and Stamper R. L., “Visual acuity and its measurement,” Ophthalmol. Clin. North Amer., vol. 16, pp. 155–170, Jun. 2003.
[19] Almagambetov A., Velipasalar S., and Baitassova A., “Mobile standards-based traffic light detection in assistive devices for individuals with color-vision deficiency,” IEEE Trans. Intell. Transp. Syst., vol. 16, no. 3, pp. 1305–1320, Jun. 2015.
[20] Moshtael H., Aslam T., Underwood I., and Dhillon B., “High tech aids low vision: A review of image processing for the visually impaired,” Transl. Vis. Sci. Technol., vol. 4, no. 4, p. 6, Aug. 2015.
[21] Yang S., Ro Y. M., Nam J., Hong J., Choi S. Y., and Lee J.-H., “Improving visual accessibility for color vision deficiency based on MPEG-21,” ETRI J., vol. 26, no. 3, pp. 195–202, 2004.
[22] Krill A., Deutman A., and Fishman M., “The cone degenerations,” Documenta Ophthalmol., vol. 35, no. 1, pp. 1–80, Jan. 1973.
[23] Simunovic M. P., “Color vision deficiency,” Eye (Lond), vol. 24, no. 5, pp. 747–755, May 2010.

