Telemedicine Journal and e-Health. 2013 Aug;19(8):591–596. doi: 10.1089/tmj.2012.0191

Autonomy Versus Automation: Perceptions of Nonmydriatic Camera Choice for Teleretinal Screening in an Urban Safety Net Clinic

Omolola Ogunyemi, Erin Moran, Lauren Patty Daskivich, Sheba George, Senait Teklehaimanot, Ramarao Ilapakurthi, Kevin Lopez, Keith Norris
PMCID: PMC3719439  PMID: 23763609

Abstract

Objective: Teleretinal screening with nonmydriatic cameras has been presented as a means of increasing the number of patients assessed for diabetic retinopathy in urban safety net clinics. It has been hypothesized that automated nonmydriatic cameras may improve screening rates by reducing the learning curve for camera use. In this article, we examine the impact of introducing automated nonmydriatic cameras to urban safety net clinics whose photographers had previously used manual cameras. Materials and Methods: We evaluated the impact of manual and automated digital nonmydriatic cameras on teleretinal screening using a quantitative analysis of readers' image quality ratings as well as a qualitative analysis, through in-depth interviews, of photographers' experiences. Results: With the manual camera, 68% of images were rated “adequate” or better, including 24% rated “good” and 20% rated “excellent.” With the automated camera, 61% were rated “adequate” or better, including 9% rated “good” and 0% rated “excellent.” Photographers expressed frustration with their inability to control image-taking settings with the automated camera, which led to unexpected delays. Conclusions: For safety net clinics in which medical assistants are already trained to take photographs for diabetic retinopathy screening with a manual camera, the introduction of automated cameras may lead to frustration and paradoxically contribute to increased patient wait times. When photographers have achieved a high degree of aptitude with manual cameras and value the control they have over camera features, the introduction of automated cameras should be approached with caution and may require extensive training to increase user acceptability.

Key words: ophthalmology, telemedicine, technology

Introduction

In the United States, studies have shown that although roughly 60% of diabetic patients receive an annual eye examination nationally,1–5 the screening rate may be lower than 25% in urban safety net clinics.6–8 Service Planning Area 6, one of eight service planning areas in Los Angeles County and the area in which Charles Drew University is located, is a prototypical inner-city community whose population is at very high risk of vision loss due to diabetes. This service area has the lowest concentration of physicians, the lowest socioeconomic status, the highest concentration of racial/ethnic minorities (95% Hispanic and African American), and the worst diabetes-related health outcomes in Los Angeles County.9

Charles Drew University has an ongoing relationship with a coalition of independent community clinics (federally qualified health centers10 and federally qualified health center look-alikes) that have formed a specialty care-focused research network. We have previously reported on a study conducted in Los Angeles with this coalition of clinics that examined the use of telemedicine to screen patients for diabetic retinopathy in six South Los Angeles safety net clinics.11,12

A challenge observed in implementing the teleretinal screening program was the need to train medical assistants, who already have multiple assigned clinical duties, as photographers. In several clinics, this challenge was exacerbated by personnel and other resource limitations that prevented teleretinal screening from being integrated into the doctor–patient visit. Photographers in those clinics were available for image taking on only a few specified days a week, increasing the likelihood that they would forget how to use certain manual camera features. Capturing acceptable fundus photographs with a manual digital nonmydriatic camera requires facility with these features. In one of the study clinics, use of the manual camera was delayed for several months while the photographer underwent multiple retrainings to achieve proficiency with it.12

Study staff and clinic management sought to address problems with image quality arising from photographer difficulty with achieving proficiency on the manual cameras. Using automated digital nonmydriatic cameras that reduce the training burden for photographers appeared to be a promising means of tackling this issue.

In this article, we examine the impact of introducing a new type of automated digital nonmydriatic camera that required minimal photographer training into some of the clinics participating in the teleretinal screening study. Although the goal of the original study was not to compare the use of automated digital retinal cameras with the use of manual cameras, our secondary findings on this issue may be of help to other researchers undertaking teleretinal screening projects in similar settings.

Materials and Methods

Institutional Review Board approval for the study was obtained from Charles Drew University of Medicine and Science. Methods used to evaluate the impact of manual and automated digital nonmydriatic cameras on teleretinal screening included a quantitative analysis of readers' ratings of image quality at one site (Clinic A) and a qualitative analysis, through in-depth interviews, of photographers' experiences with both types of cameras at two clinics (Clinics A and B).

Photographer Training

Photographers for the teleretinal screening study were medical assistants already employed at the clinics who took on additional responsibilities associated with teleretinal screening. We used the Eye Picture Archive Communication System (EyePACS)13,14 to store retinal images that were graded by ophthalmologist readers. In order to be certified as photographers, the medical assistants received training, through the study's arrangement with EyePACS, on the use of the digital nonmydriatic camera. To successfully complete the training, photographers had to upload 10 sets of retinal images (using their colleagues as practice subjects) and have all 10 sets graded as satisfactory by EyePACS staff.

Clinic Settings for Assessment of Automated Versus Manual Cameras

At one clinic, Clinic A, three photographers had received training on how to use a Canon (Tokyo, Japan) CR-1 Mark I camera (hereafter referred to as the manual camera), which they used to capture images between September 1, 2010 and December 16, 2010. They later received training on how to use the CenterVue (Santa Clara, CA) DRS camera (hereafter referred to as the automated camera), which they used to capture images between February 18, 2011 and June 16, 2011.

The two photographers at another clinic, Clinic B, were first trained on the manual camera; when an automated camera was independently purchased for use at a senior citizens center, they were trained on that camera as well.

Image Grading Process

Readers for the study were three board-certified ophthalmologists who had prior experience with assessing patient images using teleretinal screening. The readers viewed the cases (i.e., retinal image sets) uploaded by photographers into EyePACS. In the general case list queue, readers could see the case number, which photographer had uploaded the case, whether the case had been read (and if it had, when and by whom), the clinic where the images were taken, and the date of upload.

To assess a case, readers clicked on the case number to see the patient data uploaded by the medical assistant/photographer, along with the fundus images.

Each reader viewed the three or four images of each eye (six to eight in total) by clicking on each image. The standard EyePACS image protocol provides four views of each eye: an external eye photograph and macula, nasal, and temporal fields of the retina. Clicking on an image enlarges it for analysis. For the automated camera, the external eye photograph was not available because the camera does not provide that view.

After the reader reviewed the images, he or she completed a series of checkboxes indicating the presence or absence of microaneurysms, retinal hemorrhages with or without microaneurysms, cotton wool spots, intraretinal microvascular abnormalities, venous beading, new vessels (on the disc and elsewhere), fibrous proliferation, vitreous or preretinal hemorrhage, and hard exudates.

An algorithm in the software, based on retinopathy guidelines established by the Early Treatment Diabetic Retinopathy Study modification of the Airlie House classification15 and the international classification of diabetic retinopathy developed by the International Council of Ophthalmology, uses these findings to compute an overall level of severity of retinopathy (i.e., no apparent diabetic retinopathy, mild/moderate/severe nonproliferative diabetic retinopathy, proliferative diabetic retinopathy) and the presence/absence of diabetic macular edema. The reader could also manually change/enter these fields as deemed appropriate.
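The article does not publish the exact decision rules, but the idea of computing an overall severity level from the findings checkboxes can be illustrated with a brief sketch. The Python below is a simplified, hypothetical reconstruction loosely following the international clinical classification; the actual EyePACS algorithm applies the full ETDRS/Airlie House criteria, which also weigh lesion extent and quadrant information that simple presence/absence flags cannot capture.

```python
# Hypothetical sketch of a findings-to-severity mapping, loosely following the
# international clinical classification of diabetic retinopathy (DR).
# The real EyePACS algorithm applies the full ETDRS/Airlie House criteria,
# which also consider lesion extent by quadrant; this is an illustration only.

from dataclasses import dataclass

@dataclass
class Findings:
    microaneurysms: bool = False
    retinal_hemorrhages: bool = False
    cotton_wool_spots: bool = False
    irma: bool = False                 # intraretinal microvascular abnormalities
    venous_beading: bool = False
    new_vessels: bool = False          # on the disc or elsewhere
    fibrous_proliferation: bool = False
    vitreous_or_preretinal_hemorrhage: bool = False
    hard_exudates: bool = False

def grade_retinopathy(f: Findings) -> str:
    # Any neovascular or late-stage sign implies proliferative DR.
    if f.new_vessels or f.fibrous_proliferation or f.vitreous_or_preretinal_hemorrhage:
        return "proliferative diabetic retinopathy"
    # Markers of severe nonproliferative DR (simplified: presence only,
    # not the quadrant-based 4-2-1 rule used in practice).
    if f.venous_beading or f.irma:
        return "severe nonproliferative diabetic retinopathy"
    # More than microaneurysms alone, but no severe markers.
    if f.retinal_hemorrhages or f.cotton_wool_spots or f.hard_exudates:
        return "moderate nonproliferative diabetic retinopathy"
    if f.microaneurysms:
        return "mild nonproliferative diabetic retinopathy"
    return "no apparent diabetic retinopathy"

# Example: microaneurysms plus hard exudates grades as moderate NPDR.
print(grade_retinopathy(Findings(microaneurysms=True, hard_exudates=True)))
```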

Image Quality Rating System

Using another feature of EyePACS, readers were also able to grade the quality of the fundus images associated with each case. This feedback to the photographers was intended to help improve photographer skills and enhance the quality of the images. Readers chose the image grade from the following seven EyePACS options:

  1. Insufficient for any interpretation

  2. Insufficient for full interpretation

  3. Adequate

  4. Good

  5. Excellent

  6. Other (specify in comments)

  7. N/A (not applicable)

In addition to choosing one of these options, some readers also provided specific feedback to photographers in the comments section of the case interface. Monthly reports aggregated the image grades and rated the proficiency of each photographer and clinic on a scale of 1–4, with 4 being excellent, 3 good, 2 average, and 1 substandard. Clinics aimed for an average rating of 2.5 or greater.
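The mapping from the seven image grades to the 1–4 proficiency scale is not specified in the article, so the following sketch assumes a plausible mapping (excellent = 4, good = 3, adequate = 2, either insufficient grade = 1, with “Other” and “N/A” excluded) purely for illustration.

```python
# Hypothetical sketch of the monthly proficiency aggregation. The mapping from
# EyePACS image grades to the 1-4 scale is an assumption for illustration;
# "Other" and "N/A" grades are excluded from the average here.

from statistics import mean

GRADE_TO_SCORE = {
    "Excellent": 4,
    "Good": 3,
    "Adequate": 2,
    "Insufficient for full interpretation": 1,
    "Insufficient for any interpretation": 1,
}

def monthly_proficiency(image_grades: list[str]) -> float:
    scores = [GRADE_TO_SCORE[g] for g in image_grades if g in GRADE_TO_SCORE]
    return round(mean(scores), 2)

grades = ["Good", "Adequate", "Excellent", "Insufficient for full interpretation"]
rating = monthly_proficiency(grades)
print(rating, "meets target" if rating >= 2.5 else "below target")  # 2.5 meets target
```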

Qualitative Methods

Data collection

This study used the standard qualitative methods of focus group techniques, semistructured interviews, and participant-observation to assess the acceptability of teleretinal screening at six South Los Angeles safety net clinics. Convenience sampling was used to select a subset of photographers and other staff as interview participants, based on their availability and experience with teleretinal screening. Study staff interviewed employees and service providers, including six chief medical officers, four midlevel staff, five medical assistant/photographers, and three ophthalmologists. In addition, focus groups were conducted with 42 patients who had received screening in the participating clinics.

A script guided both the focus groups and in-depth interviews, with questions arranged by category to facilitate content analysis. Interview questions were developed from a preliminary literature review and were open-ended to allow subjects the leeway to describe, in their own words, the most meaningful aspects of teleretinal screening. Interview questions focused on staff satisfaction with teleretinal screening technology, workflow processes, and communication challenges. Questions also addressed the clinic patient population, the personal histories and specific clinical role(s) of the interviewee, the role of teleretinal screening in an under-resourced setting, and issues related to the implementation of teleretinal screening. Interviews lasted approximately 60–90 min. With participants' consent, all sessions were audiotaped, and the tapes were transcribed. Each subject received $75 in remuneration upon completing the interview.

Data analysis

Using ATLAS.ti software (ATLAS.ti Scientific Software Development GmbH, Berlin, Germany) to help manage and analyze the data, interview transcripts were manually coded and indexed to develop analytical categories, based on qualitatively informed and modified grounded theory techniques of analysis. Codes were initially derived inductively, using a collaborative open coding process. The unit of analysis was the incident, and incident codes were accepted according to “fit” (i.e., whether the codes adequately captured the incidents they represented). Codes were then combined into categories, a codebook was developed, and the transcripts were coded. Constant comparison within and across categories allowed researchers to check codes against the rest of the data to establish categories that reflected the nuances of the data, key themes, and theoretical insights. The group then reevaluated and refined the codes and recoded the transcripts a minimum of five more times, until redundancy was achieved. Through this iterative process of analysis, hypotheses were organically developed and tested.

Scientific rigor was strengthened through the use of common procedural guidelines for qualitative studies.16 Credibility of the results was supported through the use of data from six focus groups with carefully chosen participants and through a team with diverse research expertise and backgrounds. An iterative mode of data analysis by multiple team members increased dependability of the findings. Confirmability of the results was reinforced through a detailed audit trail. Transferability of the findings is made possible through published description of the methods and findings.

Results

Prior to the purchase of the automated camera, photographers and their supervisors had been concerned about the quality of images being taken with the manual camera. These concerns were prompted by the readers' direct qualitative feedback to photographers via comments entered into EyePACS, as well as monthly reports generated by the EyePACS management team. Chief medical officers and supervisors believed that low image ratings were the result of poor photographer aptitude and technical issues with the manual camera. Other factors influencing image quality that were unrelated to the camera included room darkness, patient compliance, and eye dilation. In this section, we present readers' ratings of image quality for the manual and automated cameras as well as a representative sample of clinic staff perceptions of the cameras from in-depth interviews.

Readers' Ratings of Images Taken with Manual and Automated Cameras

The ophthalmologist readers' ratings of photographers in Clinic A for the 4-month period in which the manual camera was in use and the 5-month period in which the automated camera was in use are presented in Table 1. These ratings were aggregated from the EyePACS software and included (a) ophthalmologist reader ratings of all images taken at Clinic A with the manual camera between September 1, 2010 and December 16, 2010 and (b) ratings of all images taken at Clinic A with the automated camera between February 18, 2011 and June 16, 2011.

Table 1.

Readers' Ratings of Camera Image Quality for Clinic A

 
                                          MANUAL CAMERA                        AUTOMATED CAMERA
                                          (SEPTEMBER 1–DECEMBER 16, 2010)      (FEBRUARY 18–JUNE 16, 2011)
READERS' RATING                           IMAGES RATED   % OF RATED            IMAGES RATED   % OF RATED
Excellent                                 34             20%                   0              0%
Good                                      40             24%                   13             9%
Adequate                                  38             22%                   74             52%
Insufficient for full interpretation      53             31%                   42             30%
Insufficient for any interpretation       5              3%                    13             9%
Other                                     0              0%                    0              0%
Total                                     170            100%                  142            100%
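As a consistency check, the percentage columns of Table 1 follow directly from the raw counts; the short sketch below (counts copied from the table) reproduces the rounded percentages.

```python
# Reproducing Table 1's percentage columns from the raw counts in the table.
manual = {"Excellent": 34, "Good": 40, "Adequate": 38,
          "Insufficient for full interpretation": 53,
          "Insufficient for any interpretation": 5, "Other": 0}
automated = {"Excellent": 0, "Good": 13, "Adequate": 74,
             "Insufficient for full interpretation": 42,
             "Insufficient for any interpretation": 13, "Other": 0}

for name, counts in (("Manual", manual), ("Automated", automated)):
    total = sum(counts.values())  # 170 manual, 142 automated
    print(f"{name} (n={total})")
    for rating, n in counts.items():
        print(f"  {rating}: {n} ({n / total:.0%})")
```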

Clinic Staff Perceptions of Automated Versus Manual Cameras from In-Depth Interviews

Here we present the findings of the data analysis from the in-depth interviews, particularly from segments relevant to the comparison of automated and manual camera use. Both the photographers and the chief medical officer interviewed had anticipated that a new automated camera would reduce photographer-dependent image quality problems and improve image quality ratings. The chief medical officer put his hopes this way:

Basically, it requires a [patient] to put their…face in the hole, and the camera does the rest. [The photographer] just press[es] the button…So probably that will eliminate a lot of the operator-dependent issues…let me show you the latest report from EyePACS. So as you can see, [the quality assessment is] not very good. So I'm hoping the new camera will eliminate this problem.

Following a period of use, photographers unexpectedly reported preferring the manual camera to the automated camera. The primary reasons reported for this preference were that staff felt that the automated camera resulted in loss of control over camera features and that this loss caused (1) time inefficiency and (2) poorer image quality (Table 2).

Table 2.

Sample Responses About the Automated Camera

(a) Loss of control over camera features

The [automated] camera, it does everything by itself. All we do is put in information and push the start button. If the image quality comes out bad, we have to wait until it finishes taking all the pictures. And then we delete the picture that's not good. And we do it again. With the [manual] camera, I feel I have more control of the camera. I can move it the way I need it to be moved. I can snap the picture as soon as the picture is ready to be taken. Sometimes the patients blink, move their eyes a lot. I just feel the [manual] camera, I can have more control of it…I don't know about anybody else but I like to have control of the camera. I just think the control of the camera has a lot to do with the pictures. Because the [automated] camera, it kinda comes out different. And we have to go back and do the picture again. And we have the patients waiting outside.
—Medical assistant/photographer

(b) Time inefficiency

One particular patient, she would blink a lot. She would move her pupils a lot. It was hard for the [automated] camera to read the eye. Either the head was too far back, was too much to the right. I actually had to do that exam four times. The exam—since we have it calculated to wait for 20 seconds after the picture was taken, so the picture's taken 20 seconds later. It tries to find the eye and takes a picture again—every 20 seconds for six pictures…It's just time-wise, hard to maneuver…I've got to repeat the process a couple of times and it does delay the time-frame that we have.
—Medical assistant/photographer

(c) Poorer image quality

[My image ratings would be] “insufficient for full interpretation”; “repeat retinal exam.” It was hard to take and focus the camera [was bad]. Most of my reports used to say that. [With the manual camera] they're getting a little bit better.
—Medical assistant/photographer

Photographers felt that the automated camera did not allow them sufficient control over camera features. Specifically, they complained that it limited their ability to adjust focus and exposure and to maneuver the camera. One photographer felt that these limitations were reflected in lower image quality ratings. Another similarly felt that the camera operator sacrifices control over camera features for simplicity of use, negatively impacting the quality of the images.

Photographers felt that their diminished control over image-taking features on the automated camera often made it necessary to retake images, because patients were not able to hold their eyes still long enough for the camera to complete an entire set of photographs without pause. The resulting images were obscured by eyelashes or showed the sclera instead of the fundus. One photographer estimated that he had to retake images about 70% of the time with the automated camera. Furthermore, photographers pointed out that retaking images multiple times protracted the eye exam, increasing the average time it took to screen a patient. One photographer estimated that using the automated camera had doubled the average length of a screening from 5 to 10 min. One medical assistant/photographer compared the manual camera with the automated camera as follows:

[The manual camera is] more accessible and you're able to maneuver it manually according to the picture quality, the dimness, the focus. Just overall, it's a different experience. The new one, it's a good camera, it's a great camera, good technology but the only thing is that it impairs me. [When] I see that there is [sic] a couple of issues, just the fact that we don't get to take the pictures itself, we don't get to maneuver it. It does everything by itself which is a time saver and it's great but just I've noticed that overall, our picture quality, has gone lower, the percentile, from our old camera that we used to have…Just the fact that we don't get to maneuver the camera as we would like and we don't get to focus it.

In a safety net setting, such as the clinics in this study, increasing the length of an exam has a significant impact on patient satisfaction and clinic workflow. Photographers are keenly aware that patients in this setting often wait hours for an appointment, and an increase in wait or exam time is viewed in an unfavorable light. One photographer described his satisfaction at returning to the manual camera. This individual felt that the manual camera returned control to the operator, ultimately speeding up the exam. He said,

I like [the manual] camera more than the [automated] one since we can see everything better. The pictures—we don't have to wait until the camera's finished doing all of them. If the picture is bad, we can retake it right there and then…I feel more comfortable with [the manual camera] and using it more often…But I think I prefer the [manual] camera, using it more than the [automated] one since we get more and more patients here. We wanna get the patients in and out.

All medical assistants/photographers who were interviewed specifically regarding their perceptions of the automated camera responded that, although they appreciated the technology of the automated camera, they preferred using the manual camera because it gave them more control over the images. This, they felt, resulted in faster screening appointments and better quality images (a perception that is supported by the quantitative data presented above).

Discussion

In the course of a broader study on teleretinal screening in urban, underserved settings, we had the opportunity to examine ophthalmologist readers' and clinic staff's assessments of automated and manual cameras for diabetic retinopathy screening. Although clinic staff at two clinics had anticipated that the automated cameras would provide improved image quality and decrease the amount of time required for taking retinal images for diabetic retinopathy screening, the readers' ratings of image quality were higher with the manual camera than with the automated camera: 68% of images were rated “adequate” or better with the manual camera, whereas 61% received an “adequate” or better rating with the automated camera. In addition, while just 3% of images were rated “insufficient for any interpretation” with the manual camera, a full 9% of images taken with the automated camera were rated by readers as being “insufficient for any interpretation.”

The goal of decreasing the time spent on image taking was also unmet because, with the automated camera, problems with images meant that the whole process of image taking had to be restarted from scratch. Photographers expressed dissatisfaction at their inability to control image-taking settings with the automated camera. Prior to the introduction of the automated camera, photographers had expressed pride in receiving image quality ratings of “good” or “excellent,” with 44% of images receiving such ratings. With the automated camera, although just over half of the images (52%) were rated “adequate,” only 9% were rated “good,” and none was rated “excellent.” This was an additional source of dissatisfaction for photographers.

The goals of introducing the automated camera into the urban safety net setting were to decrease the time and frustration experienced with training and retraining photographers on the manual cameras, to reduce the time spent on image taking via automation, and to improve the quality of the images taken. In the particular clinics that we assessed, these goals were not met. It is possible that in the broader context of training busy medical assistants in safety net clinics to be photographers, automated cameras would produce a higher percentage of images rated adequate for reader assessment, regardless of the level of potential photographer skill with a manual camera. However, in the clinic settings we examined, the photographers had achieved a high degree of aptitude with the manual cameras and had come to value the control they had over camera features, so the automated cameras' chief advantage, a reduced training burden, no longer applied. In other words, automated cameras may indeed meet the goal of decreasing the time and frustration that can occur with photographer training by reducing the emphasis on skill, but these cameras may be more appropriate for a safety net clinic in which the medical assistants being trained to take photographs for diabetic retinopathy screening have no prior experience with image taking. This remains an area for further study.

Acknowledgments

This project was supported by the National Institutes of Health under grant number U54 MD007598-01S2 (formerly U54 RR026138-01S2).

Disclosure Statement

No competing financial interests exist.

References

1. Brechner RJ, Cowie CC, Howie LJ, Herman WH, Will JC, Harris MI. Ophthalmic examination among adults with diagnosed diabetes mellitus. JAMA 1993;270:1714–1718.
2. Cavallerano AA, Conlin PR. Teleretinal imaging to screen for diabetic retinopathy in the Veterans Health Administration. J Diabetes Sci Technol 2008;2:33–39. doi: 10.1177/193229680800200106.
3. Moss SE, Klein R, Klein BE. Factors associated with having eye examinations in persons with diabetes. Arch Fam Med 1995;4:529–534. doi: 10.1001/archfami.4.6.529.
4. Orr P, Barron Y, Schein OD, Rubin GS, West SK. Eye care utilization by older Americans: The SEE Project. Salisbury Eye Evaluation. Ophthalmology 1999;106:904–909. doi: 10.1016/s0161-6420(99)00508-4.
5. Schoenfeld ER, Greene JM, Wu SY, Leske MC. Patterns of adherence to diabetes vision care guidelines: Baseline findings from the Diabetic Retinopathy Awareness Program. Ophthalmology 2001;108:563–571. doi: 10.1016/s0161-6420(00)00600-x.
6. Deeb LC, Pettijohn FP, Shirah JK, Freeman G. Interventions among primary-care practitioners to improve care for preventable complications of diabetes. Diabetes Care 1988;11:275–280. doi: 10.2337/diacare.11.3.275.
7. Payne TH, Gabella BA, Michael SL, Young WF, Pickard J, Hofeldt FD, et al. Preventive care in diabetes mellitus. Current practice in urban health-care system. Diabetes Care 1989;12:745–747. doi: 10.2337/diacare.12.10.745.
8. Wylie-Rosett J, Basch C, Walker EA, Zybert P, Shamoon H, Engel S, et al. Ophthalmic referral rates for patients with diabetes in primary-care clinics located in disadvantaged urban communities. J Diabetes Complications 1995;9:49–54. doi: 10.1016/1056-8727(94)00005-9.
9. Los Angeles County Department of Health Services. Key indicators of health by service planning area. 2003. www.lapublichealth.org/wwwfiles/ph/hae/ha/keyhealth.pdf. Last updated June 24, 2008.
10. Takach M, Osius E. Federally qualified health centers and state health policy: A primer for California. Oakland, CA: California Healthcare Foundation, 2009.
11. Fish A, George S, Terrien E, Eccles A, Baker R, Ogunyemi O. Workflow concerns and workarounds of readers in an urban safety net teleretinal screening study. AMIA Annu Symp Proc 2011;2011:417–426.
12. Ogunyemi O, Terrien E, Eccles A, Patty L, George S, Fish A, et al. Teleretinal screening for diabetic retinopathy in six Los Angeles urban safety-net clinics: Initial findings. AMIA Annu Symp Proc 2011;2011:1027–1035.
13. Cuadros J, Bresnick G. EyePACS: An adaptable telemedicine system for diabetic retinopathy screening. J Diabetes Sci Technol 2009;3:509–516. doi: 10.1177/193229680900300315.
14. The EyePACS Handbook. 2009. http://eyepacs.com/documents/EyePACS_Handbook_FINAL_3_9_09.pdf. Last updated June 6, 2012.
15. Early Treatment Diabetic Retinopathy Study Research Group. Grading diabetic retinopathy from stereoscopic color fundus photographs—An extension of the modified Airlie House classification. ETDRS report number 10. Ophthalmology 1991;98(5 Suppl):786–806.
16. Mays N, Pope C. Qualitative research in health care: Assessing quality in qualitative research. BMJ 2000;320:50. doi: 10.1136/bmj.320.7226.50.
