Abstract
Behavioral and neurophysiological studies strongly suggest that visual orienting reflects the integration of sensory, motor, and motivational variables with behavioral goals. Relatively little is known, however, regarding the goals that govern visual orienting of animals in their natural environments. Field observations suggest that most nonhuman primates orient to features of their natural environments whose salience is dictated by the visual demands of foraging, locomotion and social interaction. This hypothesis is difficult to test quantitatively, however, in part because accurate gaze-tracking technology has not been employed in field studies. We here report the implementation of a new, telemetric, infrared-video gaze-tracker (ISCAN) to measure visual orienting in freely-moving, socially-housed prosimian primates (Lemur catta). Two male lemurs tolerated the system at approximately ¼ body weight, permitting successful measurements of gaze behavior during spontaneous locomotion through both terrestrial and arboreal landscapes, and in both social and asocial environments.
Introduction
At the start of the last century, scientists noted that subjects do not accurately describe the eye movements they make while reading, suggesting that subjective reports cannot provide an accurate assessment of visual orienting (see Collewijn 1998 citing Dodge & Cline 1901). Since then, various means of quantitatively measuring eye movements have been developed, beginning with tracked contrast boundaries (ibid) and corneal reflections (see Carpenter 1988 citing Jasper & Walker 1931), and followed by electrooculography (see Collewijn 1998 citing Fenn & Hursh 1934), current induction through magnetic search coils (Robinson 1963), and digital video oculography (see Carpenter 1988 citing Nakayama 1974).
In the 1960s, Alfred Yarbus dramatically demonstrated that visual orienting reflects the interaction of stimulus-driven perceptual variables with behavioral goals (Yarbus 1967). Although his work revolutionized our understanding of the ethology of visual orienting, its scope was constrained by technical limitations. Most significantly, Yarbus tracked gaze using light reflected by mirrors, affixed by suction to the sclera of each subject, onto photographic paper placed beside each picture. This technique made heroic demands on both the subjects and the experimenter, and required that subjects’ heads be firmly fixed in position. Consequently, recordings were brief, focused on static two-dimensional images, and were conducted from a single vantage point with no opportunity for interaction or locomotion.
Due to its spatial accuracy and temporal precision, the magnetic search-coil technique has become widely used to study visual orienting in both humans and nonhuman primates (Collewijn 1998). This technique involves attaching a loop of conductive wire to the sclera so that it circumscribes the iris; the orientation of the eye can then be measured by recording the current induced in the loop by an oscillating magnetic field of known strength. In animal experiments the loop is surgically implanted beneath the conjunctiva, while in humans it is embedded in a contact annulus placed directly on the eye. Much as for Yarbus’s optical technique, use of a magnetic search coil poses substantial design constraints that limit application to conditions outside the laboratory. First and foremost, the technique requires subjects to be held rigid within a controlled and spatially uniform magnetic field. As a result, the search coil technique has been used to measure visual orienting only under decidedly non-naturalistic conditions (e.g. Deaner & Platt 2003, Guo et al 2003, Deaner et al 2005). Eye movements in the laboratory are generally evoked through operant conditioning, pairing salient but artificial stimuli with explicit juice or food rewards.
Unfortunately, these limitations have resulted in a gulf between laboratory measurements of gaze behavior and the natural contexts for which gaze control systems evolved (Walls 1962, Lipps & Pelz 2004). For example, gaze behavior in social settings has been largely inaccessible to both laboratory scientists and field researchers (but see Keating & Keating 1982, Tomasello et al 1998; for overview see Emery 2000, Itakura 2004). In the laboratory, gaze can be measured accurately, but only under conditions that typically fail to approximate the subject’s natural social environment. In contrast, observations of animals in their natural social context typically rely on spatially and temporally imprecise measurements of orienting: for example, noting approximate head direction at regular intervals (e.g. Keverne et al 1978, McNelis & Boatright-Horowitz 1998, Watts 1998; for general ethological techniques see Martin & Bateson 1993).
It has recently become possible to use portable, dual-camera, optical gaze-tracking devices to quantitatively measure the visual behavior of freely-moving human subjects. This research has focused on the performance of simple goal-directed tasks, for example making sandwiches or tea (Land & Hayhoe 2001), washing hands or filling a cup (Pelz & Canosa 2001), copying block designs (Pelz et al 2001), and driving (Shinoda et al 2001). It has shown that task-irrelevant fixations are rare, that fixations tend to be “just-in-time” with a buffer length of 100 to 1000 ms, and it has reconfirmed Yarbus’s (1967) finding that both expectations and instructions shape the top-down constraints on gaze.
We aim to extend this approach to the study of visual orienting behaviors in nonhuman animals, specifically a prosimian primate, the ring-tailed lemur (Lemur catta). Ring-tailed lemurs make excellent subjects for several reasons. First, lemurs branched from the main primate lineage in the early Eocene (50 million years ago) but are believed to retain many traits of ancestral primates, and thus hint at the evolution of primate visuosocial behavior (Richard 1995). Second, lemurs are trichromats (Sauther et al 1999), have a large binocular field of 114–130°, and are diurnal despite the presence of a tapetum lucidum (Richard 1995). They live in open scrubland in societies whose complexity approaches that of anthropoid primates (Sauther et al 1999): mixed-sex groups of 10–20 individuals characterized by well-defined social hierarchies and extensive use of auditory, olfactory, and visual communication (Sauther et al 1999). The importance of both olfaction and vision to social communication in this species is strikingly embodied by the large, high-contrast, musk-loaded ringtail that gives the species its name. The tail is used in ritualized combat to flick scent toward the heads of rivals (Richard 1995) and appears also to facilitate group cohesion: on the ground, lemurs carry the tail high, while in the trees they let it hang; in both poses the tail is conspicuous. Finally, the species is trainable, moderately sized, and tolerant of both experimenters and equipment, and subjects are readily accessible through the conservation and research programs of the Duke University Primate Center, a naturalistic but experimentally tractable setting (e.g. Nunn & Deaner 2004).
Approach
Gaze-Tracking Equipment
To record gaze in freely-moving nonhuman animals, we implemented a prototype optical telemetric gaze-tracker developed by Iscan, Inc. (ETL-200 Primate Research Eye Tracking Laboratory with Telemetry Upgrade). To our knowledge, this is the lightest of the few telemetric gaze-tracking systems yet developed; most competing systems are designed for human use and rely on portable recorders (e.g. the RIT Wearable Eyetracker, see Babcock & Pelz 2004) rather than wireless transmitters. The Iscan system consists of head-mounted eye and scene imaging systems, whose signals are imported through the included RK-726PCI card into a Dell computer for processing by raw eye-movement data-acquisition software and echoed to separate eye and scene monitors for display.
Optical gaze-tracking relies on the differential reflection of invisible infrared light by the pupil and retina relative to the sclera and iris. Depending on their design, gaze-recording systems track either a bright or a dark pupil; we used a dark-pupil system, which is more resistant to changes in ambient infrared illumination than existing bright-pupil alternatives. The Iscan gaze-tracker uses two small head-mounted CCD cameras: a color “scene camera” to record the 76° × 52° view directly in front of the subject’s head, and an infrared “eye camera” to record the position of the eye via a small head-mounted dichroic (“hot”) mirror. An infrared LED, mounted directly beneath the eye camera, ensured adequate illumination. These components were mounted on a thermoplastic helmet specially fitted for Lemur catta (Fig. 1A). An insulated wire connected this headgear to the power supply and a radio-frequency wireless transmitter (Fig. 1B), which were worn in a backpack made from a modified primate vest (LOMIR), pouch (LOMIR), and velcro support belt.
FIGURE 1.

A displays the parts of the head assembly: the eye camera (a), dichroic mirror (b), and scene camera (c); the thermoplastic helmet (d) and camera mount (e); Allen keys for headgear assembly (f); and the training camera (g) and mirror (h). An American quarter is shown for scale. B displays the transmitter (i), battery (j), and heat shield (k), with a quarter and a ruler for scale. C and D show the subject lemur during active gaze-tracking: C shows the fit of the vest and headgear to the lemur, while in D, the gaze of the lemur subject is recorded as he walks along a branch toward a conspecific female.
Eye position was computed at the receiving station. First, the camera image was thresholded in software to isolate the dark pupil from the brighter iris and cornea surrounding it. Optional use of a corneal reflection (the first Purkinje image) to track eye position was abandoned, both because it is inaccurate at eccentric eye positions (Rikki Rasdan, Iscan, personal communication) and because it was easily disrupted by glare from direct sunlight. The Iscan system was then calibrated to 5 locations in the visual field (see below), thus relating the centroid of the thresholded pupil region to its corresponding point of regard in the scene video. Intermediate pupil positions were mapped to intermediate scene coordinates using a proprietary method (ISCAN) analogous to cubic interpolation, and pupil coordinates were smoothed across frames to increase image stability. (Throughout this article, we shall describe eye orientation using two terms: first, “point of regard” or “POR”, denoting the attended region of the scene camera image and thus reflecting the orientation of eyes in the head; second, “gaze”, denoting the attended region of the world and thus reflecting the orientation of eyes and head in allocentric space.)
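The essentials of this computation can be sketched in a few lines. The Python fragment below is our own illustration, not Iscan’s proprietary code: the threshold value and all calibration coordinates are hypothetical, and scipy’s cubic interpolator merely stands in for Iscan’s analogous proprietary mapping.

```python
import numpy as np
from scipy.interpolate import griddata

def pupil_centroid(eye_frame, threshold=40):
    """Isolate the dark pupil by intensity threshold and return its centroid.
    eye_frame: 2-D uint8 array from the infrared eye camera. The threshold
    (gray level below which pixels count as pupil) is a made-up value that
    would be set per session."""
    ys, xs = np.nonzero(eye_frame < threshold)   # dark-pupil pixels
    if xs.size == 0:                             # pupil lost (blink, glare)
        return None
    return np.array([xs.mean(), ys.mean()])      # centroid, eye-camera pixels

# Calibration: pupil centroids recorded while the subject fixated the center
# and four corners of the calibrated field, paired with the corresponding
# scene-camera coordinates. All values here are illustrative.
pupil_pts = np.array([[80, 60], [40, 30], [120, 30], [40, 90], [120, 90]])
scene_pts = np.array([[320, 240], [80, 60], [560, 60], [80, 420], [560, 420]])

def pupil_to_scene(pupil_xy):
    """Map a pupil centroid to a scene-video point of regard by
    interpolating among the five calibration points."""
    xi = np.asarray(pupil_xy)[np.newaxis, :]
    x = griddata(pupil_pts, scene_pts[:, 0], xi, method='cubic')[0]
    y = griddata(pupil_pts, scene_pts[:, 1], xi, method='cubic')[0]
    return np.array([x, y])   # NaN outside the calibrated region
```

Note that interpolation of this kind returns no estimate outside the convex hull of the calibration points, which is consistent with the weaker performance we observed near the edges of the scene window.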
These data were combined into one video stream, with the point of regard marked by a white crosshair, and with pupil diameter and POR coordinates displayed in a black bar near the lower edge of the screen. Video was monitored for online confirmation of data quality and recorded to videocassette for subsequent offline analysis. Additional video outputs could be used to access the raw eye and scene videos for later re-analysis, and a digital data file recorded horizontal pupil diameter, pupil centroid coordinates, and POR coordinates.
In part because the digital data file was not timestamped in the same manner as the video output, it did not reliably synchronize with our video record under telemetric recording conditions. We therefore relied on the processed video recording, which indicated the POR both with the onscreen crosshair and with coordinates displayed along the lower edge of the screen. The processed video, however, did not display the POR crosshair when it was located near the edge of the scene image. As a result, the scene record was incomplete (occluded by the superimposed POR data), and the POR record was itself incomplete near the screen edges and had to be manually positioned or synchronized from the digital file.
For future recordings, we hope to obviate these problems by recording the raw eye and scene camera outputs to digital video. We found that a relatively long (200 ms) smoothing window was necessary for online calculation of POR, but we believe we can dramatically increase the accuracy and precision of our gaze records by performing post-hoc re-analysis of the eye video. One potential improvement would be the use of an ellipse fit to reduce pupil-centroid misalignments due to encroachment by sun glare, shadow, or tapetal reflection. Another improvement would be the implementation of direct oculometric measurements distinguishing fixations and pursuit movements from saccades (Gajewski et al 2005).
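As a sketch of the latter improvement: the simplest oculometric segmentation is a velocity threshold, which labels any sufficiently fast inter-frame shift as a saccade. The fragment below illustrates this generic approach (it is not the method of Gajewski et al 2005); the frame rate and threshold are placeholders that would need tuning against our 200 ms smoothing window.

```python
import numpy as np

def classify_por_shifts(por_xy, frame_rate=30.0, threshold_deg_s=100.0):
    """Label each inter-frame interval as 'saccade' or 'fixation/pursuit'
    by a simple velocity threshold. por_xy is an (n_frames, 2) array of
    POR coordinates in degrees; both parameters are placeholders."""
    step = np.diff(por_xy, axis=0)                          # per-frame displacement
    speed = np.hypot(step[:, 0], step[:, 1]) * frame_rate   # deg/s
    return np.where(speed > threshold_deg_s, 'saccade', 'fixation/pursuit')
```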
Harness Design
Since the deployed weight of the gaze-tracking system was a significant fraction of our subjects’ masses (about ¼ of Lemur catta body weight), it was critical that the equipment be comfortably and securely harnessed. Fittings were required both for the transmitter assembly worn on the back and the camera assembly worn on the head.
To secure the transmitter and power pack to the lemur’s back, we used two LOMIR products: first, a small primate vest (LOMIR Biomedical, PJ01) to distribute transmitter weight across the lemur’s back; second, a small pouch (LOMIR Biomedical, JP01) to hold the transmitter itself. The vest fit over the front shoulders around the arms and zippered closed along the back. We attached the pouch to the back of the vest using plastic tie-wraps, removed from the vest a plastic reinforcement ring intended to support a cannula, and added attachment points for a Velcro stabilizer belt. In addition, we added a quarter-inch styrofoam and aluminum-foil heat shield to protect the lemur from the unexpectedly large heat output of the transmitter.
To secure the camera assembly to the head, we used a customized thermoplastic helmet. To make the helmet, we cut a slightly oversized patch from a sheet of thermoplastic resin (AbilityOne Corp.’s Ezeform Light or Polyform Light), heated it in boiling water, and molded it over an adult lemur skull covered with a damp cloth. After the resin had cooled, we added screw holes for attachment of the camera assembly. The helmet was later custom-fit to each subject by trimming and smoothing the helmet to ensure a comfortable fit with adequate clearance for each lemur’s eyes and ears and adding Velcro attachment points. The helmet was temporarily secured to the subject’s head during recording using two thin Velcro straps, which ran from the front of the helmet to the back, crossing under the jaw.
In total, the roving portion of our system massed approximately 660 g. Component masses on the body totaled 539 g: the vest (193 g), transmitter (239 g), battery (103 g), and heat shield (4 g). Component masses on the head totaled 120 g: the thermoplastic helmet and straps (11 g), mirror (10 g), and the cameras and mount (99 g). This total was approximately equal to the weight borne by a lemur mother weaning twins.
Weight reduction from both head and body would likely improve recording quality and promote natural behaviors; these reductions might most easily be accomplished by eliminating the mirror or by reducing the weight of the vest. An alternate vest design might provide additional improvements by enhancing the stability of the “backpack”. Because a lemur’s torso is ellipsoid in cross-section, the pouch had a tendency to rotate to the side during recording sessions. One possible solution would be to eliminate the vest and anchor the pouch directly to the shoulders and hips—we have avoided this, however, because free movement of the lemur’s muscular hind legs appeared to preclude useful attachment.
Telemetry
To communicate eye and scene video data from the free-ranging subject to a computer for analysis, data was broadcast up to 300 m by a 900 MHz, 500 mW transmitter, with a 7.2 V, 10 Wh lithium battery serving as a one-hour power supply (both ISCAN). Peak range and data quality varied substantially with the local environment, with particular types of enclosures and electrical interference causing distinctly different broadcast characteristics. To increase the range over which we could collect data, we mobilized our receiver and computer using an uninterruptible power supply (APC Back-UPS XS 1500, Model BX1500) capable of powering the receiver itself as well as a desktop computer, a computer CRT monitor, and separate eye and scene CRT monitors. The UPS could maintain this equipment unplugged for up to half an hour, about half the battery life of the deployed transmitter system. The receiver system was mounted on a cart, which in turn was loaded into a small all-terrain vehicle to provide maximal mobility.
This prototype telemetric system could be improved in both versatility and portability. Transmissions were badly disrupted in some local environments, notably by certain types of wire or chain-link animal housing. Erratic signal fluctuations in these areas caused video flicker and partial deregistration of telemetric data. Because these fluctuations grew worse with decreasing signal strength, they could be reduced, if not obviated, by decreasing transmission distance to several meters. Strikingly, transmission problems were more severe within some outdoor enclosures than between electrically shielded areas surrounding our laboratory. In laboratory recordings using a rhesus macaque, digital data desynchronized from our video record by only 17 samples (280 ms) over 32 minutes. In our second-best recording from a moving lemur, we lost 800 digital samples (13 seconds) over 54 minutes; in our best, we lost 65 samples (1.0 second) over 22 minutes. In our worst recording sessions, we abandoned digital data altogether, as up to 21% of the normally stable video data stream was lost as flicker.
Because of this limited transmission range in certain fenced environments, versatility could also be improved by an increase in portability at the receiver. This could be accomplished by using a smaller computer and more space- and energy-efficient monitor, for example a “lunchbox” design with integrated LCD screen (e.g. ACME SKD Industrial Portables). It would also be very helpful to route eye and scene video directly to the computer for digital recording and display. Computerized display would eliminate the eye and scene video CRTs, and digital recording of these raw data streams would facilitate post-hoc reanalysis while obviating the need for additional digital video recorders.
Training
Each lemur was trained over the course of several months. Modular equipment design allowed us to gradually increase the mass and awkwardness of recording gear both on the back and on the head. In addition to the components described above, we used a dummy camera and mirror to facilitate habituation to headgear at a reduced mass of 48 g (40% of normal). Compliance was reinforced with food rewards, typically grapes and raisins, either hand-fed to the lemur or placed nearby in the environment. In this manner, we were able to progressively habituate the lemur to handling and increased encumbrance while simultaneously encouraging normal mobility.
In total, habituation took approximately one month (one hour, thrice weekly), and two to three training sessions were sufficient to regain habituation after a hiatus. Subjects showed a small reduction in spontaneous behavior, contingent on the ease with which animal handlers performed the initial capture, but normal movement was maintained and food rewards were accepted. Two behavioral changes were deemed detrimental. First, the weight of the headgear decreased mobility slightly: sometimes, and particularly after protracted handling, subjects rested with the head declined relative to the body. This was best avoided by limiting handling to the minimum possible duration. Second, subjects occasionally shook their heads, particularly when stressed, for example by the threat of conflict with rival males during the mating season. Nevertheless, equipment was fastened to the head securely enough that these bouts did not displace the camera system, and normal recording resumed without intervention as soon as the bout ended. Companion lemurs exhibited no marked change in behavior in response to the recording equipment.
Initially, we also trained one lemur to orient toward an audiovisual cue in return for food rewards. This was performed to assist in calibration; however, we discovered an alternative and more effective method not requiring conditioning (see below). This aspect of training was therefore discontinued in the first subject and omitted in the second.
Calibration
The primary challenge to field recording of gaze behavior in a habituated subject is proper calibration of the eye position to a point of regard in the visual scene. Calibration may shift across sessions, due to variations in lighting conditions and in the specific relative positions of eye, helmet and mirror. It is therefore necessary to recalibrate the subject at the beginning of each recording session. In humans, this can proceed through simple instruction and verbal confirmation. Our first approach was to train lemurs to orient on cue, much as monkeys with scleral search coils are trained in a laboratory environment (Fuchs and Robinson 1966).
First we attempted to draw each subject’s attention using a clicker made for training dogs, rewarding them with food after each successful fixation. However, we found that prolonged handling induced or exacerbated a state akin to learned helplessness (Seligman 1972), in which the lemur was minimally inclined to orient toward the clicker even when rewarded. We then attempted to train the lemur to orient toward a bright yellow squeeze-ball, with limited success. Additionally, we were concerned that the use of a visual orienting cue might influence the subject’s orienting behavior during the subsequent recording session.
Fortunately, a simpler and training-independent method proved more effective. The lemur was released without calibration, and the equipment was allowed to settle into its resting position. Once the lemur had recovered from handling and was resting comfortably, one experimenter (the “trainer”) approached with food rewards from the direction of each calibration point in turn: center, upper-right, upper-left, lower-right, and lower-left. As the trainer entered the subject’s field of view, the lemur typically glanced at the approaching human. At the same time, the experimenter at the computer entered each calibration point upon observing maximal deflection of the lemur’s pupil in the direction of the trainer’s approach and hearing the trainer’s verbal confirmation of eye contact. Because the eyes orient more quickly than the head, and because calibration was triggered by maximal excursion of the eye in its orbit, head movements did not substantially impede calibration.
Once all calibration points had been entered, we confirmed successful calibration in two ways. First, we hand-fed the lemur several raisins and observed smooth pursuit of the treats as the subject monitored their approach. Second, we attained eye contact with the lemur from within each quadrant of the scene video. Humans are very skilled at discriminating mutual gaze, and so once the trainer’s verbal report of eye contact matched the subject’s gaze in the scene display, we initiated data collection.
Ideally, we would perform a more thorough calibration, using the nine or more points typical of human studies. This would require sophisticated manipulation of the subject’s eye movements; however, it is critical that any manipulation not distort the intrinsic gaze behavior under study. One possibility would be to evoke fixations using an isolated flash of light, for example a laser pointer directed at the wall of an otherwise dark room. Autocalibration systems of this type have been developed for human children (Trueswell et al 1999; Ramloll et al 2004) but have not yet been adapted to animals.
Gaze Recording
In the early phase of this research, we successfully measured pupil position and gaze during hand restraint and free movement, both in isolation and in visual contact with other lemurs. In later phases, we recorded these data while subjects ranged freely through interactive social environments in their home enclosure or outdoors. Recordings from lemur “Licinius” took place in one to three connecting indoor rooms (1.4 × 2.0 × 3.4 m each) with branches, potential food sources, platforms, and one heterospecific lemur (Eulemur fulvus, “Maurice”). Recordings from lemur “Aracus” took place during free movement between two indoor enclosures (1.8 × 1.6 × 2.4 m each) and one outdoor enclosure (3.7 × 3.9 × 2.4 m), and also in one large, unroofed, treeless pen (5.6 × 85 m). These areas were shared with up to seven conspecifics: three adult females, three juveniles, and one older male. Sessions for both lemurs included terrestrial and arboreal locomotion, leaps, foraging, and social interaction.
Recordings were robust to movement; outdoor releases were limited primarily by weather and by the risk of climbing. In bright sunlight, the high contrast between direct sun and shadow decreased video quality in both the scene and the eye camera. Environments that permitted the lemur to climb limited the experimenters’ ability to recapture the lemur to fine-tune the recording assembly, replace discharged batteries, or remove the recording equipment at the end of the session. By contrast, in enclosed environments with human-accessible perches, our subjects were quite tolerant of human approach for each of these manipulations.
To date, we have gathered data from freely-moving lemurs during eight recording sessions, each requiring 1–3 hours for setup, calibration, and data collection. Of these, six sessions yielded a gaze signal robust enough for analysis, and of those, 5–20 minutes per session appeared to be of optimal quality. We have fully analyzed 30 minutes of this data, during which gaze was calibrated to a location in the scene video 67% of the time.
Failure to assign gaze to a location in the scene video could have resulted from a loss of signal, from a dead zone in our calibration, or from fixations outside the 76° × 52° window recorded by the scene camera. To some extent, these possibilities could be distinguished in the digital data: a valid measurement of pupil diameter in the absence of valid POR coordinates suggests that gaze was directed outside the scene video. In Iscan digital data files drawn from two sessions with minimal flicker, a validly recorded pupil (59%–67% of samples over 42 minutes) was calibrated to an onscreen position in 76%–94% of the samples.
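This triage can be expressed directly. In the sketch below, the validity conventions (a zero pupil diameter marking a lost pupil, NaN coordinates marking an uncalibrated POR) are assumptions for illustration; the actual Iscan files flag invalid samples in their own format.

```python
import numpy as np

def triage_samples(pupil_diameter, por_xy):
    """Separate calibrated samples from the two failure modes described
    above. pupil_diameter: (n,) array; por_xy: (n, 2) array. Validity
    conventions here are assumed, not Iscan's actual flags."""
    pupil_ok = pupil_diameter > 0
    por_ok = ~np.isnan(por_xy).any(axis=1)
    calibrated = pupil_ok & por_ok   # gaze assigned within the scene video
    offscreen = pupil_ok & ~por_ok   # gaze likely outside the 76 x 52 deg window
    lost = ~pupil_ok                 # telemetry dropout or blink
    return calibrated, offscreen, lost
```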
Data Coding
We coded point of regard and regions of interest (ROIs) from the scene video output with the POR integrated into the image. The processed video was digitized for analysis at 0.22° × 33 ms resolution using InterVideo WinDVD Creator, and included both a crosshair representing the point of regard and a set of POR coordinates stamped in the lower part of the video screen.
After a recording session had been digitized, it was segmented into one-minute clips and visually inspected to evaluate data quality. In addition, a small number of ROIs were selected for coding based on their putative reward value, locomotor relevance, or social relevance. Clips were analyzed in order of data quality using a custom-designed open-source Matlab environment (Skriatok Videoscore, www.duke.edu/~svs/skriatok). Videoscore provides a graphic user interface through which experimenters can browse and annotate video data and track various targets, entering their coordinates by mouse-click. In this manner, POR and hypothesized ROIs were manually located on each frame, and recorded frames were categorized as moving or stationary. In addition, some high-contrast environmental landmarks were tracked to determine head movements relative to the external environment; these movements could be compensated over short periods to produce a scanpath in world-centered coordinates. For videos with minimal flicker, we were sometimes able to import and synchronize digital data representing POR, expediting at least part of the data coding process.

The end result of this coding was a marked video sequence from which we could derive gaze scanpaths, head-centered eye position, and the proximity of gaze to the various categories of ROI. Examples drawn from a 2-second clip are shown in Figure 2. During this period, the lemur subject “Licinius” looked up at a researcher’s face and then at an offered raisin during the process of system evaluation. The gaze scanpath displayed has been stabilized to reflect world coordinates (the raw POR within the scene video was less motile). This stabilization was performed using just two environmental reference points, coded in the upper-right and lower-right corners of a small window in the near wall.
FIGURE 2.

The upper panel shows a two-second gaze scanpath, projected onto the environment reconstructed from multiple scene video frames (blurred by combining images across head angles). The initial point-of-regard record, coded in camera coordinates, was transformed to world coordinates by comparison with stable reference points marked digitally along the rear wall. Below, these data are plotted as a function of time: first as horizontal and vertical POR coordinates within the camera (that is, relative to the head), and then as the distance between recorded gaze and two putative regions of interest, the researcher’s face (black) and handheld treats (gray).
Once several static points in the environment have been coded throughout the video, it is possible to stabilize the video across camera rotations and translations caused by lemur movement. Future upgrades to Videoscore could potentially incorporate this information as coordinates are coded, compensating head movements and thus facilitating coding of ROI coordinates across larger time steps. In environments with well-defined fiducial landmarks, this may permit accurate extraction of head location and orientation (e.g. using ARToolKit from the HIT Lab, University of Washington; see also Rothkopf & Pelz 2004).
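For the two-point case used in Figure 2, the transform is fully determined: two correspondences fix a similarity transform (rotation, isotropic scale, and translation). The sketch below, with hypothetical variable names, illustrates the computation; because it ignores camera tilt and perspective, it is valid only over short spans, as noted above.

```python
import numpy as np

def camera_to_world(por_xy, ref_cam, ref_world):
    """Re-express a point of regard in world coordinates using two tracked
    reference points (e.g. two corners of a window on the near wall).
    por_xy:    (2,) POR in scene-camera pixels for one frame.
    ref_cam:   (2, 2) the two reference points in camera pixels, this frame.
    ref_world: (2, 2) the same points in a fixed world frame."""
    c = ref_cam[:, 0] + 1j * ref_cam[:, 1]       # camera points as complex numbers
    w = ref_world[:, 0] + 1j * ref_world[:, 1]   # world points as complex numbers
    a = (w[1] - w[0]) / (c[1] - c[0])            # rotation and scale
    b = w[0] - a * c[0]                          # translation
    p = a * (por_xy[0] + 1j * por_xy[1]) + b
    return np.array([p.real, p.imag])
```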
Post-Hoc Data Confirmation
We performed several post-hoc analyses to confirm data quality. Direct observations suggested that recordings were robust. We often observed the subject’s gaze shift along a contour, for example the bright orange loops of an extension cord or the twisting contour of a branch. These complex scanpaths seemed unlikely to arise by chance. Likewise, we regularly saw smooth pursuit of food rewards as the experimenter hand-fed the subject treats.
More formally, we analyzed the distribution of POR both within the camera and relative to putative regions of interest. We generated histograms of recorded POR positions within each data session, finding that distributions were stable across different clips from any particular recording session but differed between recording sessions. Good calibrations tended to produce broadly Gaussian fixation distributions, while poorer calibrations were suggested by distributions with missing quadrants, sometimes accompanied by abnormally dense fixation in adjacent regions. Overall, POR was well distributed across the central portion of the scene video (Fig. 3, panel a).
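A minimal version of this diagnostic, assuming POR has already been converted to degrees within the 76° × 52° scene window (origin at one corner), might look as follows; the 2° smoothing width matches that reported in Figure 3.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def por_histogram(por_xy, smooth_deg=2.0):
    """Smoothed 2-D histogram of POR positions for one session.
    por_xy: (n_samples, 2) array of POR coordinates in degrees,
    assumed to span a 76 x 52 degree scene window."""
    h, _, _ = np.histogram2d(por_xy[:, 0], por_xy[:, 1],
                             bins=(76, 52), range=[[0, 76], [0, 52]])
    return gaussian_filter(h, sigma=smooth_deg)  # bins are 1 degree wide
```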
FIGURE 3.

Post-hoc calibration controls are shown for two recording sessions, one from each of our lemur subjects (left, “Licinius”, right, “Aracus”). In the upper panels, point-of-regard histograms (smoothed over 2 degrees) are shown for the coded portion of the recording sessions. POR was well distributed across the central portion of the scene camera, although blank areas in the upper corners suggest weaker calibration in these areas on these days. Below, putative regions of interest are plotted relative to the point of regard (left) or, as a control, to a time-shuffled point of regard (right). Regions of interest are notably more clustered in the former than the latter case, confirming that recorded gaze is attracted to these putatively salient regions.
Though blank areas were evident in some sessions, suggesting weak calibration in those regions on those days, other regions appeared to represent gaze accurately. To confirm this intuition, we measured the distribution of ROIs relative to gaze. We reasoned that if our coded ROIs accurately described the salient locations in the visual field, gaze should be drawn to these locations, and the ROIs should cluster tightly when plotted in gaze coordinates. To measure this, we generated histograms of the position of all ROIs relative to gaze.
As expected, ROIs clustered tightly when plotted in gaze coordinates (Fig. 3, panel b). The location of this peak suggests that our calibration was, on average, accurate to within 5–10°, and that any calibration error was systematic within a recording session and thus likely to affect all categories of ROI similarly. Furthermore, ROIs failed to cluster tightly when plotted relative to time-shuffled POR data, that is, to POR observed at other times within the same one-minute clip (Fig. 3, panel c). To quantify this, we measured the proximity between observed POR and the closest ROI, and compared it to a repeat analysis substituting time-shuffled POR data. In 26 of 30 clips, mean distances were smaller for the recorded data than for the time-shuffled control. Overall, gaze fell 6% closer to regions of interest in our actual data than predicted by the time-shuffled control, suggesting that recorded gaze was attracted to the contemporaneous regions of interest (p<0.003, paired t-test, 30 clips). This confirmed both that our a priori judgments of ROI relevance were reasonable and that our telemetric data successfully captured the attraction of the subjects’ gaze toward these regions.
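The logic of this control is easy to make concrete. The sketch below uses hypothetical data structures (per-clip arrays of POR and ROI coordinates, with NaN marking frames where an ROI was absent) to compare mean POR-to-nearest-ROI distance for recorded versus time-shuffled POR, applying a paired t-test across clips as in the text.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)

def mean_nearest_roi_distance(por_xy, rois):
    """Mean distance from each frame's POR to the nearest coded ROI.
    por_xy: (n_frames, 2) array in degrees. rois: list of (n_frames, 2)
    arrays, one per ROI category; NaN rows mark frames where that ROI
    was absent (an assumed convention)."""
    dists = []
    for t in range(por_xy.shape[0]):
        d = [np.hypot(*(r[t] - por_xy[t])) for r in rois
             if not np.isnan(r[t]).any()]
        if d:
            dists.append(min(d))
    return np.mean(dists)

def shuffle_control(clips):
    """clips: list of (por_xy, rois) pairs, one per one-minute clip.
    Smaller distances for real than shuffled POR indicate that gaze
    is drawn to the coded ROIs."""
    real = [mean_nearest_roi_distance(por, rois) for por, rois in clips]
    shuf = [mean_nearest_roi_distance(por[rng.permutation(len(por))], rois)
            for por, rois in clips]
    return ttest_rel(real, shuf)   # paired t-test across clips
```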
We also analyzed the properties of frame-to-frame POR shifts within the camera. The lengthy period over which eye positions were smoothed (200 ms) precluded any neat segmentation of gaze shifts into fixations, saccades, vestibulo-ocular reflex, and optokinetic nystagmus; however, it was still possible to examine the pattern of POR shifts across time. First we examined fixation behavior, plotting histograms of relative eye position across intervals of one frame, two frames, four, eight, and so forth, up to 4.3 seconds (Fig. 4). The vast majority of successive frames showed minimal shifts in eye position, as can be seen in the tight clustering of relative POR locations at 0° distance in the single-frame (33 ms) offset comparison. As the comparison interval doubles to 66 ms, 132 ms, and so forth, this clustering becomes much less apparent, largely disappearing after two seconds. As can be seen in the first panel, the direction and magnitude of frame-to-frame POR shifts were broadly Gaussian. No clips were observed with abnormal peaks, such as might derive from a transient but characteristic misidentification of the pupil boundary. These observations suggest that our gaze record was composed of a mix of fixations and saccades and represented the smoothed, but essentially accurate, pattern of lemur eye movements.
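The underlying computation is a set of displacement distributions at doubling lags; a minimal sketch, assuming POR coordinates in degrees at 33 ms per frame:

```python
import numpy as np

def displacement_by_lag(por_xy, frame_ms=33, max_lag_frames=128):
    """Distributions of POR displacement at doubling time offsets
    (1, 2, 4, ... frames, i.e. 33 ms up to ~4.3 s), as in Figure 4.
    por_xy: (n_frames, 2) array of POR coordinates in degrees."""
    out = {}
    lag = 1
    while lag <= max_lag_frames and lag < len(por_xy):
        step = por_xy[lag:] - por_xy[:-lag]
        out[lag * frame_ms] = np.hypot(step[:, 0], step[:, 1])
        lag *= 2
    return out   # keys: offset in ms; values: displacement magnitudes (deg)
```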
FIGURE 4.

Two-dimensional histograms plot relative POR position across different time shifts. POR locations are tightly clustered near 0° in the 33 ms (one frame) comparison, but as the comparison interval doubles to 66 ms, then 132 ms, and so forth, the cluster relaxes. Beyond one second, the peak has largely disappeared. (Note that the first three panels fall within the 200 ms smoothing window used in these experiments.)
We also examined our data for evidence of saccades; that is, for coherent shifts in gaze position across time. We compared the direction of successive POR shifts as a function of their magnitude, plotted here as polar histograms (Fig. 5). For the smallest observable POR shifts, single-pixel jitter during fixation resulted in aliasing along the cardinal angles; nonetheless, a slight increase at 0 radians suggests oriented movements. This coherence becomes increasingly obvious for larger POR shifts of 15–60°/s (½°–2° between successive frames) and 60–240°/s (2°–8° between frames). These POR shifts seem likely to reflect large saccades, during which eye movement direction is strongly correlated across successive frames, producing a sharp peak at 0 radians. For POR shifts exceeding 240°/s (8° between frames), this coherence decreases. These shifts were rare, and their diminished coherence may indicate a correlation between increased noise and rapidly shifting POR.
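This analysis reduces to binning successive POR shifts by speed and examining the angle between consecutive shift vectors; a sketch under the same assumptions as above (30 frames/s, coordinates in degrees):

```python
import numpy as np

def shift_direction_coherence(por_xy, frame_rate=30.0,
                              speed_edges=(0, 15, 60, 240, np.inf)):
    """Relative direction of successive POR shifts, grouped by shift speed
    (deg/s), for polar histograms like Figure 5. A relative angle near
    0 radians means consecutive shifts continue in the same direction,
    as expected during a saccade spanning several frames."""
    step = np.diff(por_xy, axis=0)
    speed = np.hypot(step[:, 0], step[:, 1]) * frame_rate
    angle = np.arctan2(step[:, 1], step[:, 0])
    rel = np.angle(np.exp(1j * np.diff(angle)))   # wrap to (-pi, pi]
    spd = speed[:-1]                              # speed of the first shift in each pair
    return {(lo, hi): rel[(spd >= lo) & (spd < hi)]
            for lo, hi in zip(speed_edges[:-1], speed_edges[1:])}
```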
FIGURE 5.

Polar plots indicate the relative direction of frame-to-frame POR shifts across different magnitude ranges. For frame-to-frame POR shifts of moderate size, direction was very stable across successive frames, suggesting saccadic eye movements.
Finally, we contrasted oculomotor behavior between Lemur catta and Macaca mulatta, a species whose well-characterized oculomotor behavior is similar to that of humans. A single male macaque with known, species-typical eye movement patterns was specially fitted with a thermoplastic cap; headgear was attached as described above, and the power supply and transmitter were firmly strapped to the outside of his primate chair. The macaque was calibrated using methods analogous to those we employed in the field, and sat comfortably in the chair with his head unfixed while free-viewing his home colony. We compared 22 minutes of macaque POR data to our full 30 minutes of coded lemur data, measuring relative POR position as a function of time lapse. Comparing the 80th-percentile distance between eye positions as a function of their separation in time, we found that fixations relaxed asymptotically into a more uniform distribution, and that this relaxation had a similar time course in the two species. At asymptote, however, macaque eye positions were more widely distributed than those of the lemurs. We speculate that this distinction may correlate with the broader binocular field of macaques (140–160°) versus lemurs (114–130°) (Richard 1995).
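The comparison statistic is straightforward to compute. A sketch, assuming coded POR arrays in degrees for each species (variable names hypothetical):

```python
import numpy as np

def dispersion_curve(por_xy, percentile=80, max_lag_frames=300):
    """For each time lapse (in frames), the radius containing the given
    percentile of POR displacements: the relaxation curve of Figure 6.
    por_xy: (n_frames, 2) array of POR coordinates in degrees."""
    return np.array([
        np.percentile(np.hypot(*(por_xy[lag:] - por_xy[:-lag]).T), percentile)
        for lag in range(1, max_lag_frames)])

# e.g. lemur_curve = dispersion_curve(lemur_por)
#      macaque_curve = dispersion_curve(macaque_por)
# Both curves rise from near zero to an asymptote; in our data the
# macaque asymptote sat higher than the lemurs'.
```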
Conclusions
We report the implementation of a telemetric infrared-video gaze-tracker to measure visual orienting by freely-moving Lemur catta. Two lemur subjects tolerated a system mass of approximately ¼ of their body weight, permitting successful measurement of gaze behavior during social interaction, foraging, and locomotion in both terrestrial and arboreal landscapes. We found that lemurs displayed visual orienting behaviors similar to those of macaques and humans, suggesting that much primate gaze behavior evolved early in the lineage. The described techniques thus provide a quantitative method for examining gaze behavior as nonhuman animals navigate, forage, and interact within their natural environments. Future technological development will doubtless improve the versatility, subtlety, and accuracy of telemetric gaze tracking; we found, however, that current technology is sufficient to study the control of eye movements in the strategic contexts for which they evolved.
FIGURE 6.

Here we plot the distance between POR samples as a function of elapsed time, as recorded for lemur and macaque subjects. Each line reports the radius around the current POR position within which 80% of eye positions fall after a given amount of time has passed. For both lemurs and macaques, this distance is minimal for short time lapses (within fixations) but rapidly increases to an asymptote. Both species approach asymptote with a similar time course, but eye positions distribute more broadly in macaques than in lemurs. These differences may reflect the larger binocular field of Macaca mulatta relative to Lemur catta.
Acknowledgments
Funding provided by the Duke University Primate Center and NIH MH066259. DUPC Publication #799. Thanks also to Daniel Schmitt for technical assistance; to Michael Bendiksby, Heather Dean, Robert Deaner, Jeff Klein and Arwen Long for helpful comments on the manuscript; to Luke Stewart and Lauren Whitted for assistance in data coding; and to Julie Ives and Stephanie Combes for research coordination and animal handling at DUPC.
Appendix I: Equipment and Supply Lists
AbilityOne Corp / Sammons Preston Rolyan
Polyform Light, 1/16th × 12 × 18 inch perforated sheets
Ezeform Light, 1/16th × 12 × 18 inch perforated sheets
ACME Portable Corp
http://www.acmeportable.com.tw/
SKD Industrial Portable
APC
http://www.apcc.com/resource/include/techspec_index.cfm?base_sku=BR1500
Back-UPS XS Series, Model BX1500 (apparently discontinued, but similar to BR1500)
ISCAN, Inc
ETL-200 Primate Research Eye Tracking Laboratory with Telemetry Upgrade
LOMIR Biomedical
http://www.lomir.com/jackets_vests.php
Primate Vest - PJ01
Jacket Pocket - JP01
The MathWorks
http://www.mathworks.com/products/matlab/
Matlab Software
University of Washington
http://www.hitl.washington.edu/artoolkit/
ARToolKit
References
- Babcock JS, Pelz JB. "Building a lightweight eyetracking headgear." ACM SIGCHI Eye Tracking Research & Applications Symposium. 2004:109–14.
- Carpenter RHS. "Methods of measuring eye movements." Appendix in: Carpenter RHS. Movements of the Eyes. 2nd edition. 1988. pp. 405–26.
- Collewijn H. "Eye movement recording." In: Carpenter RHS, Robson JG, editors. Vision Research: A Practical Guide to Laboratory Methods. 1998. chapter 9, pp. 245–85.
- Deaner RO, Platt ML. "Reflexive social attention in monkeys and humans." Current Biology. 2003;13(18):1609–13. doi:10.1016/j.cub.2003.08.025.
- Deaner RO, Khera AV, Platt ML. "Monkeys Pay-Per-View: Social Value in Rhesus Macaques." Current Biology. 2005;15(6):543–8. doi:10.1016/j.cub.2005.01.044.
- Dodge R, Cline TS. "The angle velocity of eye movements." Psychological Review. 1901;8:145–57.
- Emery NJ. "The eyes have it: the neuroethology, function and evolution of social gaze." Neuroscience and Biobehavioral Reviews. 2000;24:581–604. doi:10.1016/s0149-7634(00)00025-7.
- Fenn WO, Hursh JB. "Movements of the eyes when the lids are closed." American Journal of Physiology. 1934;118:8–14.
- Fuchs AF, Robinson DA. "A method for measuring horizontal and vertical eye movement chronically in the monkey." Journal of Applied Physiology. 1966;21(3):1068–70. doi:10.1152/jappl.1966.21.3.1068.
- Gajewski DA, Pearson AM, Mack ML, Bartlett FN, Henderson JM. "Human gaze control in real world search." In: Paletta L, Tsotsos JK, Rome E, Humphreys G, editors. Attention and Performance in Computational Vision. New York: Springer-Verlag; in press.
- Guo K, Robertson RG, Mahmoodi S, Tadmor Y, Young MP. "How do monkeys view faces?—a study of eye movements." Experimental Brain Research. 2003;150:363–74. doi:10.1007/s00221-003-1429-1.
- Itakura S. "Gaze-following and joint visual attention in nonhuman animals." Japanese Psychological Research. 2004;46(3):216–26.
- Jasper HH, Walker RY. "The Iowa eye-movement camera." Science. 1931;74:291–4. doi:10.1126/science.74.1916.291-a.
- Keating CF, Keating EG. "Visual scan patterns of rhesus monkeys viewing faces." Perception. 1982;11:211–9. doi:10.1068/p110211.
- Keverne EB, Leonard RA, Scruton DM, Young SK. "Visual monitoring in social groups of talapoin monkeys (Miopithecus talapoin)." Animal Behaviour. 1978;26:933–44.
- Land MF, Hayhoe M. "In what ways do eye movements contribute to everyday activities?" Vision Research. 2001;41(25–26):3559–65. doi:10.1016/s0042-6989(01)00102-x.
- Lipps M, Pelz JB. "Yarbus revisited: task-dependent oculomotor behavior" [Abstract]. Journal of Vision. 2004;4(8):115a.
- Martin P, Bateson P. Measuring Behaviour. 2nd edition. 1993. "Recording methods" and "The recording medium," chapters 6–7, pp. 84–112.
- McNelis NL, Boatright-Horowitz SL. "Social monitoring in a primate group: the relationship between visual attention and hierarchical ranks." Animal Cognition. 1998;1:65–9.
- Nakayama K. "Photographic determination of the rotational state of the eye using matrices." American Journal of Optometry. 1974;51:736–42. doi:10.1097/00006324-197410000-00002.
- Nunn CL, Deaner RO. "Patterns of participation and free riding in territorial conflicts among ringtailed lemurs (Lemur catta)." Behavioral Ecology and Sociobiology. 2004;57:50–61.
- Pelz JB, Canosa R. "Oculomotor behavior and perceptual strategies in complex tasks." Vision Research. 2001;41(25–26):3587–96. doi:10.1016/s0042-6989(01)00245-0.
- Pelz JB, Hayhoe M, Loeber R. "The coordination of eye, head and hand movements in a natural task." Experimental Brain Research. 2001;139:266–77. doi:10.1007/s002210100745.
- Ramloll R, Trepagnier C, Sebrechts M, Finkelmeyer A. "A gaze-contingent environment for fostering social attention in autistic children." ACM SIGCHI Eye Tracking Research & Applications Symposium. 2004:19–26.
- Richard A. "Lemurs." In: Macdonald D, editor. The Encyclopedia of Mammals. 1995. pp. 321–2.
- Robinson DA. "A method of measuring eye movement using a scleral search coil in a magnetic field." IEEE Transactions on Biomedical Engineering. 1963;10:137–45. doi:10.1109/tbmel.1963.4322822.
- Rothkopf C, Pelz JB. "Head motion estimation for wearable eyetrackers." ACM SIGCHI Eye Tracking Research & Applications Symposium. 2004:123–30.
- Sauther ML, Sussman RW, Gould L. "The Socioecology of the Ringtailed Lemur: Thirty-Five Years of Research." Evolutionary Anthropology. 1999;8:120–32.
- Seligman MEP. "Learned helplessness." Annual Review of Medicine. 1972;23:407–12. doi:10.1146/annurev.me.23.020172.002203.
- Shinoda H, Hayhoe MM, Shrivastava A. "What controls attention in natural environments?" Vision Research. 2001;41(25–26):3535–45. doi:10.1016/s0042-6989(01)00199-7.
- Tomasello M, Call J, Hare B. "Five primate species follow the visual gaze of conspecifics." Animal Behaviour. 1998;55:1063–9. doi:10.1006/anbe.1997.0636.
- Trueswell JC, Sekerina I, Hill NM, Logrip ML. "The kindergarten-path effect: studying on-line sentence processing in young children." Cognition. 1999;73(2):89–134. doi:10.1016/s0010-0277(99)00032-3.
- Walls GL. "The evolutionary history of eye movements." Vision Research. 1962;2:69–80.
- Watts DP. "A preliminary study of selective visual attention in female mountain gorillas (Gorilla gorilla beringei)." Primates. 1998;39:71–8.
- Yarbus AL. "Eye movements during perception of complex objects." In: Eye Movements and Vision. Plenum Press; 1967. chapter 8, pp. 171–211.
