Scientific Reports. 2019 Jul 25;9:10799. doi: 10.1038/s41598-019-47220-6

A low-cost hyperspectral scanner for natural imaging and the study of animal colour vision above and under water

N E Nevala 1, T Baden 1,2
PMCID: PMC6658669  PMID: 31346217

Abstract

Hyperspectral imaging is a widely used technology for industrial and scientific purposes, but the high cost and large size of commercial setups have made them impractical for most basic research. Here, we designed and implemented a fully open source and low-cost hyperspectral scanner based on a commercial spectrometer coupled to custom optical, mechanical and electronic components. We demonstrate our scanner’s utility for natural imaging in both terrestrial and underwater environments. Our design provides sub-nm spectral resolution between 350 and 950 nm, including the UV part of the light spectrum, which has been mostly absent from commercial solutions and previous natural imaging studies. By comparing the full light spectra from natural scenes to the spectral sensitivities of animals, we show how our system can be used to identify subtle variations in chromatic detail detectable by different species. In addition, we have created an open access database for hyperspectral datasets collected from natural scenes in the UK and India. Together with comprehensive online build and use instructions, our setup provides an inexpensive and customisable solution to gather and share hyperspectral imaging data.

Subject terms: Colour vision, Retina

Introduction

Hyperspectral imaging combines spatial and detailed spectral information of a scene to construct images where the full spectrum of light at each pixel is known1. Commercial hyperspectral imaging technology is used, for example, in food industry2,3, agriculture4,5, astronomy1 and low altitude aerial observations6,7. However, these devices typically are expensive, lack the ultraviolet (UV) part of the spectrum, or do not work under water. Moreover, many are bulky and must be attached to a plane or other heavy machinery, which makes them unsuitable for most basic research (but see6,7). Here, we present a low-cost and open source hyperspectral scanner design and demonstrate its utility for studying animal colour vision in the context of the natural visual world.

Animals obtain sensory information that meets their specific needs to stay alive and to reproduce. For many animals, this requires telling wavelength apart from intensity, an ability widely referred to as colour vision8. Studying what chromatic contrasts are available for an animal to see in nature requires measuring the spectral content of its environment (natural imaging) and comparing this to the eye’s spectral sensitivity.

Most previous work on natural imaging to study animal colour vision used sets of spectrally narrow images generated by iteratively placing different interference filters spanning 400–1,000 nm9–12 in front of a spectrally broad sensor array. So far, a major focus has been on our own trichromatic visual system, which samples the short (blue “B”), medium (green “G”) and long (red “R”) wavelength (“human visible”) range of the electromagnetic spectrum9,11,13–15. However, across animals the number and spectral sensitivity of retinal photoreceptor types vary widely. Perhaps most importantly, and unlike humans, many animals can see in the UV part of the spectrum, which has not been included in available hyperspectral measurements of terrestrial or underwater scenes. Johnsen et al. (2013, 2016)16,17 used an underwater hyperspectral imager (UHI) to map the seafloor and to identify structures and objects at varying depth, but shallower underwater habitats have not been studied in this way. Finally, Baden et al. (2013)18 used a hyperspectral scanner based on a UV-capable spectrometer and an optical fibre steered by two servo motors. With their setup it is possible to build hyperspectral images in a similar way to the design presented here, but the system is bulky and fragile, making it inconvenient to enclose in a waterproofed casing. Moreover, in their setup light from the scene is guided to the spectrometer through the optical fibre itself, which further complicates the build. Our design instead uses mirrors to overcome these shortcomings.

Here, we designed and built a low-cost open source hyperspectral scanner from 3D printed parts, off-the-shelf electronic components and a commercial spectrometer that can take full spectrum (~350–950 nm), low spatial resolution (4.2° horizontal, 9.0° vertical) images above and under water. With our fully open design and instructions it is possible for researchers to build and modify their own hyperspectral scanners at substantially lower costs compared to commercial devices (~£1,500 for a spectrometer if one is not already available, plus ~£113–340 for all additional components, compared to tens to hundreds of thousands of pounds for commercial alternatives). We demonstrate the performance of our system using example scans and show how this data can be used to study animal colour vision in the immediate context of their natural visual world. We provide all raw data of these and additional scans to populate a new public database of natural hyperspectral images measured in the UK and in India (https://zenodo.org/communities/hyperspectral-natural-imaging), to complement existing datasets19–21. Complete build and installation instructions are detailed in the manual on the project GitHub page: https://github.com/BadenLab/Hyperspectral-scanner.

Methods

Hardware design

The scanner (Fig. 1) is built around a trigger-enabled, commercial spectrometer (Thorlabs CCS200/M, advertised as 200–1,000 nm but effectively useful between 350 nm and 950 nm). A set of two movable UV reflecting mirrors (Thorlabs PFSQ10–03-F01 25.4 × 25.4 mm and PFSQ05-03-F01 12.7 × 12.7 mm) directs light from the scanned scene onto the spectrometer’s vertically elongated slit (20 µm × 1.2 mm) via a 1 mm diameter round pinhole placed at 23 mm distance from the slit, giving an effective opening angle of ~2.5° (Figs 1B,C, 3A, see also Baden et al. 2013)18. However, the effective resolution limit of the full system is ~4.2° (horizontal) and ~9.0° (vertical) (see results). To gradually assemble an image, an Arduino Uno microcontroller (www.Arduino.cc) iteratively moves the two mirrors via servo-motors along a pre-defined scan-path under serial control from a computer. At each new mirror position, the Arduino triggers the spectrometer via a transistor-transistor logic (TTL) pulse to take a single reading (Fig. 1D,E). An optional 9 V battery powers the Arduino to relieve its universal serial bus (USB) power connection. The entire set-up is encased in a waterproofed housing fitted with a quartz-window (Thorlabs WG42012 50.8 mm UVFS Broadband Precision Window) to permit light to enter (Fig. 1A). For underwater measurements, optional diving weights can be added to control buoyancy. All internal mechanical components were designed using the freely available OpenSCAD (www.OpenScad.org) and 3D printed on an Ultimaker 2 3D printer running Cura 2.7.0 (Ultimaker). For detailed build instructions including all 3D files and Arduino control code, see the project’s GitHub page at https://github.com/BadenLab/Hyperspectral-scanner.
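To make this operational logic concrete, the following host-side sketch (Python, using pyserial) illustrates the flow of stepping through a scan path and triggering one spectrometer reading per mirror position. The serial command format shown here is hypothetical and purely illustrative; the actual Arduino firmware and acquisition code are provided in the GitHub repository, and the CCS200 readings themselves are fetched through Thorlabs’ own driver software.

```python
# Minimal host-side sketch of the scan loop (illustrative only; the real
# control code lives in the project's GitHub repository). Assumes a
# hypothetical line-based serial protocol in which the Arduino accepts
# "MOVE <pan_deg> <tilt_deg>" commands, fires the TTL trigger to the
# spectrometer after each move, and replies when the reading has been taken.
import time
import serial  # pyserial

def run_scan(scan_path, port="COM3", baud=115200, settle_s=0.3):
    """scan_path: list of (pan_deg, tilt_deg) mirror positions."""
    visited = []
    with serial.Serial(port, baud, timeout=2) as ser:
        time.sleep(2)                      # allow the Arduino to reset after opening the port
        for pan, tilt in scan_path:
            ser.write(f"MOVE {pan:.2f} {tilt:.2f}\n".encode())
            ser.readline()                 # wait for acknowledgement (mirror settled, TTL sent)
            time.sleep(settle_s)           # spectrometer integration time per point
            visited.append((pan, tilt))    # spectra are collected by the spectrometer software
    return visited
```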

Figure 1.

A Hyperspectral scanner for low-cost natural imaging. (A) The waterproof casing with a window (white asterisk) for light to enter. The PVC tube on top protects the cables connected to the computer. (B) Internal arrangement of parts: the spectrometer, Arduino Uno microcontroller, 9 V battery, two servo motors (Motors 1 and 2) with mirrors attached to them and a round pinhole (r = 0.5 mm). (C) A schematic illustration of the optical path (Arduino, 9 V battery and cords are left out for clarity). The light beam (yellow lines) enters the system from above through the window, first reaches the larger mirror underneath the window of the casing, reflects to the smaller mirror and from there passes through the pinhole to the spectrometer’s slit. The pinhole is placed at 23 mm distance from the slit (20 µm × 1.2 mm effective slit dimension). Light deflected off the first mirror is partly shadowed by the edges of the casing, which creates dark stripes at the horizontal edges of the scanned images when the box is closed. These edges are cropped in the presented example scans (Figs 3, 8). Spectral filtering by the quartz window was corrected for in postprocessing (Fig. 3D,E). (D) Operational logic. The scanning path is uploaded to the Arduino from the computer via Serial 2 connection to define the motor movements. After each movement the spectrometer is triggered via TTL to take a measurement and send the data to the computer via serial. The ongoing state of the scanning path is fed from the control circuit to the computer. (E) Circuit diagram.

Figure 3.

Scanner performance. (A) Single pixel field of view (FOV) is vertically elongated as determined by spot-mapping. (B,C) A printout of a 3.8° white cross on a black background (B) was scanned with a 1,000 point spiral scanning path (Fig. 2D) to estimate the scanner’s spatial resolution. In (C), power (red and blue lines in the graphs) represents brightness profiles across the cross’ arms as indicated, superimposed on the original profile (black). (D) Spectrometer readings of a clear daylight sky taken through the spectrometer’s fibreoptic (orange) and through the complete optical path of the scanner (black, i.e. 2 mirrors and a quartz window, though lacking the fibreoptic). When purchased, the spectrometer is calibrated with the fibreoptic attached. Accordingly, we computed the corresponding correction curve and applied it to all scanner data presented throughout this work (E). (F) An action camera picture of the blue door + red brick wall measured outdoors and an RGB representation image of the scan when using opsin templates from human spectral sensitivity. Blue and red dots in the RGB representation refer to the two points used to show examples of individual spectra in (G).

Scan-paths

Four scan paths are pre-programmed into the Arduino control code: a 100-point raster at 6° x- and y-spacing (60° × 60°), and three paths with spirals covering an r = ±30° area with equally spaced 300, 600 or 1,000 points, respectively (Fig. 2). To generate the spirals, we computed n points of a Fermat’s spiral:

$$r = \sqrt{\theta \times n}$$
$$\theta = \pi\,(3 - \sqrt{5})$$

where r is the radius and θ, in radians, is the “golden angle” (~137.5°). Next, we sorted points by angle from the origin and thereafter ran a custom algorithm to minimise total path length. For this, we iteratively and randomly exchanged two scan positions and calculated total path length. Exchanges were kept if they resulted in path shortening but rejected in all other cases. Running this algorithm for 10^5 iterations resulted in the semi-scrambled scan paths shown in Fig. 2.
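As an illustration of this procedure, a minimal Python sketch of the spiral generation and the greedy path-shortening step might look as follows (function and variable names are ours; the scan paths actually used are generated by the scripts in the repository):

```python
import numpy as np

def fermat_spiral(n, radius=30.0):
    """n points on a Fermat's spiral, scaled to a disc of the given radius (degrees)."""
    golden = np.pi * (3.0 - np.sqrt(5.0))        # "golden angle", ~137.5 deg in radians
    k = np.arange(1, n + 1)
    r = np.sqrt(k / n) * radius                  # radius grows with sqrt(k): even coverage
    theta = k * golden
    return np.column_stack((r * np.cos(theta), r * np.sin(theta)))

def shorten_path(points, iterations=100_000, seed=0):
    """Greedy path shortening by random pairwise exchanges, kept only if shorter."""
    rng = np.random.default_rng(seed)
    order = np.argsort(np.arctan2(points[:, 1], points[:, 0]))   # sort by angle first
    path = points[order].copy()

    def length(p):
        return np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1))

    best = length(path)
    for _ in range(iterations):
        i, j = rng.integers(0, len(path), 2)
        path[[i, j]] = path[[j, i]]              # try swapping two scan positions
        new = length(path)
        if new < best:
            best = new                           # keep the swap if it shortens the path
        else:
            path[[i, j]] = path[[j, i]]          # otherwise revert
    return path
```

Sorting the points by angle before the exchange step simply gives the optimisation a reasonable starting path.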

Figure 2.

Four scanning paths covering the 60° area. (A) 100-point square raster; (B) 300-point, (C) 600-point and (D) 1,000-point paths created from Fermat’s spirals.

When choosing a suitable scan-path for a given application, it is important to weigh sampling density (and thus scan-time) against achievable resolution. The effective field of view (FOV), and thus resolution, of the scanner is ~4.2° × ~9.0° (see Results). In comparison, the pre-defined 300, 600 and 1,000 point spiral scan paths offer regular inter-point spacings of 3.1°, 2.1° and 1.6°, respectively (see the back-of-the-envelope estimate below). Accordingly, the 1,000 point spiral (Fig. 2D) oversamples the image in both the horizontal and vertical dimension (i.e. both the X and Y dimensions of the scanner’s FOV exceed the scan’s inter-point spacing by a factor of at least 2). The 600 point spiral (Fig. 2C) also oversamples vertically, but is well matched horizontally to the scanner’s effective resolution. Finally, the 300 point spiral (Fig. 2B) undersamples horizontally but still oversamples vertically. In comparison, the 100-point rectangle scan (Fig. 2A, spacing of 6° along cardinal and 8.5° along diagonal directions, respectively) undersamples in both dimensions and is therefore more suited for rapid “test-scans”. Another advantage of the round spiral scan paths is that they are matched to the scanner’s circular window. Overall, substantial oversampling can be desirable as it allows averaging out “noise” or movement in the scene in post-processing. Notably, the scanner can also be used standing on its side, thus effectively swapping the vertical and horizontal resolution limits.
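These spacings can be recovered approximately by assuming that the n points cover the r = 30° disc uniformly, so that each point occupies an area of πr²/n and the spacing is roughly the square root of that area. This back-of-the-envelope estimate is ours and is not stated explicitly in the methods, but it reproduces the quoted values closely:

$$\Delta \approx r\sqrt{\tfrac{\pi}{n}} \;\;\Rightarrow\;\; \Delta_{300} \approx 3.1^{\circ},\quad \Delta_{600} \approx 2.2^{\circ},\quad \Delta_{1000} \approx 1.7^{\circ}$$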

Alternative scan-paths, such as higher-density rectangular scans, a honeycomb pattern that compromises between sampling density and regularity, or one that acknowledges the asymmetry of horizontal and vertical resolution, can easily be implemented by the user. Details on how to execute the pre-programmed scan modes and how to alter them are included in the manual: https://github.com/BadenLab/Hyperspectral-scanner.

Data collection

All recordings shown in this work used the 1,000-point spiral. Acquisition time for each scan was 4–6 minutes, depending on the time set for each mirror movement (260–500 ms) and the spectrometer’s integration time (100–200 ms). These were adjusted based on the amount of light available in the environment to yield an approximately constant signal-to-noise ratio (SNR) between scans. In all cases, the scanner was supported on a hard-plastic box to maintain an upright position. All outdoor scans were taken in sunny weather with a clear sky. For details of the underwater measurement taken in West Bengal, India, see Zimmermann, Nevala, Yoshimatsu et al. (2018)22. In addition, we took a 180° RGB colour photograph of each scanned scene with an action camera (Campark ACT80 3K 360°) or a ~120° photograph with an ELP megapixel Super Mini 720p USB camera module.

Data analysis

The spectrometer was used with the factory-set spectral pre-compensation to ensure that readings are as accurate as possible across the full spectral range. This factory calibration was done with the optic fibre attached; however, our system gathers light through a quartz window and two mirrors without an optic fibre (Fig. 1A–C). We measured the additional spectral transfer function required to correct our data (Fig. 3D,E) and applied this correction to all measurements throughout this work. To obtain this transfer function, we pointed the spectrometer at the mid-day sun (a bright and spectrally broad light source) and took 100 readings each through the optic fibre (as factory calibrated), and then without the fibre but instead passing through the scanner’s full optical path. The transfer function is the element-wise ratio of the means of these two sets of recordings:

Let $e_i = (0, \ldots, 0, 1, 0, \ldots, 0)$ be the unit vector whose $i$th entry is 1; then

$$a(x) = \sum_{i=1}^{800} \frac{b_i(x)}{c_i(x)}\, e_i$$

where a(x) is the transfer function and b and c are the spectra taken through the full scanner and via the optic fibre, respectively. The inverse of this transfer function was applied to all subsequent spectra taken with the scanner. All data were analysed using custom scripts written in IGOR Pro 7 (Wavemetrics) and Fiji (NIH). To visualise scanned images, we calculated the effective brightness of each individual spectrum (hereafter referred to as “pixel”) as sampled by different animals’ opsin templates. In each case, we z-normalised each channel’s output across an entire scan and mapped the resultant brightness map to 16-bit greyscale or false-colour coded maps, in each case with zero centred at 2^15 and ranging from 0 to 2^16 − 1. We then mapped each pixel onto the 2D plane using a standard fish-eye projection. To map each spiral scan into a bitmap image, we scaled a blank 150 × 150 target vector to ±30° (the same as the scanner range), mapped each of the n scanner pixels to its nearest position in this target vector to yield n seed-pixels, and linearly interpolated between seed-pixels to give the final image. The 150 × 150 pixel (60 × 60 degree) target vector was truncated beyond 30° from the centre to cut the corners, which comprised no data points. We also created hyperspectral videos by adding a third dimension so that each pixel in the 150 × 150 target vector holds a full spectrum. In this way each video is constructed from 800 individual images, where each frame corresponds to a 1 nm window starting from 200 nm.
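A simplified sketch of this visualisation pipeline (transfer-function correction, projection onto an opsin template, z-normalisation, and interpolation of the spiral samples onto the 150 × 150 grid) is given below. The original analysis was implemented in IGOR Pro and Fiji; this Python version, with scipy’s griddata for the interpolation and a simple linear angle-to-pixel mapping in place of the fish-eye projection, is our approximation of the described steps.

```python
import numpy as np
from scipy.interpolate import griddata

def reconstruct_channel(spectra, positions, opsin, transfer, grid_px=150, fov_deg=60.0):
    """spectra:   (n_points, n_wavelengths) raw scanner readings
    positions: (n_points, 2) scan angles in degrees (x, y), within +/-30 deg
    opsin:     (n_wavelengths,) opsin sensitivity template
    transfer:  (n_wavelengths,) scanner-vs-fibre transfer function a(x)"""
    corrected = spectra / transfer                 # apply the inverse of the transfer function
    brightness = corrected @ opsin                 # effective brightness of each "pixel"
    z = (brightness - brightness.mean()) / brightness.std()   # z-normalise the channel

    # Map scan angles onto a 150 x 150 target grid spanning +/-30 degrees and
    # linearly interpolate between the seed pixels.
    half = fov_deg / 2.0
    xi = np.linspace(-half, half, grid_px)
    gx, gy = np.meshgrid(xi, xi)
    img = griddata(positions, z, (gx, gy), method="linear")

    # Truncate beyond 30 degrees from the centre (the corners hold no data points).
    img[np.hypot(gx, gy) > half] = np.nan

    # Map to 16 bit with zero centred at 2**15.
    img16 = np.clip(img / np.nanmax(np.abs(img)) * (2**15 - 1) + 2**15, 0, 2**16 - 1)
    return img16
```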

Principal component analysis

For principal component analysis (PCA), we always projected across the chromatic dimension (e.g. a human trichromatic image would use 3 basis vectors, “red”, “green” and “blue”) after z-normalising each vector.
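A minimal sketch of this step, assuming the opsin channels have already been computed as one z-scored value per channel and scan point (our own re-implementation of the described analysis, not the authors’ IGOR Pro script):

```python
import numpy as np

def chromatic_pca(channels):
    """channels: (n_points, n_opsins) z-normalised opsin activations.
    Returns the principal axes (rows), per-point scores and % variance explained.
    PC1 ~ achromatic axis; PC2..n ~ chromatic axes C1..C(n-1)."""
    X = channels - channels.mean(axis=0)
    cov = np.cov(X, rowvar=False)                   # n_opsins x n_opsins covariance
    eigval, eigvec = np.linalg.eigh(cov)            # eigenvalues in ascending order
    order = np.argsort(eigval)[::-1]                # sort by variance explained
    axes = eigvec[:, order].T                       # each row: loadings of one axis
    scores = X @ axes.T                             # project every scan point
    explained = eigval[order] / eigval.sum() * 100  # percent variance per axis
    return axes, scores, explained
```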

Results

The scanner with water-proofed casing, its inner workings and control logic are illustrated in Fig. 1. Light from the to-be-imaged scene enters the box through the quartz window (Fig. 1A) and reflects off the larger and then the smaller mirror, passing through a pinhole to illuminate the active part of the spectrometer (Fig. 1B,C). To scan a scene, an Arduino script is started via serial command from a computer to iteratively move the two mirrors through a pre-defined scan path (Methods, Fig. 2 and Supplementary Video 1). At each scan-position, the mirrors briefly wait while the spectrometer is triggered to take a single reading. All instructions for building the scanner, including 3D part models and the microcontroller control code are provided at the project’s GitHub page at https://github.com/BadenLab/Hyperspectral-scanner.

Scanner performance

In our scanner design, several factors contribute to the spatial resolution limit of the complete system. These include the spacing of the individual scan-points (discussed in Methods), the angular precision of the servo-motors, the effective angular size of the pinhole in two dimensions, the optical properties of the mirrors and the quartz window, as well as the dimensions of the spectrometer’s slit. Therefore, to establish the scanner’s effective spatial resolution, we first determined a single “pixel’s” effective field of view (FOV). For this, we statically pointed the scanner at a PC screen and presented individual 5° white squares on a black background at each of the 5 × 5 positions of a grid pattern, each time noting the total signal power recorded by the spectrometer. This revealed that the FOV is vertically elongated, likely due to the spectrometer’s vertically oriented slit (Fig. 3A). To determine how this elongated FOV impacts spatial resolution in an actual scanned scene, we scanned a printout of a 3.8°-wide white cross on a black background in the mid-day sun using a 1,000-point spiral (Figs 2D, 3B,C) and compared the result to the original scene (Fig. 3B,C). The difference between the horizontal profile across the cross’ vertical arm and the original scene approximately equated to a Gaussian blur of 2.1° standard deviation. This effectively translates to ~4.2° as the finest detail the scanner can reliably resolve along the horizontal axis under these light conditions. Vertically, this blur was about twice that (~9.0°), in line with the vertically elongated FOV. While this spatial resolution falls far behind even the simplest commercial digital camera systems, our scanner instead provides a 600 nm spectral range at sub-nm resolution that can be used to identify fine spectral details in the scanned scene.
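The “equivalent Gaussian blur” figure can be estimated by blurring the known target profile with Gaussians of increasing width and selecting the width that best matches the measured profile. The sketch below is our reconstruction of that idea, not the exact analysis script used:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def equivalent_blur(ideal, measured, deg_per_sample, sigmas_deg=np.arange(0.5, 6.0, 0.1)):
    """Find the Gaussian SD (in degrees) that best maps the ideal brightness
    profile of the cross onto the measured profile."""
    errors = []
    for s in sigmas_deg:
        blurred = gaussian_filter1d(ideal, s / deg_per_sample)  # sigma in samples
        errors.append(np.mean((blurred - measured) ** 2))       # mean squared error
    return sigmas_deg[int(np.argmin(errors))]
```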

To illustrate the scanner’s spectral resolution, we took a 1,000-point scan in the mid-day sun of a blue door and a red brick wall (Fig. 3F) and reconstructed the scene based on human red, green and blue opsin templates23 to assemble an RGB image (Methods, Fig. 3F). From this scan, we then picked two individual “pixels” (blue and red dots) and extracted their full spectra (Fig. 3G). Below, we illustrate the scanner’s use with examples from terrestrial and underwater scenes.

Natural imaging and animal colour vision

The ability to take high-spectral resolution images is useful for many applications, including food quality controls2,3, agricultural monitoring4,5 and surface material identification from space1. Another possibility is to study the spectral information available for colour vision by different animals. Here, our portable, waterproofed and low-cost hyperspectral scanner reaching into the UV range allows studying the light environment animals live in. To illustrate what can be achieved in this field, we showcase scans of three different scenes: a forest scene from Brighton, UK (Figs 4–6), a close-up scan of a flowering cactus (Fig. 7) and an underwater river scene from West Bengal, India (Fig. 8). In each case, the estimated 60° field of view covered by the scanner is indicated in the accompanying widefield photos (Figs 4A, 5A, 7A and 8A). To showcase chromatic contrasts available for colour vision by different animals in these scenes, we reconstructed the forest and cactus data with mouse (Mus musculus), human (Homo sapiens), bee (Apis mellifera), butterfly (Graphium sarpedon), chicken (Gallus gallus domesticus) and zebra finch (Taeniopygia guttata) spectral sensitivities (Figs 6B, 7C). The underwater scan was reconstructed based on zebrafish (Danio rerio) spectral sensitivity (Fig. 8B)23–28. In addition, we provide hyperspectral movies between 200 and 1,000 nm for these three scenes, where each frame is a 1 nm instance of the scanned scene (Supplementary Videos 2–4). These videos illustrate how different structures in the scene appear at different wavelengths.

Figure 4.

An example data set of the forest scene with human spectral sensitivity. (A) A 180° photo of the forest scene with an approximate 60° scanner covered area (left). On the right, monochromatic R-, G- and B-channels were constructed from the scanned data by multiplying spectra from each pixel with the opsin templates (see Figs 6B, 7C). The RGB image shows the reconstruction built based on the opsin channels. The different colour appearance of this RGB reconstruction compared to the photograph is due to differential colour-channel equalisations in the two images. (B) Pixels from the R-, G- and B-channels aligned in the order of the measurement with an arrow on the right indicating the direction of the principal component analysis (PCA) across the measurement points. (C) Achromatic and chromatic axes C1–2 aligned in the same order as in the previous image, and then reconstructed back to images in (D) to add the spatial information. The PC RGB image shows C1 in red and C2 in green (blue set to constant brightness). (E) Loadings from achromatic and chromatic axes, bars illustrating the amount of input from each opsin channel. (F) The cumulative variance explained (%) for each axis.

Figure 6.

PC reconstructions of the forest scene. (A) Achromatic and chromatic PCA reconstructions from the forest scene data for a mouse (Mus musculus), a human (Homo sapiens), a bee (Apis mellifera), a butterfly (Graphium sarpedon), a chicken (Gallus gallus domesticus) and a zebra finch (Taeniopygia guttata), and PC RGB pictures. The number of chromatic axes equals the number of cone types minus 1. Again, the PC RGB picture is constructed from chromatic axes C1-n; C1 is shown as red, C2 as green and C3 as blue. (B) Opsin absorption curves showing the spectral sensitivity of the cones for each animal. The pink, blue, green and red curves correspond to UV, blue, green and red sensitive opsins, respectively.

Figure 7.

PC reconstructions of the flowering cactus. (A) A 120° photo of the scanned scene with a flowering cactus and the approximate 60° window (black circle) the scanner can cover. (B,C) Reconstructions for the chromatic axes C1-n and PC RGB images (B) and the absorption curves for each animal (C) as in Fig. 6.

Figure 8.

An underwater scene from India with zebrafish spectral sensitivity. (A) A 180° photo of the scanned underwater river scene from West Bengal, India, and the approximate 60° scanner covered window. (B) The zebrafish opsin complement. (C) The monochromatic opsin channels (RGBU) and the RGB reconstruction as in Fig. 5. (D) The achromatic and chromatic axes reconstructed back to images to show where in the scene information based on each axis can be found. (E) Loadings from each opsin channel as explained in Fig. 4E. (F) The cumulative variance explained (%) for each axis.

Figure 5.

The forest scene with zebra finch spectral sensitivity. (A) A still image of the forest scene with the approximated 60° scanner covered area, monochromatic opsin channels (R, G, B, U) and an RGB reconstruction where R is shown as red, G as green and B + U as blue. (B–F) As in Fig. 3, with an addition of the UV channel (U) in all images. The PC RGB image in (D) displays C1 in red, C2 in green and C3 in blue.

First, we used the data from the forest scene scan to compute how a trichromat human with three opsins (red, green and blue) might see it (Fig. 4). To this end, we multiplied the spectra from each “pixel” with the spectral sensitivity of each of the three corresponding opsin templates to create “opsin activation maps” (red “R”, green “G” and blue “B”, Fig. 4A, Methods), hereafter referred to as “channels”. These false-colour coded, monochromatic images show the luminance driving each opsin across the scene. In this example, the R- and G-channels clearly highlight the dark band of trees in the middle of the scene, with varying light and dark structures in the sky and on the ground. However, the B-channel shows mainly structures from the sky but provides low contrast on the ground. To illustrate how these channels can be used for our sense of colour vision, we combined them into an RGB image (Fig. 4A, right).

Next, we used principal component analysis (PCA) to highlight spectral structure in the data. When using PCA on natural images it is common to compute across the spatial dimension9,29; however, we computed across the spectral dimension (i.e. the individual measurement points in 3-dimensional RGB space) by using the R-, G- and B-channels as 3 basis vectors (Fig. 4B,C). In natural scenes, most variance is driven by changes in overall luminance rather than chromatic contrasts9,12,13. In this type of data, the first principal component (PC1) therefore reliably extracts the achromatic (greyscale) image content. From here it follows that all subsequent principal components (PC2-n) must describe the chromatic axes in the image, in decreasing order of importance. For simplicity, we hereafter refer to PC1 as the achromatic axis and PC2, PC3 and (where applicable) PC4 as the first, second and third chromatic axes, respectively (C1, C2, C3). When applied to the example scan of the forest scene with human spectral sensitivity, the achromatic image with near equal loadings across the R-, G- and B-channels accounted for the majority (97.7%) of the total image variance (Fig. 4D–F), in agreement with previous work9,12,13. This left 2.3% total variance for the first and second chromatic axes C1 and C2 (Table 1). In line with Ruderman et al. (1998)9, the chromatic contrasts emerging from PCA were R + G against B (C1, long- vs short-wavelength opponency) and R against G while effectively ignoring B (C2, Fig. 4E). These two chromatic axes predicted from the hyperspectral image matched the main chromatic comparisons performed by the human visual system (“blue vs. yellow” and “red vs. green”). To show where in the image different chromatic contrasts exist across space, and to facilitate visual comparison between animals, we also mapped the chromatic axes into an RGB image such that R displays C1, G displays C2 and B displays C3. Since the trichromat human can only compute two orthogonal chromatic axes (nOpsins – 1), C3 was set to 2^15 (i.e. the mid-point of the 16-bit range) in this example. These PC-based RGB images ignore the brightness variations of the achromatic channel, therefore describing only chromatic information in a scene. This specific projection allows a trichromat human observer viewing an RGB-enabled screen or printout to judge where in a scanned scene an animal might detect dominant chromatic contrasts, even if that animal uses more than three spectral cone types for colour vision. The power of this approach can be illustrated when considering non-human colour vision based on the same dataset.
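In practice, the PC RGB visualisation described above amounts to assigning the chromatic scores to colour channels and holding any unused channel at the 16-bit mid-point. A sketch, under the same assumptions as the earlier code examples and with names of our choosing:

```python
import numpy as np

def pc_rgb(chromatic_scores, grid_shape):
    """chromatic_scores: (n_pixels, k) scores along C1..Ck (k <= 3), already mapped
    onto a regular grid and flattened row-wise. Unused channels are held at the
    16-bit mid-point (2**15), as for C3 in the trichromat human example."""
    n = chromatic_scores.shape[0]
    rgb = np.full((n, 3), 2**15, dtype=np.float64)       # default: mid-grey everywhere
    for c in range(min(3, chromatic_scores.shape[1])):
        s = chromatic_scores[:, c]
        rgb[:, c] = np.clip(s / np.max(np.abs(s)) * (2**15 - 1) + 2**15, 0, 2**16 - 1)
    return rgb.reshape(*grid_shape, 3).astype(np.uint16)  # e.g. grid_shape = (150, 150)
```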

Table 1.

The total variance explained by chromatic axes C1-n in the forest and cactus scans.

Variance explained by chromatic axes C1–n (%)

                Forest (Fig. 6)    Cactus (Fig. 7)
Mouse           2.6                8
Human           2.3                1.4
Bee             3.9                8.1
Butterfly       3.8                3.8
Chicken         6.7                2.9
Zebra finch     7.5                6.5

An animal’s opsin complement dictates discernible chromatic contrasts.

Unlike humans, many animals use the ultraviolet (UV) part of the spectrum for vision30,31. To illustrate how the addition of a UV channel can change the available chromatic information, we next performed the same analysis for the tetrachromatic zebra finch (Fig. 5). This bird uses four approximately equally spaced opsins (red, green, blue and UV), which in addition are spectrally sharpened by oil droplets26. As before, we computed the monochromatic opsin channels (RGB and “U” for UV, Fig. 5A); the R- and G-channels showed structures both in the sky and on the ground, while the B- and U-channels mainly highlighted the sky. We next computed the principal components across the now four opsin channels (Fig. 5B–F).

This time the achromatic axis explained only 92.5% of the total variance, leaving 7.5% for chromatic comparisons, which now comprised three chromatic axes (C1–3, Table 1). As with humans, the most important chromatic axis compared long- and short-wavelength channels (C1, R + G against B + U, single zero crossing in Fig. 5E). C2 was also similar to its human counterpart in comparing the R- and G-channels, but in addition paired the R-channel with UV and the G-channel with blue (two zero crossings). While the spatial structure highlighted by C1 was similar to that of the human, C2 picked up additional details from the ground (Fig. 5D). Finally, C3 (R + B against G + U) highlighted additional structures in the scene that are largely invisible to the human observer.

To further survey how an animal’s opsin complement affects which chromatic details are detectable in complex scenes, we compared data from the forest scene (Fig. 6) to a close-up scan of a flowering cactus (Fig. 7) and filtered each using different animals’ spectral sensitivities: the dichromatic mouse, the trichromatic human and bee, and the tetrachromatic butterfly, chicken and zebra finch. In these scenes, the order of the chromatic axes was largely stable across the opsin complements used (PC1 – achromatic, C1 – long vs short wavelengths, C2 – R + U vs G + B, C3 – R + B vs G + U), and here we only show the achromatic and C1–3 reconstructions alongside the PC RGB images (Figs 6A, 7B) next to the spectral sensitivity of each animal (Figs 6B, 7C). In each case, the number of chromatic channels shown corresponds to the number of the animal’s cone types minus 1.

The chromatic axes usable by different animals revealed diverse spatio-chromatic structures in both scenes (Figs 6, 7). Across all animals compared, C1 still reliably highlighted a long- vs. short-wavelength axis, but the exact image content picked up along C1-n varied between opsin complements (Figs 6A, 7B). For example, in the cactus scene the chicken’s C1 highlighted spatial structures in the image that other animals instead picked up with C2. A similar difference was also seen in the forest scene, where C2 and C3 in the butterfly showed structures that were captured in the inverse order in the chicken and zebra finch (Fig. 6A). In addition, the arrangement and structure of the chromatic axes were more consistent between humans and butterflies than between either and the other animals, possibly due to the similarly overlapping spectral sensitivities of their green and red cones.

For all animals in both scenes, the achromatic image content captured at least 91.9% of the total variance, leaving 1.4–8.1% for the chromatic axes (Table 1). For the forest scene, additional opsin channels increased the amount of variance explained by the chromatic axes, in particular for animals with widely spaced spectral channels (e.g. the chicken and butterfly, Table 1). In general, more chromatic detail was discerned with more cones, especially when these cones had low-overlap spectral sensitivities covering a wide range of the natural light spectrum (e.g. from around 350 nm to over 600 nm, as in the zebra finch). Moreover, spectral sharpening of the opsin peaks through the addition of oil droplets (chicken and zebra finch) brought out further details and higher chromatic contrasts in the scanned scene. The order of importance of the chromatic axes that optimally decompose scans depended strongly on the set of input vectors – the spectral shape and position of the animal’s opsins.

Hyperspectral imaging under water

As light travels through the water column, water and dissolved particles absorb both extremes of the light spectrum, making it increasingly monochromatic with depth12,32. Mainly because of this filtering and scattering, underwater light environments have spectral characteristics that differ strongly from terrestrial scenes. To illustrate one example from this underwater world, we show a scan of a shallow freshwater river scene (Fig. 8A) taken in the natural habitat of zebrafish (Danio rerio) in West Bengal, India22. The data were analysed based on the spectral sensitivity of the tetrachromatic zebrafish, with red, green, blue and UV sensitive cones (Fig. 8B)24,30. In this example, the monochromatic R-, G- and B-channels picked up different dominant spatial structures in the scene, while the U-channel appeared more “blurry”, with only small intensity differences around the horizon (Fig. 8C). Here, the total variance explained by the chromatic axes C1–3 (14.7%, Fig. 8F) was higher than in the two terrestrial scenes. C1 compared long (R + G) and short (B + U) wavelengths between the upper and lower parts of the scene (Fig. 8D,E), a contrast that arises from spectral filtering under water. Finally, C2 and C3 brought out further details that probably correspond to parts of the imaged vegetation.

An open database for natural imaging

Based on these and additional scans taken above and under water around the world (for example, see Zimmermann et al. 2018)22, we created an open access database online (https://zenodo.org/communities/hyperspectral-natural-imaging). All measurements in the database were taken with the hyperspectral scanner described here.

Discussion

We have designed and implemented an inexpensive and easy-to-build alternative to commercial hyperspectral scanners suited for field work above and under water. Without the spectrometer (~£1,500), the entire system can be built for ~£113–340, making it notably cheaper than commercial alternatives. In principle, any trigger-enabled spectrometer can be used for the design. Alternatively, spectrometers can also be home-built33,34 to further reduce costs.

When studying natural imagery in relation to animal colour vision, it is important to consider how the spatial detail of the measured image relates to the spatial detail the animal’s retina can resolve. The spatial resolution limit of our scanner with the oversampling 1,000-point scan (4.2°), though substantially below that of most commercial camera systems, is close to the behavioural resolution limit of key model species like zebrafish larvae (~3°)35 or fruit flies (~1–4°)36, but falls short of the spatial resolution achieved by most larger species such as mice or primates. Accordingly, natural imaging data obtained with our scanner spatially undersamples the natural visual world of these larger animals. However, when studying animal colour vision this is not necessarily a major issue. First, spatial contrast in images is generally scale-invariant. Second, most animal visual systems inherently combine a low-spatial-resolution chromatic representation of the visual world with a high-spatial-resolution achromatic representation37–39. As such, our system can likely also give useful insights into the chromatic visual world of animals with much more highly resolved eyes.

The spatial resolution of our system could in principle be further improved, for example by using a smaller pinhole in combination with higher-angular-precision motors. However, the amount of natural light available for vision is limited, especially when imaging under water, where light is quickly attenuated with increasing depth. As a result, higher spatial resolution in our system would require substantially increased integration times for each pixel. This would result in very long scan durations, which is unfavourable when scanning quickly changing natural environments. Alternatively, the addition of a lens or parabolic mirrors would substantially increase the total amount of light picked up by the system, thus bringing down integration time. Finally, the use of an elongated pinhole oriented perpendicular to the spectrometer slit may help set up a more symmetrical field of view. These modifications would likely need to go hand in hand with substantial mechanical alterations, increased cost, and possibly new limitations pertaining to chromatic aberrations.

Spatial resolution aside, the spectral range and detail of our scanning approach far exceed the spectral performance of interference filter-based approaches, as used in most previous hyperspectral imaging studies9,11,12,20,40. This difference may be crucial for some questions. For example, zebrafish have four opsin genes for middle-wavelength-sensitive (MWS) cones (“green cones”) that are used in different parts of the retina and are separated in spectral sensitivity by only a few nanometres25,41. Most interference filter setups use relatively broad spectral steps and would therefore miss small details in natural scenes that could be picked up by the slightly different spectral sensitivities of different opsins. By choosing individual “pixels” and the spectra they hold, it is possible to analyse fine details in complex scenes that animals can use for colour vision. This can be done even with very coarse spatial resolution to reveal structures that would otherwise remain undetected. In agreement with previous studies, we have shown how principal component analysis helps to separate achromatic and chromatic information in natural images9,12. Here, PCA across the chromatic channels highlights spatio-chromatic aspects of the scene that may be useful for vision. Perhaps not surprisingly, this reveals major, overall trends in landscapes (Figs 4–6), with a short-wavelength dominated sky and long-wavelength dominated ground. This is true also for the underwater habitat (Fig. 8), where the light spectrum in the water column transforms from “blue-ish” short-wavelength dominated to “red-ish” long-wavelength dominated with increasing depth22. The PCs can also highlight details in complex scenes that might otherwise stay hidden but that may be important for animals to see in their natural habitats.

With our examples from terrestrial and aquatic environments (Figs 4–8) we demonstrate how our device and the resulting data can be used to study the first steps of animal colour vision. With diverse and careful measurements, it is possible to reach a better understanding of the spectral environments that animals live in. By considering their photoreceptor tunings, it is then possible to get a first idea of what might be important for specific animals to see in their natural habitats. However, to more fully understand how animals use and respond to the spectral information reaching their retinas, additional direct physiological recordings as well as behavioural testing are needed8.

Conclusion

We have shown how our simple, self-made scanner can produce hyperspectral images that can be used to study animal colour vision. We demonstrate this with examples from both terrestrial and aquatic environments and show how individual hyperspectral images can be used to make comparisons between different species and their possible views of the world. We have also started to populate an open database of hyperspectral images from various natural scenes (https://zenodo.org/communities/hyperspectral-natural-imaging). In the future, it will be interesting to survey a more varied set of habitats and, for example, to compare how closely related animal species living in different habitats have evolved differing visual abilities. This could also include variations of the presented design, for example to scan larger fields of view, or a time-automation mode by which the same scene can be conveniently followed over the course of a day. We will be pleased to facilitate others’ additions to the design through a centralised project repository (https://github.com/BadenLab/Hyperspectral-scanner) and hope that in this way more researchers will be able to contribute to building a more global picture of the natural light available for animal vision on Earth.

Supplementary information

Supplementary Video 2 (141.4MB, avi)
Supplementary Video 3 (143.8MB, avi)
Supplementary Video 4 (184.4MB, avi)
Supplementary Video 1. (28.2MB, mov)

Acknowledgements

We thank Kripan Sarkar and Fredrik Jutfeld for help with fieldwork, and Dan-Eric Nilsson, Daniel Osorio and Thomas Euler for general discussions. The authors would also like to acknowledge support from the FENS-Kavli Network of Excellence. Funding was provided by the European Research Council (ERC-StG “NeuroVisEco” 677687 to T.B.), The Medical Research Council (T.B., MC_PC_15071), The Biotechnology and Biological Sciences Research Council (T.B., BB/R014817/1), the Leverhulme Trust (PLP-2017-005 to T.B.) and the Lister Foundation.

Author Contributions

The scanner was conceived and implemented by N.E.N. and T.B. Data was analysed by N.E.N. using custom scripts written by T.B. and modified by N.E.N. The paper was written by N.E.N. with help from T.B.

Competing Interests

The authors declare no competing interests.

Footnotes

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

N. E. Nevala, Email: n.nevala@sussex.ac.uk

T. Baden, Email: t.baden@sussex.ac.uk

Supplementary information

Supplementary information accompanies this paper at 10.1038/s41598-019-47220-6.

References

  • 1.Goetz AFH, Vane G, Solomon JE, Rock BN. Imaging Spectrometry for Earth Remote Sensing. Science. 1985;228:1147–1153. doi: 10.1126/science.228.4704.1147. [DOI] [PubMed] [Google Scholar]
  • 2.ElMasry G, Sun D-W, Allen P. Near-infrared hyperspectral imaging for predicting colour, pH and tenderness of fresh beef. J. Food Eng. 2012;110:127–140. doi: 10.1016/j.jfoodeng.2011.11.028. [DOI] [Google Scholar]
  • 3.Gowen AA, O’Donnell CP, Cullen PJ, Downey G, Frias JM. Hyperspectral imaging - an emerging process analytical tool for food quality and safety control. Trends Food Sci. Technol. 2007;18:590–598. doi: 10.1016/j.tifs.2007.06.001. [DOI] [Google Scholar]
  • 4.Lelong CCD, Pinet PC, Poilve H. Hyperspectral imaging and stress mapping in agriculture: A case study on wheat in Beauce (France) Remote Sens. Environ. 1998;66:179–191. doi: 10.1016/S0034-4257(98)00049-2. [DOI] [Google Scholar]
  • 5.Monteiro ST, Minekawa Y, Kosugi Y, Akazawa T, Oda K. Prediction of sweetness and amino acid content in soybean crops from hyperspectral imagery. ISPRS J. Photogramm. Remote Sens. 2007;62:2–12. doi: 10.1016/j.isprsjprs.2006.12.002. [DOI] [Google Scholar]
  • 6.Uto K, Seki H, Saito G, Kosugi Y, Komatsu T. Development of a Low-Cost, Lightweight Hyperspectral Imaging System Based on a Polygon Mirror and Compact Spectrometers. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016;9:861–875. doi: 10.1109/JSTARS.2015.2472293. [DOI] [Google Scholar]
  • 7.Uto K, Seki H, Saito G, Kosugi Y, Komatsu T. Development of a Low-Cost Hyperspectral Whiskbroom Imager Using an Optical Fiber Bundle, a Swing Mirror, and Compact Spectrometers. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016;9:3909–3925. doi: 10.1109/JSTARS.2016.2592987. [DOI] [Google Scholar]
  • 8.Baden, T. & Osorio, D. The retinal basis of vertebrate color vision. Preprints (2018). [DOI] [PubMed]
  • 9.Ruderman DL, Cronin TW, Chiao C-C. Statistics of cone responses to natural images: implications for visual coding. J. Opt. Soc. Am. A. 1998;15:2036. doi: 10.1364/JOSAA.15.002036. [DOI] [Google Scholar]
  • 10.Brelstaff G, Párraga A, Troscianko T, Carr D. Hyper-spectral camera system: acquisition and analysis. Proc. SPIE. 1995; pp. 150–159. [Google Scholar]
  • 11.Nagle MG, Osorio D. The tuning of human photopigments may minimize red-green chromatic signals in natural conditions. Proc. Biol. Sci. 1993;252:209–13. doi: 10.1098/rspb.1993.0067. [DOI] [PubMed] [Google Scholar]
  • 12.Chiao C-C, Cronin TW, Osorio D. Color signals in natural scenes: characteristics of reflectance spectra and effects of natural illuminants. Opt. Soc. Am. 2000;17:218–224. doi: 10.1364/JOSAA.17.000218. [DOI] [PubMed] [Google Scholar]
  • 13.Lewis A, Zhaoping L. Are cone sensitivities determined by natural color statistics? J. Vis. 2006;6:8. doi: 10.1167/6.3.8. [DOI] [PubMed] [Google Scholar]
  • 14.Webster MA, Mollon JD. Adaptation and the Color Statistics of Natural Images. Vis. Res. 1997;37:3283–3298. doi: 10.1016/S0042-6989(97)00125-9. [DOI] [PubMed] [Google Scholar]
  • 15.Buchsbaum G, Gottschalk A. Trichromacy, opponent colours coding and optimum colour information transmission in the retina. Proc. R. Soc. Lond. B. 1983;220:89–113. doi: 10.1098/rspb.1983.0090. [DOI] [PubMed] [Google Scholar]
  • 16.Johnsen G, Ludvigsen M, Sørensen A, Sandvik Aas LM. The use of underwater hyperspectral imaging deployed on remotely operated vehicles - methods and applications. IFAC-PapersOnLine. 2016;49:476–481. doi: 10.1016/j.ifacol.2016.10.451. [DOI] [Google Scholar]
  • 17.Johnsen G., Volent Z., Dierssen H., Pettersen R., Ardelan M.Van, Søreide F., Fearns P., Ludvigsen M., Moline M. Subsea Optics and Imaging. 2013. Underwater hyperspectral imagery to create biogeochemical maps of seafloor properties; pp. 508–540e. [Google Scholar]
  • 18.Baden T, et al. A tale of two retinal domains: Near-Optimal sampling of achromatic contrasts in natural scenes through asymmetric photoreceptor distribution. Neuron. 2013;80:1206–1217. doi: 10.1016/j.neuron.2013.09.030. [DOI] [PubMed] [Google Scholar]
  • 19.Foster DH, Amano K, Nascimento SMC, Foster MJ. Frequency of metamerism in natural scenes. J. Opt. Soc. Am. A. 2006;23:2359. doi: 10.1364/JOSAA.23.002359. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Párraga CA, Brelstaff G, Troscianko T, Moorehead IR. Color and luminance information in natural scenes. J. Opt. Soc. Am. A. Opt. Image Sci. Vis. 1998;15:563–569. doi: 10.1364/JOSAA.15.000563. [DOI] [PubMed] [Google Scholar]
  • 21.Baden, T. et al. A Tale of Two Retinal Domains: Near-Optimal Sampling of Achromatic Contrasts in Natural Scenes through Asymmetric Photoreceptor Distribution [Data set], 10.5281/zenodo.1204501 (2014). [DOI] [PubMed]
  • 22.Zimmermann MJY, et al. Zebrafish Differentially Process Color across Visual Space to Match Natural Scenes. Curr. Biol. 2018;28:2018–2032.e5. doi: 10.1016/j.cub.2018.04.075. [DOI] [PubMed] [Google Scholar]
  • 23.Stockman A, Sharpe LT. The spectral sensitivities of the middle- and long-wavelength-sensitive cones derived from measurements in observers of known genotype. Vision Res. 2000;40:1711–1737. doi: 10.1016/S0042-6989(00)00021-3. [DOI] [PubMed] [Google Scholar]
  • 24.Allison WT, Haimberger TJ, Hawryshyn CW, Temple SE. Visual pigment composition in zebrafish: Evidence for a rhodopsin-porphyropsin interchange system. Vis. Neurosci. 2004;21:945–952. doi: 10.1017/S0952523804216145. [DOI] [PubMed] [Google Scholar]
  • 25.Chinen A, Hamaoka T, Yamada Y, Kawamura S. Gene duplication and spectral diversification of cone visual pigments of zebrafish. Genetics. 2003;163:663–675. doi: 10.1093/genetics/163.2.663. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Toomey MB, et al. Complementary shifts in photoreceptor spectral tuning unlock the full adaptive potential of ultraviolet vision in birds. Elife. 2016;5:1–27. doi: 10.7554/eLife.15675. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Peitsch D, et al. The spectral input systems of hymenopteran insects and their receptor-based colour vision. J. Comp. Physiol. A. 1992;170:23–40. doi: 10.1007/BF00190398. [DOI] [PubMed] [Google Scholar]
  • 28.Jacobs GH, Neitz J, Deegan JF. Retinal receptors in rodents maximally sensitive to ultraviolet light. Nature. 1991;353:655–656. doi: 10.1038/353655a0. [DOI] [PubMed] [Google Scholar]
  • 29.Hancock PJB, Baddeley RJ, Smith LS. The principal components of natural images. Netw. Comput. Neural Syst. 1992;3:61–70. doi: 10.1088/0954-898X_3_1_008. [DOI] [Google Scholar]
  • 30.Hunt DM, Wilkie SE, Bowmaker JK, Poopalasundaram S. Vision in the ultraviolet. Cell. Mol. Life Sci. 2001;58:1583–1598. doi: 10.1007/PL00000798. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Witzany Guenther., editor. Biocommunication of Animals. Dordrecht: Springer Netherlands; 2014. [Google Scholar]
  • 32.Morris DP, et al. The attenuation of solar UV radiation in lakes and the role of dissolved organic carbon. Limnol. Oceanogr. 1995;40:1381–1391. doi: 10.4319/lo.1995.40.8.1381. [DOI] [Google Scholar]
  • 33.Rossel, E. OtterVIS LGL Spectrophotometer. Available at https://www.thingiverse.com/thing:2215840 (Accessed: 18th December 2017).
  • 34.Warren, J. & CC-BY-SA 2017 Public Lab contributors. Desktop Spectrometry Kit 3.0. Available at https://publiclab.org/wiki/desktop-spectrometry-kit-3-0 (Accessed: 18th December 2017).
  • 35.Haug MF, Biehlmaier O, Mueller KP, Neuhauss SC. Visual acuity in larval zebrafish: behavior and histology. Front. Zool. 2010;7:8. doi: 10.1186/1742-9994-7-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Juusola M, et al. Microsaccadic information sampling provides Drosophila hyperacute vision. Elife. 2017;6:1–148. doi: 10.7554/eLife.26117. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Giurfa M, Vorobyev M, Kevan P, Menzel R. Detection of coloured stimuli by honeybees: minimum visual angles and receptor specific contrasts. J. Comp. Physiol. A. 1996;178:699–709. doi: 10.1007/BF00227381. [DOI] [Google Scholar]
  • 38.Lind O, Kelber A. The spatial tuning of achromatic and chromatic vision in budgerigars. J. Vis. 2011;11:2–2. doi: 10.1167/11.7.2. [DOI] [PubMed] [Google Scholar]
  • 39.Mullen KT. The contrast sensitivity of human colour vision to red-green and blue-yellow chromatic gratings. J. Physiol. 1985;359:381–400. doi: 10.1113/jphysiol.1985.sp015591. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Osorio D, Ruderman DL, Cronin TW. Estimation of errors in luminance signals encoded by primate retina resulting from sampling of natural images with red and green cones. J. Opt. Soc. Am. a-Optics Image Sci. Vis. 1998;15:16–22. doi: 10.1364/JOSAA.15.000016. [DOI] [PubMed] [Google Scholar]
  • 41.Takechi M, Kawamura S. Temporal and spatial changes in the expression pattern of multiple red and green subtype opsin genes during zebrafish development. J. Exp. Biol. 2005;208:1337–1345. doi: 10.1242/jeb.01532. [DOI] [PubMed] [Google Scholar]
