Ecology and Evolution. 2024 Mar 18;14(3):e11045. doi: 10.1002/ece3.11045

Analysing biological colour patterns from digital images: An introduction to the current toolbox

Christopher R. Hemingson 1,2, Peter F. Cowman 3, David R. Bellwood 1,2
PMCID: PMC10945235  PMID: 38500859

Abstract

Understanding the numerous roles that colouration serves in the natural world has remained a central focus of many evolutionary and ecological studies. However, accurately characterising and then comparing colours or patterns among individuals or species has historically been challenging. In recent years, a myriad of new resources has been developed that allow researchers to characterise biological colours and patterns, specifically from digital imagery. However, each resource has its own strengths and weaknesses, answers a specific question and requires a detailed understanding of how it functions to be used properly. These nuances can make navigating this emerging field rather difficult. Herein, we evaluate several new techniques for analysing biological colouration, with a specific focus on digital images. First, we introduce fundamental background knowledge about light and perception to be considered when designing and implementing a study of colouration. We then show how numerous modifications can be made to images to ensure consistent formatting prior to analysis. Next, we describe many of the new image analysis approaches and their respective functions, highlighting the types of research question that they can address. We demonstrate how these various techniques can be brought together to examine novel research questions and test specific hypotheses. Finally, we outline potential future directions in colour pattern studies. Our goal is to provide a starting point and pathway for researchers wanting to study biological colour patterns from digital imagery.

Keywords: approaches, colouration, colours, guide, image analysis, methods, patterns, tools


We provide an introduction to the study of biological colour patterns using digital imagery. We cover many of the new techniques and describe their strengths and weaknesses. This resource acts as a springboard for researchers entering the field.


1. INTRODUCTION

Understanding the role that certain colours and patterns serve in biological systems has remained a central focus of evolutionary and ecological studies. An organism's colouration (the combination of its colours and patterns) is often intrinsically linked to its life history strategy, dictating how it behaves and interacts with other organisms and with its environment. Researchers and naturalists alike have been fascinated by the intricacies of animal colouration since the times of Darwin and Wallace (Caro, 2017; Darwin, 1859; Wallace, 1877). However, the physical properties responsible for creating colouration make it difficult to study objectively (Endler, 1978). How light behaves and interacts within an environment is extremely context dependent. Furthermore, how this light is subsequently perceived and processed by another organism makes this seemingly simple field rather complex (Endler, 1990).

Darwin and Wallace would likely be impressed with the progress that has been made in characterising and quantifying organismal colours and patterns (Endler, 1978, 1990). Historically, descriptions of colouration were both context and viewer dependent. As noted by Longley (1917): ‘The method is crude; allowance for the personal equation of the observer must be large…’. The advent of spectrometers, which operate by detecting the intensity of light at different wavelengths, allowed more physical descriptions of light, and consequently colour, to be made (Endler, 1990; Johnsen, 2016). Reflectance spectra can tell us detailed information about the object being measured, for example, which pigments are likely responsible for creating a specific colour (Toral et al., 2008). While this is by far the most accurate method for assessing the colour of an object, it does have disadvantages. Reflectance spectra must be remeasured for each specific colour of interest, making data collection both labour and equipment intensive (Marshall et al., 2003). In the life sciences, this means the observer must also decide which parts of the organism's body and pattern to measure, imposing a bias as to which aspects of colouration are thought to be meaningful (Badiane et al., 2017; Dalrymple et al., 2015). Importantly, reflectance spectra fail to provide any description of pattern, leaving this entirely to the interpretation of the viewer.

Digital images, by contrast, provide an ideal medium in which to study biological colour patterns (Stevens et al., 2007). Since images inherently record the spatial arrangement of colour (i.e. its pattern), they are well suited to characterising colour pattern data. Digital images remove the subjectivity of classifying patterns based on human constructs (e.g. categorising a pattern as ‘stripes’ or ‘spots’) and do not require the user to specify locations on an organism deemed important for measurement. Furthermore, the relatively low cost of many digital cameras compared to a complete spectrometer setup, and their ease of use in the field, make them a valuable resource for colour pattern studies.

In recent years, there has been a surge of new methodologies that aim to describe and characterise biological colour patterns, specifically from digital imagery (Mason & Bowie, 2020). These methods have benefitted from the combination of more informed research designs and affordable computing. With the advent of open-source programming languages, such as R (R Core Team, 2023), many new and free computational resources are now available. These resources allow researchers to ask and answer questions that were previously not possible. However, each technique or application possesses its own strengths and weaknesses, answers specific questions and requires time to learn and implement.

Herein, we present an introduction to many of the recent tools available for analysing biological colour patterns and their application. The resources covered will primarily focus on image analysis techniques that are available in open‐source, user‐friendly software, as these are the methods that have seen the most recent growth. First, we detail the basic knowledge around colouration and vision and highlight some key considerations to be made when constructing a study. We then provide an overview of what resources are available to measure and characterise colours and patterns from digital images. Finally, we demonstrate how some of these various techniques can be brought together and describe their potential applicability by outlining future directions for colour research. Our overall aim is to provide a resource for researchers entering the field of colour pattern science to help design, develop and conduct studies on biological colouration using new techniques in a rapidly growing field.

2. METHODOLOGICAL APPROACHES

2.1. Vision and perception: a necessary primer

Colours and patterns are a product of light and its detection, processing and interpretation by a viewer. Therefore, a fundamental understanding of both the physical properties of light and how it is viewed and processed is essential to the study of biological colouration. Light is electromagnetic radiation (small quantities of energy that lack mass or charge) that behaves in some manner as both a particle and a wave. Visible light refers to the spectrum of electromagnetic radiation visible to most humans, spanning approximately 380 to 750 nm in wavelength (wavelength is frequently denoted by the symbol λ). However, many organisms can detect light in the ultraviolet range (300–400 nm; Siebeck, 2004) or shortwave infrared (750–1000 nm; Gracheva et al., 2010), which is important to consider if your study explicitly involves a known, non‐human viewer (Caves et al., 2019). Light is detected in the retina of the eye by two main photoreceptor cell types: rods and cones. Rods are primarily involved in detecting changes in luminance (i.e. light intensity). Thus, rods are generally used for low-light or night vision and contribute little to chromatic (colour) discrimination. Conversely, cone cells are involved in the detection of light of different wavelengths (i.e. colours) and comprise a greater variety of cell types, which are often tuned to different spectral sensitivities. These spectral sensitivities are determined by the type of opsin protein expressed by the photoreceptor; opsins are the light-sensitive proteins that react to light stimuli, ultimately starting the colour detection and processing pathway in many organisms (Shichida & Matsuyama, 2009). Light of different wavelengths appears different in colour depending on the filtering media within the lens (e.g. oil droplets; Vorobyev, 2003), the type, density and orientation of photoreceptors in the retina of the viewer (Carleton et al., 2020), as well as how an organism neurally processes the light signal (Endler, 1990). For comprehensive reviews of the acquisition and neural processing of light, see Endler (1978, 1990), Kelber et al. (2003), Kemp et al. (2015) and Osorio and Vorobyev (2008).

How do we make accurate assumptions about what colours other organisms can perceive? These conclusions are drawn through: (1) behavioural experiments, (2) measurements of certain cellular properties within the retina (microspectrophotometry and electroretinography) or, (3) more recently, identifying genetic sequences that are known to code for visual opsin proteins – the light-sensitive proteins that are universal in animal vision (Kelber et al., 2003; Kemp et al., 2015; Shichida & Matsuyama, 2009). Behavioural experiments typically present a study organism with different stimuli to observe its perceptive abilities and test its responses (Newport et al., 2017; Siebeck et al., 2008). Microspectrophotometry and electroretinography work by either measuring the amount of light absorbed by photoreceptor cells or by measuring the electrical activity within the retina. Both techniques provide evidence as to which wavelengths of light an organism likely can or cannot see; however, exceptions do occur (Losey et al., 2003; Tosetto et al., 2021). Last, dedicated genetic research has linked certain gene-encoding regions to the expression of specific visual opsin proteins. Different opsins have different spectral sensitivities. Therefore, by identifying which opsins are being coded for, we can infer what spectral sensitivities an organism may possess (Carleton et al., 2020; Musilova et al., 2019). It is important to note that the presence of specific opsin-encoding genes does not directly equate to an individual possessing photoreceptors with that protein, as organisms may ‘tune’ their visual capabilities to best fit the light environment they reside within (Kranz et al., 2018; Nandamuri et al., 2017). Ultimately, each of these approaches provides evidence as to what an organism likely can or cannot perceive. The latter two techniques must be validated using behavioural studies and trials, as relying solely on these correlative approaches can lead to unexpected conclusions (e.g. Tosetto et al., 2021).

Beyond the chromatic component of perception, organisms also vary widely in their ability to visually resolve detail from an object or scene, termed ‘visual acuity’ (Caves et al., 2018). Lower visual acuity means an organism resolves fewer details of the object or scene being viewed. Acuity therefore has a clear impact on the interpretation of results in studies testing behavioural responses to certain stimuli, or the functional implications of certain colour patterns or signals (Caves et al., 2016). Many organisms have acuity much worse than our own, which is important to consider when assessing how colours and patterns are perceived by other organismal viewers (Caves et al., 2019).

It quickly becomes apparent that vision and perception vary widely within the natural world. Thus, it is critically important to know: (1) whether your research question involves an explicit viewer and (2) if so, what their visual capabilities are (chromatic, achromatic and acuity) and how they need to be considered. Previous syntheses in colour research show that most studies come from one of two schools of inquiry: ‘bottom‐up’ and ‘top‐down’ (Kemp et al., 2015). ‘Bottom-up’ research questions ‘seek to understand the proximate basis of colour propagation, reception and perception’. These disciplines aim to form a physical and neural understanding of how colour is viewed and processed; they therefore often involve a model study taxon whose vision and perceptive abilities are studied in great detail (Tosetto et al., 2021). ‘Top‐down’ approaches ‘seek to use colour as a trait in tests of ecological and/or evolutionary hypotheses’. These studies often take a broader approach, looking at entire groups of organisms simultaneously to understand the broad patterns shaping phenotypes through space and time (Cooney et al., 2022). Frequently, ‘top‐down’ questions do not approach their research from the perspective of a specific viewer and therefore rely on more descriptive methods for characterising colours. It is thus essential to identify whether your research is a discriminatory/perceptual question (involving a specified viewer) or a spectral/physical question (describing broader patterns pertaining to light and colour). We use this dichotomy in the main methods figure to help identify what type of question certain techniques can answer (Kemp et al., 2015). A large resource table (Table 1, described in more detail below) also lists whether applications are spectral/physical, discriminatory or perceptual in nature. Sometimes simple approaches and metrics of colouration are sufficient; the choice ultimately hinges on the specific research question being addressed.

TABLE 1.

Resources for handling, manipulating and analysing images for colour pattern analyses.

| Goal | Technique | Description | Function name | Resource | Approach | Reference(s) |
|---|---|---|---|---|---|---|
| Colour standardisation | Linearise reflectance | Linearises reflectance of images based on grey standards and standardises exposure | ‘Generate Multispectral Image’, ‘Model Linearisation Function’ | MICA^a | S/P/D | Troscianko and Stevens (2015) |
| | Calibrate colours across images | Calibrate image colours using a commercial colour standard that has been included in the image | ‘colorChecker’ | Patternize^b | S/P/D | Van Belleghem et al. (2018) |
| Alter images | Colour segmentation | Identify n clusters of colours that most accurately define an image based on either the visual capabilities of a viewer, properties of the colour patches, or parsimony | ‘Naive Bayes Quantisation’, ‘RNL Clustering’, ‘Cluster Particle Analysis’, ‘recolorize’, ‘classify’, ‘getKMeanColors’ | QCPA^a, Recolorize^b, PAVO^b, Colordistance^b | P/D | Maia et al. (2019), van den Berg et al. (2019), Weller (2021), Weller and Westneat (2019) |
| | Separating subject from the background | Uses machine learning approaches to identify and crop subjects from backgrounds | sashimi.py (script) | Sashimi^c | S | Schwartz and Alfaro (2021) |
| | Simulating acuity | Simulate visual acuity on an image as it would be seen by a viewer. Sharp edges can be reconstructed to simulate acuity more accurately. Requires estimates of the proposed viewer's visual acuity | ‘procimg’, ‘Acuity View’, ‘Gaussian Acuity Control’, ‘RNL Ranked Filter’ | PAVO^b, QCPA^a | D | Caves and Johnsen (2018), Maia et al. (2019), van den Berg et al. (2019) |
| | Simulate false colours | Create display images that simulate perceptive differences between various viewers | ‘Make Presentation Image’ | QCPA^a | P/D | van den Berg et al. (2019) |
| | Recover underwater image colours | Algorithm that ‘removes’ the water from images, restoring colour as it would be seen at the surface | seathru.py (script) | Sea‐Thru^c | S | Akkaynak (2019) |
| Perceptually compare colours | Receptor‐noise limited (RNL) model | Computes the receptor-noise limited model (Vorobyev & Osorio, 1998) to assess discrimination between colours | ‘vismodel’, ‘RNLmodel’, ‘RNLthresh’, ‘Run QCPA Framework’ | PAVO^b, Colourvision^b, QCPA^a | P/D | Gawryszewski (2018), Maia et al. (2019), van den Berg et al. (2019) |
| | Measure perceptual distance between colours | Measures the perceptual distances between colours within the framework of a visual model | ‘coldist’, ‘Colour/Luminance JND Difference Calculator’, ‘deltaS’ | PAVO^b, MICA^a, Colourvision^b | P | Gawryszewski (2018), Maia et al. (2019), Troscianko and Stevens (2015) |
| Plot/compare colours graphically | Spectral distributions | Plotting of spectral data typically recorded from spectrometers. This depicts a distribution of reflected light across a wavelength range | ‘procspec’, ‘explorespec’, ‘plotsmooth’ | PAVO^b | S | Maia et al. (2019) |
| | RGB, HSV, CIELab colour spaces | Plot colours based on their coordinates within predefined colour spaces. Compatible with RGB, HSV and CIELab | ‘plotPixels’, ‘plotHist’, ‘colspace’ | Colordistance^b, PAVO^b | S/P/D | Maia et al. (2019), Weller and Westneat (2019) |
| | Receptor‐based colour space | Plot colours based on how they stimulate various photoreceptors within the retina of a viewer. Examples include the receptor-noise limited (RNL) colour space, Chittka colour hexagon and the colour tetrahedron | ‘CTTKmodel’, ‘EMmodel’, ‘RNLmodel’, ‘colspace’ | Colourvision^b, PAVO^b | P/D | Chittka (1992), Endler and Mielke (2005), Gawryszewski (2018), Maia et al. (2019) |
| | Colour maps | An extension of the RNL model, where all pixel colours from an image can be plotted in a perceptual space | ‘RNL Colour Maps’ | MICA^a | P/D | Troscianko and Stevens (2015) |
| Summarise full colourations | Compare pixel colour distribution | Extracts pixel colour data from each image and compares the distribution of colours between images | ‘getImageHist’, ‘getColorDistanceMatrix’ | Colordistance^b | S/P/D | Weller and Westneat (2019) |
| | Location‐based colour pattern extraction | Detects the distribution of colours on an organism with respect to its location on the body. Some techniques use landmarks to make comparisons between individuals with different shapes | ‘patLanRGB’, ‘patRegRGB’, ‘rgb.measure’, ‘Marking Matrix’ | Patternize^b, Colormesh^b, PAT‐GEOM^a | S | Chan et al. (2019), Valvo et al. (2021), Van Belleghem et al. (2018) |
| Compare entire colour patterns graphically | Plotting colour pattern ordinations | Uses ordination techniques (PCA/RDA/MDS) to plot samples in ‘colour pattern’ space. Proximity of points corresponds to similarity in colour pattern | ‘patPCA’, ‘patRDA’, ‘getColorDistanceMatrix’ | Patternize^b, Colordistance^b | S | Van Belleghem et al. (2018), Weller and Westneat (2019) |
| Quantify colour space occupancy | Measure colour space volume | Measure volumes occupied by colours/patterns in various spaces. Some applications can measure concave hypervolumes using α‐shapes | ‘vol’, ‘tcsvol’, ‘voloverlap’ | PAVO^b | S/P/D | Gruson (2020), Maia et al. (2019) |
| Measure visual and geometric aspects of pattern elements | Characterise pattern patch aspects | Resources that aim to describe primarily geometric aspects of colouration. These include functions that range from measuring patch size, complexity and direction to the area of certain colours on a body and where they are distributed across multiple individuals and species | ‘Elliptical Fourier Shape Analysis’, ‘Directionality of Shape’, ‘Centroid Size’, ‘Contrast CoV’, ‘patArea’, ‘plotHeat’, ‘Cluster Particle Analysis’ | PAT‐GEOM^a, Patternize^b, QCPA^a | S | Chan et al. (2019), Van Belleghem et al. (2018), van den Berg et al. (2019) |
| | Colour Adjacency Analysis | Analyse the frequency of colour transitions within a region of interest | ‘Run QCPA Framework’, ‘adjacent’ | QCPA^a, PAVO^b | S/D | Endler (2012), Maia et al. (2019), van den Berg et al. (2019) |
| | Boundary Strength Analysis | Analyses the visual strength of changes at boundaries between various elements of a colour pattern | ‘Run QCPA Framework’, ‘adjacent’, ‘recolorize_adjacency’ | QCPA^a, PAVO^b, Recolorize^b | D | Endler et al. (2018), Maia et al. (2019), van den Berg et al. (2019), Weller (2021) |
| | Local Edge Intensity Analysis & Disruption | Measures the intensity of chromatic and achromatic changes across image elements | ‘Run QCPA Framework’, ‘Gabrat Disruption’ | QCPA^a | D | van den Berg et al. (2019) |

Note: In the column ‘Approach’, S = spectral/physical, P = perceptual distance and D = discriminatory techniques, following Kemp et al. (2015). Superscripts: a = resource in ImageJ; b = resource in R; c = resource in Python.

A strong understanding of the concepts summarised above will better inform experimental design, how data are collected and analysed and, most importantly, how they are interpreted (Endler & Mappes, 2017). Herein, most of the material presented and its proposed uses will be from a ‘top‐down’ perspective, as these questions are more common in broader ecological and evolutionary studies. However, it is up to researchers to perform their own due diligence and ensure that they have a firm grasp of the relevant theory behind their research question before implementing analyses and interpreting their findings.

2.2. An overview of the resources available

What follows is an overview of the current resources available for image processing and analysis in colour pattern studies, with a focus on those available in R and ImageJ. The resources have been organised to mirror a typical workflow when analysing colours and patterns from digital imagery (Figure 1). A subset of currently available methods, including each technique's description, function name, platform and reference for further reading, is listed in Table 1, with a visual overview in Figure 4. The presentation of resources in the table has likewise been organised to mirror a typical workflow in image analysis. Last, throughout the text, the specific function, feature or tool used to perform each technique is denoted in quotes.

FIGURE 1. The typical workflow in the study of biological colouration from digital images. Steps in red represent those prior to analyses; blue steps are data exploration and interpretation techniques; yellow steps are the final stages of analysing the data and formulating conclusions.

FIGURE 4. What technique should you use? A sample of analyses is shown, arranged along two axes: the y axis – how one approaches analysing colouration, i.e. spectral/physical or perceptive/discriminatory – and the x axis – whether one focuses on colours or patterns. Photo credits: H. Krisp, U. Schmidt, F. Franklin, V. Huertas and K. Schulz; CC BY‐SA 2.0. The figures for Local Edge Intensity Analysis, Boundary Strength Analysis and Colour Maps have been adapted from the original publications.

2.3. Image processing prior to analysis

Images form the foundation upon which most of the analyses and techniques covered herein are based. Therefore, it is crucial that images are taken in a standardised and representative manner. Ideally, images are captured in a raw format: an image format in which the camera makes minimal changes, preserving as much of the original scene's light information as possible. These formats offer the greatest flexibility and most accurately represent the true appearance of an object or scene. More common file types, like .jpeg, are compressed, meaning file information is deliberately discarded to reduce file size. Additionally, irreversible changes are often made to these images by the camera's processor that alter the photograph in ways thought to make it more pleasing to the viewer. This typically includes boosting the saturation and vibrancy or altering the contrast of the colours within an image. Clearly, this poses a problem if the objective of a study is to compare images objectively. For comprehensive guides to digital imaging for the study of biological colouration, and its limitations, see Stevens et al. (2007) and White et al. (2015).

Often the first step after imaging for most colour‐based research questions will involve processing and manipulating images in various ways to prepare them for analyses (Figure 2). Colour and grey standards are small items included in images that contain specific colours of known reflectance and hue. If standards have been included in the images, then the image's colours can be adjusted to ensure the lighting has been standardised/normalised between all photos. This is particularly important when photographs were taken outdoors, where cloud cover and time of day can greatly impact the available light spectrum (Bergman & Beehner, 2008; Stevens et al., 2007). Images containing the Calibrite (formerly X‐Rite) ColorChecker Passport or the Image Science Associates ColorGauge can be adjusted using the ‘colorChecker’ function within the patternize R package (Van Belleghem et al., 2018). Generally, .jpeg and other non‐raw file types are nonlinear in nature, meaning the brightness of some pixels is increased or decreased more than that of others. To linearise these images (that is, to make the brightness more accurately reflect the number of photons hitting the camera's sensor), the images must include a grey standard. Linearisation can be done in the Multispectral Image Calibration and Analysis (MICA) toolbox (Troscianko & Stevens, 2015) using the ‘Model Linearisation Function’. Calibrating images in the ultraviolet or infrared regions of the spectrum will require special standards with known UV/IR reflective properties, as most commercial options are designed for visible light only.
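To make the linearisation step concrete, the sketch below fits a curve between the pixel values of grey standards and their known reflectances, then applies it to new pixel values. This is a minimal illustration in base R, not the MICA implementation; the reflectance values, the power-law form of the camera response and all object names are assumptions.

```r
# Minimal grey-standard linearisation sketch (illustrative only; MICA's
# models are more sophisticated). Assumes five grey standards of known
# reflectance (%) have been sampled from the image.
known    <- c(2, 10, 25, 50, 99)      # known reflectances of the standards
measured <- c(31, 80, 120, 165, 230)  # mean pixel values sampled from each standard

# Fit a power-law camera response, reflectance = a * pixel^b,
# which is linear on the log-log scale
fit <- lm(log(known) ~ log(measured))

# Convert raw pixel values to estimated (linear) reflectance
linearise <- function(pixel) {
  exp(coef(fit)[1]) * pixel^coef(fit)[2]
}

linearise(c(31, 120, 230))  # returns approximately 2, 25 and 99
```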

FIGURE 2. Some of the alterations that can be made to images prior to analysis: (a) changing the colours within an image to create a false‐colour photograph that highlights discriminability, (b) adjusting an image to reflect a given viewer's visual acuity, (c) cropping a subject to remove its background and (d) defining a region of interest (ROI) for the analysis or placing landmarks. These techniques are not mutually exclusive, and multiple alterations will often be combined depending on the research question.

Once images are colour‐accurate and representative, further changes can be made to mimic how certain organisms may perceive the scene photographed within each image (Troscianko & Stevens, 2015). Every organism possesses its own unique assemblage of photoreceptor cells giving it the ability to perceive light and certain colours (Kelber et al., 2003; Osorio & Vorobyev, 2008). The MICA toolbox and the Quantitative Colour Pattern Analysis (QCPA) framework provide a suite of resources that analyse colours and patterns from an explicitly visual perspective (Troscianko & Stevens, 2015; van den Berg et al., 2019). To use this approach, knowledge of the spectral sensitivities of the taxon of interest is required (discussed in more detail below). ‘False‐colour’ images (Figure 2a), which can be made in the MICA toolbox (‘Make Presentation Image’), attempt to give an impression of the relative discriminability of a scene to a specific viewer, not to imitate what an organism would actually see (van den Berg et al., 2019; Verhoeven et al., 2018). Although these images are generally for demonstration purposes only, they can identify unique aspects of colouration that humans would not natively perceive, for example, showcasing a range of unique patterns found on flowers that possibly act as signals to attract pollinators (Lunau et al., 2021).

If your research question is discriminatory in nature (for example, how well can a predator detect a prey item from a certain distance), then visual acuity may need to be incorporated into the analysis (Figure 2b). Caves and Johnsen (2018) were the first to develop an algorithm and associated R package (AcuityView) dedicated to simulating acuity. The user specifies: (1) the visual acuity of the viewer (in cycles per degree or minimum resolvable angle) and (2) the distance between the subject and the viewer. A fast Fourier transform is then performed to remove spatial details that a viewer would not likely resolve from a scene. The original AcuityView algorithm has been updated and can now be implemented in both the QCPA framework (‘Acuity View’) and PAVO (‘procimg’) (Maia et al., 2019; van den Berg et al., 2019). Furthermore, the QCPA framework can also simulate acuity using a different approach (‘Gaussian Acuity Control’) that, unlike AcuityView, works on non‐rectangular regions of interest, offering greater flexibility.
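A minimal sketch of an AcuityView call is shown below. The image-loading step, file names and argument values are assumptions to be checked against the package documentation (the package requires a square input image).

```r
# Sketch of simulating visual acuity with AcuityView (Caves & Johnsen, 2018)
library(AcuityView)

fish <- imager::load.image("reef_fish.png")  # hypothetical square image

# Render the scene as seen by a viewer with a minimum resolvable angle of
# 0.5 degrees, from 2 m away, with the pictured scene 1 m wide in reality.
# Argument names should be checked against ?acuityview.
acuityview(photo = fish, distance = 2, realWidth = 1,
           eyeResolutionX = 0.5, plot = TRUE, output = "fish_acuity.png")
```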

Images may need to have the subject cropped from the background or a region of interest denoted to facilitate further analyses (Figure 2c,d). Cropping the subject is most easily done in Adobe Photoshop using the ‘Quick Selection’ tool. For those with more programming experience, various machine learning pipelines can be used (typically in Python) to automatically detect and segment the subject from the background (Schwartz & Alfaro, 2021). Alternatively, your research question may only be concerned with a specific region within an image. Depending on the downstream analyses being performed, you may either need to manually draw the outline for the region of interest (ROI) or supply a file (typically a text file) containing the coordinates that denote the ROI. (Figure 2d). In these instances, downstream analyses are only performed on the area within the ROI. Lastly, your research question may require the placement of landmarks to align multiple images (Van Belleghem et al., 2018). Landmarks can easily be placed in ImageJ using the ‘point’ or ‘Multi‐point’ tool (Figure 2d). After placing landmarks points, the x and y coordinates of all points can be exported and saved as a text file or spreadsheet.
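The following is a minimal sketch of reading those exported coordinates back into R; the file name and column layout (one row per landmark, with X and Y columns as in the ImageJ Results table) are assumptions.

```r
# Read ImageJ 'Multi-point' coordinates back into R (hypothetical file:
# the ImageJ Results table saved as tab-delimited text with X and Y columns)
landmarks <- read.table("fish_01_landmarks.txt", header = TRUE, sep = "\t")
landmarks <- as.matrix(landmarks[, c("X", "Y")])  # one row per landmark
landmarks
```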

2.4. Representing colours graphically

Representing colours graphically allows additional unique interpretations and analyses to be made with colour data. Which plotting technique is most appropriate depends entirely on the data and approaches used. The most fundamental method for plotting the physical properties of a colour is to display it as a reflectance spectrum (Figure 3a): the relative amount of light at specific wavelengths reflected off a surface or object, typically measured using a spectrometer (Endler, 1990). This method is particularly useful for initially comparing specific colours to the known spectral sensitivities of certain photoreceptors within a viewer (Johnsen, 2016; Kelber et al., 2003). Although this is not a method in which patterns are assessed, nor are the data collected using digital imagery (although new techniques are emerging that can reconstruct reflectance spectra from digital images; Deng et al., 2021; Zhao & Berns, 2007), it is worth mentioning due to its specific applicability and longstanding use in the field (Endler, 1990). The PAVO (Perceptual Analysis, Visualisation and Organisation of spectral colour data) R package provides easy‐to‐use resources for plotting and visualising spectral data (‘explorespec’) (Maia et al., 2013, 2019). Visualising colours using this approach can also identify possible latent properties of the object or organism being studied, for example, how two seemingly identical colours can be created from fundamentally different spectral distributions (termed ‘metamerism’; Endler, 1990).
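As a brief sketch of this workflow, the following uses PAVO's built-in example spectra; the smoothing span and the commented file path are placeholders.

```r
# Plotting reflectance spectra with PAVO, using its built-in example data
library(pavo)

data(flowers)  # example flower reflectance spectra shipped with pavo
# specs <- getspec("path/to/spec/files", ext = "txt")  # for your own spectrometer files

specs <- procspec(flowers, opt = "smooth", span = 0.2)  # smooth noisy spectra
explorespec(specs[, 1:7])      # quick multi-panel check of the first six spectra
plot(specs, type = "overlay")  # overlay all spectra on a single plot
```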

FIGURE 3. Three alternative methods to graphically represent colours. The top panel is an image of the European goldfinch (Carduelis carduelis), with five colours sampled across its body. (a) Colours represented as a distribution of the relative amount of reflected light at each wavelength; spectral reflectance data are reproduced from Stavenga and Wilts (2014). (b) Colours represented in the RGB colour space. (c) Colours represented in a hypothetical receptor space by how strongly they stimulate three photoreceptor types sensitive to short (S), medium (M) and long (L) wavelengths. Photo: Francis C. Franklin, CC BY‐SA 3.0.

An extension beyond plotting spectral distributions is the use of colour spaces. Colour spaces are graphical techniques used to arrange colours spatially, based on a set of criteria, within an n‐dimensional coordinate system (Renoult et al., 2017). The axes of the coordinate system differ depending on the rules used to construct the space, ranging from how humans perceive or categorise colours (the RGB and CIELab colour spaces) (Weller & Westneat, 2019) to how light stimulates certain photoreceptors within the eye (Chittka, 1992; Endler & Mielke, 2005). From a spectral/physical perspective, the RGB (red, green, blue; Figure 3b) colour space is common in computer graphics and contains three perpendicular axes (x, y, z) that loosely imitate the three peak spectral sensitivities of photoreceptors in humans (blue – short wavelengths, green – medium wavelengths and red – long wavelengths). While convenient to work with in digital settings, distances within this colour space are not representative of perceptual distances, that is, how differently we as humans would perceive two or more colours. To overcome this limitation, the CIELab colour space was intentionally designed so that Euclidean distances between colours closely approximate their perceived differences to humans. The CIELab colour space uses a lightness axis (L), differences along a red–green axis (a) and differences along a blue–yellow axis (b). The R packages colordistance (‘plotPixels’) and PAVO (‘colspace’) can plot the colours within an image or ROI in these colour spaces.
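As a minimal sketch of pixel-based colour space plotting with colordistance, assuming a hypothetical image file with a near-white background:

```r
# Plotting an image's pixels in RGB colour space with colordistance.
# The file path and background bounds are hypothetical.
library(colordistance)

# Load the image, ignoring near-white background pixels (RGB scaled 0-1)
img <- loadImage("butterfly.png", lower = rep(0.8, 3), upper = rep(1, 3))

# Plot 10,000 randomly sampled pixels at their RGB coordinates
plotPixels(img, n = 10000)
```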

An alternative graphing technique uses n‐dimensional spaces whose axes correspond to how certain photoreceptors are stimulated, given the capabilities of a specified viewer (Renoult et al., 2017). These receptor‐based colour spaces have the advantage of displaying colours by how they are theoretically perceived by a viewer within a psychophysical framework, thus adding an additional layer of ecological or behavioural understanding (Troscianko et al., 2016). These spaces are flexible in that the number of axes can be increased or decreased depending on the number of photoreceptor types present in the viewer. One of the most notable and well-established visual models is the receptor-noise limited (RNL) model (Vorobyev et al., 2001; Vorobyev & Osorio, 1998). This model estimates receptor stimulation while simultaneously accounting for inherent noise (caused by molecular ‘misfires’; Barlow et al., 1993) within the receptors. Like any model, it has a series of assumptions that need to be made and met, which are detailed in the original description (for example, that colour is neurally coded using opponent mechanisms). Results from this model can then be plotted in an n‐dimensional colour space that accommodates varying numbers of receptor sensitivities (to date, modelling up to four), making it flexible for many study taxa (Hempel De Ibarra et al., 2001). Importantly, distances between specific stimuli within these colour spaces (i.e. their Euclidean distance, termed ΔS) aim to more accurately reflect perceptual distances inherent to the viewer. However, whether these colours are actually perceived differently requires experimental validation. Additional receptor colour spaces that are more generalised or specialised in nature have been created, such as the tetrahedral colour space (Endler & Mielke, 2005) and the colour hexagon (Chittka, 1992). These spaces can be implemented on numerous platforms, including the colourvision R package (‘colspace’) and the PAVO R package (‘CTTKmodel’, ‘EMmodel’, ‘RNLmodel’, ‘GENmodel’).
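The sketch below follows PAVO's documented honeybee example: flower spectra are modelled for Apis mellifera, plotted in Chittka's colour hexagon and compared as receptor-noise limited distances. The receptor density ratios and Weber fraction are those used in the PAVO documentation; check that they are appropriate for your study taxon.

```r
# Flower colours modelled for the honeybee (Apis mellifera) in PAVO
library(pavo)
data(flowers)

# Hexagon: hymenopteran-style catches (Ei), von Kries adaptation, green background
vis_bee <- vismodel(flowers, visual = "apis", qcatch = "Ei",
                    relative = FALSE, vonkries = TRUE, bkg = "green")
hex_bee <- colspace(vis_bee, space = "hexagon")
plot(hex_bee)

# Receptor-noise limited distances (in JNDs) between all pairs of spectra
vis_rnl <- vismodel(flowers, visual = "apis", relative = FALSE)
head(coldist(vis_rnl, n = c(1, 0.471, 4.412), weber = 0.12))
```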

2.5. Comparing the colours and patterns within and between images

Some research questions may aim to analyse many or all colours and patterns within an image, or between multiple images (examples listed in Figure 4). Importantly, these techniques move beyond the traditional approach of ‘patch measuring’ and seek to analyse the complete appearance of an organism or scene in its entirety. The MICA toolbox provides an extension of the receptor-noise limited model, called ‘Colour Maps’, which can plot the colours of millions of pixels from an image into a single psychophysically calibrated colour space (‘RNL Colour Maps’). The boundaries around colour points (in a similar fashion to error bars) can be adjusted to represent different amounts of discriminatory distance (just noticeable differences). For example, this space could be used to initially explore how a honeybee (Apis mellifera) might distinguish the colours of a flower from the background vegetation. This information may then be used to formulate hypotheses that can be experimentally tested and ground‐truthed using behavioural trials. In the case of the honeybee, these methods have shown that the visual salience of a flower against its background is indeed indicative of its detection success (Spaethe et al., 2001).

Alternatively, you may want to measure and compare the distribution and amount of all colours found within one image to another (or many). The R package colordistance can quantitatively evaluate the similarity between the distributions of colours among n images (Weller & Westneat, 2019). Colordistance characterises colouration by plotting a sample of pixels from an image within a predetermined colour space (e.g. RGB, CIELab; non‐human perceptual spaces are not supported). The axes within this space are then divided into equal‐area subsections as specified by the user. The number of pixels that fall within each subsection is then counted and used to create a distribution that can be compared with every other image analysed, using multivariate approaches. The intuitive nature of this approach allows the full colouration of any number of organisms to be compared regardless of morphological or size differences. Furthermore, the user can specify: (1) the colour space to be used, (2) how fine the resolution of colours is (i.e. the number of ‘subsections’ into which to divide the colour space) and (3) the method used to compare the distributions of colours (e.g. earth mover's distance, χ² distance). However, the use of strictly human‐related colour spaces limits its applicability to more human‐centric questions.
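A minimal colordistance sketch of this workflow is shown below; the folder path, background bounds and bin number are placeholders.

```r
# Comparing colour distributions across a folder of images with colordistance
library(colordistance)

# Bin each image's pixels into a 3 x 3 x 3 RGB histogram (27 bins),
# masking near-white background pixels
hists <- getHistList("path/to/images", bins = 3,
                     lower = rep(0.8, 3), upper = rep(1, 3))

# Pairwise earth mover's distances between the binned colour distributions
cdm <- getColorDistanceMatrix(hists, method = "emd")
heatmapColorDistance(cdm)  # cluster and visualise the distance matrix
```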

If the exact location of where each colour occurs is important (i.e. pattern), colormesh offers utilities to efficiently measure colours across the body of the study subject (Valvo et al., 2021). First, images are unwarped using landmarks to match a consensus shape, which allows colours to be compared at the same location across individuals whose morphology and shape may vary, or between images in which the subject's orientation differs. Delaunay triangulation creates a mesh across the body (which can be made finer or coarser depending on how densely the user would like to sample) that is then used to select relatively even and representative sampling locations across the organisms being assessed. The RGB values of the pixel at the centre point of each triangle are then recorded (‘rgb.measure’) and can be converted into a data matrix (‘make.colormesh.dataset’) for further exploration and analyses. Colormesh does offer functions to implement image linearisation as well as calibration to known colour standards (‘rgb.calibrate’). However, RGB is the only colour space currently supported, so thoughtful consideration is needed when deciding whether to use this approach.

The R package patternize also offers utilities to analyse colour patterns with respect to morphological location (Van Belleghem et al., 2018). patternize can use landmarks, in a similar fashion to colormesh, to align images, but can also perform image registration, where images are automatically aligned. The RGB triplet of the colour of interest (RGB is the only colour space supported), along with a tolerance parameter (which allows a range of RGB values above and below the target value), is specified and then detected in each image. While patternize was developed to work with only one colour at a time, the source code can be modified to permit its use with multiple colours (e.g. Hemingson et al., 2019). The output of this analysis can be visualised and compared using multiple techniques, ranging from plotting heat maps of specific colours across the bodies of multiple organisms (‘plotHeat’) to comparing the colour matrices using multivariate approaches (‘patLanRGB’) and visualising them using ordination techniques (‘patPCA’).
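A minimal sketch of landmark-based extraction with patternize is shown below; the sample IDs, file paths, target RGB value and tolerance are hypothetical.

```r
# Landmark-based extraction of a single colour with patternize
library(patternize)

IDlist <- c("fish_01", "fish_02", "fish_03")
imageList    <- makeList(IDlist, type = "image",
                         prepath = "images", extension = ".jpg")
landmarkList <- makeList(IDlist, type = "landmark",
                         prepath = "landmarks", extension = ".txt")

target <- c(255, 140, 0)  # RGB triplet of the colour of interest (orange)

# Extract where that colour occurs, aligning individuals via landmarks;
# colOffset sets the tolerance around the target RGB value
rasterList <- patLanRGB(imageList, landmarkList, RGB = target,
                        colOffset = 0.15, crop = TRUE,
                        res = 200, adjustCoords = TRUE, plot = "stack")

summed <- sumRaster(rasterList, IDlist, type = "RGB")
plotHeat(summed, IDlist)  # heat map of colour presence across individuals
# rasterList can then be passed to patPCA() to ordinate individuals in
# 'colour pattern' space (see the package documentation for arguments)
```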

Plotting both individual colours (techniques described in the previous section) and entire colour patterns in reduced dimensional spaces allows further metrics to be measured that summarise and describe an organism's colouration. One of the most notable is their n‐dimensional volume. This is frequently the convex or concave hull volume (among others; Mouillot et al., 2013), which are alternative methods for measuring the volume occupied by a set of points. These techniques have a long‐standing use in the literature and can function as simple indices of colour diversity (Gruson, 2020). A myriad of other metrics exist that aim to summarise and characterise multivariate data, mostly developed by community ecologists aiming to describe community composition (Legendre et al., 2005; Mouillot et al., 2013). These metrics can easily be adapted to multivariate colour data and offer an exciting new line of inquiry for this research. Recently, such metrics have been used to measure the diversity of colours found on individuals to analyse global trends in colourfulness (Cooney et al., 2022), as well as to compare the collective colourations of all individuals found in different habitats (Hemingson et al., 2022).
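As a short sketch, the convex hull volume of a set of colours can be obtained in PAVO's tetrahedral colour space using the package's built-in example spectra:

```r
# Colour volume in PAVO's tetrahedral colour space
library(pavo)
data(sicalis)  # example plumage reflectance spectra shipped with pavo

vis_tcs <- colspace(vismodel(sicalis, visual = "avg.uv"), space = "tcs")
summary(vis_tcs)  # summary statistics include the convex hull volume
plot(vis_tcs)     # static tetrahedral plot of the colour points
vol(vis_tcs)      # overlay the convex hull enclosing all points
```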

2.6. Measuring the visual and geometric aspects of pattern

There are many applications that analyse various geometric or spatial aspects of colouration within an image. These techniques explicitly incorporate the location of colours into the analyses and, depending on the technique, can include the visual capabilities of a specified viewer. These approaches are not restricted to analysing colour patterns at the scale of the individual (i.e. the colouration of a frog) but can also assess how colours change between elements within an image (e.g. the foreground vs. the background). For example, some of these techniques could be used to assess the theoretical perceptual contrast between a subject and its environment, e.g. a red flower against a green, foliage‐dominated background. In this example, the techniques are not assessing pattern per se, but rather the difference between colours with respect to their location within an image.

Boundary Strength Analysis (Endler et al., 2018), Local Edge Intensity Analysis (‘Run QCPA Framework’) (van den Berg et al., 2019) and Gabor ratios (‘Gabrat Disruption’) are all useful techniques for identifying the intensity of colour changes between elements within an image. These approaches work by modelling both the chromatic and luminance differences (ΔS and ΔL) between different colours and elements. Each technique outputs tables that summarise the differences calculated within the image. The original images can also be visualised using plots that overlay the visual intensity of changes between colours or elements. The strength of these changes depends entirely on the visual capabilities of the viewer being modelled.

If the research question is more focused on characterising the complexity of a colour pattern at the scale of the individual (e.g. the colour pattern of an orchid flower; see van den Berg et al., 2019), Colour Adjacency Analysis (‘Run QCPA Framework’ in QCPA, ‘adjacent’ in PAVO) offers simplified summary metrics. This analysis runs multiple transects across the region of interest in both the x and y dimensions. The colour is recorded at set intervals along each transect. These transects are then summarised and used to create a transition matrix that records how often colours change along all transects (Endler, 2012). This is a useful technique for simply characterising the complexity of a colour pattern; the output is a single value that can then be used in further downstream analyses. However, if the colours in two images are different but the pattern is exactly the same, the colour adjacency metric for both images will be identical (see van den Berg et al. (2019) for details). Thus, consideration is needed when using and interpreting this metric.
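A minimal sketch of adjacency analysis in PAVO is shown below, using a hypothetical butterfly image; the number of colour classes and transect settings are illustrative.

```r
# Colour adjacency analysis with PAVO's image workflow
library(pavo)

papilio <- getimg("papilio.png")  # hypothetical RGB image of a butterfly
papilio_class <- classify(papilio, kcols = 4)  # segment into 4 colour classes

# Transition frequencies sampled along a grid of transects; xscale gives
# the image's real-world width (arbitrary units here)
papilio_adj <- adjacent(papilio_class, xpts = 100, xscale = 100)
papilio_adj  # transition, diversity and aspect-ratio summary statistics
```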

PAT‐GEOM is an ImageJ plugin that provides multiple tools to measure aspects of pattern geometry (Chan et al., 2019). These range from assessing the shape complexity of individual markings (‘Elliptical Fourier Shape Analysis’) and marking size to patch directionality (‘Directionality of Distribution’), randomness and distribution (‘Marking Matrix’). These techniques measure aspects at the marking or individual level and have been used to demonstrate that marking size in three different populations of furrowed crabs (Xantho hydrophilus) closely resembled the background of their local environment (Chan et al., 2019). The QCPA framework also offers resources for measuring patch aspects, like the size, shape, distribution and angle of particles within a patch (‘Cluster Particle Analysis’).

3. CONCLUSIONS AND FUTURE PERSPECTIVES

The strength of these various approaches arises at the intersection of their use: combining aspects to ask higher‐order questions that begin to bridge the gap between ‘bottom‐up’ and ‘top‐down’ approaches. Such questions would allow the perceptive and resolving capabilities of individual organisms to be considered (‘bottom‐up’) while also comparing multiple individuals or species within an evolutionary or ecological context (‘top‐down’). Fruitful lines of future research range from simulating what entire communities of organisms look like to specific taxa (such as a predator) to identifying macroecological patterns of colouration and how they relate to certain behaviours, like courtship and sexual selection, whilst accommodating the visual properties of the viewer (Endler & Mappes, 2017).

Many of the applications discussed herein also have uses outside their conventional design. For example, recent research used patternize to construct ‘damage heatmaps’ that display where different predatory reef fishes injure their prey (Muruga et al., 2022). Damaged prey fishes were dissected out of the predators shortly after ingestion and photographed. The injuries on every individual were manually painted onto the images in Photoshop with solid colours. These colours were then detected and mapped using patternize to show where damage most likely occurred for each predator type, revealing that predators with different tooth morphologies generally capture and process prey differently.

New hardware is revolutionising the field. Digital cameras of increasingly higher quality are becoming cheaper, and open‐source designs for spectrometers and other equipment are now available, drastically reducing the initial start-up costs of working in this field (Caves et al., 2020; Troscianko, 2022). Hyperspectral cameras capture the entire spectral distribution of each pixel within an image, as opposed to the relative amount of light within specific wavelength bands (e.g. RGB). Hyperspectral images contain immense amounts of raw data compared to those taken by traditional cameras and can be used to ask further interesting questions about colouration in the natural world (Garcia et al., 2015), for example, tuning a hyperspectral camera to mimic the spectral sensitivity of a specific taxon to take images that closely resemble what that taxon would likely see. While the use of these cameras in the life sciences is still in its infancy due to their high cost (Zimmermann et al., 2018), there will likely be a transition to these devices over traditional cameras as they become more affordable.

Machine learning approaches are also rapidly changing the field (Fennell et al., 2021). As mentioned previously, various pipelines offer the ability to accurately detect and segment focal taxa from their backgrounds using convolutional neural networks (Schwartz & Alfaro, 2021). Different datasets can be used to train the model, allowing widespread use on many study groups. Machine learning approaches can also help inform future research questions. The CamoEvo toolbox is an open-access resource used to study the evolution of camouflage (Hancock & Troscianko, 2022). Users play an interactive game in which the subject (a simulated sphere) should be selected as fast as possible from a suite of background images. These data are then fed into an algorithm that alters the colouration to maximise the time taken to be selected, mimicking natural selection. Resources like this can be used to home in on the selective pressures shaping camouflage patterns and can then be ground‐truthed using field experiments (e.g. Kjernsmo et al., 2020). These approaches show much promise for the field of colour science.

The multitude of recent advances has made the field of organismal colouration an exciting one to study. By combining old and new techniques from different backgrounds, we are now capable of asking detailed questions about the appearance of organisms and how they are perceived. The goal of this review is to provide a starting point to help researchers navigate the methodologically dense field of biological colouration. We must be explicit, however, and reiterate that it is imperative to have a background knowledge relevant to one's research focus. Without a solid foundation, it is easy to draw conclusions that are misleading and not grounded in theory. Research in this field can be a unique blend of physics, biology, psychology, behaviour and ecology. Thus, the necessary background knowledge will be specific to your research question.

Future research is likely to yield new ways of thinking about colouration (Garcia et al., 2020). In just the last 5 years, there have been numerous developments and modifications made to existing techniques to answer interesting new questions. By combining new ways to assess colouration and further refining visual modelling, we are gaining an increasingly comprehensive understanding of how colouration functions in the natural world.

AUTHOR CONTRIBUTIONS

Christopher R. Hemingson: Conceptualization (lead); data curation (lead); formal analysis (lead); funding acquisition (supporting); investigation (lead); methodology (lead); project administration (equal); resources (lead); software (lead); validation (lead); visualization (lead); writing – original draft (lead); writing – review and editing (lead). Peter F. Cowman: Methodology (supporting); project administration (supporting); supervision (supporting); writing – original draft (supporting); writing – review and editing (supporting). David R. Bellwood: Funding acquisition (lead); project administration (supporting); supervision (lead); validation (supporting); writing – original draft (equal); writing – review and editing (equal).

CONFLICT OF INTEREST STATEMENT

The authors declare that there are no conflicts of interest.

ACKNOWLEDGEMENTS

We thank all members of the Research Hub for Coral Reef Ecosystem Function for their helpful comments and discussion. We also thank the handling editor and four anonymous reviewers for their constructive feedback. This work was supported by a JCU postgraduate research scholarship and a Stengl‐Wyer Postdoctoral Scholars award, made possible by the Stengl‐Wyer Endowment to The University of Texas at Austin's College of Natural Sciences, to CRH and ARC grants CE140100020 and FL190100062 to DRB. Open access publishing facilitated by James Cook University, as part of the Wiley ‐ James Cook University agreement via the Council of Australian University Librarians.

Hemingson, C. R. , Cowman, P. F. , & Bellwood, D. R. (2024). Analysing biological colour patterns from digital images: An introduction to the current toolbox. Ecology and Evolution, 14, e11045. 10.1002/ece3.11045

DATA AVAILABILITY STATEMENT

Our publication is a review and has no associated data.

REFERENCES

  1. Akkaynak, D. (2019). Sea‐thru: A method for removing water from underwater images (pp. 1682–1691). Computer Vision Foundation. [Google Scholar]
  2. Badiane, A. , Pérez i de Lanuza, G. , García‐Custodio, M. C. , Carazo, P. , & Font, E. (2017). Colour patch size and measurement error using reflectance spectrophotometry. Methods in Ecology and Evolution, 8(11), 1585–1593. 10.1111/2041-210X.12801 [DOI] [Google Scholar]
  3. Barlow, R. B. , Birget, R. R. , Kaplant, E. , & Tallentt, J. R. (1993). On the molecular origin of photoreceptor noise. Nature, 366, 64–66. [DOI] [PubMed] [Google Scholar]
  4. Bergman, T. J. , & Beehner, J. C. (2008). A simple method for measuring colour in wild animals: Validation and use on chest patch colour in geladas (Theropithecus gelada). Biological Journal of the Linnean Society, 94, 231–240. 10.1111/j.1095-8312.2008.00981.x [DOI] [Google Scholar]
  5. Carleton, K. L. , Escobar‐Camacho, D. , Stieb, S. M. , Cortesi, F. , & Marshall, N. J. (2020). Seeing the rainbow: Mechanisms underlying spectral sensitivity in teleost fishes. The Journal of Experimental Biology, 223(8), jeb193334. 10.1242/jeb.193334 [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Caro, T. (2017). Wallace on coloration: Contemporary perspective and unresolved insights. Trends in Ecology & Evolution, 32(1), 23–30. 10.1016/j.tree.2016.10.003 [DOI] [PubMed] [Google Scholar]
  7. Caves, E. M. , Brandley, N. C. , & Johnsen, S. (2018). Visual acuity and the evolution of signals. Trends in Ecology & Evolution, 33(5), 358–372. 10.1016/j.tree.2018.03.001 [DOI] [PubMed] [Google Scholar]
  8. Caves, E. M. , Frank, T. M. , & Johnsen, S. (2016). Spectral sensitivity, spatial resolution and temporal resolution and their implications for conspecific signalling in cleaner shrimp. Journal of Experimental Biology, 219(4), 597–608. 10.1242/jeb.122275 [DOI] [PubMed] [Google Scholar]
  9. Caves, E. M. , & Johnsen, S. (2018). AcuityView: An R package for portraying the effects of visual acuity on scenes observed by an animal. Methods in Ecology and Evolution, 9(3), 793–797. 10.1111/2041-210X.12911 [DOI] [Google Scholar]
  10. Caves, E. M. , Nowicki, S. , & Johnsen, S. (2019). Von Uexküll revisited: Addressing human biases in the study of animal perception. Integrative and Comparative Biology, 59, 1451–1462. 10.1093/icb/icz073 [DOI] [PubMed] [Google Scholar]
  11. Caves, E. M. , Troscianko, J. , & Kelley, L. A. (2020). A customizable, low‐cost optomotor apparatus: A powerful tool for behaviourally measuring visual capability. Methods in Ecology and Evolution, 11(10), 1319–1324. 10.1111/2041-210X.13449 [DOI] [Google Scholar]
  12. Chan, I. Z. W. , Stevens, M. , & Todd, P. A. (2019). PAT‐GEOM: A software package for the analysis of animal patterns. Methods in Ecology and Evolution, 10(4), 591–600. 10.1111/2041-210X.13131 [DOI] [Google Scholar]
  13. Chittka, L. (1992). The colour hexagon: A chromaticity diagram based on photoreceptor excitations as a generalized representation of colour opponency. Journal of Comparative Physiology A, 170(5), 533–543. 10.1007/BF00199331 [DOI] [Google Scholar]
  14. Cooney, C. R. , He, Y. , Varley, Z. K. , Nouri, L. O. , Moody, C. J. A. , Jardine, M. D. , Liker, A. , Székely, T. , & Thomas, G. H. (2022). Latitudinal gradients in avian colourfulness. Nature Ecology & Evolution, 6(5), 622–629. 10.1038/s41559-022-01714-1 [DOI] [PubMed] [Google Scholar]
  15. Dalrymple, R. L. , Kemp, D. J. , Flores‐Moreno, H. , Laffan, S. W. , White, T. E. , Hemmings, F. A. , Tindall, M. L. , & Moles, A. T. (2015). Birds, butterflies and flowers in the tropics are not more colourful than those at higher latitudes. Global Ecology and Biogeography, 24(12), 1424–1432. 10.1111/geb.12368 [DOI] [Google Scholar]
  16. Darwin, C. (1859). On the origin of species by means of natural selection, or, the preservation of favoured races in the struggle for life . London: John Murray [PMC free article] [PubMed]
17. Deng, L., Sun, J., Chen, Y., Lu, H., Duan, F., Zhu, L., & Fan, T. (2021). M2H-net: A reconstruction method for hyperspectral remotely sensed imagery. ISPRS Journal of Photogrammetry and Remote Sensing, 173, 323–348. 10.1016/j.isprsjprs.2021.01.019
18. Endler, J. A. (1978). A predator's view of animal color patterns. In Hecht, M. K., Steere, W. C., & Wallace, B. (Eds.), Evolutionary biology (pp. 319–364). Springer US.
19. Endler, J. A. (1990). On the measurement and classification of color in studies of animal color patterns. Biological Journal of the Linnean Society, 41, 315–352. 10.1111/j.1095-8312.1990.tb00839.x
20. Endler, J. A. (2012). A framework for analysing colour pattern geometry: Adjacent colours. Biological Journal of the Linnean Society, 107(2), 233–253. 10.1111/j.1095-8312.2012.01937.x
21. Endler, J. A., Cole, G. L., & Kranz, A. M. (2018). Boundary strength analysis: Combining colour pattern geometry and coloured patch visual properties for use in predicting behaviour and fitness. Methods in Ecology and Evolution, 9(12), 2334–2348. 10.1111/2041-210X.13073
22. Endler, J. A., & Mappes, J. (2017). The current and future state of animal coloration research. Philosophical Transactions of the Royal Society B: Biological Sciences, 372(1724), 1–8. 10.1098/rstb.2016.0352
23. Endler, J. A., & Mielke, P. W. (2005). Comparing entire colour patterns as birds see them. Biological Journal of the Linnean Society, 86(4), 405–431. 10.1111/j.1095-8312.2005.00540.x
24. Fennell, J. G., Talas, L., Baddeley, R. J., Cuthill, I. C., & Scott-Samuel, N. E. (2021). The camouflage machine: Optimizing protective coloration using deep learning with genetic algorithms. Evolution, 75(3), 614–624. 10.1111/evo.14162
25. Garcia, J. E., Girard, M. B., Kasumovic, M., & Petersen, P. (2015). Differentiating biological colours with few and many sensors: Spectral reconstruction with RGB and hyperspectral cameras. PLoS ONE, 10(5), e0125817. 10.1371/journal.pone.0125817
26. Garcia, J. E., Phillips, R. D., Peter, C. I., & Dyer, A. G. (2020). Changing how biologists view flowers—Color as a perception not a trait. Frontiers in Plant Science, 11, 601700. 10.3389/fpls.2020.601700
27. Gawryszewski, F. M. (2018). Color vision models: Some simulations, a general n-dimensional model, and the colourvision R package. Ecology and Evolution, 8, 8159–8170. 10.1002/ece3.4288
28. Gracheva, E. O., Ingolia, N. T., Kelly, Y. M., Cordero-Morales, J. F., Hollopeter, G., Chesler, A. T., Sánchez, E. E., Perez, J. C., Weissman, J. S., & Julius, D. (2010). Molecular basis of infrared detection by snakes. Nature, 464(7291), 1006–1011. 10.1038/nature08943
29. Gruson, H. (2020). Estimation of colour volumes as concave hypervolumes using α-shapes. Methods in Ecology and Evolution, 11(8), 955–963. 10.1111/2041-210X.13398
30. Hancock, G. R. A., & Troscianko, J. (2022). CamoEvo: An open access toolbox for artificial camouflage evolution experiments. Evolution, 76(5), 870–882. 10.1111/evo.14476
31. Hemingson, C. R., Cowman, P. F., Hodge, J. R., & Bellwood, D. R. (2019). Colour pattern divergence in reef fish species is rapid and driven by both range overlap and symmetry. Ecology Letters, 22(1), 190–199. 10.1111/ele.13180
32. Hemingson, C. R., Mihalitsis, M., & Bellwood, D. R. (2022). Are fish communities on coral reefs becoming less colourful? Global Change Biology, 28(10), 3321–3332. 10.1111/gcb.16095
33. Hempel De Ibarra, N., Giurfa, M., & Vorobyev, M. (2001). Detection of coloured patterns by honeybees through chromatic and achromatic cues. Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology, 187(3), 215–224. 10.1007/s003590100192
34. Johnsen, S. (2016). How to measure color using spectrometers and calibrated photographs. Journal of Experimental Biology, 219(6), 772–778. 10.1242/jeb.124008
35. Kelber, A., Vorobyev, M., & Osorio, D. (2003). Animal colour vision – Behavioural tests and physiological concepts. Biological Reviews, 78(1), 81–118. 10.1017/S1464793102005985
36. Kemp, D. J., Herberstein, M. E., Fleishman, L. J., Endler, J. A., Bennett, A. T. D., Dyer, A. G., Hart, N. S., Marshall, J., & Whiting, M. J. (2015). An integrative framework for the appraisal of coloration in nature. American Naturalist, 185(6), 705–724. 10.1086/681021
37. Kjernsmo, K., Whitney, H. M., Scott-Samuel, N. E., Hall, J. R., Knowles, H., Talas, L., & Cuthill, I. C. (2020). Iridescence as camouflage. Current Biology, 30(3), 551–555. 10.1016/j.cub.2019.12.013
38. Kranz, A. M., Forgan, L. G., Cole, G. L., & Endler, J. A. (2018). Light environment change induces differential expression of guppy opsins in a multi-generational evolution experiment. Evolution, 72(8), 1656–1676. 10.1111/evo.13519
39. Legendre, P., Borcard, D., & Peres-Neto, P. R. (2005). Analyzing beta diversity: Partitioning the spatial variation of community composition data. Ecological Monographs, 75(4), 435–450.
40. Longley, W. H. (1917). Studies upon the biological significance of animal coloration. I. The colors and color changes of West Indian reef-fishes. Journal of Experimental Zoology, 23(3), 533–601. 10.1002/jez.1400230305
41. Losey, G. S., McFarland, W. N., Loew, E. R., Zamzow, J. P., Nelson, P. A., & Marshall, N. J. (2003). Visual biology of Hawaiian coral reef fishes. I. Ocular transmission and visual pigments. Copeia, 2003(3), 433–454.
42. Lunau, K., Scaccabarozzi, D., Willing, L., & Dixson, K. (2021). False colour photography reveals the complexity of flower signalling. A commentary on: ‘A bee's eye view of remarkable floral colour patterns in the southwest Australian biodiversity hotspot revealed by false colour photography’. Annals of Botany, 128(7), 821–834. 10.1093/aob/mcab076
43. Maia, R., Eliason, C. M., Bitton, P. P., Doucet, S. M., & Shawkey, M. D. (2013). Pavo: An R package for the analysis, visualization and organization of spectral data. Methods in Ecology and Evolution, 4(10), 906–913. 10.1111/2041-210X.12069
44. Maia, R., Gruson, H., Endler, J. A., & White, T. E. (2019). Pavo 2: New tools for the spectral and spatial analysis of colour in R. Methods in Ecology and Evolution, 10(7), 1097–1107. 10.1111/2041-210X.13174
45. Marshall, N. J., Jennings, K., McFarland, W. N., Loew, E. R., & Losey, G. S. (2003). Visual biology of Hawaiian coral reef fishes. II. Colors of Hawaiian coral reef fish. Copeia, 2003(3), 455–466. 10.1643/01-055
46. Mason, N. A., & Bowie, R. C. K. (2020). Plumage patterns: Ecological functions, evolutionary origins, and advances in quantification. The Auk, 137, 1–29. 10.1093/auk/ukaa060
47. Mouillot, D., Graham, N. A. J., Villéger, S., Mason, N. W. H., & Bellwood, D. R. (2013). A functional approach reveals community responses to disturbances. Trends in Ecology & Evolution, 28(3), 167–177. 10.1016/j.tree.2012.10.004
48. Muruga, P., Bellwood, D. R., & Mihalitsis, M. (2022). Forensic odontology: Assessing bite wounds to determine the role of teeth in piscivorous fishes. Integrative Organismal Biology, 4(1), 1–15. 10.1093/iob/obac011
49. Musilova, Z., Cortesi, F., Matschiner, M., Davies, W. I. L., Patel, J. S., Stieb, S. M., De Busserolles, F., Malmstrøm, M., Tørresen, O. K., Brown, C. J., Mountford, J. K., Hanel, R., Stenkamp, D. L., Jakobsen, K. S., Carleton, K. L., Jentoft, S., Marshall, J., & Salzburger, W. (2019). Vision using multiple distinct rod opsins in deep-sea fishes. Science, 364(6440), 588–592. 10.1126/science.aav4632
50. Nandamuri, S. P., Yourick, M. R., & Carleton, K. L. (2017). Adult plasticity in African cichlids: Rapid changes in opsin expression in response to environmental light differences. Molecular Ecology, 26(21), 6036–6052. 10.1111/mec.14357
51. Newport, C., Green, N. F., McClure, E. C., Osorio, D. C., Vorobyev, M., Marshall, N. J., & Cheney, K. L. (2017). Fish use colour to learn compound visual signals. Animal Behaviour, 125, 93–100. 10.1016/j.anbehav.2017.01.003
52. Osorio, D., & Vorobyev, M. (2008). A review of the evolution of animal colour vision and visual communication signals. Vision Research, 48(20), 2042–2051. 10.1016/j.visres.2008.06.018
53. R Core Team. (2023). R: A language and environment for statistical computing. R Foundation for Statistical Computing. https://www.r-project.org/
54. Renoult, J. P., Kelber, A., & Schaefer, H. M. (2017). Colour spaces in ecology and evolutionary biology. Biological Reviews, 92(1), 292–315. 10.1111/brv.12230
55. Schwartz, S. T., & Alfaro, M. E. (2021). Sashimi: A toolkit for facilitating high-throughput organismal image segmentation using deep learning. Methods in Ecology and Evolution, 12(12), 2341–2354. 10.1111/2041-210X.13712
56. Shichida, Y., & Matsuyama, T. (2009). Evolution of opsins and phototransduction. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1531), 2881–2895. 10.1098/rstb.2009.0051
57. Siebeck, U. E. (2004). Communication in coral reef fish: The role of ultraviolet colour patterns in damselfish territorial behaviour. Animal Behaviour, 68(2), 273–282. 10.1016/j.anbehav.2003.11.010
58. Siebeck, U. E., Wallis, G. M., & Litherland, L. (2008). Colour vision in coral reef fish. Journal of Experimental Biology, 211(3), 354–360. 10.1242/jeb.012880
59. Spaethe, J., Tautz, J., & Chittka, L. (2001). Visual constraints in foraging bumblebees: Flower size and color affect search time and flight behavior. Proceedings of the National Academy of Sciences of the United States of America, 98(7), 3898–3903. 10.1073/pnas.071053098
60. Stavenga, D. G., & Wilts, B. D. (2014). Oil droplets of bird eyes: Microlenses acting as spectral filters. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1636), 20130041. 10.1098/rstb.2013.0041
61. Stevens, M., Parraga, C. A., Cuthill, I. C., Partridge, J. C., & Troscianko, T. S. (2007). Using digital photography to study animal coloration. Biological Journal of the Linnean Society, 90(2), 211–237.
62. Toral, G. M., Figuerola, J., & Negro, J. J. (2008). Multiple ways to become red: Pigment identification in red feathers using spectrometry. Comparative Biochemistry and Physiology Part B: Biochemistry & Molecular Biology, 150, 147–152. 10.1016/j.cbpb.2008.02.006
63. Tosetto, L., Williamson, J. E., White, T. E., & Hart, N. S. (2021). Can the dynamic colouration and patterning of bluelined goatfish (Mullidae; Upeneichthys lineatus) be perceived by conspecifics? Brain, Behavior and Evolution, 2021, 103–123. 10.1159/000519894
64. Troscianko, J. (2022). OSpRad: An open-source, low-cost, high-sensitivity spectroradiometer. bioRxiv. 10.1101/2022.12.09.519768
65. Troscianko, J., & Stevens, M. (2015). Image calibration and analysis toolbox—A free software suite for objectively measuring reflectance, colour and pattern. Methods in Ecology and Evolution, 6(11), 1320–1331. 10.1111/2041-210X.12439
66. Troscianko, J., Wilson-Aggarwal, J., Stevens, M., & Spottiswoode, C. N. (2016). Camouflage predicts survival in ground-nesting birds. Scientific Reports, 6, 19966. 10.1038/srep19966
67. Valvo, J. J., Aponte, J. D., Daniel, M. J., Dwinell, K., Rodd, H., Houle, D., & Hughes, K. A. (2021). Using Delaunay triangulation to sample whole-specimen color from digital images. Ecology and Evolution, 11(18), 12468–12484. 10.1002/ece3.7992
68. Van Belleghem, S. M., Papa, R., Ortiz-Zuazaga, H., Hendrickx, F., Jiggins, C. D., McMillan, W. O., & Counterman, B. A. (2018). Patternize: An R package for quantifying colour pattern variation. Methods in Ecology and Evolution, 9(2), 390–398. 10.1111/2041-210X.12853
69. van den Berg, C. P., Troscianko, J., Endler, J. A., Marshall, N. J., & Cheney, K. L. (2019). Quantitative colour pattern analysis (QCPA): A comprehensive framework for the analysis of colour patterns in nature. Methods in Ecology and Evolution, 11(2), 316–332. 10.1111/2041-210X.13328
70. Verhoeven, C., Ren, Z., & Lunau, K. (2018). False-colour photography: A novel digital approach to visualize the bee view of flowers. Journal of Pollination Ecology, 23, 102–118.
71. Vorobyev, M. (2003). Coloured oil droplets enhance colour discrimination. Proceedings of the Royal Society B: Biological Sciences, 270(1521), 1255–1261. 10.1098/rspb.2003.2381
72. Vorobyev, M., Brandt, R., Peitsch, D., Laughlin, S. B., & Menzel, R. (2001). Colour thresholds and receptor noise: Behaviour and physiology compared. Vision Research, 41(5), 639–653. 10.1016/S0042-6989(00)00288-1
73. Vorobyev, M., & Osorio, D. (1998). Receptor noise as a determinant of colour thresholds. Proceedings of the Royal Society B: Biological Sciences, 265(1394), 351–358. 10.1098/rspb.1998.0302
74. Wallace, A. R. (1877). The colour of animals and plants. I. The colours of animals. American Naturalist, 11(11), 348–408.
75. Weller, H. I. (2021). Recolorize: Color-based image segmentation (0.1.0) [Computer software]. https://cran.r-project.org/web/packages/recolorize/index.html
76. Weller, H. I., & Westneat, M. W. (2019). Quantitative color profiling of digital images with earth mover's distance using the R package colordistance. PeerJ, 7, e6398. 10.7717/peerj.6398
77. White, T. E., Dalrymple, R. L., Noble, D. W. A., O'Hanlon, J. C., Zurek, D. B., & Umbers, K. D. L. (2015). Reproducible research in the study of biological coloration. Animal Behaviour, 106, 51–57. 10.1016/j.anbehav.2015.05.007
78. Zhao, Y., & Berns, R. S. (2007). Image-based spectral reflectance reconstruction using the matrix R method. Color Research & Application, 32(5), 343–351. 10.1002/col.20341
79. Zimmermann, M. J. Y., Nevala, N. E., Yoshimatsu, T., Osorio, D., Nilsson, D. E., Berens, P., & Baden, T. (2018). Zebrafish differentially process color across visual space to match natural scenes. Current Biology, 28(13), 2018–2032.e5. 10.1016/j.cub.2018.04.075


Data Availability Statement

Our publication is a review and has no associated data.

