Author manuscript; available in PMC: 2023 Mar 1.
Published in final edited form as: Anim Cogn. 2022 Oct 16;26(2):589–598. doi: 10.1007/s10071-022-01691-9

Color is necessary for face discrimination in the Northern paper wasp, Polistes fuscatus

Christopher M Jernigan 1,Φ,Ψ, Jay A Stafstrom 1,Ψ, Natalie C Zaba 1, Caleb C Vogt 1, Michael J Sheehan 1,Φ
PMCID: PMC9974887  NIHMSID: NIHMS1861291  PMID: 36245014

Abstract

Visual individual recognition requires animals to distinguish among conspecifics based on appearance. Though visual individual recognition has been reported in a range of taxa including primates, birds, and insects, the features that animals require to discriminate between individuals are not well understood. Northern paper wasp females, Polistes fuscatus, possess individually distinctive color patterns on their faces, which mediate individual recognition. However, it is currently unclear what role color plays in the facial recognition system of this species. Thus, we sought to test two possible roles of color in wasp facial recognition. On one hand, color may be important simply because it creates a pattern. If this is the case, then wasps should perform similarly when discriminating color or grayscale images of the same faces. Alternatively, color itself may be important for recognition of an image as a “face”, which would predict poorer performance on grayscale discrimination relative to color images. We found wasps performed significantly better when discriminating between color faces compared to grayscale versions of the same faces. In fact, wasps trained on grayscale faces did not perform better than chance, indicating that color is necessary for the recognition of an image as a face by the wasp visual system.

Keywords: Polistes fuscatus, Color, individual recognition, operant conditioning, face discrimination, vision

INTRODUCTION

Individual recognition requires that animals discriminate among individuals within a population that share many features (Tibbetts and Dale, 2007, Tumulty and Sheehan, 2020). Reliably recognizing individuals of the same age-class and species, who tend to share many similar traits, poses a challenge for animals. One solution to this problem is for animals to evolve more individually distinctive traits that facilitate individual identification. Evidence for identity signal elaboration has been found in visual (Sheehan and Tibbetts, 2009, Sheehan and Nachman, 2014, Caves et al., 2015), acoustic (Beecher, 1989, Pollard and Blumstein, 2011), and chemical modalities (Steiger et al., 2008, Sheehan et al., 2016, Cappa et al., 2020, Beani et al., 2019, Cini et al., 2019) across a range of taxa. Additionally, perceptual adaptations among receivers can provide improved discrimination of identity cues and signals (Loesche et al., 1991, Avilés et al., 2010, Sheehan and Tibbetts, 2011, Hiramatsu et al., 2017). Despite evidence for perceptual adaptations for individual recognition, we have little understanding of which aspects of identity cues or signals animals use to discriminate among individuals. There are often many features that vary and could be used as aspects of identity signals, but it is unclear which are relevant for discrimination. Understanding which features are used for individual recognition will result in two advances. First, uncovering which aspects of variable features contribute to identity will more clearly define the perceptual and cognitive processes by which animals recognize individuals. Second, understanding which features are most important for recognition will provide insights into the evolutionary pressures shaping identity signal diversity.

Females of the Northern paper wasp, Polistes fuscatus, possess highly diverse and individually distinctive colored face patterning (Sheehan et al., 2017). These facial patterns alone can be used by female wasps to discriminate among nest mates or images of wasps (Tibbetts, 2002, Sheehan and Tibbetts, 2011). In fact, P. fuscatus female wasps are specialized for learning face stimuli, and their ability to discriminate among images is dramatically reduced when images are digitally altered, either by removal of the antennae or re-arrangement of the internal features of the face (Sheehan and Tibbetts, 2011). This finding and others suggest that, as in primates, faces in P. fuscatus are holistically processed (Tibbetts et al., 2021) and appear to be special perceptual objects (Sheehan and Tibbetts, 2011).

P. fuscatus facial patterns typically consist of only 3 pigments: a reddish-brown pigmentation, a yellow pigmentation, and a melanized black pigmentation (Enteman, 1904, Tumulty et al., 2021). Additionally, there are no hidden UV signals present on the cuticle of P. fuscatus (Tumulty et al., 2021). Most if not all wasp faces could, in principle, be discriminated using achromatic patterning information alone, simply using edge detection and luminance in the medium-wavelength visual channel to identify the spatial relations of the various pattern features. Thus, the colorful identity signals in wasps could have evolved simply to provide the contrast needed to create patterns that are then processed achromatically.

Here we tested animals in an operant conditioning assay using either color images of wasps or grayscale versions of those same images to test the role of color in facial recognition in P. fuscatus. On one hand, color may be important simply because it creates a pattern. If this is the case, then wasps should perform similarly when discriminating color or grayscale images of the same faces. Alternatively, color itself may be important for recognition that an image is a face, which would predict poorer performance on grayscale discrimination relative to color images.

METHODS

Polistes fuscatus gynes were collected in Ithaca and Erin, NY in Aug. and Sept. 2020. After collection, wasps were housed individually in small cups in the laboratory and provided with a natural light cycle. Each wasp was provided with construction paper and ad libitum sugar and water.

Training stimuli

Using Adobe Photoshop (CC2020), we first selected images of 6 paper wasps with distinct, high-contrast facial patterns. We then extracted the internal (between the eyes) facial patterns from each image and placed them onto the same wasp image with a standard gray background, creating 6 unique wasp images in which the facial patterns were the only varying features (body, antennae, mandibles, and eyes are identical across face images). Previous studies of facial discrimination abilities in paper wasps have tended not to control body features in this way (DesJardins and Tibbetts, 2018, Sheehan and Tibbetts, 2011), making this a more rigorous test of face color/pattern discrimination specifically. Grayscale versions of all standardized images were then created using the grayscale function in Photoshop (Figure 1a), generating a total of 12 standardized images (6 color and 6 grayscale). Images were sized to 1.3 cm x 0.6 cm and printed on Canon Photo Paper Plus Glossy II using a Canon Pixma iP110 inkjet photo printer (1200 DPI). Full-resolution training-image files are available upon request.

Fig. 1.


Stimuli and training paradigm used for this study. (a) Image set used in this study. A total of 6 unique wasp stimuli with identical antennae, mandible positions, and body backgrounds were used to test face discrimination in Polistes fuscatus. Each wasp stimulus also had a corresponding grayscale image used to train half of the animals. For details of stimulus spectral properties and modeled photoreceptor excitation in the honey bee visual system, see Figure S1. (b) Schematic of pseudorandomized side pairing of the shock-associated stimulus over trials, including side switching of the shock-associated stimulus and trial type. Animals experienced two trial types: training (t) and testing (T). On training trials, wasps were allowed to move freely about the arena for 120 seconds and were shocked in the central zone every 10 seconds and, when entering the shock-associated stimulus zone, every 3 seconds. Every third trial was a testing trial, in which wasps were allowed to move freely about the arena for 60 seconds and no shocks were delivered. (c) To-scale schematic of the rectangular training arena with electrical grid flooring, acrylic sheet cover, and removable central zone barriers. Red highlighted sections are electrified. Two stimulus images were affixed to each wall of the choice zones, denoted by dark rectangles; faces are denoted by F1 and F2, and (−) denotes electrical shock pairing. (d) Diagram of animal tracking methods and data acquisition. Red dot denotes tracked position on the wasp. Distance of track, left and right side zone entries, and time spent in each zone were collected during each trial.

We measured reflectance spectra of printed stimuli, focusing on the 2-3 primary colors present on the internal facial patterns of each face in both color and grayscale, using an OceanOptics SD 2000 spectrometer and a PX-2 xenon light source (OceanOptics, Dunedin, FL, U.S.A.). We selected one area per color per stimulus face, choosing the largest patch of each color on the face and digitally enlarging that area for spectral measurement. We then analyzed printed stimuli using the pavo package in R to confirm that the printed color stimuli matched the desired colors from the wasp cuticle (Figure S1) (R Core Team, 2020, Maia et al., 2019). We also estimated short-, medium-, and long-wavelength photoreceptor excitation of stimuli in honey bee visual space (Figure S1). The spectral sensitivities of P. fuscatus photoreceptors are currently unknown; however, P. fuscatus is predicted to be trichromatic based on its genome, and comparative electroretinogram data from other hymenopterans suggest spectral sensitivities similar to those of the honey bee (Peitsch et al., 1992). Lastly, we analyzed all printed grayscale stimuli to confirm that they were indeed spectrally gray, i.e., excited all photoreceptor classes approximately equally, in honey bee visual space (Figure S1). Additionally, no printed stimuli showed reflectance in the UV, which is also true of wasp cuticle, so that the presented face stimuli contained no signal channels absent from actual wasp faces (Figure S1) (Tumulty et al., 2021).
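The excitation modeling above was performed with the pavo package in R. As a rough illustration of the underlying computation only, the Python sketch below shows how relative receptor excitations are obtained by integrating a reflectance spectrum against an illuminant and each receptor's spectral sensitivity; the spectra here are made-up placeholders, not our measured data.

```python
def quantum_catch(reflectance, illuminant, sensitivity):
    """Riemann-sum approximation of the integral of
    reflectance x illuminant x receptor sensitivity over wavelength."""
    return sum(r * i * s for r, i, s in zip(reflectance, illuminant, sensitivity))

def relative_excitations(reflectance, illuminant, sensitivities):
    """Quantum catches for each receptor class (e.g. S, M, L),
    normalized so the excitations sum to 1."""
    catches = [quantum_catch(reflectance, illuminant, s) for s in sensitivities]
    total = sum(catches)
    return [c / total for c in catches]

# Illustrative only: a spectrally flat ("gray") surface sampled at 3 wavelengths
# under an equal-energy illuminant, with toy non-overlapping sensitivities.
flat = [0.5, 0.5, 0.5]
illum = [1.0, 1.0, 1.0]
toy_sens = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # placeholder S, M, L curves
excitations = relative_excitations(flat, illum, toy_sens)
```

In this framework, a spectrally gray stimulus in the sense of Figure S1 is one whose relative excitations come out approximately equal across receptor classes.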

Training apparatus

We 3D-printed identical rectangular arenas with inner dimensions of 3.4 cm x 10.1 cm x 7 mm; the two side zones were 3 cm long and the center was 4.1 cm in length. Each arena had two sets of stimulus images affixed to all 3 walls making up each end zone of the arena: 6 images of one face at one end and 6 images of a second face at the opposite end, as is standard for this assay (Figure 1b) (Sheehan and Tibbetts, 2011, Tibbetts et al., 2019). Arenas were placed on a printed copper electrical grid, which covered the entire floor of the arena (both the safe and shocked zones), and each zone could be independently electrified (Figure 1b). A 4.0V mild electric shock was manually delivered through the floor grid using a variac transformer. This level of shock was mild enough not to harm or kill the wasp, but strong enough to elicit a visible reaction when a shock was delivered (Sheehan and Tibbetts, 2011). A clear acrylic sheet was placed over the arena to contain the wasp and ensure it always contacted the electrical grid during training. The arena was evenly illuminated with bright white light (6000-6500K at 2,500 Lux) using three LED array lamps (EVISWIY brand), each composed of 30 white LEDs. Trials were recorded from above using an iPhone (Apple, iPhone X) or GoPro camera (Hero4 Black).

Training paradigm

Each wasp was trained to discriminate between one pair of face images, either in color or grayscale, and received 16 training trials and 8 testing trials (Figure 1b). Each training trial lasted 120 seconds and each testing trial lasted 60 seconds (Figure 1b). For each wasp, one of the two wasp images was designated as the ‘correct’ image, and the zone in front of this image was never electrified for that wasp. During training trials, a mild electric shock was associated with the stimulus image that was not the ‘correct’ image for that wasp. The two images within a pair were each assigned as ‘correct’ to half the wasps trained on that pair, such that across individuals a given image may have been associated with either the safe zone or shock. On testing trials, no electric shock was delivered anywhere in the arena. The side of the arena bearing the shock-paired face was switched in a pre-determined pseudorandomized pattern across trials, and each wasp experienced the shock-associated face an equal number of times on the left and right sides of the arena (Figure 1b). Prior to each trial, wasps were confined in the middle of the arena by two clear removable barriers. Once the wasp had acclimated to the arena, the barriers were removed and the trial began, at which point the wasp was allowed to move freely around the arena (Supplemental video).
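The side-switching schedule was pre-determined and pseudorandomized. A minimal sketch of one way to build such a schedule, assuming only the balancing constraint stated above (each side carries the shock-associated face equally often), is:

```python
import random

def balanced_side_sequence(n_trials, seed=None):
    """Return a shuffled schedule placing the shock-associated face
    on the left ('L') and right ('R') equally often."""
    if n_trials % 2 != 0:
        raise ValueError("need an even trial count to balance sides")
    sides = ["L"] * (n_trials // 2) + ["R"] * (n_trials // 2)
    random.Random(seed).shuffle(sides)  # seeded, so reproducible across wasps
    return sides

# e.g. a schedule for the 16 training trials each wasp received
schedule = balanced_side_sequence(16, seed=0)
```

Note that in the actual experiment the schedule was fixed in advance; this sketch only illustrates the balancing constraint, not the exact sequence used.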

During training trials, if the wasp did not move for 10 seconds after being placed in the center area of the arena, a single shock was delivered. Each time the wasp's thorax (or >50% of its body) crossed the boundary into the zone of the arena containing the shock-associated face, a mild shock was manually delivered every 3 seconds until the wasp exited the zone. After 120 seconds in the arena, the wasp was removed using forceps and placed in a holding container with sugar and water for 1 minute. On testing trials, the wasp was placed in the center zone of the arena until acclimated, and the barriers were then removed. The wasp was then allowed to explore the arena for 60 seconds without being shocked. Again, after 60 seconds the wasp was removed from the arena and placed in a holding container with sugar and water for 1 minute.

Each set of trials consisted of two training trials followed by a single testing trial (Figure 1b). Between each trial, the apparatus was wiped with water to remove or homogenize possible chemical cues released by wasps. Between each individual wasp, the training apparatus was wiped with ethanol. Training was conducted in a windowless room, where the only source of light was the LED lights evenly illuminating the arena.

Data metrics and analyses

DeepLabCut (Mathis et al., 2018, Nath et al., 2019) was used to track each wasp’s thorax (the point of reference used to trigger shocks) throughout each trial. We labeled 589 frames taken from 89 videos (95% of the labeled frames were used for training). We used a ResNet-50-based neural network with default parameters for 500,000 training iterations, yielding a test error of 2.21 pixels and a train error of 1.64 pixels. We then used a p-cutoff of 0.9 to filter the X,Y coordinates for further analysis. We then used SimBA (Nilsson et al., 2020) to extract movement distance, average velocity, and the proportion of time spent in each stimulus zone of the arena (shocked and safe zones, Figure 1c, Supplemental video). Additionally, the side of first entry and the latency to first enter the non-shock-associated face side (safe zone) were manually scored from each video. To assess learning performance of wasps trained on color versus grayscale faces, we used a chi-squared test comparing the total number of correct and incorrect choices over the testing trials to random chance (Sheehan and Tibbetts, 2011, DesJardins and Tibbetts, 2018). Here, chance is assumed to be 50%, as trials are classified as correct or incorrect based on which side animals chose first. Only trials in which a choice was made are included in this analysis; we analyzed the proportion of trials with any choice separately. We also used mixed-effects models to test for treatment differences in continuous data, with treatment (color vs. grayscale stimuli), trial number, the previously shocked side, and their interactions as fixed effects, and wasp ID and trainer as random effects, using the lme4 package in R (R Core Team, 2020, Bates et al., 2015).
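As an illustration of the first-choice analysis, the goodness-of-fit statistic against a 50% chance expectation can be computed directly; the counts below are invented for illustration (not the study's data), and with df = 1 the p = 0.05 critical value is 3.841.

```python
def chi_squared_vs_chance(n_correct, n_incorrect):
    """One-sample chi-squared goodness-of-fit statistic (df = 1)
    comparing observed correct/incorrect first-choice counts
    to the 50/50 split expected under random chance."""
    n = n_correct + n_incorrect
    expected = n / 2.0
    return ((n_correct - expected) ** 2 / expected
            + (n_incorrect - expected) ** 2 / expected)

# Illustrative counts only: 80 correct vs. 48 incorrect scored choices
stat = chi_squared_vs_chance(80, 48)  # -> 8.0, above the 3.841 cutoff
```

A statistic above the df = 1 critical value would lead to rejecting the null hypothesis that first choices are at chance.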

RESULTS

Wasps trained on color facial images perform better on un-reinforced testing trials

We trained a total of 17 wasps using color images and 16 wasps using grayscale images; all other training protocols were identical (Figure 1b). On testing trials, wasps trained using color faces chose to first enter the zone of the safe-associated face (hereafter the ‘safe’ zone) significantly more often than predicted by chance (X2= 7.1, df=1, p=0.008, Figure 2a). However, wasps trained using grayscale versions of the same face stimuli showed no first-choice preference (X2= 0.019, df=1, p=0.891). Additionally, on testing trials the latency to enter the safe zone was significantly shorter in wasps trained on color faces, and treatment (color vs. grayscale) was the only significant factor for this measure (t-value= 2.476, p=0.013, Figure 2b). While no-choice trials were excluded from the above analyses, we additionally found that significantly more testing trials ended with no stimulus-zone choice (the animal remaining in the center zone for the full 60 unreinforced seconds) when wasps were trained using grayscale faces than when trained using color faces (gray: 21 of 128 trials, color: 8 of 136 trials, X2= 6.431, df=1, p=0.011). Thus, by multiple metrics, wasps trained with color facial images outperformed those trained with grayscale versions of the same images, indicating a role for color in facial discrimination in these wasps.

Fig. 2.


Testing trial performance of wasps trained across stimulus treatments: color (orange) or grayscale (gray). (a) Total proportion of safe associated stimulus zone first choices for all testing trials for animals trained across treatments. (b) Latency to enter safe associated stimulus zone for animals trained across treatments. (c) Averaged mean velocity (cm/s) for animals trained across treatments. Treatments denoted by coloration. Symbols denote significant differences: ** p<0.01, *p<0.05, n.s. = non-significant. See methods and results for statistical test details.

Computer vision analysis of movement during the trials

Using DeepLabCut and SimBA (Mathis et al., 2018, Nath et al., 2019, Nilsson et al., 2020), we tracked wasps and calculated distance traveled, average velocity, and proportion of time spent in each zone of the arena. On testing trials, wasps trained using color faces moved longer distances at a faster mean velocity than wasps trained using grayscale faces (distance: t-value= −3.068, p=0.002; mean velocity: t-value=−3.072, p=0.002, Figure 2c). This could explain the latency differences between treatments, but it cannot explain the first-choice differences. On testing trials, we also found that the proportion of time spent in the correct zone was unaffected by treatment (t-value=−1.313, p=0.189); the proportion of time spent in a zone was predicted only by the side shocked in the previous trial (t-value=2.286, p=0.022).

Reinforced training performance not affected by images used

On training trials, neither treatment performed better than chance based on first-choice decisions (color: X2= 0.068, df=1, p=0.795; grayscale: X2= 0.008, df=1, p=0.929, Figure S2a), and no factor (treatment, trial number, previously shocked side, or their interactions) significantly affected latency to enter the non-shock-associated face zone on training trials (p>0.05, Figure S2b). Again, while no-choice trials were excluded from the above analyses, we observed no difference between treatments in the number of training trials in which animals did not leave the center shocked zone (gray: 1 of 266 trials, color: 6 of 272 trials). Not surprisingly, reinforcement significantly affected no-choice behavior: no-choice trials were significantly less common when reinforcement was present (7 of 538 training trials versus 29 of 264 testing trials, X2= 35.909, df=1, p=2.67×10−9).

On training trials, we found that distance and mean velocity were unaffected by treatment (t-value= −1.653, p=0.098, Figure S2c). Instead, trial number was the only factor that significantly affected the proportion of time spent in the correct zone during reinforced training trials (t-value=0.271, p=0.005, Figure S2d). Trial number was also the only factor to significantly affect the distance and velocity at which wasps moved (t-value=−2.950, p=0.003, Figure S2e), with wasps moving more slowly as trials progressed and they learned the assay. We attribute this pattern to wasps that learned the assay moving less and generally remaining in the safe zone after entering it. Together, these data suggest that wasps in both treatments changed their behavior consistently over trials, but only wasps trained using color stimuli performed better than chance during unreinforced trials.

DISCUSSION

Facial recognition in the paper wasp is chromatic-dependent: Polistes fuscatus readily discriminated between different wasp face images when color was present but did not discriminate between face images rendered in grayscale (Figure 2). Wasps trained with color faces made more correct decisions during testing trials and made those decisions more quickly. Wasps trained with color faces were also generally more active during testing trials, both in body movement velocity and total distance traveled. Together, our results support the hypothesis that color is necessary for facial recognition in Polistes fuscatus and that grayscale images of wasp faces are treated differently from color face images. Previous studies have established that P. fuscatus wasps process faces holistically (Tibbetts et al., 2021), and that the presence and arrangement of facial features are important for facial discrimination in this species. These data indicate that color is an important component of the holistic “faceness” underlying facial recognition in paper wasps. However, color per se is not necessary for general pattern discrimination in similar assays: Sheehan and Tibbetts (2011) showed that P. fuscatus can discriminate achromatic geometric shapes, but does so at a lower level, taking longer and requiring more trials than for face images. Thus, we argue that color is important specifically when the images are of conspecific wasps. Perhaps, given more trials, wasps could learn to discriminate grayscale face images much as they do achromatic geometric shapes; however, the specialized face-recognition abilities of these wasps appear to be engaged only when color is a component of the images.

Chromatic-dependent facial recognition

In humans and non-human primates, facial processing and recognition are not dependent on color (Rosenfeld and Van Hoesen, 1979, Moscovitch et al., 1997, Chang and Tsao, 2017). Primates can discriminate between grayscale faces of conspecifics, much as we presented grayscale images in our behavioral assay (Rosenfeld and Van Hoesen, 1979, Moscovitch et al., 1997). Primate facial recognition does not depend on color but rather on detecting differences in the spatial relationships of critical facial features across conspecifics (Chang and Tsao, 2017). Wasps utilize unique color patterns as signals of identity (Sheehan and Tibbetts, 2009). Prior to this study, whether color itself was important in wasp facial recognition was unknown: wasps could utilize the patterns created by the color without requiring the color per se (i.e., the shapes created by the color could be the necessary part of the signal, independent of which colors are used). However, our recent neuroanatomical study predicted that color may play a key role in social processing in P. fuscatus (Jernigan et al., 2021), and this indeed appears to be true, at least for face acquisition. Additional tests will be required to assess whether wasps weight all patterning and color features equally when making discriminations. Additionally, it remains possible that after a face is learned, achromatic patterning alone could be sufficient for discrimination, as nests are sometimes found in dark crevices where chromatic information would be reduced.

This study adds to a growing body of literature examining what constitutes a ‘face’ for P. fuscatus wasps. Thus far we know that the correct positions of internal features are required (Sheehan and Tibbetts, 2011) and that antennae, or antennae and body, need to be present (Sheehan and Tibbetts, 2011, Tibbetts et al., 2021). This study adds that facial color patterning alone can confer identity, as we controlled all other aspects of body position within the images. Most importantly, this study demonstrates that chromatic information is required for facial learning in P. fuscatus. As in primates, faces in the P. fuscatus system appear to be special, holistically processed perceptual objects (Tanaka and Farah, 1993, Tsao and Livingstone, 2008, Tibbetts et al., 2021). While the neural mechanisms of face recognition remain elusive in Polistes fuscatus, the anterior optic tubercle and its downstream targets remain prime candidates, both because this region is socially plastic (Jernigan et al., 2021) and because of its known roles in other insects in chromatic information processing (Mota et al., 2013), figure-ground detection (Aptekar et al., 2015), and female-female aggression (Schretter et al., 2020).

This study also provides information on which color receptors may be most important in facial recognition in paper wasps, based on current knowledge of aculeate hymenopteran visual processing. Wasp cuticle does not reflect UV-violet (Tumulty et al., 2021), and neither do our printed wasp image stimuli. The primary color components present on the wasp image stimuli (yellow, red-brown, and black) are predicted to excite the medium- and long-wavelength photoreceptors, as modeled in honey bee visual space (Figure S1) (Vorobyev et al., 2001). Thus, at the sensory level, facial recognition in this species likely utilizes only medium- and long-wavelength photoreceptors to discriminate among conspecific faces, possibly relying on medium-long wavelength color opponency (Fig S1). The pattern components of the grayscale images show little to no variation and are predicted to excite the medium- and long-wavelength photoreceptors equally, further suggesting opponency as a potential pathway. Based on this analysis and the presented behavioral data, achromatic channels, which include brightness and contrast edge detection, seem less likely to be the pathways used in this system. However, we do not currently know the exact spectral sensitivities of Polistes fuscatus, and it remains possible that red-shifted adaptations in the P. fuscatus visual system confer higher sensitivity in the yellow and red wavelengths, allowing better discrimination of wasp colors via individual photoreceptor types or additional computations in downstream circuits. Physiological work across the visual hierarchy in Polistes fuscatus will be needed to disentangle these possible mechanisms.

Training paradigm and animal tracking

A major strength of the presented behavioral assay is the confirmation that unreinforced trials show robust learning performance. Unlike prior studies of learning in P. fuscatus (Sheehan and Tibbetts, 2011, DesJardins and Tibbetts, 2018, Tibbetts et al., 2021), this study added an unreinforced testing trial every third trial during training, in which animals received no shock. These unreinforced trials allow animals to display choices that are potentially unbiased by shock-escape responses. Notably, we did not observe significantly increased performance over trials in first-choice data in either the training or the testing trials. In the case of the testing trials, this may be an issue of statistical power combined with already high performance on the first testing trial (first test trial: 59% correct choice in the color group; 7th test trial: 76% correct in the color group). The lack of improvement during training trials is more puzzling. Prior studies with only reinforced trials show clear increases in first-choice performance over successive trials across a range of stimuli (Sheehan and Tibbetts, 2011). Given the pattern of two training trials followed by one testing trial, and the fact that wasps trained on color images tended to show decreased performance after testing trials (perhaps via rapid extinction) that recovered during subsequent training, it is possible that this mixed training-and-testing paradigm led to the observed patterns in training trials (Fig S3). Disentangling these possibilities will require future study, as this is the first study in this system to use this training approach. Regardless, on unreinforced trials animals showed clear patterns of stimulus preference.
By including unreinforced trials, possibly as a final set of testing trials, future work in this system will be able to assess perceptual similarity among presented stimuli via generalization tests (Guerrieri et al., 2005), or identify the salient components of stimuli by independently testing performance on individual stimulus features during testing trials (Riveros et al., 2020), as is common in proboscis extension response assays in other insects. The inclusion of unreinforced trials will also allow testing of additional phenomena commonly measured in conditioning assays, such as short- and long-term memory (Xia et al., 1998) and extinction (Eisenhardt, 2012), and will enable neural manipulations that assess where memories are stored (Erber et al., 1980, Packard and White, 1991).

By using computer vision software, we tracked animal body segment positions during testing and training from video recordings. This capability opens up a host of new metrics for assessing performance in operant conditioning assays in this system. One interesting finding is that animals trained with color stimuli moved significantly more than animals trained with grayscale stimuli. Wasps were randomly assigned to one image type or the other, so this observation cannot be explained by inherent differences between individuals, but rather by how wasps responded to the different stimuli. Movement differences alone cannot explain the increased performance of color-trained wasps. While tracking animals, we measured only the time spent in each area of the arena; we did not measure the precise sequence of movements at the stimuli or at the borders of the shocked and non-shocked zones, which could shed light on other behavioral or postural differences exhibited by animals as they approached stimuli. Additional assays and more fine-grained analyses of movement sequences will be required to identify potential causes of the movement differences between groups. However, if color images are recognized as “wasps” and grayscale images are not, a response to the perceived presence of other individuals might explain this movement difference. Generally, wasps move more in behavioral assays when interacting with a social partner than when in isolation (personal observation). It is also possible that the lower rates of movement in the grayscale-trained group reflect a form of ‘learned helplessness’ in animals that cannot discriminate the images in the assay (Eisenstein and Carlson, 1997).

Conclusion

Color supports face discrimination abilities in Polistes fuscatus. This work builds upon other recent findings that are beginning to define what is required for individual recognition in this species and suggests that color processing centers of the brain are likely to be important in specialized face processing.

Supplementary Material

Fig_S1

Supplemental Fig. 1 Spectral properties of presented stimuli and modeled photoreceptor excitation in the honey bee visual system. (a-f) Measured reflectance of the three primary wasp pigments: red-brown (a,b), yellow (c,d), and black (e,f) for each presented stimulus face in color (a,c,e) and grayscale (b,d,f) on the left, with modeled relative excitation of each photoreceptor opsin type (S=short, M=medium, L=long) in the honey bee visual system under the bright light conditions used in training. (g-i) Combined modeled relative excitations for all stimuli in color (g), grayscale (h), or combined (i). Modeling was done using the pavo package in R under modeled ideal bright light conditions (Maia et al., 2019). Colored lines represent each face from which the color was sampled. Face key provided in (a) (blue=face1, gray=face2, orange=face3, magenta=face4, green=face5, black=face6). Color and shape of points in the triangle color space plots denote the color and color scale of sampled face patches (red= red diamond, yellow= yellow triangle, black= black circle, grayscale red= gray inverted triangle, grayscale yellow= gray triangle, grayscale black= dark gray square). Each face was only sampled once per color.

Fig_S3

Supplemental Fig. 3. Mean first choice performance of wasps trained using either color wasp images or grayscale wasp images across all trials in sequential order. Trials included two shock-reinforced training trials followed by one unreinforced testing trial. Unreinforced testing trials are denoted by grey bars. Performance did not significantly improve across trials, most likely due to possible rapid extinction of preference following unreinforced testing trials in particular during late trials using color stimuli.

Fig_S2

Supplemental Fig. 2 Training trial performance of wasps trained using color or grayscale stimuli. (a) Total proportion of safe associated stimulus zone first choices for all training trials for animals trained across treatments. (b) Latency to enter safe associated stimulus zone for animals trained across treatments. (c) Averaged mean velocity (cm/s) for animals trained across training treatments. (d) Proportion of time spent in the safe associated stimulus zone across training trials for animals trained in each stimulus treatment. (e) Mean velocity for animals across training trials for animals trained in each stimulus treatment. Error bars denote standard error from the mean. Symbols denote significant differences in panels (a) through (c), n.s. = non-significant. Statistical model used to compare measures in (d) and (e) are presented in each panel. Statistically significant factors are listed in the bottom of each panel; all other factors and factor interactions are non-significant. See methods and results for further statistical test details.

Mov_S1

Supplemental video. Example of shock-reinforced training trial 1 and unreinforced testing trial 6 for one wasp trained to discriminate between color face images. The video plays at 2x speed and shows the beginning segment of each trial.

Download video file (35.8MB, mp4)

Funding information

This work was supported by NIH Grant DP2- GM128202 awarded to MJS.

Footnotes

Competing interests

The authors declare no conflicts of interest.

Ethics statement

All work was conducted in accordance with ethics requirements for invertebrate work at Cornell University.

Consent for publication

All authors have read and approved this manuscript for publication.

Data accessibility

Data, analysis code, and a README file are provided as a zip archive in the supplementary files.

References

  1. APTEKAR JW, KELEŞ MF, LU PM, ZOLOTOVA NM & FRYE MA 2015. Neurons Forming Optic Glomeruli Compute Figure–Ground Discriminations in Drosophila. The Journal of Neuroscience, 35, 7587.
  2. AVILÉS JM, VIKAN JR, FOSSØY F, ANTONOV A, MOKSNES A, RØSKAFT E & STOKKE BG 2010. Avian colour perception predicts behavioural responses to experimental brood parasitism in chaffinches. Journal of Evolutionary Biology, 23, 293–301.
  3. BATES D, MÄCHLER M, BOLKER B & WALKER S 2015. Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67, 1–48.
  4. BEANI L, BAGNÈRES AG, ELIA M, PETROCELLI I, CAPPA F & LORENZI MC 2019. Cuticular hydrocarbons as cues of sex and health condition in Polistes dominula wasps. Insectes Sociaux, 66, 543–553.
  5. BEECHER MD 1989. Signalling systems for individual recognition: an information theory approach. Animal Behaviour, 38, 248–261.
  6. CAPPA F, CINI A, SIGNOROTTI L & CERVO R 2020. Rethinking recognition: social context in adult life rather than early experience shapes recognition in a social wasp. Philosophical Transactions of the Royal Society B: Biological Sciences, 375, 20190468.
  7. CAVES EM, STEVENS M, IVERSEN ES & SPOTTISWOODE CN 2015. Hosts of avian brood parasites have evolved egg signatures with elevated information content. Proceedings of the Royal Society B: Biological Sciences, 282, 20150598.
  8. CHANG L & TSAO DY 2017. The Code for Facial Identity in the Primate Brain. Cell, 169, 1013–1028.e14.
  9. CINI A, CAPPA F, PEPICIELLO I, PLATANIA L, DAPPORTO L & CERVO R 2019. Sight in a Clique, Scent in Society: Plasticity in the Use of Nestmate Recognition Cues Along Colony Development in the Social Wasp Polistes dominula. Frontiers in Ecology and Evolution, 7.
  10. DESJARDINS N & TIBBETTS EA 2018. Sex differences in face but not colour learning in Polistes fuscatus paper wasps. Animal Behaviour, 140, 1–6.
  11. EISENHARDT D 2012. Extinction Learning in Honey Bees. In: GALIZIA CG, EISENHARDT D & GIURFA M (eds.) Honeybee Neurobiology and Behavior: A Tribute to Randolf Menzel. Dordrecht: Springer Netherlands.
  12. EISENSTEIN EM & CARLSON AD 1997. A comparative approach to the behavior called 'learned helplessness'. Behavioural Brain Research, 86, 149–160.
  13. ENTEMAN WM 1904. Coloration in Polistes. Carnegie Institution.
  14. ERBER J, MASUHR T & MENZEL R 1980. Localization of short-term memory in the brain of the bee, Apis mellifera. Physiological Entomology, 5, 343–358.
  15. GUERRIERI F, LACHNIT H, GERBER B & GIURFA M 2005. Olfactory blocking and odorant similarity in the honeybee. Learning & Memory, 12, 86–95.
  16. HIRAMATSU C, MELIN AD, ALLEN WL, DUBUC C & HIGHAM JP 2017. Experimental evidence that primate trichromacy is well suited for detecting primate social colour signals. Proceedings of the Royal Society B: Biological Sciences, 284, 20162458.
  17. JERNIGAN CM, ZABA NC & SHEEHAN MJ 2021. Age and social experience induced plasticity across brain regions of the paper wasp Polistes fuscatus. Biology Letters, 17, 20210073.
  18. LOESCHE P, HIGGINS BJ, STODDARD PK & BEECHER MD 1991. Signature Versus Perceptual Adaptations for Individual Vocal Recognition in Swallows. Behaviour, 118, 15–25.
  19. MAIA R, GRUSON H, ENDLER JA & WHITE TE 2019. pavo 2: New tools for the spectral and spatial analysis of colour in R. Methods in Ecology and Evolution, 10, 1097–1107.
  20. MATHIS A, MAMIDANNA P, CURY KM, ABE T, MURTHY VN, MATHIS MW & BETHGE M 2018. DeepLabCut: markerless pose estimation of user-defined body parts with deep learning. Nature Neuroscience, 21, 1281–1289.
  21. MOSCOVITCH M, WINOCUR G & BEHRMANN M 1997. What Is Special about Face Recognition? Nineteen Experiments on a Person with Visual Object Agnosia and Dyslexia but Normal Face Recognition. Journal of Cognitive Neuroscience, 9, 555–604.
  22. MOTA T, GRONENBERG W, GIURFA M & SANDOZ J-C 2013. Chromatic Processing in the Anterior Optic Tubercle of the Honey Bee Brain. The Journal of Neuroscience, 33, 4.
  23. NATH T, MATHIS A, CHEN AC, PATEL A, BETHGE M & MATHIS MW 2019. Using DeepLabCut for 3D markerless pose estimation across species and behaviors. Nature Protocols, 14, 2152–2176.
  24. NILSSON SR, GOODWIN NL, CHOONG JJ, HWANG S, WRIGHT HR, NORVILLE ZC, TONG X, LIN D, BENTZLEY BS, ESHEL N, MCLAUGHLIN RJ & GOLDEN SA 2020. Simple Behavioral Analysis (SimBA) – an open source toolkit for computer classification of complex social behaviors in experimental animals. bioRxiv, 2020.04.19.049452.
  25. PACKARD MG & WHITE NM 1991. Dissociation of hippocampus and caudate nucleus memory systems by posttraining intracerebral injection of dopamine agonists. Behavioral Neuroscience, 105, 295–306.
  26. PEITSCH D, FIETZ A, HERTEL H, DE SOUZA J, VENTURA DF & MENZEL R 1992. The spectral input systems of hymenopteran insects and their receptor-based colour vision. Journal of Comparative Physiology A, 170, 23–40.
  27. POLLARD KA & BLUMSTEIN DT 2011. Social Group Size Predicts the Evolution of Individuality. Current Biology, 21, 413–417.
  28. R CORE TEAM 2020. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing.
  29. RIVEROS AJ, LEONARD AS, GRONENBERG W & PAPAJ DR 2020. Learning of bimodal versus unimodal signals in restrained bumble bees. Journal of Experimental Biology, 223.
  30. ROSENFELD SA & VAN HOESEN GW 1979. Face recognition in the rhesus monkey. Neuropsychologia, 17, 503–509.
  31. SCHRETTER CE, ASO Y, ROBIE AA, DREHER M, DOLAN M-J, CHEN N, ITO M, YANG T, PAREKH R, BRANSON KM & RUBIN GM 2020. Cell types and neuronal circuitry underlying female aggression in Drosophila. eLife, 9, e58942.
  32. SHEEHAN MJ, CHOO J & TIBBETTS EA 2017. Heritable variation in colour patterns mediating individual recognition. Royal Society Open Science, 4, 161008.
  33. SHEEHAN MJ, LEE V, CORBETT-DETIG R, BI K, BEYNON RJ, HURST JL & NACHMAN MW 2016. Selection on Coding and Regulatory Variation Maintains Individuality in Major Urinary Protein Scent Marks in Wild Mice. PLOS Genetics, 12, e1005891.
  34. SHEEHAN MJ & NACHMAN MW 2014. Morphological and population genomic evidence that human faces have evolved to signal individual identity. Nature Communications, 5, 4800.
  35. SHEEHAN MJ & TIBBETTS EA 2009. Evolution of identity signals: Frequency-dependent benefits of distinctive phenotypes used for individual recognition. Evolution, 63, 3106–3113.
  36. SHEEHAN MJ & TIBBETTS EA 2011. Specialized Face Learning Is Associated with Individual Recognition in Paper Wasps. Science, 334, 1272–1275.
  37. STEIGER S, FRANZ R, EGGERT A-K & MÜLLER JK 2008. The Coolidge effect, individual recognition and selection for distinctive cuticular signatures in a burying beetle. Proceedings of the Royal Society B: Biological Sciences, 275, 1831–1838.
  38. TANAKA JW & FARAH MJ 1993. Parts and wholes in face recognition. The Quarterly Journal of Experimental Psychology Section A, 46, 225–245.
  39. TIBBETTS EA 2002. Visual signals of individual identity in the wasp Polistes fuscatus. Proceedings of the Royal Society of London. Series B: Biological Sciences, 269, 1423–1428.
  40. TIBBETTS EA & DALE J 2007. Individual recognition: it is good to be different. Trends in Ecology & Evolution, 22, 529–537.
  41. TIBBETTS EA, DESJARDINS E, KOU N & WELLMAN L 2019. Social isolation prevents the development of individual face recognition in paper wasps. Animal Behaviour, 152, 71–77.
  42. TIBBETTS EA, PARDO-SANCHEZ J, RAMIREZ-MATIAS J & AVARGUÈS-WEBER A 2021. Individual recognition is associated with holistic face processing in Polistes paper wasps in a species-specific way. Proceedings of the Royal Society B: Biological Sciences, 288, 20203010.
  43. TSAO DY & LIVINGSTONE MS 2008. Mechanisms of Face Perception. Annual Review of Neuroscience, 31, 411–437.
  44. TUMULTY JP, MILLER SE, VAN BELLEGHEM SM, WELLER HI, JERNIGAN CM, VINCENT S, STAUDENRAUS RJ, LEGAN AW, POLNASZEK TJ, UY FMK, WALTON A & SHEEHAN MJ 2021. Evidence for a selective link between cooperation and individual recognition. bioRxiv, 2021.09.07.459327.
  45. TUMULTY JP & SHEEHAN MJ 2020. What Drives Diversity in Social Recognition Mechanisms? Frontiers in Ecology and Evolution, 7.
  46. VOROBYEV M, BRANDT R, PEITSCH D, LAUGHLIN SB & MENZEL R 2001. Colour thresholds and receptor noise: behaviour and physiology compared. Vision Research, 41, 639–653.
  47. XIA S-Z, FENG C-H & GUO A-K 1998. Temporary Amnesia Induced by Cold Anesthesia and Hypoxia in Drosophila. Physiology & Behavior, 65, 617–623.
