Proceedings of the National Academy of Sciences of the United States of America
2021 Apr 2;118(14):e2024798118. doi: 10.1073/pnas.2024798118

The cospecification of the shape and material properties of light permeable materials

Phillip J Marlow a,1, Barton L Anderson a,1
PMCID: PMC8040810  PMID: 33811143

Significance

One of the longest-standing problems in vision science is understanding how images carry information about the shape and material of surfaces. Previous work has treated shape and material perception as computationally ill posed: It is thought that either material needs to be known to infer shape or shape needs to be known to infer material. Here, we offer an explanation of how the human visual system simultaneously recovers 3D shape and material. We show that there are intersecting physical constraints caused by surface curvature on different “kinds” of image structure—subsurface scattering, specular reflections, and self-occluding contours. The intersecting constraints cause specific patterns of covariation that are characteristic of translucent materials and are also potent cues to 3D shape.

Keywords: surface shading, subsurface scattering, translucency, 3D shape perception, material perception

Abstract

The problem of extracting the three-dimensional (3D) shape and material properties of surfaces from images is considered to be inherently ill posed. It is thought that a priori knowledge of either 3D shape is needed to infer material properties or that knowledge of material properties is needed to derive 3D shape. Here, we show that there is information in images that cospecifies both the material composition and 3D shape of light permeable (translucent) materials. Specifically, we show that the intensity gradients generated by subsurface scattering, the shape of self-occluding contours, and the distribution of specular reflections covary in systematic ways that are diagnostic of both the surface’s 3D shape and its material properties. These sources of image covariation emerge from being causally linked to a common environmental source: 3D surface curvature. We show that these sources of covariation take the form of “photogeometric constraints,” which link variations in intensity (photometric constraints) to the sign and direction of 3D surface curvature (geometric constraints). We experimentally demonstrate that this covariation generates emergent cues that the visual system exploits to derive the 3D shape and material properties of translucent surfaces and demonstrate the potency of these cues by constructing counterfeit images that evoke vivid percepts of 3D shape and translucency. The concepts of covariation and cospecification articulated herein suggest a principled conceptual path forward for identifying emergent cues that can be used to solve problems in vision that have historically been assumed to be ill posed.


The surfaces that fill the environment project images that carry information about their intrinsic three-dimensional (3D) shape and material properties. However, image structure also depends on extrinsic sources of image variability: illumination, viewing perspective, and the focal properties of the eye or camera. Our visual experience of the world suggests that our perception of a surface’s intrinsic 3D shape and material properties is relatively stable to variations in these extrinsic sources of image variability (1–10). This suggests that the visual system somehow manages to disentangle intrinsic and extrinsic sources of image structure. One of the fundamental goals of midlevel vision is understanding how the visual system accomplishes this feat.

From a computational perspective, the problem of extracting the intrinsic shape and material properties of surfaces and substances seems inherently ill posed. The amount of light projected to a particular vantage point depends on the 3D shape of the surface, its material composition, and the distribution of light sources in a scene. Most computational work has only made progress by assuming knowledge about one or more of these variables in order to derive information about the other(s) (11–14). For example, shape-from-shading models require knowing both the direction of illumination and the reflectance properties of a surface (e.g., that it is matte and uniformly pigmented) to compute 3D surface geometry from patterns of shading (11–14). Yet it is clear that the human visual system has no access to any of these scene variables; our visual systems extract 3D shape and material properties without any ground truth information about states of the world. This suggests that there must be information contained in images that cospecifies both the 3D shape and material properties of surfaces.

What form does this information take? Although this question is currently unsolved, recent work has suggested that the answer might lie in particular patterns of covariation that arise between different forms of image structure (15, 16). For example, recent work has shown that the intensity of surface shading covaries with the local orientations generated along smooth, self-occluding bounding contours (15). The particular form of this covariation carries information about a surface’s material properties and 3D shape. This covariation arises because the geometry of self-occluding contours and local shading intensity are causally linked to the same world property: local 3D surface orientation. This shared causal dependence links “photometric” variations (here, shading intensity) to variations in a “geometric” property (here, local contour orientation). This “photogeometric constraint” is remarkably stable to variations in viewpoint and illumination direction and therefore can serve as a robust cue that cospecifies both the shape and material properties of surfaces. Indeed, psychophysical experiments showed that the human visual system exploits this cue: when this pattern of covariation is present, it evokes a vivid perception of surface shading of a homogeneously colored, matte, 3D surface; when it is absent, the same intensity gradients fail to be perceived as shading, which dramatically alters the perception of 3D shape and the surface’s material composition (15, 16).

To date, the photogeometric constraints identified for extracting shape and material properties have been restricted to those generated by opaque surfaces. However, not all surfaces are opaque; many natural materials are translucent (i.e., light permeable) to varying degrees. Our understanding of how the shape or material properties of translucent materials are recovered remains in its infancy. One reason for our lack of understanding is that it remains unclear how patterns of “shading” generated by translucent materials can provide information about the shape of their surfaces. The light that penetrates a translucent material is internally scattered (“subsurface scattering”) and can re-emerge at surface locations remote from where light penetrated the surface. The dissociation between where light strikes and exits a surface means that there is no simple relationship between local 3D surface orientation and the distribution of intensities that emerge from a translucent material (6–8, 17–21). Indeed, it is currently unknown whether there are any links between the intensities generated by subsurface scattering and the underlying surface geometry or, if such links exist, what form they take.

Here, we present a series of experiments and simulations that provide insights into the photogeometric constraints that characterize translucent materials and how they are exploited by the human visual system. First, we identify a photogeometric constraint that links the gradients generated by patterns of subsurface scattering to the underlying 3D surface geometry. We show that the photogeometric constraints generated by patterns of subsurface scattering can be clearly distinguished from those generated by the diffuse shading of opaque surfaces: Gradients of subsurface scattering are linked to the sign and direction of 3D surface curvature, whereas shading gradients generated by opaque surfaces are linked to variations in local 3D surface orientation. We show that the photogeometric constraints exhibited by translucent materials are robust to variations in both illumination and viewing direction and hence could theoretically provide rich information about both 3D surface shape and material. Psychophysical data support this hypothesis. Second, we show that the perceived 3D shape and/or material properties of translucent materials can be dramatically enhanced when the gradients generated by subsurface scattering are combined with either specular reflections or self-occluding contours. We provide psychophysical evidence that demonstrates that these transformations arise from photogeometric constraints that link the gradients generated by subsurface scattering to the geometry of specular reflections and self-occluding contours. Third, we show that the particular forms of covariation exhibited between subsurface scattering, specular reflections, and self-occluding contours are specific to translucent materials, which provides diagnostic information that cospecifies both the 3D shape and subsurface scattering properties of translucent materials.
When taken in conjunction with our previous results, the results presented herein suggest a principled path forward in understanding how the visual system manages to solve problems that appear to be computationally intractable.

Results

Surface Curvature Constrains Subsurface Scattering.

Our first goal was to determine if there are photogeometric constraints that link 3D surface geometry to the intensity gradients generated by subsurface scattering. The amount of light returned from a point along the surface to a given viewing position is the sum of all of the light that has traversed inside of the translucent substance and exited from that point along the viewing axis. Given this physical complexity, it is not obvious that any photogeometric constraints exist at all, especially when a translucent substance is backlit. The amount of light that enters from behind a translucent surface and re-emerges toward the viewer depends on the thickness of the surface, which is unknown (and in principle unknowable) to the visual system. We therefore restricted our analysis to contexts where image structure only depends on the 3D shape of the visible portions of a translucent substance (i.e., the “front” of the surface). We accomplished this by restricting the primary direction of illumination to the front hemifield and using a sufficiently thick volume of material to ensure that all light entering from behind was absorbed.

Our simulations reveal that different 3D shape properties constrain subsurface scattering and surface shading. It is well known that the intensity of diffuse shading is primarily linked to local 3D surface orientation relative to the light source (11). Nearly all computational work on reconstructing 3D shape from patterns of surface shading exploits this constraint (11–14). Our simulations reveal that there is an analogous constraint linking subsurface scattering and 3D shape: The luminance extrema generated by subsurface scattering (i.e., the locally brightest and darkest image regions) move progressively closer to the geometric extrema (convexities and concavities) as the depth of subsurface scattering increases. Thus, whereas patterns of surface shading encode local 3D surface orientation relative to (an unknown) illumination direction, patterns of subsurface scattering encode the distribution of surface convexities and concavities.

Fig. 1A shows an example surface used in our simulations: a smooth bumpy plane rendered in natural illumination with either subsurface scattering or surface shading. The red lines indicate where the primary direction of curvature (i.e., the direction of most rapid curvature) changes sign. For surface shading, the intensity maxima and minima occur at the sides of the convex and concave regions: shading maxima occur where the outward facing surface normal aligns with the primary illumination direction, and shading minima occur at the opposing side. When this surface is rendered with subsurface scattering, the distribution of intensity extrema depends less on local 3D surface orientation and more on surface curvature; subsurface scattering shifts the luminance extrema toward the center of convexities and concavities.

Fig. 1.

Different aspects of 3D shape constrain surface shading and subsurface scattering. Patterns of surface shading are linked to local 3D surface orientation, whereas patterns of subsurface scattering are linked to surface curvature. (A) A bumpy surface rendered with either surface shading or subsurface scattering. The red lines depict the boundary of convex and concave regions of the 3D shape (i.e., where “shape index” changes sign). Intensity maxima and minima of subsurface scattering are shifted toward the center of convexities and concavities (respectively) compared to patterns of surface shading. (B) A simulation testing the relative stability of surface shading and subsurface scattering across different illuminations. The qualitative shape of the intensity profiles (loci of intensity extrema) is more stable for subsurface scattering (for both depths of scattering tested) than surface shading.

The stability of this photogeometric constraint was assessed by varying the primary and secondary illumination directions as well as parameters that control the permeability and absorption of subsurface scattering. Our simulations focused on 3D surface geometry that was smoothly curved in the horizontal direction and fixed in the vertical direction so that the results could be shown as two-dimensional (2D) cross-sections of 3D shape and image intensity. Fig. 1B shows the results for three illumination directions and three different materials: an opaque surface that generates surface shading (depicted in black) and two translucent materials with different depths of subsurface light transport (red that is relatively shallow and blue that is deeper/more translucent). Despite the complexity of the optical events underlying subsurface scattering, the distribution of intensities generated by translucent surfaces is remarkably stable to changes in illumination direction, especially when compared to surface shading. As the depth of subsurface scattering increases, the intensity profile grows smoother (lowering luminance contrast), and intensity maxima and minima are shifted progressively closer to the peaks of convexities and concavities (respectively). The five convexities in the rendered surface generate five intensity maxima, and each concavity generates a local intensity minimum irrespective of illumination direction. In contradistinction, the number and position of luminance extrema generated by surface shading varies considerably with illumination direction. (We refer readers that are curious about the physical causes of the intensity–convexity covariation to a supplemental simulation; see SI Appendix, Supplementary Information Text and Movies S1–S3).
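The contrast between illumination-dependent shading extrema and illumination-invariant curvature extrema can be illustrated with a toy one-dimensional model. The sketch below is our simplification (a Lambertian sinusoidal relief computed in numpy), not the paper's physically based renderings; it only demonstrates why a curvature-linked signal is the more stable shape cue.

```python
import numpy as np

# Toy 1D model (our simplification, not the paper's renderer): Lambertian
# shading extrema on a sinusoidal relief move with the light direction,
# whereas the curvature extremum stays fixed at the bump center.
x = np.linspace(0.0, 2.0 * np.pi, 10001)
h = np.sin(x)        # height profile; convexity centered at x = pi/2
dh = np.cos(x)       # slope h'(x)
d2h = -np.sin(x)     # second derivative; most convex (most negative) at pi/2

# Unit normals of the curve y = h(x): n = (-h', 1) / ||(-h', 1)||
norm = np.sqrt(dh ** 2 + 1.0)
nx, ny = -dh / norm, 1.0 / norm

def shading_peak(light_angle_deg):
    """x-location of the diffuse-shading maximum, I = max(0, n . l).
    0 deg is overhead illumination; positive angles tilt the light toward +x."""
    a = np.radians(light_angle_deg)
    lx, ly = np.sin(a), np.cos(a)
    shading = np.clip(nx * lx + ny * ly, 0.0, None)
    return x[np.argmax(shading)]

convexity_peak = x[np.argmin(d2h)]   # illumination-independent, always pi/2

for angle in (0, 20, 45):
    print(f"light {angle:2d} deg: shading max at x = {shading_peak(angle):.3f}; "
          f"convexity max at x = {convexity_peak:.3f}")
```

In this toy model the shading maximum slides from the bump center toward its flank as the light tilts, while the curvature extremum never moves, mirroring the asymmetry seen in Fig. 1B.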

The Cospecification of 3D Shape and Material.

The preceding demonstrates that translucent materials exhibit a specific photogeometric constraint that could theoretically be used to recover 3D shape and material. This constraint can only be exploited if the material properties of the surface are somehow known, or the 3D shape is known. However, the visual system has no access to the ground truth states of either 3D shape or a surface’s material properties in these 2D images; the perception of both 3D shape and material must therefore somehow be derived from information that cospecifies both.

One of the key insights advanced in this paper is that patterns of covariation between subsurface scattering, self-occluding contours, and specular reflections cospecify both the shape and material properties of translucent substances. Consider the series of images depicted in Fig. 2. Each column depicts a different viewpoint of the same bumpy plane; the same natural light map was used to generate all images. The orientation of the viewing axis is parallel to the axis of surface relief in the far-left column, and the viewing direction becomes increasingly oblique in 15° increments in the columns to the right. The first row depicts the gradients generated by subsurface scattering, the second row depicts the pattern of specular reflections, and the third row depicts the two components presented together. Note that the material properties and 3D shape elicited by patterns of subsurface scattering are weak when presented in isolation; these images appear significantly defocused and fail to evoke any vivid percept of 3D shape until self-occluding contours are visible. The patterns of specular reflections presented alone also evoke little or no perception of 3D shape or material (i.e., it is even unclear that the highlights are actually specular reflections). Remarkably, however, the perception of both material and 3D shape can be vividly experienced when the specular reflections and gradients of subsurface scattering are combined (bottom row).

Fig. 2.

Clear percepts of 3D shape and translucency require multiple forms of optical structure to be present. A translucent bumpy plane rendered for different viewing directions. The top row depicts the shading generated by subsurface scattering, the middle row depicts specular reflections, and the bottom row depicts the combined image. Note that the perception of 3D shape is weak in the top two rows in the absence of self-occlusions, or approximations to self-occlusions (13), which increase from left to right; however, a clear perception of a 3D surface is elicited when the two components are combined for all images in the bottom row. The two graphs plot the proportion of times each surface appeared more vividly 3D or translucent averaged across all paired comparisons of the 12 images. Error bars are SEMs (n = 22).

The dramatic perceptual transformation that occurs in Fig. 2 implies that there must be some information in the combined images that cospecifies the surface’s 3D shape and material properties. In what follows, we describe patterns of covariation that may be responsible for the perceptual interactions between subsurface scattering, self-occluding contours, and specular reflections. The general insight is that the structure of specular reflections, the shape of self-occluding contours, and the intensity gradients generated by subsurface scattering are all causally linked to a common environmental property: surface curvature. This common causal link should therefore generate systematic forms of covariation between these different kinds of image structure.

We begin by considering the covariation generated along the self-occluding contours of translucent substances. We have previously shown that the intensity of surface shading covaries with the local 2D orientation of self-occluding contours (15, 16). Here, we show that self-occluding contours and subsurface scattering are both causally linked to the sign of local surface curvature. Convex self-occluding contours project from locally convex surface patches, whereas concave self-occluding contours project from saddle-shaped surfaces, which have opposing directions of curvature (22). Concave contour segments therefore arise from a mixture of convex and concave curvature and will be generically darker than convex contour segments, generating a covariation of intensity with the sign of curvature along the self-occluding contours of translucent materials.

Fig. 3 illustrates the different forms of covariation exhibited by opaque and translucent surfaces. The figure depicts opaque and translucent variants of the same surface geometry that receives illumination from the right. The wavy bounding contour was generated by a surface that curves smoothly out of sight (i.e., a smooth self-occluding contour). The graphs plot intensity as a function of either the orientation of the contour or proximity to convex versus concave portions of the contour. We computed the orientation and curvature of the self-occluding contour at 46 equispaced positions along the contour. The proximity of each sample to a convexity is the ratio of two distances: the numerator is the distance of a point along the contour from the center of the (nearest) concave segment, and the denominator is the center-to-center distance between convex and concave segments. The intensity of either surface shading or subsurface scattering was extracted at corresponding locations 10 pixels below the contour (in the direction of the inward pointing contour normal). The four graphs on the right side of Fig. 3 illustrate the different patterns of covariation exhibited by opaque and translucent surfaces. For the opaque surface, the covariation between the intensity of surface shading and contour orientation (top left) exhibits a simpler dependence than its proximity to convex versus concave regions along the contour (bottom left), whereas the pattern is reversed for the translucent surface (right-side graphs). These different patterns of covariation could theoretically be used to distinguish translucent and opaque materials.
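The two geometric predictors (local contour orientation and segment convexity) can be sketched in a few lines of numpy. This is our reconstruction of the kind of measurement described above, applied to a hypothetical wavy contour standing in for the rendered surface's bounding contour, not the authors' analysis code.

```python
import numpy as np

# Hypothetical wavy contour sampled at 46 points, standing in for the
# self-occluding contour analyzed in Fig. 3.
t = np.linspace(0.0, 2.0 * np.pi, 46)
contour = np.stack([t, 0.3 * np.sin(3.0 * t)], axis=1)

d1 = np.gradient(contour, axis=0)    # tangent estimate (finite differences)
d2 = np.gradient(d1, axis=0)         # second-derivative estimate

# Local 2D contour orientation, folded into [0, 180) degrees
orientation = np.degrees(np.arctan2(d1[:, 1], d1[:, 0])) % 180.0

# Signed curvature of a planar curve: k = (x'y'' - y'x'') / (x'^2 + y'^2)^1.5.
# Its sign splits the contour into convex and concave segments (which sign
# counts as convex depends on which side of the contour the surface lies).
curvature = (d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]) / (
    d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5

convex_fraction = np.mean(curvature > 0)
```

Correlating image intensity sampled just inside the contour against `orientation`, and separately against proximity to the convex segments identified by `curvature`, would yield the two predictors plotted in Fig. 3.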

Fig. 3.

Surface shading and subsurface scattering covary with self-occluding contours in characteristically different ways. The opaque and translucent variants of the bumpy surface have identical 3D shape, viewing direction, specular reflections, and illumination (primarily from the right). The graphs plot the intensity of either surface shading or subsurface scattering as a function of either the orientation of the contour or proximity to convex versus concave regions of the contour. Note that subsurface scattering and surface shading exhibit very different regularities with the shape of the contour. Subsurface scattering covaries more with convexity than with orientation, whereas the relative strength of the two forms of covariation is reversed for surface shading. Contour proximity is the ratio of two distances: the numerator is the distance of a point along the contour from the center of the (nearest) concave segment, and the denominator is the center-to-center distance between convex and concave segments. Contour orientation is depicted by the short red, green, or blue lines drawn perpendicular to local contour orientation. Red, green, and blue correspond to concave, convex, and inflection points in contour curvature.

The gradients generated by subsurface scattering will also systematically covary with the orientation structure generated by patterns of specular reflections. Previous work has shown that specular reflections are elongated along lines of minimum surface curvature (9, 10). [Specifically, specular reflections tend to run parallel to the minimum second derivative of the depth map of a surface (23), which is closely related to surface curvature.] The shared link to surface curvature causes specular reflections to run parallel to the gradients generated by subsurface scattering (Fig. 4). The left image shows specular reflections from a smooth bumpy plane that is primarily illuminated from an angle 45° above the viewing direction. Note that the direction of minimum surface curvature—depicted by the orientations of the short red lines—predicts the local orientations of specular reflections. The second image from the left depicts the pattern of subsurface scattering for the same 3D shape and illumination that generated the specular reflections. The blue lines depict the distribution of local orientations or orientation field (23) of subsurface scattering—that is, the set of orientations orthogonal to the direction of local intensity gradients. Subsurface scattering is also quite well aligned with surface curvature, particularly after smoothing the orientation field of the direction of minimum surface curvature (see Methods). This common link to the same 3D shape property—surface curvature—causes subsurface scattering and specular reflections to generate similar local orientation fields (third column of Fig. 4). Although the distribution of specular reflections varies significantly as a function of illumination direction (e.g., Fig. 5; see also refs. 10 and 23), their local orientations remain linked to the directions of minimum surface curvature, which remain congruent with the local orientations generated by subsurface scattering.
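An orientation field of the kind just described can be computed directly from an intensity image. The sketch below assumes only the definition given above (the orientation orthogonal to the local intensity gradient) together with a simple wrap-aware congruence measure; the ramp images are synthetic sanity checks, not the rendered stimuli.

```python
import numpy as np

def orientation_field(img):
    """Per-pixel orientation in degrees, folded into [0, 180), orthogonal
    to the local intensity gradient (i.e., the isophote orientation)."""
    gy, gx = np.gradient(img.astype(float))
    grad_dir = np.degrees(np.arctan2(gy, gx))   # direction of steepest ascent
    return (grad_dir + 90.0) % 180.0

def orientation_congruence(field_a, field_b):
    """Mean angular difference between two orientation fields, respecting
    the 180-degree wrap (0 = perfectly congruent, 90 = orthogonal)."""
    diff = np.abs(field_a - field_b) % 180.0
    return np.minimum(diff, 180.0 - diff).mean()

# Synthetic check: a vertical ramp has vertical gradients, so its isophotes
# are horizontal (~0 deg); the transposed ramp gives vertical isophotes (90).
ramp = np.outer(np.arange(64.0), np.ones(64))
field_h = orientation_field(ramp)
field_v = orientation_field(ramp.T)
```

Applying `orientation_field` to the specular and subsurface-scattering images, and `orientation_congruence` to the resulting fields, would quantify the alignment summarized in the histograms of Fig. 4.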

Fig. 4.

A pattern of covariation between specular reflections and subsurface scattering. The leftmost image depicts the specular reflections generated by a smooth bumpy surface illuminated from 45° above the surface. The second image from the left depicts the pattern of subsurface scattering for the same 3D shape and illumination. Neither image elicits a compelling percept of a surface with clear 3D shape and material properties; however, apparent 3D shape, gloss, and translucency are greatly enhanced when specular reflections are superimposed on subsurface scattering (second image from the right). The benefit of superimposing the two sources of image structure is lost when the orientation of the specular reflections is perturbed (right image). Orientation fields are shown below these images. The blue lines depict local orientations of intensity gradients generated by subsurface scattering, which tend to be aligned with the local orientations of the direction of minimum surface curvature (shown in red). Specular reflections are also elongated in this direction. The histograms along the bottom row quantify the amount of orientation congruence between specular reflections, subsurface scattering, and direction of minimum surface curvature.

Fig. 5.

The covariation exhibited by specular reflections and subsurface scattering greatly enhances the perception of 3D shape. A smooth bumpy surface rendered with either specular reflections, subsurface scattering, or both. Illumination direction differs between rows (the images darken as illumination direction varies from frontal to grazing surface relief). The graphs beneath each image are cross-sections of the perceived 3D shape of 20 tested locations shown by the red dots that overlay the surfaces. Error bars are SEMs of four observers.

The preceding demonstrates that there are photogeometric constraints that link subsurface scattering, smooth self-occluding contours, and the geometry of specular reflections. The experiments presented below were designed to assess whether the human visual system exploits these constraints to derive the 3D shape and material properties of translucent surfaces.

The Enhancement of 3D Shape Induced by the Covariation of Specular Reflections and Subsurface Scattering.

Experiments 1 and 2 assessed the vividness of perceived 3D shape and the translucency of the stimuli depicted in Fig. 2. The stimuli were smooth bumpy planes illuminated in a natural light map with the dominant light source directed obliquely 45° from above. The viewing direction varied in 15° increments from a direction parallel to the axis of surface relief, where no self-occluding contours were present, to an oblique axis where self-occluding contours became visible. The surface was rendered with subsurface scattering, specular reflections, or both. The surfaces shown in the right column and along the bottom row contain subsurface scattering and either specular reflections or self-occluding contours; hence, these conditions contain (at least one of) the patterns of covariation that we predict are used to perceive 3D shape. The other images contain only subsurface scattering or specular reflections, which exhibit no patterns of covariation, and hence should make the 3D shape and material properties harder to perceive. A paired comparison task was performed by 22 naïve observers where each pair of the 12 images in Fig. 2 was presented, and observers judged the image that elicited a more compelling percept of 3D shape (experiment 1) or a more vivid percept of translucency (experiment 2).

The results of experiment 1 demonstrate that observers experienced the most vivid sense of 3D in images where specular reflections and self-occluding contours were combined with gradients of subsurface scattering (F(1,21) = 720; P < 0.01). The perception of shape increased as the viewing direction was rotated and self-occluding contours emerged (F(1,21) = 388; P < 0.01). Note that self-occluding contours emerge from the brightest regions of subsurface scattering, consistent with a convexity–intensity covariation. Self-occluding contours improved the clarity of perceived 3D shape more for surfaces defined by subsurface scattering than those that also had specular reflections (F(1,21) = 45; P < 0.01). The results of experiment 2 showed that all of the surfaces rendered with specular reflections appeared significantly more translucent than when subsurface scattering was presented in isolation (F(1,21) = 8.8; P < 0.01), which was independent of viewing direction and hence the presence or absence of self-occluding contours (F(1,21) = 0.2; P = 0.66). The presence of self-occluding contours therefore did not appear to enhance the vividness of perceived translucency in these experiments; we will return to possible reasons for this in the general discussion below.
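The paired-comparison scoring underlying the graphs in Fig. 2 can be sketched as follows. This is our reconstruction of the general method (proportion of comparisons each stimulus wins); the trial data are hypothetical, not the reported results.

```python
import numpy as np

def preference_proportions(choices, n_stimuli):
    """Proportion of comparisons each stimulus wins.
    choices: iterable of (winner, loser) stimulus-index pairs, one per trial."""
    wins = np.zeros(n_stimuli)
    appearances = np.zeros(n_stimuli)
    for winner, loser in choices:
        wins[winner] += 1
        appearances[winner] += 1
        appearances[loser] += 1
    return wins / appearances

# Hypothetical round-robin among 3 stimuli: stimulus 2 always preferred,
# stimulus 0 never preferred
trials = [(2, 0), (2, 1), (1, 0)]
props = preference_proportions(trials, 3)
```

For the actual experiments, the 12 stimuli of Fig. 2 would each appear in 11 pairs per observer, and the resulting proportions would be averaged across the 22 observers.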

Our informal observations suggest that the depth of perceived 3D shape is strongly enhanced when subsurface scattering and specular reflections are combined. Experiment 3 tested this hypothesis using the smooth bumpy plane shown in the left column of Fig. 2 that elicited the weakest perception of 3D shape when specular reflections were absent. The illumination direction was varied instead of the viewing direction and the perception of 3D shape was measured using images containing subsurface scattering (middle column of Fig. 5), specular reflections (left column), or both (right column). Perceived shape was measured using a two-step procedure. The first step was an ordinal judgment of perceived relief where four observers indicated which of two locations on the surface appeared to have higher relief. A pair of red dots were superimposed on a one-dimensional (1D) slice of the surface, and observers indicated which dot appeared higher on the surface (i.e., closer to the observer). This was performed for every pair of 20 dot locations distributed along the 1D slice shown in Fig. 5. The number of times a given position was perceived as having a higher relief is plotted in the graphs presented beneath each surface, which gives an ordinal scale of perceived depth. In the second step, each observer viewed the graphs of their ordinal data and adjusted the amplitude (vertical scale) to generate a quantitative estimate of perceived surface relief.

The results of this experiment (Fig. 5) are consistent with observers’ informal reports of perceived shape: the specular component alone elicits no or a very weak perception of 3D surface shape, the subsurface scattering component produces a weak perception of surface relief and 3D shape, but the combined images generate a vivid perception of surface relief and 3D shape. Linear regression was used to assess whether the combined images elicited percepts of 3D shape that were statistically different from that of the patterns of subsurface scattering and specular reflections presented individually. The depth settings of the four observers were averaged, and the perceived depth of the combined images was regressed against that of the specular reflections and subsurface scattering. The combined images elicited twice as much depth as subsurface scattering presented alone (R2 = 0.9163; F = 635; P < 0.001; 95% CIs of the slope of the regression line were 2.2 to 2.6). The perceived depth of the specular reflections was uncorrelated with that of the combined images (R2 < 0.01; F = 0.0076; P = 0.9309). This suggests that the particular pattern of covariation that arises between specular reflections and subsurface scattering provides a potent cue to 3D shape. Superimposing specular reflections that have orientations that are incongruent with that of subsurface scattering generates weaker percepts of 3D surface shape than the generic case where both are aligned (Fig. 4 and SI Appendix, Supplementary Information Text).
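The regression step can be sketched with ordinary least squares. The depth settings below are hypothetical and serve only to illustrate the form of the analysis (slope and R² of combined-image depth regressed on a single component), not the reported values.

```python
import numpy as np

def regress(x, y):
    """Ordinary least squares fit y = slope * x + intercept; returns (slope, R^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope, intercept = np.polyfit(x, y, 1)
    pred = slope * x + intercept
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return slope, 1.0 - ss_res / ss_tot

# Hypothetical settings: combined images seen with ~2.4x the depth of the
# subsurface-scattering component presented alone
sss_depth = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
combined_depth = 2.4 * sss_depth
slope, r2 = regress(sss_depth, combined_depth)
```

In the experiment, the same regression run against the specular-only depth settings yielded essentially no correlation, which is what isolates the covariation between the two components as the effective cue.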

Assessing the Photogeometric Constraints by Synthesizing “Counterfeit” Translucent Substances.

The preceding simulations and experiments used photorealistic images rendered using state-of-the-art computer graphics software (see Methods). The simulations provide an existence proof of photogeometric constraints that link surface geometry, subsurface scattering, self-occluding contours, and specular reflections. The psychophysical experiments provide correlational evidence that the visual system relies on patterns of covariation generated between these different sources of image structure to recover 3D shape and identify translucent materials. The last set of experiments was designed as a prospective test of whether the visual system exploits these constraints to identify translucent materials. If these constraints are in fact used by the visual system, it should be possible to construct compelling counterfeit percepts of translucency by painting opaque surfaces to mimic these constraints.

Fig. 6 illustrates how the artificially translucent stimuli were constructed. We began by calculating the distribution of convexities and concavities for three shapes (an angel figurine, a bumpy sphere, and the Stanford bunny). Specifically, we calculated the two principal curvatures of the surface (i.e., the maximum—k1—and minimum—k2—curvature) and then calculated a local “shape index” statistic (24),

s = (2/π) · atan2(k2 + k1, k2 − k1). [1]

This statistic has a value of 1 for perfectly convex surfaces and −1 for perfectly concave surfaces. Image intensities proportional to this statistic were painted onto the diffuse component of an opaque surface and combined with physically correct distributions of specular reflections. Our simulations described above (Fig. 1) demonstrate that the image gradients generated by real subsurface scattering grow increasingly smooth and have lower luminance contrast as the depth of subsurface scattering increases. Hence, we also smoothed the shape index texture to mimic different depths of subsurface light transport (see Methods).
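
Eq. 1 and the smoothing step can be sketched on a synthetic height field (an illustrative Python sketch using a small-slope approximation to the curvatures; the principal curvatures are ordered here so that the atan2 form stays within [−1, 1], and the crude neighbour-averaging blur is only a stand-in for the rendering pipeline described in Methods):

```python
import numpy as np

def shape_index(height):
    """Shape index of a height field (small-slope approximation):
    +1 at perfect convexities, -1 at perfect concavities, 0 at saddles.
    Curvature sign is chosen so bumps toward the viewer are positive."""
    hy, hx = np.gradient(height)
    hxy, hxx = np.gradient(hx)
    hyy, _ = np.gradient(hy)
    H = -0.5 * (hxx + hyy)                 # mean curvature, positive for bumps
    K = hxx * hyy - hxy ** 2               # Gaussian curvature
    root = np.sqrt(np.maximum(H ** 2 - K, 0.0))
    k1, k2 = H + root, H - root            # principal curvatures, k1 >= k2
    # atan2 handles the umbilic case k1 == k2 gracefully (no 0/0 division).
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

def smooth(img, n=1):
    """Crude stand-in for deeper subsurface light transport: repeated
    4-neighbour averaging with periodic boundaries."""
    for _ in range(n):
        img = 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0)
                      + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return img

# A single Gaussian bump: the shape index at its peak should be +1 (convex).
x = np.linspace(-2, 2, 65)
bump = np.exp(-(x[None, :] ** 2 + x[:, None] ** 2))
s = shape_index(bump)
print(round(float(s[32, 32]), 2))  # 1.0
# Map s in [-1, 1] to intensities in [0, 1] (brighter at convexities), then
# blur to mimic a greater depth of subsurface light transport.
texture = smooth(0.5 * (s + 1.0), n=4)
```

Heavier smoothing of `texture` corresponds to the deeper-scattering counterfeits tested in Fig. 6.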

Fig. 6.

A compelling percept of translucency arises when surfaces are painted with smooth image gradients that are brighter for convexities than concavities. Surface geometry was texture mapped with image intensities determined by the local shape statistic depicted in the top left of the figure. Specular reflections were superimposed on the texture, which was smoothed to different extents to mimic translucent substances with different depths of subsurface light transport. Intermediate levels of smoothing elicit the most realistic percepts of translucency (left side graphs). Smoother image gradients mimic the appearance of increasingly translucent materials (right side graphs). Error bars are SEMs (n = 11).

Two experiments were used to formally assess the realism and strength of the percepts of translucency evoked by the artificial images. The first varied the amount that the shape index texture was smoothed. Observers viewed pairs of artificial images differing in smoothness and indicated which elicited the more realistic percept of translucency (a separate experiment was run for each of the three shapes). The results depicted in the left side graph of Fig. 6 indicate that intermediate levels of smoothing (which also have intermediate levels of rms contrast) elicit the most realistic percepts of translucency. The second experiment was a material matching task that used the three images that appeared most realistic. Observers used a cube rendered with physically correct subsurface scattering to match the amount of translucency that they experienced for our artificial stimuli. A second set of observers performed a calibration experiment that mapped the spatial scale of subsurface scattering within the cube onto a perceptually linear dimension of surface opacity/translucency (see Methods). Fig. 6 shows the results of these two experiments, which are consistent with observers’ informal reports: our artificial stimuli generate a compelling percept of translucency, and the strength of perceived translucency increases as a function of the smoothness of the intensity gradients. The results provide strong evidence that the visual system has learned that the intensity of subsurface scattering is linked to the distribution of convexities and concavities across 3D shapes and exploits this photogeometric constraint to identify what is and is not translucent.

Discussion

The results described herein provide insights into the information that the visual system exploits to extract the 3D shape and material properties of light permeable surfaces. One of the key insights of this work is that there are patterns of covariation that arise from different sources of image structure that cospecify the 3D shape and the material properties of surfaces. More specifically, our results show that there are diagnostic patterns of covariation that arise between the shape of self-occluding contours, the distribution of specular reflections, and the intensity gradients generated by subsurface scattering that provide information about the 3D shape and material properties of translucent surfaces. Our simulations revealed two aspects of 3D surface geometry that constrain the intensity variations generated by subsurface scattering: Subsurface scattering is generically brighter for convex surface regions than concave surface regions, and the orientations of the intensity gradients generated by subsurface scattering are aligned with the direction of minimum surface curvature. The patterns of covariation we observed arise because self-occluding contours and specular reflections depend on these same aspects of surface curvature. These shared dependencies cause the structure of specular reflections to parallel gradients of subsurface scattering and generate variations of light and dark where subsurface scattering terminates along the convex and concave segments of self-occluding contours (respectively).

We hypothesized that the human visual system exploits the patterns of covariation that arise between gradients of subsurface scattering, specular reflections, and self-occluding contours to identify a translucent material and derive its 3D shape. Our results revealed a large improvement in perceived 3D shape when gradients of subsurface scattering are combined with geometrically congruent patterns of specular reflections. We also showed that it was possible to induce illusory percepts of translucency by painting onto surfaces in a manner consistent with the geometric constraints of subsurface scattering identified in our physical simulation. These “counterfeit surfaces” exhibit the patterns of covariation that we hypothesized were used to extract the 3D shape and material properties of surfaces. Our results show that they are sufficient to elicit a compelling percept of translucency and evoke a vivid sense of 3D shape.

It should be noted that our paired comparison task did not reveal an effect of viewing angle (and hence self-occlusion) on translucency judgments for patterns of subsurface scattering presented with or without reflections. We believe that this effect is present but too subtle to be detected in our experiments for two reasons. First, when specular reflections and subsurface scattering are both present, they induce a strong sense of 3D shape and translucency for all surfaces, which in essence creates a ceiling effect for perceived translucency. Second, for the images of subsurface scattering presented alone, the main perceptual transformations caused by the emergence of self-occlusion are the perception of 3D shape and of optical focus, the latter of which we did not experimentally assess (see ref. 16). We believe a more nuanced method is needed to capture the comparatively subtler changes in perceived translucency experienced in images such as Fig. 2.

The patterns of covariation provide insight into the results of previous studies of the appearance of translucent surfaces (6, 18). There has been little work to date on how the 3D shape of translucent surfaces can be perceived; nearly all work addressing translucent surfaces has aimed to understand how and how well material properties are perceived (6–8, 18). A recurring theme in these studies (and Figs. 2, 4, and 5) is that patterns of subsurface scattering elicit stronger percepts of translucency when specular reflections are present (6, 18). This interaction suggested that there may be relationships between the intensity gradients generated by subsurface scattering and the structure of specular reflections that characterize translucent surfaces, but the form of this relationship remained unclear (18). The patterns of covariation shown in Fig. 4 make this relationship explicit by showing how subsurface scattering and specular reflections are interrelated due to shared dependencies on 3D surface curvatures.

The constraints that translucent surfaces exhibit are necessarily different from those that have historically been used to recover the 3D shape of opaque surfaces. There is a large and growing body of work (particularly in machine vision) dedicated to developing algorithms for recovering 3D surface shape from 2D patterns of surface shading (11–14). Nearly all of this work has been founded on the photogeometric constraint that links surface intensity to local 3D surface orientation (11). Several studies have shown that patterns of subsurface scattering exhibit gross violations of this photogeometric constraint (6, 17, 18), but it has remained unclear whether there are any consistent photogeometric constraints that link the intensity of subsurface scattering to 3D surface geometry [apart from those exhibited by corners (20)]. Indeed, it has previously been suggested that any analogous links were unlikely to exist for translucent surfaces (17). However, those analyses were restricted to convex 3D shapes that were devoid of concavities. The more complex shapes studied herein reveal that there is a photogeometric constraint that links the intensity of subsurface scattering to convex and concave surface regions. This result provides a bridge between image structure and surface structure that future computational work may use to recover the 3D shape of translucent substances.

One of the conceptually surprising outcomes of our simulations is the stability of subsurface scattering across illumination directions, especially relative to patterns of surface shading. To our knowledge, no work in machine vision has attempted to recover the shape of translucent materials; shape reconstruction for opaque surfaces has been studied intensively for decades and remains unsolved, particularly when the illumination direction is unknown. Although the forward optics of subsurface scattering is more complex than that of surface shading, our results show that it is more invariant to illumination, and hence, shape reconstruction may actually be easier for translucent than for opaque surfaces.

Our results bear on a debate about the validity of the “dark-is-deep” heuristic as a possible basis for 3D shape perception. It has been suggested that our capacity to perceive shape in diffusely illuminated scenes may be based on a dark-is-deep heuristic, which links shading intensity to either depth or convexity (25, 26). However, it has remained unclear how the visual system could learn this heuristic, given that diffuse illumination is rare and that the concavities of opaque surfaces exhibit small intensity maxima that are inconsistent with the heuristic (27). Our results show that translucent surfaces exhibit a relationship between intensity and surface curvature that is broadly similar to the shading patterns of opaque surfaces in (nongeneric) diffuse illumination, which suggests that correlations between depth and intensity (26) may instead be a product of our familiarity with translucent surfaces in natural light fields.

The photogeometric constraints identified herein were derived using light maps where the predominant direction of illumination was in the front hemifield. This is the same subset of illumination directions that have been the focus of research into the perception (or computation) of shape from shading. Opaque surfaces that are illuminated principally from behind are covered by dark attached shadows; the visible shading intensities arise from secondary light sources or reflections from the surrounding environment. Translucent surfaces that are backlit appear to glow where they are sufficiently thin, and relatively large amounts of light are transmitted toward the viewer. The pattern of transmitted light depends on the shape of hidden surfaces at the rear and hence is inherently difficult (or perhaps impossible) to link to the 3D shape of visible surfaces at the front. It is currently unclear whether any photogeometric constraints exist for rear-illuminated translucent surfaces that could provide information about 3D shape. Previous research suggests that translucent surfaces that are backlit appear more transmissive than when they are illuminated from the front (6, 7) but that observers are less accurate and precise at matching the material properties in these contexts (7). More work is needed to understand how the visual system derives the shape and material properties of surfaces in these contexts.

The pattern of light reflected or transmitted by a surface carries a conflated mixture of information about its 3D shape and material. The vast majority of work has treated shape perception and material perception as separate inference problems. However, there is a growing body of evidence that shape perception and material perception are inherently linked and derived using the same photogeometric constraints (15, 19, 28–31). We have previously shown that an image of a translucent material can be misperceived as opaque if its apparent 3D shape is manipulated in ways that satisfy the photogeometric constraints that opaque surfaces are known to obey (19). The work presented herein demonstrates that there are also photogeometric constraints for translucent materials, and these constraints likewise carry dual information about material and 3D shape. Although many have emphasized the impossibility of “inverse optics,” it seems clear that the visual system has internalized a sophisticated understanding of interdependencies between 3D shape, material, and image structure. These links between image structure and 3D surface structure promise to spur the development of shape recovery algorithms for translucent materials and offer an explanation for how the visual system recognizes that it is viewing a translucent substance.

Methods

The experiments were approved by the University of Sydney, and participants provided informed consent and were debriefed about the aims in adherence with the Declaration of Helsinki. Observers in experiments 1 and 2 viewed two of the bumpy planes presented side by side on each trial. Their task was to indicate which elicited a clearer perception of 3D surface shape (experiment 1) or which appeared more translucent (experiment 2). Every pair of the 12 surfaces shown in Fig. 2 was presented in a different random order for experiment 1. Every pair of the eight surfaces shown in the top and bottom rows of Fig. 2 was presented in experiment 2. The proportion of trials on which each image was selected as having clearer 3D shape or stronger translucency is plotted in the right side graphs of Fig. 2. Observers performed the experiments online because of the cessation of face-to-face testing in 2020 at the University of Sydney.

Observers in experiment 3 performed two tasks that assessed the ordinal and quantitative perceived shape of the nine surfaces shown in Fig. 5. On each trial, observers viewed one surface that had two small red dots superimposed along the test array. Observers indicated which dot appeared closer in depth for every pairwise combination of the 20 test locations, which were presented in a different random permutation for each of the four observers. There were nine such blocks of trials—one for each of the nine surfaces. The blocks were presented in a different random permutation for each observer. The second task presented a graph next to the surface that plotted the number of times that the observer had selected each test location as appearing closer. Observers were told that the graph depicted the 3D shape of the surface as viewed from the side; their task was to shrink or stretch the vertical scale of the graph to match the perceived depth of the concave valleys and the height of the convex bumps. They used the up and down arrow keys to adjust the magnitude of depth shown in the graphs and could also use the left and right arrows to adjust the smoothness of the graph. The amount of smoothing was clamped so that the ordinal depth relationships determined in the first step were always preserved. There were nine repeats for each surface, and the order of the 90 trials was a different random permutation for each observer.
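
The clamped smoothing used in the adjustment task can be sketched as follows (an illustrative Python sketch; this particular stopping rule is one way to guarantee that the ordinal depth relationships are preserved, not necessarily the authors' implementation):

```python
import numpy as np

def clamped_smooth(profile, max_passes=50):
    """Smooth a 1D depth profile with 1-2-1 neighbour averaging, stopping
    before any pass would alter the ordinal (rank) relationships of the
    original settings."""
    order = np.argsort(profile, kind="stable")
    out = profile.astype(float)
    for _ in range(max_passes):
        cand = out.copy()
        cand[1:-1] = (out[:-2] + 2.0 * out[1:-1] + out[2:]) / 4.0
        if not np.array_equal(np.argsort(cand, kind="stable"), order):
            break  # this pass would break the ordinal depth relationships
        out = cand
    return out

# A monotonically increasing profile can be smoothed heavily without
# disturbing its ranks; the ordinal structure survives by construction.
profile = np.array([0.0, 1.0, 3.0, 6.0, 10.0])
sm = clamped_smooth(profile)
print(np.array_equal(np.argsort(sm), np.argsort(profile)))  # True
```

A non-monotonic profile would be clamped much earlier, since averaging quickly merges neighbouring peaks and valleys.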

See SI Appendix for a full description of the stimuli, physical simulations, and image analyses.

Supplementary Material

Supplementary File
Supplementary File
Download video file (6.4MB, mov)
Supplementary File
Download video file (7.6MB, mov)
Supplementary File
Download video file (15.4MB, mov)

Acknowledgments

This work was supported by grants awarded to B.L.A. from the Australian Research Council.

Footnotes

The authors declare no competing interest.

This article is a PNAS Direct Submission.

This article contains supporting information online at https://www.pnas.org/lookup/suppl/doi:10.1073/pnas.2024798118/-/DCSupplemental.

Data Availability

Anonymized psychophysical data from experiments and MATLAB and C++ code used in this paper have been deposited in Mendeley Data (DOI: 10.17632/53y9f2kdy2.1).

References

1. Egan E. J. L., Todd J. T., The effects of smooth occlusions and directions of illumination on the visual perception of 3-D shape from shading. J. Vis. 15, 24 (2015).
2. Todd J. T., Reichel F. D., Ordinal structure in the visual perception and cognition of smoothly curved surfaces. Psychol. Rev. 96, 643–657 (1989).
3. Johnston A., Passmore P. J., Shape from shading. I: Surface curvature and orientation. Perception 23, 169–189 (1994).
4. Nefs H. T., Koenderink J. J., Kappers A. M. L., The influence of illumination direction on the pictorial reliefs of Lambertian surfaces. Perception 34, 275–287 (2005).
5. Koenderink J. J., van Doorn A. J., Kappers A. M. L., Surface perception in pictures. Percept. Psychophys. 52, 487–496 (1992).
6. Fleming R. W., Bülthoff H. H., Low level image cues in the perception of translucent materials. ACM Trans. Appl. Percept. 2, 346–382 (2005).
7. Xiao B., et al., Looking against the light: How perception of translucency depends on lighting direction. J. Vis. 14, 17 (2014).
8. Gkioulekas I., et al., Understanding the role of phase function in translucent appearance. ACM Trans. Graph. 32, 147 (2013).
9. Todd J. T., Norman J. F., Mingolla E., Lightness constancy in the presence of specular highlights. Psychol. Sci. 15, 33–39 (2004).
10. Fleming R. W., Torralba A., Adelson E. H., Specular reflections and the perception of shape. J. Vis. 4, 798–820 (2004).
11. Horn B. K. P., “Obtaining shape from shading information” in The Psychology of Computer Vision, Winston P. H., Ed. (McGraw-Hill, 1975), pp. 115–155.
12. Ikeuchi K., Horn B. K. P., Numerical shape from shading and occluding boundaries. Artif. Intell. 17, 141–184 (1981).
13. Kunsberg B. S., Zucker S. W., Critical contours: An invariant linking image flow with salient surface organization. arXiv [Preprint] (2017). https://arxiv.org/abs/1705.07329 (Accessed 17 March 2021).
14. Xiong Y., et al., From shading to local shape. arXiv [Preprint] (2014). https://arxiv.org/abs/1310.2916 (Accessed 17 March 2021).
15. Marlow P. J., Mooney S. W. J., Anderson B. L., Photogeometric cues to perceived surface shading. Curr. Biol. 29, 306–311.e3 (2019).
16. Mooney S. W. J., Marlow P. J., Anderson B. L., The perception and misperception of optical defocus, shading, and shape. eLife 8, e48214 (2019).
17. Koenderink J. J., van Doorn A. J., “Shading in the case of translucent objects” in Proceedings of the SPIE Conference on Human Vision and Electronic Imaging, Rogowitz B. E., Pappas T. N., Eds. (Society of Photo-Optical Instrumentation Engineers, Bellingham, WA, 2001), pp. 312–320.
18. Motoyoshi I., Highlight-shading relationship as a cue for the perception of translucent and transparent materials. J. Vis. 10, 6 (2010).
19. Marlow P. J., Kim J., Anderson B. L., Perception and misperception of surface opacity. Proc. Natl. Acad. Sci. U.S.A. 114, 13840–13845 (2017).
20. Gkioulekas I., Walter B., Adelson E. H., Bala K., Zickler T., “On the appearance of translucent edges” in IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, New York, NY, 2015), pp. 5528–5536.
21. Chowdhury N. S., Marlow P. J., Kim J., Translucency and the perception of shape. J. Vis. 17, 17 (2017).
22. Koenderink J. J., What does the occluding contour tell us about solid shape? Perception 13, 321–330 (1984).
23. Fleming R. W., Torralba A., Adelson E. H., Shape from sheen. http://dspace.mit.edu/bitstream/handle/1721.1/49511/MIT-CSAIL-TR-2009-051.pdf?sequence=1 (Accessed 17 March 2021).
24. Koenderink J. J., van Doorn A. J., Surface shape and curvature scales. Image Vis. Comput. 10, 557–564 (1992).
25. Langer M. S., Zucker S. W., Shape from shading on a cloudy day. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 11, 467–478 (1994).
26. Tyler C. W., Diffuse illumination as a default assumption for shape-from-shading in the absence of shadows. J. Imaging Sci. Technol. 42, 319–325 (1998).
27. Todd J. T., Egan E. J. L., Kallie C. S., The darker-is-deeper heuristic for the perception of 3D shape from shading: Is it perceptually or ecologically valid? J. Vis. 15, 2 (2015).
28. Marlow P. J., Todorović D., Anderson B. L., Coupled computations of three-dimensional shape and material. Curr. Biol. 25, R221–R222 (2015).
29. Kim M., Wilcox L. M., Murray R. F., Perceived three-dimensional shape toggles perceived glow. Curr. Biol. 26, R350–R351 (2016).
30. Bloj M. G., Kersten D., Hurlbert A. C., Perception of three-dimensional shape influences colour perception through mutual illumination. Nature 402, 877–879 (1999).
31. Knill D. C., Kersten D., Apparent surface curvature affects lightness perception. Nature 351, 228–230 (1991).


