Journal of Anatomy. 2022 Feb 27;241(2):552–564. doi: 10.1111/joa.13647

True‐color 3D rendering of human anatomy using surface‐guided color sampling from cadaver cryosection image data: A practical approach

Jon Jatsu Azkue
PMCID: PMC9296043  PMID: 35224742

Abstract

Three‐dimensional computer graphics are increasingly used for scientific visualization and for communicating anatomical knowledge and data. This study presents a practical method to produce true‐color 3D surface renditions of anatomical structures. The procedure involves extracting the surface geometry of the structure of interest from a stack of cadaver cryosection images, using the extracted surface as a probe to retrieve color information from cryosection data, and mapping sampled colors back onto the surface model to produce a true‐color rendition. Organs and body parts can be rendered separately or in combination to create custom anatomical scenes. By editing the surface probe, structures of interest can be rendered as if they had been previously dissected or prepared for anatomical demonstration. The procedure is highly flexible and nondestructive, offering new opportunities to present and communicate anatomical information and knowledge in a visually realistic manner. The technical procedure is described, including freely available open‐source software tools involved in the production process, and examples of color surface renderings of anatomical structures are provided.

Keywords: 3D graphics, anatomy education, medical visualization, surface rendering

Short abstract

This study reports on a method to produce true‐color 3D surface renditions of anatomical structures by retrieving color information from cadaver cryosection image data. Organs and body parts can be rendered both as a whole and as if they had been dissected in a variety of ways. The technical procedure is described, including freely available open‐source software tools involved in the production process, and examples of true‐color surface renditions are provided.

1. INTRODUCTION

Three‐dimensional (3D) computer graphics are increasingly used for communicating anatomical information and knowledge. Computerized 3D representations of anatomy can be visualized and manipulated dynamically and thus represent a valuable tool for anatomy learning, as well as for surgical planning and simulation (Assaf & Pasternak, 2008; Fang et al., 2020; Hemminger et al., 2005; Li et al., 2017; Murgitroyd et al., 2015; Preim & Saalfeld, 2018; Soler et al., 2014; Stepan et al., 2017; Triepels et al., 2020; Yammine & Violato, 2015). For digitally rendered anatomical scenes to be visually realistic, not only must geometry be represented in adequate detail, but color attributes of tissues and organs should ideally also be displayed as close to their natural appearance as possible. A recent systematic analysis of a range of software applications devoted to 3D visualization of human anatomy rated the 3D models featured in current applications at an average of 3.04 out of 5 on the realism dimension (Zilverschoon et al., 2019). This modest score indicates that the realism of most 3D anatomical models is largely limited to some detail in visual/color clarity and nonsimplistic shapes, and thus rather far from the highest score, which corresponds to models that are realistic in both shape and visual detail (Zilverschoon et al., 2019).

The two major classes of approaches to 3D visualization, surface rendering and volume rendering, process and display color differently. In volume rendering, a 2D projection is simulated by computing the absorption and emission of light rays cast through a 3D dataset composed of volumetric pixels, or voxels, generated from a stack of sectional image data (Drebin et al., 1988; Kaufman, 1996; Levoy, 1988). Gray scale or color data contained in voxels are usually mapped to predetermined opacity and color values using transfer functions (Drebin et al., 1988; Ney et al., 1990). Since volume renderings for medical visualization commonly target anatomical gray scale data such as those from CT and MRI scans, tissues and organs are usually represented using artificially created colors. Even so, strikingly realistic renditions of the human body may be generated in the absence of color information in the source image data. For instance, the cinematic‐rendering approach produces hyperrealistic renditions from CT imaging data by computing multiple paths of visible photons through the body tissues (Engel, 2016; Glemser et al., 2018; Paladini et al., 2015). In surface rendering, on the other hand, a structure is represented only by its surface (most commonly a polygon mesh or isosurface) and made visible by showing its appearance under external illumination with a virtual light source. Although most implementations of surface rendering use arbitrarily assigned solid colors (Cline et al., 1987; Cook et al., 1983; Herman & Liu, 1979), vertices or faces of a polygon mesh can carry associated scalar color data, so a surface can also be rendered with a multicolor appearance. For example, in photogrammetry, an anatomical specimen is photographed from multiple viewpoints and a 3D point cloud of matching points is computed that captures the object's 3D geometry, onto which color attributes from the original photographs are mapped to mimic the object's external visual aspect (Petriceks et al., 2018; Schenk, 2000). In addition, a 3D polygon mesh can be wrapped in a bitmap image displaying colors and textures to give the rendered structure a lifelike appearance (Preim & Saalfeld, 2018; Zilverschoon et al., 2017).
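As a minimal sketch of how such transfer functions are set up in practice, the following maps a grayscale volume to opacity and artificial color using VTK, the toolkit employed later in this paper; the file name and the scalar breakpoints are hypothetical, CT‐like values chosen for illustration only:

import vtk

# Load a grayscale volume (hypothetical file name)
reader = vtk.vtkStructuredPointsReader()
reader.SetFileName("GrayscaleVolume.vtk")

# Opacity transfer function: scalar value -> transparency
opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(0, 0.0)       # air: fully transparent
opacity.AddPoint(500, 0.15)    # soft tissue: semi-transparent
opacity.AddPoint(1150, 0.85)   # bone: nearly opaque

# Color transfer function: scalar value -> artificially assigned color
color = vtk.vtkColorTransferFunction()
color.AddRGBPoint(0, 0.0, 0.0, 0.0)
color.AddRGBPoint(500, 0.88, 0.60, 0.29)
color.AddRGBPoint(1150, 1.0, 1.0, 0.9)

volumeProperty = vtk.vtkVolumeProperty()
volumeProperty.SetScalarOpacity(opacity)
volumeProperty.SetColor(color)
volumeProperty.ShadeOn()

mapper = vtk.vtkSmartVolumeMapper()
mapper.SetInputConnection(reader.GetOutputPort())

volume = vtk.vtkVolume()   # the renderable object, added to a renderer as with an actor
volume.SetMapper(mapper)
volume.SetProperty(volumeProperty)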

The use of cadaver cryosection images as source data for 3D reconstruction makes it possible to retrieve the color information of tissues and organs, providing an avenue to produce 3D renditions of any anatomical structure with its true color appearance as found in the frozen cadaver. Collections of serial, high‐resolution cryosection images of human cadaver specimens, of both whole bodies and separate body parts, have been made available, including the Visible Human Project from the National Library of Medicine (Ackerman, 1998; Ackerman et al., 2001; Spitzer et al., 1996), the Chinese Visible Human (Zhang et al., 2003, 2006), the Visible Korean Human (Kim et al., 2002; Park et al., 2005), and the Visible Ear datasets (Sørensen et al., 2002). High‐resolution murine cryosection data are also available (Roy et al., 2009; Wilson et al., 2008). In the volume rendering modality, opacity transfer functions can be used to map true color properties of the tissues to opacity levels in order to manipulate the visibility of specific organs. This strategy, termed alpha blending or alpha rendering (Kahrs & Labadie, 2013), makes it possible to highlight specific tissues based on their color properties and to remove unwanted components from the scene, for example, superficial tissues or the embedding gel (Gargesha et al., 2009, 2011; Kahrs & Labadie, 2013). A practical limitation of this approach is that specific transfer functions must be set to highlight different tissues or organs in each source dataset. In addition, tissues with similar color properties are difficult to single out based solely on opacity transfer functions. This difficulty can be overcome by restricting the rendered scene to subsets of the original volumetric data previously segmented from either the original cryosection image stack (Heng et al., 2006) or registered CT scan data (Robb & Hanson, 2006). Interestingly, organs or body parts segmented from volumetric data can be surface rendered in true color by mapping color attributes from the cryosection image stack back onto the extracted surface model. This approach is advantageous in that it reduces both the storage requirements and the amount of data to be rendered compared with the total number of voxels that enter into computation in volume rendering (Udupa et al., 1991). Endeavors in this direction have been reported in the scope of biomedical image visualization. Robb and Hanson (2006) provided a few examples of renditions of gross segments of the Visible Human Male abdomen after sectioning in several planes. An analogous approach was used to produce surface renderings of the anatomy of the pelvis reconstructed from the Visible Human Project dataset in the context of The Vesalius (TM) Project (Venuti et al., 2004).

Despite the potential of realistic color rendering for anatomical visualization, the methods and tools involved in producing true‐color surface models have either not yet been described or are not available in the public domain (Dai et al., 2012; Robb & Hanson, 2006; Venuti et al., 2004). The aim of this work was to demonstrate a simple and versatile procedure for creating high‐quality, true‐color surface renditions of human anatomical structures by extracting color information from cadaver cryosection image data. Digital models created using this technique can also be rendered as if they had been previously dissected or prepared in a variety of ways for anatomical demonstration, and can be easily saved and combined. A set of open‐source digital tools is proposed, all of which are available in the public domain, and examples of a variety of digital dissections that may be useful for presenting anatomical information and knowledge are provided.

2. MATERIALS AND METHODS

The technical procedure involved two major steps (general workflow described in Figure 1). The first step was to create a surface representation of the target structure from a stack of cadaver serial cryosection images, which was accomplished using semiautomatic segmentation and the marching cubes algorithm for isosurface extraction. The second step was to retrieve color information from the subset of voxels representing the surface layer of the target structure, using the surface mesh generated in the previous step as a probe.
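Although isosurface extraction was performed here within ITK‐SNAP (described below), the equivalent marching cubes step can also be scripted in VTK from a binary segmentation volume; a minimal sketch with hypothetical file names:

import vtk

# Hypothetical binary label volume produced by segmentation (voxel value 1 = target)
reader = vtk.vtkNrrdReader()
reader.SetFileName("segmentation.nrrd")

# Marching cubes extracts the isosurface midway between background (0) and label (1)
mc = vtk.vtkMarchingCubes()
mc.SetInputConnection(reader.GetOutputPort())
mc.SetValue(0, 0.5)
mc.Update()

# Save the polygon mesh in STL format for editing and color sampling
writer = vtk.vtkSTLWriter()
writer.SetInputConnection(mc.GetOutputPort())
writer.SetFileName("surface.stl")
writer.Write()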

FIGURE 1

Flow diagram for the surface‐guided color sampling procedure. Cryosection images are converted into a volumetric data file, which can then be resliced in any orthogonal plane for segmentation. Segmentation of the submandibular gland (sb, delimited in red color) from the Visible Human Male is shown. The marching cubes algorithm generates a 3D polygon mesh representing the external surface of the segmented target object. At this stage, a surface mesh can be edited to produce a modified version of the target structure that will be rendered as having been dissected or prepared. The color sampling operation retrieves color information (shades of brown) from those voxels representing the external boundaries of the target structure, using the surface mesh produced in the previous step as a probe (red), and then assigns color attributes to the corresponding polygon vertices. A schematic representation of this operation in a subgroup of voxels is shown. Finally, the VTK renderer creates color on polygon faces by interpolation of vertex colors and renders the surface. b: Body of mandible; c: Carotid arteries; d: Digastric muscle; f: Facial vein; h: Hyoglossus muscle; hb: Hyoid bone; j: Internal jugular vein; m: Masseter muscle; n: Deep cervical lymph nodes; p: Palatopharyngeus and pharyngeal constrictor muscles; pl: Platysma; pt: Medial pterygoid muscle; sb: Submandibular gland; sg: Styloglossus muscle; sm: Submental vein; st: Sternocleidomastoid muscle; t: Tongue

2.1. Cryosection image data

High‐resolution axial plane cryosection images of five cadavers from two separate collections were used. Datasets and target structures were chosen for convenience, to illustrate the general properties of the method while providing examples of a variety of different‐looking organs and tissues.

Axial sections of both the male and female cadavers from the Visible Human Project database (National Library of Medicine, Bethesda, Maryland; Ackerman, 1998; Ackerman et al., 2001; Spitzer et al., 1996) had 1 mm and 0.33 mm spacing in the z‐axis, respectively, and images were 2048 × 1216 pixels, with each pixel defined by 24 bits of color (RGB, one byte each). Cryosection images of the formalin‐preserved head of a 72‐year‐old male donor from the same database, whose blood vessels had been filled with araldite‐F (Ratiu et al., 2003), were also used (0.147 mm intervals, image dimensions of 1056 × 1528 pixels, and 24 bits of color). In addition, cryosection images of the Visible Head and Visible Male datasets from the Visible Korean Human collections (Kim et al., 2002; Park et al., 2005) were used (0.1 mm and 0.2 mm intervals and image dimensions of 4368 × 2912 and 2468 × 1407 pixels, respectively).
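For orientation before the next section: each such image stack must first be combined into a single volumetric file (done in this study with 3D Slicer). As a rough, purely illustrative sketch of the same operation in script form, using a library not employed in this paper, SimpleITK can stack a slice series and record the voxel spacing; the file names and spacing are placeholders, and preservation of RGB data as vector‐valued voxels is assumed:

import SimpleITK as sitk

# Hypothetical file names for an axial cryosection series
fileNames = ["cryosection_%04d.png" % i for i in range(1, 501)]

reader = sitk.ImageSeriesReader()
reader.SetFileNames(fileNames)
volume = reader.Execute()               # stacks the 2D RGB slices into a 3D volume

# Voxel spacing in mm: in-plane pixel size and slice interval (placeholder values)
volume.SetSpacing((0.33, 0.33, 0.33))

sitk.WriteImage(volume, "volume.nrrd")  # write the volume in NRRD format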

2.2. Segmentation and surface postprocessing

Each series of cryosection images containing a given structure of interest was cropped to the corresponding target organ using GNU Image Manipulation Program (GIMP) software, converted into a single volumetric data file in Nearly Raw Raster Data format (NRRD; http://teem.sourceforge.net/nrrd/) using 3D Slicer software (v. 4.10.2 r28257; https://www.slicer.org/), and then imported into ITK‐SNAP Medical Image Segmentation Tool software (v3.4.0; Yushkevich et al., 2006) for segmentation (Table 1). Volumetric data can be re‐sliced in the x‐, y‐, and z‐axes in ITK‐SNAP to generate sagittal, coronal, and transverse planar views for convenient visualization during segmentation. Segmentation refers here to the process of singling out the subset of voxels within a volumetric file that represent the structure of interest, which makes it possible to compute a 3D surface representing the external boundaries of the target structure. All segmentations were performed semiautomatically using the built‐in active contour algorithm in ITK‐SNAP (Kass et al., 1988), as described previously (Azkue, 2021b). Active contours are computer‐generated curves that propagate within the volumetric data, through a series of iterations, until they adapt to the boundaries of the target object. ITK‐SNAP then uses the marching cubes algorithm (Lorensen & Cline, 1987) to generate a 3D polygon mesh representing the external surface of the target object. Polygon meshes were exported as 3D surface files in Standard Tessellation Language (STL) format and then subjected to Laplacian smoothing using Blender v2.90.0 software (a minimal sketch follows). This procedure removes noise locally by adjusting the position of each vertex of the mesh based on its immediate neighbors, while preserving the general shape of the original model (Sorkine et al., 2004). Here, 20–35 iterations and a lambda factor of 0.2–0.3 were used for smoothing. Additional manual smoothing was applied as needed.
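As a rough illustration of this smoothing step, the following Blender Python sketch applies a Laplacian Smooth modifier with parameter values in the ranges reported above; the file names are hypothetical, and the script assumes it is run from within Blender (e.g., the scripting console):

import bpy

# Import the STL surface exported from ITK-SNAP (hypothetical file name)
bpy.ops.import_mesh.stl(filepath="kidney.stl")
obj = bpy.context.selected_objects[0]

# Laplacian smoothing: parameters within the ranges used in this study
mod = obj.modifiers.new(name="LapSmooth", type='LAPLACIANSMOOTH')
mod.iterations = 25        # 20-35 iterations
mod.lambda_factor = 0.25   # lambda factor of 0.2-0.3

# Apply the modifier and export the smoothed mesh for color sampling
bpy.context.view_layer.objects.active = obj
bpy.ops.object.modifier_apply(modifier=mod.name)
bpy.ops.export_mesh.stl(filepath="kidney_smooth.stl", use_selection=True)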

TABLE 1.

Software tools used for segmentation, color sampling, and postprocessing of surface models

Subprocess | Software tool | Platform | Availability | Website
Cryosection image cropping | GIMP | MS Windows, MacOS, GNU Linux | Free, open source | https://www.gimp.org/
Conversion of 2D image stacks into volumetric files | 3D Slicer | MS Windows, MacOS, GNU Linux | Free, open source | https://www.slicer.org/
3D segmentation | ITK‐SNAP | MS Windows, MacOS, GNU Linux | Free, open source | http://www.itksnap.org
Editing of polygon meshes | Blender | MS Windows, MacOS, GNU Linux | Free, open source | https://www.blender.org/
Color sampling, visualization, and file export | The Visualization Toolkit | MS Windows, MacOS, GNU Linux | Free, open source | https://vtk.org/
Texture baking | MeshLab | MS Windows, MacOS, GNU Linux | Free, open source | https://www.meshlab.net

The above process outputs a 3D polygon mesh representing the external shape of the anatomical structure to be rendered. In addition to rendering the target structure as a whole, the polygon mesh may be edited digitally at this stage by cutting away discrete portions of the model, so that the structure is rendered as a dissected specimen. All such preparations were made here by applying Boolean subtraction operations in Blender, as sketched below.
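A minimal sketch of such a Boolean subtraction in Blender's Python API; the object names are hypothetical, and the cutter would typically be a cube or another closed solid positioned over the region to be removed:

import bpy

# Hypothetical scene objects: the anatomical mesh and a solid acting as the cutter
target = bpy.data.objects["Kidney"]
cutter = bpy.data.objects["CuttingBlock"]

# Boolean difference removes the cutter's volume from the target mesh
mod = target.modifiers.new(name="Dissect", type='BOOLEAN')
mod.operation = 'DIFFERENCE'
mod.object = cutter

bpy.context.view_layer.objects.active = target
bpy.ops.object.modifier_apply(modifier=mod.name)

# Hide the cutter so only the "dissected" specimen remains visible
cutter.hide_set(True)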

2.3. Surface‐guided color sampling

Color information was retrieved from those voxels at positions determined by the spatial coordinates of the polygon mesh produced in the preceding step. The sampling operation was computed here using the Visualization Toolkit (VTK, v. 9.0.3), an open‐source software system for 3D computer graphics and scientific visualization (Schroeder et al., 1998; https://vtk.org/). VTK provides a variety of so‐called filters, elements that receive data from other components in a visualization pipeline, modify them in a variety of ways (e.g., extract, subsample, interpolate, merge, or split input data), and output the modified data to be handled by other elements or subprocesses. The present approach utilized the vtkProbeFilter, a tool that can retrieve scalar data contained in voxels, such as RGB data, using a polygon mesh as a probe. RGB values (1 byte each) retrieved from the sampled voxels are then mapped back to the polygon mesh as color attributes. To produce a final surface rendition with the extracted colors, the VTK visualization pipeline assigns color to polygon faces by interpolating vertex colors. The data processing pipeline is shown in Figure 2, and a Python script that performs the sampling operation and renders the resulting surface is provided in Appendix A.

FIGURE 2

Implementation of the color sampling operation using VTK. Input data importation, color sampling, rendering, and file saving steps are shown, also indicating the involved filters or functions in the VTK processing pipeline. The vtkProbeFilter, the core of the sampling operation, takes the source volumetric data and the polygon mesh extracted by segmentation as inputs. The filter outputs a copy of the input mesh and appends the extracted color data from scalar values to polygon vertices. A separate step converts scalar values into color for visualization. A mapper converts the polygon mesh into a virtual model for surface rendering. The resulting model can be stored as a file. A Python script that performs this operation and renders the resulting surface is provided in Appendix A

2.4. Additional processing

The above‐described procedure is sufficient to produce a 3D rendition of the target anatomical structure with its true color appearance. Often, however, it is also desirable to save the model with the attached color information. The X3D file format, an XML‐based format for representing 3D information developed as an improved version of the Virtual Reality Modeling Language (VRML), was used here to export the produced models with the appended color attributes (Python script also provided in Appendix A).

Optionally, the model can also be subjected to texture baking, that is, the transfer of color information to a bitmap image, which makes it possible to handle geometry and color data separately and to reduce polygon count, and thus file size, while preserving color and texture quality. Texture baking was accomplished here using the Parametrization and Vertex Color to Texture filters in MeshLab v2020.07 software. The resulting model was exported in OBJ format, which also generated a bitmap image file in PNG format and an MTL file linking the preceding two files. Mesh simplification was carried out using Blender, as sketched below.
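The simplification step can be scripted in Blender as follows; this is a minimal sketch assuming the textured OBJ model has already been imported, with a hypothetical object name, using the Decimate modifier in its default collapse mode (which approximately preserves UV coordinates, so the baked texture still maps onto the simplified mesh):

import bpy

obj = bpy.data.objects["Temporalis"]   # hypothetical object name

# Collapse-mode decimation; ratio 0.14 keeps ~14% of faces (ca. 86% reduction)
mod = obj.modifiers.new(name="Simplify", type='DECIMATE')
mod.ratio = 0.14

bpy.context.view_layer.objects.active = obj
bpy.ops.object.modifier_apply(modifier=mod.name)

# Re-export the simplified geometry with its UV coordinates
bpy.ops.export_scene.obj(filepath="temporalis_simplified.obj")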

3. RESULTS

True‐color surface renditions shown in Figure 1 and Figures 3–8 illustrate the main properties of the method and provide examples of a variety of different‐looking organs and body parts.

FIGURE 3

Successful color sampling is dependent on the accuracy of the surface probe. A surface rendition of the Visible Human Female head is shown using correctly sized (100% scale) and distorted probes. Increasing or decreasing probe size by 2%–3% results in sampling colors from voxels outside the body (embedding blue gel) or representing subcutaneous tissues. A cross‐sectional image shows the relative positions of the three probes for reference

FIGURE 4

The level of detail of rendered colors and textures depends on the resolution of the source cryosection images. Reconstructions of the cervical spinal cord from the Visible Head from the Visible Korean dataset (left) and the Visible Human Male from the NLM (right) are shown, and the maximum width of the spinal cord in the corresponding original images is indicated. Dorsal and ventral horns are readily identifiable in the higher resolution model, and relatively blurred in the lower resolution one

FIGURE 5

Kidney in entirety and after editing the surface probe. Surface rendition of the whole left kidney and attached ureter from the Visible Human Male are shown, as well as virtually dissected versions of the kidney, ureter, and vascular supply after sectioning in the axial and coronal planes. The morphology and internal organization of the renal pyramids are clearly discernible. Note an accessory renal artery entering the inferior pole

FIGURE 6

The brain as a whole and after editing the surface probe. Surface renditions of the brain from the isolated head dataset in the Visible Human image collection are shown, both as a whole (top left) and following sectioning in the axial and coronal planes or exposure of the left insular cortex (bottom right) after removal of the overlying frontal, parietal, and temporal cortices. Note whitish discoloration on the posterior short gyrus and long gyrus of the insula due to abrasion during cadaver sectioning

FIGURE 7

Stripping layers off the model. A surface rendition of the head of the Visible Male from the Visible Korean dataset is shown, using a surface probe from which two pieces representing portions of the skin and skull overlying the brain had been removed. Note the tenuous purple impregnation of the skin due to the embedding gel

FIGURE 8

Reducing mesh complexity without losing visual realism. The right temporalis from the Visible Human Female is shown attached to the coronoid process, before texture baking (top). Two versions of the muscle model alone are shown after texture baking and reducing the vertex count by ca. 86% or by ca. 97% (bottom). The vertex count could be reduced dramatically (figures provided in Table 2) without noticeably affecting the overlying bitmap texture. The magnification boxes show the structure of the polygon mesh from approximately the same region at the various levels of simplification

3.1. General characteristics of the method

The rendered structures exhibit shapes and color textures as they are found in the cadaver image dataset, and therefore they may display nonidealized colors or particularities that are usually not represented in medical illustrations, for example, color imprints made by nearby structures such as blood vessels (Figure 1) or bile impregnation of the fat surrounding the gall bladder (not shown). In addition, possible color modifications caused by cadaver fixation or processing are also captured, such as permeation of the embedding gel (Figures 3 and 7), reddish impregnation by arterial infusion of araldite (Figure 6), or tissue abrasion during sectioning (Figure 6).

The level of detail of the rendered colors and textures is directly related to the resolution of the cryosection images and thus to the amount of information available from the source dataset. An example showing surface renderings of spinal cords reconstructed from cryosections of different original resolutions is provided in Figure 4.

A key aspect of the technical procedure is that successful color sampling depends critically on the surface probe accurately defining the boundaries of the target structure. Increasing or reducing the size of the surface probe by 2%–3% results in retrieving color data from voxels representing the embedding blue gel or subcutaneous tissues, respectively (Figure 3).
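For reference, distorted probes like those used in this comparison can be produced, for instance, by uniformly scaling the surface mesh about its own center; a minimal VTK sketch under that assumption (the file name is hypothetical):

import vtk

# Load the original surface probe (hypothetical file name)
reader = vtk.vtkSTLReader()
reader.SetFileName("HeadSurface.stl")
reader.Update()
center = reader.GetOutput().GetCenter()

# Inflate the probe by 2% about its center rather than the scene origin
transform = vtk.vtkTransform()
transform.Translate(center[0], center[1], center[2])
transform.Scale(1.02, 1.02, 1.02)
transform.Translate(-center[0], -center[1], -center[2])

scaled = vtk.vtkTransformPolyDataFilter()
scaled.SetTransform(transform)
scaled.SetInputConnection(reader.GetOutputPort())
scaled.Update()   # the output would feed vtkProbeFilter in place of the original mesh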

3.2. Capturing external and internal color features

True‐color surface renditions of a number of anatomical structures were produced, including a salivary gland (Figure 1), skin and subcutaneous tissue (Figures 3 and 7), spinal cord (Figure 4), the kidney with attached ureter and blood supply (Figure 5), the brain (Figures 6 and 7), bone (Figures 7 and 8), and muscle (Figure 8).

Editing the surface probe of a given structure prior to color sampling allowed versions of the model to be generated as if dissected in a variety of ways. For example, Figure 5 shows surface renditions of the kidney exhibiting the spatial disposition of the renal pyramids after sectioning in the axial and coronal planes. Beyond planar sectioning, a more complex dissection of the brain was simulated to expose the insular cortex by digitally removing the overlying cortical regions (Figure 6). A different editing approach is shown in Figure 7, which presents a surface rendition of the Visible Korean male after layer‐wise removal of soft tissues and skull bone to expose the underlying brain.

3.3. Texture baking and polygon reduction

Texture baking preserved the general visual appearance of the models and allowed export in OBJ file format for subsequent mesh simplification. As shown in Figure 8 and Table 2 using a temporalis model, mesh simplification can dramatically reduce file size while leaving the external appearance essentially unchanged.

TABLE 2.

Metadata of a temporal muscle model subjected to texture baking and two levels of polygon reduction

Model version | Vertex count | X3D file | OBJ file | MTL file | PNG file
Originally exported | 288,546 | 43.8 MB | – | – | –
Prior to simplification | 288,546 | – | 90.6 MB | 728 bytes | 53.1 MB
First‐level simplification | 38,234 | – | 5.9 MB | 278 bytes | 2.5 MB
Second‐level simplification | 9745 | – | 1.4 MB | 282 bytes | 2.4 MB

Notes: The surface mesh was first exported in X3D format, which supports color data appended to polygon vertices. Following the texture baking operation, geometry and color data were exported in OBJ format with associated MTL and PNG files, and subjected to polygon reduction.

4. DISCUSSION

4.1. Contribution to the state of the art

An advantage of volume rendering over surface rendering is direct visualization without the need for segmentation. However, volume rendering individual organs without prior segmentation is challenging, and making them appear as dissected or prepared for anatomical demonstration is even more so. Moreover, volume‐rendered scenes as such cannot be saved to disk and stored for later use. The method presented here makes it possible to produce anatomical models that can be rendered both as a whole and as if previously dissected, exported to disk individually, and easily combined to create custom anatomical scenes. Further, it is shown that color‐textured models can be postprocessed to optimize file size without any discernible loss of visual detail.

Application of bitmap textures to surface models, including real tissue photographs, is a useful approach to mimic the natural appearance of anatomical structures (Preim & Saalfeld, 2018; Zilverschoon et al., 2017). Surface scanning and photogrammetry can be seen as refinements of this strategy, where the 3D surface structure is enriched with actual color properties from photographs of the same object. These are probably ideal approaches for digitizing prosected body parts, as shown by Petriceks et al. (2018), as well as bone specimens (Azkue, 2021a), since 3D reconstruction using such techniques requires capturing the geometry of the object of interest from all possible viewpoints, and consequently the scanned object must be detached from the body. It is noteworthy, however, that the level of detail of reconstructed models is limited by prosection quality, and a number of factors can compromise model quality, such as difficult angles, poor lighting, and occlusions (Petriceks et al., 2018). The color sampling procedure described here offers significant advantages over previous approaches. First, it is a nondestructive technique that makes it possible to produce multiple renditions of the same anatomical structure in a variety of shapes and preparations, limited only by the ability to edit the surface probe digitally. Frail or collapsible structures, such as small vessels, ureters, or bile ducts, which become easily distorted or damaged during actual dissection, can be reconstructed and represented accurately, as can smaller structures difficult to access by dissection, such as the inner ear, provided that cryosection images of adequate resolution are used. Second, if digital 3D models are to serve as an educational resource in the context of interactive software (Murgitroyd et al., 2015; Preim & Saalfeld, 2018; Triepels et al., 2020), it is important that models of individual organs and body parts can be combined and used interactively as separate components within a scene, rather than presented as a prosected region or tissue block. Furthermore, the flexibility offered by surface 3D models can be enhanced by including sectional anatomy and registered clinical imaging data such as CT or MRI in the scene (Liu et al., 2013; Mavar‐Haramija et al., 2015; Prats‐Galino et al., 2014; Robb & Hanson, 2006; Shin & Park, 2016). Surface models generated from cadaver cryosection image datasets are ideally suited in this regard, since most available collections also include clinical imaging data (Ackerman, 1998; Ackerman et al., 2001; Park et al., 2005; Zhang et al., 2003, 2006).

4.2. Representing human anatomy in a visually realistic manner

Digital anatomical models are external representations of the body, where the correspondence between the representing and represented worlds is established by physical resemblance, based on aspects such as dimensionality, number of structures, spatial relationships between structures, and surface details including texture and color (Chan & Cheng, 2011; Palmer, 1978). Little published information is currently available on the extent to which the realism of anatomical models influences anatomy learning. Visual realism may be less critical for an undergraduate medical student becoming acquainted with the basic anatomy of a body region, who may also benefit from simplified, low‐technology models (Chan, 2015; Chan & Cheng, 2011). However, a vast majority of anatomists (80%) in a recent survey agreed that using realistic models to teach anatomy is important for students to retain their anatomical knowledge (Balta et al., 2017). Continued efforts to develop fixatives and techniques that preserve the body in a realistic manner (Balta et al., 2015; Jaung et al., 2011; Song & Jo, 2021) reflect the interest among anatomists in preserved specimens that resemble living tissue as accurately as possible. Likewise, it is their realistic texture and consistency that make fresh frozen cadavers highly valued among experts for both undergraduate education and surgery training (Song & Jo, 2021). Moreover, visual realism and accuracy are essential to postgraduate students in a surgical specialty, who need to master detailed information about specific body parts. Visual realism, however, while essential in surgery simulation and a major item in evaluating the face validity of a surgical simulation system (Hill et al., 2012; Moglia et al., 2016; Nielsen et al., 2021; Robison et al., 2011), remains a technical challenge and is still addressed at best by texture mapping using synthetic or photographic images of actual organs (Detmer et al., 2017; Lyu et al., 2013; Michel et al., 2002; Robison et al., 2011; Sieber et al., 2021; Tang et al., 2017). A recent systematic analysis showed that the virtual anatomical models included in currently available 3D visualization systems for anatomy education are, on average, modest in terms of visual realism (Zilverschoon et al., 2019). This report shows that cadaver cryosection images, by carrying both shape and color information, are ideally suited to serve as a source for producing accurate anatomical 3D models with their true color appearance, and can thus potentially contribute to advancing this field. An implementation of this potential is demonstrated here using datasets and software tools in the public domain, in order to facilitate the production of highly realistic 3D renditions by any user without highly specialized technical expertise or software packages.

4.3. Limitations

The present method for true‐color surface rendering can only capture the colors of the body as they appear in the original cadaver tissue, and should not be expected to reflect the exact appearance of the living body. Still, frozen unembalmed cadaver images from available collections present minimal color variations from the living body for most tissues.

Segmentation of anatomical structures requires a background in anatomy and can be a labor‐intensive and time‐consuming task. However, some tissues and organs can be segmented easily based on color properties using automatic or semiautomatic segmentation. In addition, the Visible Korean Human collection has made a large amount of segmentation data available that can be used for true‐color surface rendering as described here.

A further limitation relates to the availability of high‐quality cryosection image data: cadaver cryosection image collections are still scarce and thus represent too small a sample to address aspects such as normal anatomical variation. It is hoped that additional datasets will be made available or that new collections will be undertaken in the future.

Pedagogical evaluation of true‐color anatomical models as described here is beyond the scope of the present report. There is evidence that 3D computer visualization of digital anatomical models is effective for anatomy learning in terms of both factual and spatial knowledge (Murgitroyd et al., 2015; Triepels et al., 2020; Yammine & Violato, 2015), and visual realism of models is indeed recognized as a desired feature of educational 3D visualization software (Zilverschoon et al., 2019). Future studies are needed to investigate whether and how visually realistic models can contribute in specific educational contexts.

AUTHOR'S CONTRIBUTION

The author conceived and designed the study, carried out the reconstructions, and wrote the manuscript.

ACKNOWLEDGMENTS

The author declares no conflict of interest.

Appendix A. Python (v3.6.9) script for surface‐based color sampling and visualization of volumetric cryosection image data using VTK (v9.0.3). Running the script requires the VTK libraries for Python (https://pypi.org/project/vtk/)

# Import the VTK libraries
import vtk

# Load the surface probe file
stlReader = vtk.vtkSTLReader()
stlReader.SetFileName("SurfaceFile.stl")
stlReader.Update()

# Load the volumetric data file
volreader = vtk.vtkStructuredPointsReader()
volreader.SetFileName("VolumeFile.vtk")
volreader.Update()

# Configure and perform the color sampling operation
probeFilter = vtk.vtkProbeFilter()
probeFilter.SetInputConnection(stlReader.GetOutputPort())
probeFilter.SetSourceConnection(volreader.GetOutputPort())

surfaceFilter = vtk.vtkDataSetSurfaceFilter()
surfaceFilter.SetInputConnection(probeFilter.GetOutputPort())
surfaceFilter.Update()

# Create a new surface mesh endowed with color information
extractedPolyData = surfaceFilter.GetOutput()
scalarArray = extractedPolyData.GetPointData().GetScalars()
scalars2colors = vtk.vtkScalarsToColors()
# Color mode 2 (direct scalars) passes the sampled RGB values through unchanged
colorArray = scalars2colors.MapScalars(scalarArray, 2, 1)
extractedPolyData.GetPointData().SetScalars(colorArray)

# Map the mesh data for rendering
polyDataMapper = vtk.vtkPolyDataMapper()
polyDataMapper.SetInputData(extractedPolyData)
polyDataMapper.SetScalarVisibility(1)
polyDataMapper.SetScalarModeToUsePointData()
polyDataMapper.SetColorModeToDirectScalars()

# Create the object to be rendered
actor = vtk.vtkActor()
actor.SetMapper(polyDataMapper)

# Create and configure a window to render the scene
ren = vtk.vtkRenderer()
renWin = vtk.vtkRenderWindow()
renWin.AddRenderer(ren)
iren = vtk.vtkRenderWindowInteractor()
iren.SetInteractorStyle(vtk.vtkInteractorStyleTrackballCamera())
iren.SetRenderWindow(renWin)
iren.Initialize()

# Add the model to the rendering window
ren.AddActor(actor)
ren.SetBackground(1, 1, 1)
renWin.SetSize(700, 700)

# Reset the camera to display the model
ren.ResetCamera()
renWin.Render()

# Keep the window visible and support user interaction
iren.Start()

# Export the rendered colored mesh to an X3D format file
X3DExporter = vtk.vtkX3DExporter()
X3DExporter.SetInput(renWin)
X3DExporter.SetFileName("OutputFile.x3d")
X3DExporter.Write()

Azkue, JJ . (2022) True‐color 3D rendering of human anatomy using surface‐guided color sampling from cadaver cryosection image data: A practical approach. Journal of Anatomy. 241:552–564. 10.1111/joa.13647

DATA AVAILABILITY STATEMENT

The source data that support the findings of this study are in the public domain.

REFERENCES

  1. Ackerman, M.J. (1998) The Visible Human Project: a resource for anatomical visualization. Studies in Health Technology and Informatics, 52, 1030–1032. [PubMed] [Google Scholar]
  2. Ackerman, M.J. , Yoo, T. & Jenkins, D. (2001) From data to knowledge—the Visible Human Project continues. Studies in Health Technology and Informatics, 84, 887–890. [PubMed] [Google Scholar]
  3. Assaf, Y. & Pasternak, O. (2008) Diffusion tensor imaging (DTI)‐based white matter mapping in brain research: a review. Journal of Molecular Neuroscience, 34, 51–61. [DOI] [PubMed] [Google Scholar]
  4. Azkue, J.J. (2021a) Embedding interactive, three‐dimensional content in portable document format to deliver gross anatomy information and knowledge. Clinical Anatomy, 34, 919–933. [DOI] [PubMed] [Google Scholar]
  5. Azkue, J.J. (2021b) External surface anatomy of the postfolding human embryo: computer‐aided, three‐dimensional reconstruction of printable digital specimens. Journal of Anatomy, 239, 1438–1451. 10.1111/joa.13514 [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Balta, J.Y. , Cronin, M. , Cryan, J.F. & O'Mahony, S.M. (2015) Human preservation techniques in anatomy: a 21st century medical education perspective. Clinical Anatomy, 28, 725–734. [DOI] [PubMed] [Google Scholar]
  7. Balta, J.Y. , Cronin, M. , Cryan, J.F. & O'Mahony, S.M. (2017) The utility of cadaver‐based approaches for the teaching of human anatomy: a survey of British and Irish anatomy teachers. Anatomical Sciences Education, 10, 137–143. [DOI] [PubMed] [Google Scholar]
  8. Chan, L.K. (2015) The use of low‐tech models to enhance the learning of anatomy. In: Chan, L.K. & Pawlina, W. (Eds.) Teaching anatomy—a practical guide. Switzerland: Springer, pp. 259–266. [Google Scholar]
  9. Chan, L.K. & Cheng, M.M.W. (2011) An analysis of the educational value of low‐fidelity anatomy models as external representations. Anatomical Sciences Education, 4, 256–263. [DOI] [PubMed] [Google Scholar]
  10. Cline, H.E. , Lorensen, W.E. , Ludke, S. , Crawford, C.R. & Teeter, B.C. (1987) Two algorithms for the three‐dimensional reconstruction of tomograms. Medical Physics, 15, 320–327. [DOI] [PubMed] [Google Scholar]
  11. Cook, L.T. , Dwyer, S.J., III , Batnitzky, S. & Lee, K.R. (1983) A three‐dimensional display system for diagnostic imaging applications. IEEE Computer Graphics and Applications, 3, 13–19. [Google Scholar]
  12. Dai, J.X. , Chung, M.S. , Qu, R.‐M. , Yuan, L. , Liu, S.‐W. & Shin, D.S. (2012) The Visible Human projects in Korea and China with improved images and diverse applications. Surgical and Radiologic Anatomy, 34, 527–534. [DOI] [PubMed] [Google Scholar]
  13. Detmer, G.J. , Hettig, J. , Schindele, D. , Schostak, M. & Hansen, C. (2017) Virtual and augmented reality systems for renal interventions: a systematic review. IEEE Reviews in Biomedical Engineering, 10, 78–94. [DOI] [PubMed] [Google Scholar]
  14. Drebin, R.A. , Carpenter, L. & Hanrahan, P. (1988) Volume rendering. Computer Graphics, 22, 65–74. [Google Scholar]
  15. Eid, M. , De Cecco, C.N. , Nance, J.W. , Caruso, D. , Albrecht, M.H. , Spandorfer, A.J. , et al. (2017) Cinematic Rendering in CT: A Novel, Lifelike 3D Visualization Technique. American Journal of Roentgenology, 209, 370–379. 10.2214/ajr.17.17850 [DOI] [PubMed] [Google Scholar]
  16. Engel, K. (2016) Real‐time Monte‐Carlo path tracing of medical volume data. GPU technology conference, April 4–7, San Jose Convention Center, CA, USA.
  17. Fang, C. , An, J. , Bruno, A. , Cai, X. , Fan, J. , Fujimoto, J. et al. (2020) Consensus recommendations of three‐dimensional visualization for diagnosis and management of liver diseases. Hepatology International, 14, 437–453. [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Gargesha, M. , Qutaish, M. , Roy, D. , Steyer, G. , Bartsch, H. & Wilson, D.L. (2009) Enhanced volume rendering techniques for high‐resolution color cryo‐imaging data. Proceedings of SPIE—The International Society for Optical Engineering, 7262, 72655V. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Gargesha, M. , Qutaish, M.Q. , Roy, D. , Steyer, G.J. , Watanabe, M. & Wilson, D.L. (2011) Visualization of color anatomy and molecular fluorescence in whole‐mouse cryo‐imaging. Computerized Medical Imaging and Graphics, 35, 195–205. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Glemser, P.A. , Engel, K. , Simons, D. , Steffens, J. , Schlemmer, H.‐P. & Orakcioglu, B. (2018) A new approach for photorealistic visualization of rendered computed tomography images. World Neurosurgery, 114, e283–e292. [DOI] [PubMed] [Google Scholar]
  21. Hemminger, B.M. , Molina, P.L. , Egan, T.M. , Detterbeck, F.C. , Muller, K.E. , Coffey, C.S. et al. (2005) Assessment of real‐time 3D visualization for cardiothoracic diagnostic evaluation and surgery planning. Journal of Digital Imaging, 18, 145–153. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Heng, P.A. , Zhang, S.X. , Xie, Y.M. , Wong, T.T. , Chui, Y.P. & Cheng, C.Y. (2006) Photorealistic virtual anatomy based on Chinese Visible Human data. Clinical Anatomy, 19, 232–239. [DOI] [PubMed] [Google Scholar]
  23. Herman, G.T. & Liu, H.K. (1979) Three‐dimensional display of human organs from computed tomograms. Computer Graphics and Image Processing, 9, 1–29. [Google Scholar]
  24. Hill, A. , Horswill, M.S. , Plooy, A.M. , Watson, M.O. , Karamatic, R. , Basit, T.A. et al. (2012) Assessing the realism of colonoscopy simulation: the development of an instrument and systematic comparison of 4 simulators. Gastrointestinal Endoscopy, 75, 631–640. [DOI] [PubMed] [Google Scholar]
  25. Jaung, R. , Cook, P. & Blyth, P. (2011) A comparison of embalming fluids for use in surgical workshops. Clinical Anatomy, 24, 155–161. [DOI] [PubMed] [Google Scholar]
  26. Kahrs, L.A. & Labadie, R.F. (2013) Virtual exploration and comparison of linear mastoid drilling trajectories with true‐color volume rendering and the visible ear dataset. Studies in Health Technology and Informatics, 184, 215–221. [PMC free article] [PubMed] [Google Scholar]
  27. Kass, M. , Witkin, A. & Terzopoulos, D. (1988) Snakes: active contour models. International Journal of Computer Vision, 1, 321–331. [Google Scholar]
  28. Kaufman, A. (1996) Volume visualization. ACM Computing Surveys, 28, 165–167. [Google Scholar]
  29. Kim, J.Y. , Chung, M.S. , Hwang, W.S. , Park, J.S. & Park, H.S. (2002) Visible Korean Human: another trial for making serially‐sectioned images. Studies in Health Technology and Informatics, 85, 228–233. [PubMed] [Google Scholar]
  30. Levoy, M. (1988) Display of surfaces from volume data. IEEE Computer Graphics and Applications, 8, 29–37. [Google Scholar]
  31. Li, L. , Yu, F. , Shi, D. , Shi, J. , Tian, Z. , Yang, J. et al. (2017) Application of virtual reality technology in clinical medicine. American Journal of Translational Research, 15, 3867–3880. [PMC free article] [PubMed] [Google Scholar]
  32. Liu, K. , Fang, B. , Wu, Y. , Li, Y. , Jin, J. , Tan, L. et al. (2013) Anatomical education and surgical simulation based on the Chinese Visible Human: a three‐dimensional virtual model of the larynx region. Anatomical Science International, 88(4), 254–258. [DOI] [PubMed] [Google Scholar]
  33. Lorensen, W.E. & Cline, H.E. (1987) Marching cubes: a high resolution 3D surface construction algorithm. ACM SIGGRAPH Computer Graphics, 21, 163–169. [Google Scholar]
  34. Lyu, S.R. , Lin, Y.K. , Huang, S.T. & Yau, H.T. (2013) Experience‐based virtual training system for knee arthroscopic inspection. Biomedical Engineering Online, 12, 63. [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Mavar‐Haramija, M. , Prats‐Galino, A. , Juanes Méndez, J.A. , Puigdelívoll‐Sánchez, A. & de Notaris, M. (2015) Interactive 3D‐PDF presentations for the simulation and quantification of extended endoscopic endonasal surgical approaches. Journal of Medical Systems, 39, 127. [DOI] [PubMed] [Google Scholar]
  36. Michel, M.S. , Knoll, T. , Köhrmann, K.U. & Alken, P. (2002) The URO Mentor: development and evaluation of a new computer‐based interactive training system for virtual life‐like simulation of diagnostic and therapeutic endourological procedures. BJU International, 89, 174–177. [DOI] [PubMed] [Google Scholar]
  37. Moglia, A. , Ferrari, V. , Morelli, L. , Ferrari, M. , Mosca, F. & Cuschieri, A. (2016) A systematic review of virtual reality simulators for robot‐assisted surgery. European Urology, 69, 1065–1080. [DOI] [PubMed] [Google Scholar]
  38. Murgitroyd, E. , Madurska, M. , Gonzalez, J. & Watson, A. (2015) 3D digital anatomy modelling—practical or pretty? The Surgeon, 13, 177–180. [DOI] [PubMed] [Google Scholar]
  39. Ney, D.R. , Drebin, R.A. , Fishman, E.K. & Magid, D. (1990) Volumetric rendering of computed tomographic data: principles and techniques. IEEE Computer Graphics and Applications, 10, 24–32. [Google Scholar]
  40. Nielsen, C.A.W. , Lönn, L. , Konge, L. & Taudorf, M. (2021) Simulation‐based virtual‐reality patient‐specific rehearsal prior to endovascular procedures: a systematic review. Diagnostics (Basel), 10, 500. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Palmer, S.E. (1978) Fundamental aspects of cognitive representation. In: Roach, E. & Lloyd, B.B. (Eds.) Cognition and categorization. Hillsdale, NJ: Lawrence Erlbaum Associates, pp. 259–303. [Google Scholar]
  42. Park, J.S. , Chung, M.S. , Hwang, S.B. , Lee, Y.S. , Har, D.H. & Park, H.S. (2005) Visible Korean human: improved serially sectioned images of the entire body. IEEE Transactions on Medical Imaging, 24, 352–360. [DOI] [PubMed] [Google Scholar]
  43. Petriceks, A.H. , Peterson, A.S. , Angeles, M. , Brown, W.P. & Srivastava, S. (2018) Photogrammetry of human specimens: An innovation in anatomy education. Journal of Medical Education and Curricular Development, 5, 1–10. [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Prats‐Galino, A. , Mavar, M. , Reina, M.A. , Puigdellívol‐Sánchez, A. , San‐Molina, J. & De Andrés, J.A. (2014) Three‐dimensional interactive model of lumbar spinal structures. Anaesthesia, 69, 521–521. [DOI] [PubMed] [Google Scholar]
  45. Preim, B. & Saalfeld, P. (2018) A survey of virtual human anatomy education systems. Computers and Graphics, 71, 132–153. [Google Scholar]
  46. Ratiu, P. , Hillen, B. , Glase, J. & Jenkins, D.P. (2003) Visible human 2.0—the next generation. Studies in Health Technology and Informatics, 94, 275–281. [PubMed] [Google Scholar]
  47. Robb, R.A. & Hanson, D.P. (2006) Biomedical image visualization research using the visible human datasets. Clinical Anatomy, 19, 240–253. [DOI] [PubMed] [Google Scholar]
  48. Robison, R.A. , Liu, C.Y. & Apuzzo, M.L.J. (2011) Man, mind, and machine: the past and future of virtual reality simulation in neurologic surgery. World Neurosurgery, 76, 419–430. [DOI] [PubMed] [Google Scholar]
  49. Roy, D. , Steyer, G.J. , Gargesha, M. , Stone, M.E. & Wilson, D.L. (2009) 3D Cryo‐imaging: a very high‐resolution view of the whole mouse. The Anatomical Record (Hoboken), 292, 342–351. [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Schenk, T. (2000) Object recognition in digital photogrammetry. The Photogrammetric Record, 16, 743–762. [Google Scholar]
  51. Schroeder, W. , Martin, K. & Lorensen, B. (1998) The visualization toolkit An object‐oriented approach to 3D graphics. River, NJ: Prentice‐Hall, Inc. [Google Scholar]
  52. Shin, D.S. & Park, S.K. (2016) Surface reconstruction and optimization of cerebral cortex for application use. The Journal of Craniofacial Surgery, 27, 489–492. [DOI] [PubMed] [Google Scholar]
  53. Sieber, D.M. , Andersen, S.A.W. , Sørensen, M.S. & Mikkelsen, P.T. (2021) OpenEar image data enables case variation in high fidelity virtual reality ear surgery. Otology & Neurotology, 42, 1245–1252. [DOI] [PubMed] [Google Scholar]
  54. Soler, L. , Nicolau, S. , Pessaux, P. , Mutter, D. & Marescaux, J. (2014) Real‐time 3D image reconstruction guidance in liver resection surgery. Hepatobiliary Surgery and Nutrition, 3, 73–81. [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Song, Y.K. & Jo, D.H. (2021) Current and potential use of fresh frozen cadaver in surgical training and anatomical education. Anatomical Sciences Education, 10.1002/ase.2138. Online ahead of print. [DOI] [PubMed] [Google Scholar]
  56. Sørensen, M.S. , Dobrzeniecki, A.B. , Larsen, P. , Frisch, T. , Sporring, J. & Darvann, T.A. (2002) The visible ear: a digital image library of the temporal bone. ORL, 64, 378–381. [DOI] [PubMed] [Google Scholar]
  57. Sorkine, O. , Lipman, Y. , Cohen‐Or, D. , Alexa, M. , Rössl, C. & Seidel, H.‐P. (2004) Laplacian surface editing. In: Scopigno, R. , Zorin, D. , Fellner, D.W. & Spencer, S.N. (Eds.) SGP ’04: Proceedings of the 2004 Eurographics/ACM SIGGRAPH symposium on geometry processing. New York: Association for Computing Machinery, pp. 179–188. [Google Scholar]
  58. Spitzer, V. , Ackerman, M.J. , Scherzinger, A.L. & Whitlock, D. (1996) The visible human male: a technical report. Journal of the American Medical Informatics Association, 3, 118–130. [DOI] [PMC free article] [PubMed] [Google Scholar]
  59. Stepan, K. , Zeiger, J. , Hanchuk, S. , Del Signore, A. , Shrivastava, R. , Govindaraj, S. et al. (2017) Immersive virtual reality as a teaching tool for neuroanatomy. International Forum of Allergy & Rhinology, 7, 1006–1013. [DOI] [PubMed] [Google Scholar]
  60. Tang, J. , Xu, L. , He, L. , Guan, S. , Ming, X. & Liu, Q. (2017) Virtual laparoscopic training system based on VCH model. Journal of Medical Systems, 41, 58. [DOI] [PubMed] [Google Scholar]
  61. Triepels, C.P.R. , Smeets, C.F.A. , Notten, K.J.B. , Kruitwagen, R.F.P.M. , Futterer, J.J. , Vergeldt, T.F.M. et al. (2020) Does three‐dimensional anatomy improve student understanding? Clinical Anatomy, 33, 25–33. [DOI] [PMC free article] [PubMed] [Google Scholar]
  62. Udupa, J.K. , Hung, H.M. & Chuang, K.S. (1991) Surface and volume rendering in three‐dimensional imaging: a comparison. Journal of Digital Imaging, 4, 159–168. [DOI] [PubMed] [Google Scholar]
  63. Venuti, J.M. , Imelińska, C. & Molholt, P. (2004) New views of male pelvic anatomy: role of computer‐generated 3D images. Clinical Anatomy, 17, 261–271. [DOI] [PubMed] [Google Scholar]
  64. Wilson, D. , Roy, D. , Steyer, G. , Gargesha, M. , Stone, M. & McKinley, E. (2008) Whole mouse cryo‐imaging. Proceedings of the International Society for Optical Engineering, 6916, 69161I–69161I‐9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  65. Yammine, K. & Violato, C. (2015) A meta‐analysis of the educational effectiveness of three‐dimensional visualization technologies in teaching anatomy. Anatomical Science Education, 8, 525–538. [DOI] [PubMed] [Google Scholar]
  66. Yushkevich, P.A. , Piven, J. , Hazlett, H.C. , Smith, R.G. , Ho, S. , Gee, J.C. et al. (2006) User‐guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability. NeuroImage, 31, 1116–1128. [DOI] [PubMed] [Google Scholar]
  67. Zhang, S.X. , Heng, P.A. & Liu, Z.J. (2006) Chinese Visible Human Project. Clinical Anatomy, 19, 204–215. [DOI] [PubMed] [Google Scholar]
  68. Zhang, S.X. , Heng, P.A. , Liu, Z.J. , Tan, L.W. , Qiu, M.G. , Li, Q.Y. et al. (2003) Creation of the Chinese Visible Human data set. The Anatomical Record, 275, 190–195. [DOI] [PubMed] [Google Scholar]
  69. Zilverschoon, M. , Kotte, E.M.G. , van Esch, B. , Ten Cate, O. , Custers, E.J. & Bleys, R.L.A.W. (2019) Comparing the critical features of e‐applications for three‐dimensional anatomy education. Annals of Anatomy, 222, 28–39. [DOI] [PubMed] [Google Scholar]
  70. Zilverschoon, M. , Vincken, K.L. & Bleys, R.L.A.W. (2017) The virtual dissecting room: creating highly detailed anatomy models for educational purposes. Journal of Biomedical Informatics, 65, 58–75. [DOI] [PubMed] [Google Scholar]
