Abstract
Visualization plays a vital role in the analysis of multimodal neuroimaging data. A major challenge in neuroimaging visualization is how to integrate structural, functional, and connectivity data to form a comprehensive visual context for data exploration, quality control, and hypothesis discovery. We develop a new integrated visualization solution for brain imaging data by combining scientific and information visualization techniques within the context of the same anatomical structure. In this paper, new surface texture techniques are developed to map non-spatial attributes onto both 3D brain surfaces and a planar volume map which is generated by the proposed volume rendering technique, spherical volume rendering. Two types of non-spatial information are represented: (1) time series data from resting-state functional MRI measuring brain activation; (2) network properties derived from structural connectivity data for different groups of subjects, which may help guide the detection of differentiation features. Through visual exploration, this integrated solution can help identify brain regions with highly correlated functional activations as well as their activation patterns. Visual detection of differentiation features can also potentially discover image-based phenotypic biomarkers for brain diseases.
Keywords: Brain connectome, Magnetic resonance imaging, Diffusion tensor imaging, Functional magnetic resonance imaging, Visualization
Introduction
Human connectomics [1] is an emerging field that holds great promise for a systematic characterization of human brain connectivity and its relationship with cognition and behavior. The analysis of human brain connectome networks faces two major challenges: (1) how to reliably and accurately identify connectivity patterns related to cognition, behavior, and neurological conditions based on an unknown set of network characterizations and features; (2) how to seamlessly integrate computational methods with human knowledge, translating them into user-friendly, interactive software tools that optimally combine human expertise and machine intelligence to enable novel, contextually meaningful discoveries. Both challenges require the development of highly interactive and comprehensive visualization tools that can guide researchers through a complex sea of data and information for knowledge discovery.
Scientific visualization has traditionally played the role of visually interpreting and displaying complex scientific data, such as medical image data, to reveal structural and material details and thereby aid the understanding of scientific phenomena. Example studies include diffusion tensor imaging (DTI) fiber tract visualization [2–7], network visualization [8–11], and multimodal data visualization [12–14]. In this context, recent developments in information visualization provide new ways to visualize non-structural attributes or in-depth analysis data, such as graph/network visualization and time series data visualization. These, however, are usually separate visual representations detached from the anatomical structures, which limits their effectiveness in supporting visual exploration of multimodal brain data.
To remedy this visual inefficiency and maximize human cognitive abilities during visual exploration, this paper proposes to integrate the visual representations of connectome network attributes onto the surfaces of the anatomical structures of the human brain. Multiple visual encoding schemes, combined with various interactive visualization tools, can provide an effective and dynamic data exploration environment for neuroscientists to better identify patterns, trends, and markers. In addition, we develop a spherical volume rendering (SVR) algorithm using omni-directional ray casting and information-encoded texture mapping. It produces a single 2D map of the entire rendered volume, providing better support for global visual evaluation and feature selection for analysis purposes.
Our primary contributions in this work include:
Development of a method to represent rich attribute information using information-encoded textures.
Development of a new spherical volume rendering (SVR) technique that can generate a complete and camera-invariant view (volume map) of the entire structure.
Application of this approach to human brain visualization. Our experiments show that this approach has great potential to be useful in the analysis of neuroimaging data.
In the rest of this paper, we first discuss previous work related to this topic in Sect. 2. In Sect. 3, we will describe the data we used in this study. In Sect. 4, we will present technical details of encoded textures to visualize rich attribute information. In Sect. 5, we will present technical details and results of the SVR algorithm. Some implementation details and visualization evaluation will be provided in Sect. 6. We conclude the paper in Sect. 7 with our final remarks and future work.
Related work
Human brain connectomics involves several different imaging modalities that require different visualization techniques. More importantly, multimodal visualization techniques need to be developed to combine the multiple modalities and present both details and context for connectome-related data analysis. Margulies et al. [3] provided an excellent overview of the various available visualization tools for brain anatomical and functional connectivity data. Some of these techniques are capable of carrying out multimodal visualization involving magnetic resonance imaging (MRI), fiber tracts as obtained from DTI, and overlaid network connections. Various graphics rendering tools, along with special techniques such as edge bundling (to reduce clutter), have been applied to visualize DTI fiber tracts [2–5]. Due to tracking uncertainties in DTI fibers, these deterministic renderings can sometimes be misleading. Hence, rendering techniques for probabilistic DTI tractography have also been proposed [6, 7]. Several techniques have been developed to provide anatomical context around the DTI fiber tracts [12–14]. This typically requires semitransparent rendering with carefully defined transfer functions.
Multimodal visualization is typically applied in the scientific visualization domain; the integration of information visualization and scientific visualization remains a challenge. In brain connectomics, the connectivity data of connectome networks are usually visualized as weighted graphs. Graph visualization has been extensively studied in information visualization, and some works focus on visual comparison of different weighted graphs. For example, Alper et al. [15] used superimposed matrix representations to visually compare two connectome networks. Yang et al. [16] improved on this design with a two-step hierarchical strategy and a NodeTrix representation to achieve better results. For connectomics applications, the networks can be either visualized as separate graphs, away from the anatomical context but connected through interactive interfaces [8–11], or embedded into the brain anatomical context [17–19]. The embedded graphs, however, have their nodes constrained to their anatomical locations and therefore do not need a separate graph layout process as in other graph visualization algorithms. Aside from embedded graphs, there has been little work on integrating more sophisticated information visualization, such as time series data and multi-dimensional attributes, within the context of brain anatomical structures.
Many visualization techniques for time series data have been developed in information visualization, such as time series plot [20], spiral curves [21], and ThemeRiver [22], for non-spatial information and time-variant attributes. Several variations of ThemeRiver styled techniques have been applied in different time series visualization applications, in particular in text visualization [23]. Depicting connectivity dynamics has been mostly done via traditional key-frame-based approach [24, 25] or key frames combined with time series plots [26, 27].
Texture-based visualization techniques have been widely used for vector field data, in particular, flow visualization. Typically, a grayscale texture is smeared in the direction of the vector field by a convolution filter, for example, the line integral convolution (LIC), such that the texture reflects the properties of the vector field [28–30]. Similar techniques have also been applied to tensor fields [31, 32].
As to volume datasets, volume rendering is a classic visualization technique. Both image-space and object-space volume rendering algorithms have been thoroughly studied in the past several decades. The typical image-space algorithm is ray casting, which was first proposed by Levoy [33]. Many improvements in ray casting have since been developed [34–37]. Splatting is the most common object-space approach. It directly projects voxels to the 2D screen to create screen footprints which can be blended to form composite images [38–42]. Hybrid approaches such as shear-wrap algorithm [43] and GPU-based algorithms provide significant speedup for interactive applications [44, 45]. Although iso-surfaces are typically extracted from volume data as polygon meshes [46], ray casting methods can also be applied toward volumetric iso-surfacing [47, 48].
There are a few freely available toolkits for visualizing human brain data. MRIcron (http://people.cas.sc.edu/rorden/mricron/) is a convenient tool to view 2D slices of MRI data. TrackVis (http://trackvis.org/) can visualize DTI fiber tracts with MRI data as background in a 3D view. FSLView (https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FslView) is a submodule of the FSL library which supports 2D/3D rendering of MRI and functional MRI (fMRI) data. Braviz (http://diego0020.github.io/braviz/) is a visual analytics tool supporting the visualization of MRI, fMRI, and DTI data including fiber tracts. While these tools are excellent at visualizing individual modalities separately, they typically do not emphasize an integrated visualization of all kinds of brain data within the context of the same anatomical background.
Brain imaging data and connectome construction
We first describe the MRI and DTI data used in this study, then present our methods for constructing connectome networks from the MRI and DTI data, and finally discuss the resting-state functional MRI (fMRI) data used in our time series visualization study.
MRI and DTI data from the ADNI cohort
The MRI and DTI data used in the preparation of this article were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). The ADNI was launched in 2003 as a public–private partnership, led by Principal Investigator Michael W. Weiner, MD. The primary goal of ADNI has been to test whether serial MRI, positron emission tomography (PET), other biological markers, and clinical and neuropsychological assessment can be combined to measure the progression of mild cognitive impairment (MCI) and early Alzheimer’s disease (AD). For up-to-date information, see www.adni-info.org.
We downloaded the baseline 3T MRI (SPGR) and DTI scans together with the corresponding clinical data of 134 ADNI participants, including 30 cognitively normal older adults without complaints (CN), 31 cognitively normal older adults with significant memory concerns (SMC), 15 early MCI (EMCI), 35 late MCI (LMCI), and 23 AD participants. In our multi-class disease classification experiment, we group these subjects into three categories: healthy control (HC, including both CN and SMC participants, N = 61), MCI (including both EMCI and LMCI participants, N = 50), and AD (N = 23).
Using their MRI and DTI data, we constructed a structural connectivity network for each of the above 134 participants. Our processing pipeline is divided into three major steps described below: (1) generation of regions of interest (ROIs), (2) DTI tractography, and (3) connectivity network construction.
ROI generation. Anatomical parcellation was performed on the high-resolution T1-weighted anatomical MRI scan. The parcellation is an automated operation on each subject to obtain 68 gyral-based ROIs, with 34 cortical ROIs in each hemisphere, using the FreeSurfer software package (http://freesurfer.net/). The Lausanne parcellation scheme [48] was applied to further subdivide these ROIs into smaller ROIs, so that brain networks at different scales (e.g., 83, 129, 234, 463, or 1015 ROIs/nodes) could be constructed. The T1-weighted MRI image was registered to the low-resolution b0 image of the DTI data using the FLIRT toolbox in FSL, and the warping parameters were applied to the ROIs to create a new set of ROIs in the DTI image space. These new ROIs were used for constructing the structural network.
DTI tractography. The DTI data were analyzed using FSL. Preprocessing included correction for motion and eddy current effects in the DTI images. The processed images were then output to Diffusion Toolkit (http://trackvis.org/) for fiber tracking, using the streamline tractography algorithm FACT (fiber assignment by continuous tracking). The FACT algorithm initializes tracks from many seed points and propagates these tracks along the vector of the largest principal axis within each voxel until certain termination criteria are met. In our study, the stop angle threshold was set to 35 degrees: if the angle change between two voxels was greater than 35 degrees, the tracking process stopped. Spline filtering was then applied to smooth the tracks.
Network Construction. Nodes and edges are defined from the previous results in constructing the weighted, undirected network. The nodes are chosen to be ROIs obtained from Lausanne parcellation. The weight of the edge between each pair of nodes is defined as the density of the fibers connecting the pair, which is the number of tracks between two ROIs divided by the mean volume of two ROIs [49]. A fiber is considered to connect two ROIs if and only if its end points fall in two ROIs, respectively. The weighted network can be described by a matrix. The rows and columns correspond to the nodes, and the elements of the matrix correspond to the weights.
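The construction above can be summarized in a short sketch. The following Python code is an illustrative reimplementation, not the original pipeline code: it builds the weighted adjacency matrix from per-fiber endpoint ROI pairs and ROI volumes, with the weight defined as fiber count divided by the mean volume of the two ROIs.

```python
import numpy as np

def build_connectivity_matrix(fiber_endpoints, roi_volumes):
    """Build the weighted, undirected connectivity matrix.

    fiber_endpoints: list of (roi_i, roi_j) pairs, one per tracked fiber
                     whose two end points fall in ROIs i and j.
    roi_volumes:     NumPy array of ROI volumes (same node ordering).
    """
    n = len(roi_volumes)
    counts = np.zeros((n, n))
    for i, j in fiber_endpoints:
        if i != j:                      # skip fibers starting and ending in one ROI
            counts[i, j] += 1
            counts[j, i] += 1           # undirected network
    # Edge weight = fiber count normalized by the mean volume of the two ROIs.
    mean_vol = (roi_volumes[:, None] + roi_volumes[None, :]) / 2.0
    return counts / mean_vol
```

The rows and columns of the returned matrix correspond to the nodes, and its entries are the edge weights, exactly as described in the text.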
To demonstrate our visualization scheme for integrative exploration of the time series of resting-state fMRI (rs-fMRI) data with brain anatomy, we employed an additional local (non-ADNI) subject, who was scanned in a Siemens PRISMA 3T scanner (Erlangen, Germany). A T1-weighted sagittal MP-RAGE was obtained (TE = 2.98 ms, TR partition = 2300 ms, TI = 900 ms, flip angle = 9°, 128 slices with 1 × 1 × 1 mm voxels). A resting-state session of 10 min was also obtained. The subject was asked to stay still and awake and to keep eyes closed. BOLD acquisition parameters were: TE = 29 ms, TR = 1.25 s, flip angle = 79°, 41 contiguous interleaved 2.5 mm axial slices, with in-plane resolution = 2.5 × 2.5 mm. The acquired BOLD time series were then processed according to the following steps (for details see [50]): mode 1000 normalization; z-scoring and detrending; regression of 18 detrended nuisance variables (6 motion regressors [X Y Z pitch yaw roll], average gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) signals, and all their derivatives computed as backward differences); band-pass filtering from 0.009 to 0.08 Hz using a zero-phase second-order Butterworth filter; spatial blurring using a Gaussian filter (FWHM = 2 mm); regression of the first 3 principal components of WM (mask eroded 3 times) and CSF (ventricles only, mask eroded 1 time). The Desikan–Killiany atlas (68 cortical ROIs, as available in the FreeSurfer software) was registered to the subject, and the processed BOLD time series were then averaged for each ROI. Note that the Lausanne parcellation scheme (mentioned above) at the 83-ROI scale consists of the above 68 cortical ROIs together with the brain stem (as 1 ROI) and 14 subcortical ROIs. As a result, we will use 68 time series (one for each cortical ROI) in our time series visualization experiments.
Information visualization: methods and results
In this section, we propose a few information visualization methods. We have implemented and packaged these methods into a software tool named BECA (Brain Explorer for Connectomic Analysis). A prototype is available at http://www.iu.edu/~beca/.
Visualizing structural connectivity networks
3D visualization of a connectivity network within an anatomical structure can provide valuable insight into and better understanding of brain networks and their functions. In a brain network, we render nodes as ROI surfaces, which are generated using an iso-surface extraction algorithm from the MRI voxel sets of the ROIs. Drawing the network edges is, however, more challenging, since straight edges would be buried inside the brain structures. We therefore use cubic Bézier curves to draw curved edges above the brain surface. The four control points of each edge are defined by the centers of the two ROI surfaces and their extension points away from the centroid of the brain, as shown in Fig. 1. Figure 2 shows visualization examples of a connectome network, along with the cortical surface, the ROIs, and the DTI fibers.
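The edge construction can be sketched as follows. The `lift` fraction, which controls how far the two inner control points are pushed outward from the brain centroid, is an illustrative parameter rather than a value specified in the paper.

```python
import numpy as np

def bezier_edge(p0, p3, brain_centroid, lift=0.4, samples=32):
    """Sample a cubic Bezier curve connecting two ROI centers.

    The two inner control points are obtained by pushing each ROI center
    outward along the direction from the brain centroid, so the edge arcs
    above the cortical surface instead of cutting through it.
    """
    p0, p3, c = map(np.asarray, (p0, p3, brain_centroid))
    p1 = p0 + lift * (p0 - c)           # extension point of the first ROI
    p2 = p3 + lift * (p3 - c)           # extension point of the second ROI
    t = np.linspace(0.0, 1.0, samples)[:, None]
    # Standard cubic Bezier evaluation over the four control points.
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)
```

The sampled points can then be rendered as a polyline above the brain surface.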
Brain connectivity networks obtained through the above pipeline can be further used in complex network analysis. Network measures (e.g., node degree, betweenness, closeness) can be calculated from individuals or averaged over a population, and different measures may characterize different aspects of brain connectivity [51]. In order to visualize these network attributes, we propose a surface texture-based approach. The main idea is to take advantage of the available surface area of each ROI: encode the attribute information in a texture image, and then texture-map this image onto the ROI surface. Since the surface shape of each ROI (as a triangle mesh) is highly irregular, it is difficult to assign texture coordinates for mapping the texture images directly. We apply a simple projection plane technique. The projection plane of an ROI is defined as the plane whose normal vector connects the center of the ROI surface and the centroid of the entire brain. The ROI surface can then be projected onto its projection plane, and the reverse projection defines the texture mapping process. Thus, we can define our attribute-encoded texture image on this projection plane to depict a visual pattern on the ROI surface. Visually encoding attribute information in a texture image is an effective way to represent multiple attributes or time series attributes. Below we demonstrate this idea in two different scenarios: time series data from rs-fMRI and multi-class disease classification.
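A minimal sketch of the projection plane technique is given below. The choice of in-plane basis vectors is an assumption for illustration (the paper does not specify how the plane axes are oriented); any fixed orthonormal pair in the plane yields valid texture coordinates after normalization.

```python
import numpy as np

def project_to_plane(vertices, roi_center, brain_centroid):
    """Project ROI surface vertices onto the ROI's projection plane.

    The plane normal is the direction from the brain centroid to the ROI
    center. The returned 2D coordinates can be normalized into [0,1]^2
    and used as texture coordinates, so the reverse projection is simply
    the texture lookup on the ROI surface.
    """
    v = np.asarray(vertices, float)
    c = np.asarray(roi_center, float)
    n = c - np.asarray(brain_centroid, float)
    n /= np.linalg.norm(n)
    # Build an orthonormal basis (u, w) spanning the projection plane.
    helper = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(n, helper)
    u /= np.linalg.norm(u)
    w = np.cross(n, u)
    rel = v - c
    return np.stack([rel @ u, rel @ w], axis=1)   # planar (s, t) per vertex
```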
Visualizing fMRI data and functional connectivity
As a functional imaging method, rs-fMRI can measure interactions between ROIs when a subject is resting [52]. Resting brain activity is observed through changes in blood flow in the brain which can be measured using fMRI. The resting-state approach is useful to explore the brain’s functional organization and to examine whether it is altered in neurological or psychiatric diseases. Brain activation levels in each ROI represent a time series that can be analyzed to compute correlations between different ROIs. This correlation-based network represents the functional connectivity networks, and analogously to structural connectivity, it may be represented as a square symmetric matrix.
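As a sketch, the functional connectivity matrix described above is simply the pairwise Pearson correlation of the ROI-averaged time series:

```python
import numpy as np

def functional_connectivity(roi_timeseries):
    """Pearson correlation between every pair of ROI time series.

    roi_timeseries: array of shape (n_rois, n_timepoints), e.g. the 68
    cortical ROI averages described in the text. Returns a square
    symmetric matrix, analogous to the structural connectivity matrix.
    """
    return np.corrcoef(roi_timeseries)
```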
Using the surface texture mapping approach, we need to first encode this time series data on a 2D texture image. We propose an offset contour method to generate patterns of contours based on the boundary of each projected ROI. The offset contours are generated by offsetting the boundary curve toward the interior of the region, creating multiple offset boundary curves, as shown in Fig. 3. There are several offset curve algorithms available in curve/surface modeling. Since in our application, the offset curves do not need to be very accurate, we opt to use a simple image erosion algorithm [53] directly on the 2D image of the map to generate the offset contours.
In time series data visualization, the time dimension can be divided into multiple time intervals and represented by the offset contours. Varying shades of a color hue can be used to represent the attribute changes over time. Figure 4 shows the steps for constructing the contour-based texture. First, we map each ROI onto a projection plane perpendicular to the line connecting the centroid of the brain and the center of this ROI. The algorithm then iteratively erodes the mapped shape and assigns colors according to the activity level of this ROI at each time point. Lastly we overlay the eroded regions to generate a contour-based texture. We also apply a Gaussian filter to smooth the eroded texture image to generate more gradual changes in the activities over time. Figure 5 shows a few examples of the offset contours mapped to the ROIs. The original data have 632 time points, which will be divided evenly across the contours depending on the number of contours that can be fitted into the available pixels within the projected ROI.
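The steps above can be sketched in a few lines; SciPy's `binary_erosion` stands in for the erosion algorithm of [53], and the base hue, shading scheme, and blur radius are illustrative choices rather than the paper's exact settings.

```python
import numpy as np
from scipy import ndimage

def contour_texture(mask, activity, base_hue=np.array([1.0, 0.0, 0.0])):
    """Encode a time series as offset contours inside a projected ROI mask.

    mask:     2D boolean image of the ROI on its projection plane.
    activity: 1D time series; each value colors one offset contour,
              from the ROI boundary (earliest) inward (latest).
    Shades of `base_hue` encode the activity level; a final Gaussian
    blur (as in the text) smooths the transitions between contours.
    """
    tex = np.zeros(mask.shape + (3,))
    region = mask.copy()
    lo, hi = activity.min(), activity.max() + 1e-12
    for value in activity:
        eroded = ndimage.binary_erosion(region)
        ring = region & ~eroded                 # one offset contour band
        shade = (value - lo) / (hi - lo)        # normalize activity to [0, 1]
        tex[ring] = shade * base_hue
        region = eroded
        if not region.any():                    # ROI filled before series ends
            break
    # Smooth only spatially (not across color channels).
    return ndimage.gaussian_filter(tex, sigma=(1, 1, 0))
```

In practice the time series would first be binned so that the number of values matches the number of contours that fit inside the projected ROI.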
Visualizing discriminative patterns among multiple classes
In this case study, we performed the experiment on the ADNI cohort mentioned before, including 61 HC, 50 MCI, and 23 AD participants. The goal is to generate visualizations that provide cognitively intuitive evidence for identifying discriminative ROIs that can separate subjects in different classes. This can be the first step of a diagnostic biomarker discovery process.
The goal of the visual encoding in this case is to generate a color pattern that can easily distinguish bias toward any of the three classes. To do so, we first assign a distinct color to each class. Various color patterns can be generated using different color blending and distribution methods. In our experiment, a noise pattern is applied with three colors representing the three classes. The same noise pattern approach can also accommodate more colors.
Since color blending is involved in a noise pattern, we choose to use an RYB color model instead of the RGB model. Color mixing in the RYB model is more intuitive, in that the mixed colors still carry the proper amounts of the hues of the original color components. For example, red and yellow mix to form orange, and blue and red mix to form purple. Thus, the RYB model can create color mixtures that more closely match a viewer's expectations. Of course, these RYB colors still need to be converted into RGB values for display. For the conversion between the two color models, we adopt the approach proposed in [54, 55], in which a color cube models the relationship between RYB and RGB values: for each RYB color, its approximate RGB value is computed by trilinear interpolation in the RYB color cube.
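A sketch of the conversion by trilinear interpolation follows. The cube corner colors below are an assumed assignment in the spirit of [54, 55], chosen for illustration, not necessarily the exact published values.

```python
import numpy as np

# Corner colors of the RYB cube expressed in RGB, indexed by (r, y, b) in {0,1}^3.
# These corner values are illustrative assumptions, not the exact ones of [54, 55].
_CUBE = np.array([
    [[[1.0, 1.0, 1.0], [0.163, 0.373, 0.6]],    # (0,0,0) white,  (0,0,1) blue
     [[1.0, 1.0, 0.0], [0.0, 0.66, 0.2]]],      # (0,1,0) yellow, (0,1,1) green
    [[[1.0, 0.0, 0.0], [0.5, 0.0, 0.5]],        # (1,0,0) red,    (1,0,1) purple
     [[1.0, 0.5, 0.0], [0.2, 0.094, 0.0]]],     # (1,1,0) orange, (1,1,1) black
])

def ryb_to_rgb(r, y, b):
    """Approximate an RYB color in RGB by trilinear interpolation in the cube."""
    out = np.zeros(3)
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                # Trilinear weight of this cube corner for the point (r, y, b).
                w = (r if i else 1 - r) * (y if j else 1 - y) * (b if k else 1 - b)
                out += w * _CUBE[i, j, k]
    return out
```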
We first construct noise patterns to create a random variation in color intensity, similar to the approach in [54]. Different color hues are used to represent the attributes in different classes of subjects. Any network measurement can be used for color mapping. In our experiment, we use the node degrees averaged across subjects in each class. A turbulence function [56] is used to generate the noise patterns of different frequencies (sizes of the subregions of the noise pattern). An example is shown in Fig. 6; we blend RYB channels with weights 0.5, 0.25, and 0.25, respectively. The blended texture is red-dominated with a little yellow and blue color.
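The noise texture generation can be sketched as follows. The value-noise `turbulence` below is a simple stand-in for the turbulence function of [56], and partitioning the noise range among the class colors in proportion to their weights is one plausible blending scheme rather than the paper's exact method.

```python
import numpy as np
from scipy import ndimage

def _upsample(coarse, shape):
    """Bilinearly upsample a coarse random grid to the texture size."""
    ys = np.linspace(0, coarse.shape[0] - 1, shape[0])
    xs = np.linspace(0, coarse.shape[1] - 1, shape[1])
    coords = np.array(np.meshgrid(ys, xs, indexing="ij"))
    return ndimage.map_coordinates(coarse, coords, order=1)

def turbulence(shape, octaves=4, seed=0):
    """Sum random noise layers of increasing frequency, normalized to [0, 1]."""
    rng = np.random.default_rng(seed)
    out = np.zeros(shape)
    for o in range(octaves):
        side = 2 ** o + 1                       # grid resolution of this octave
        out += _upsample(rng.random((side, side)), shape) / (2 ** o)
    return out / out.max()

def class_noise_texture(shape, class_rgbs, weights, seed=0):
    """Assign each class color to noise subregions in proportion to its weight,
    e.g. weights derived from class-averaged node degrees."""
    t = turbulence(shape, seed=seed)
    w = np.asarray(weights, float)
    bounds = np.cumsum(w / w.sum())             # partition [0, 1] by class weight
    idx = np.minimum(np.searchsorted(bounds, t), len(class_rgbs) - 1)
    return np.asarray(class_rgbs, float)[idx]
```

With weights 0.5, 0.25, and 0.25 and red, yellow, and blue class colors, the result is a red-dominated texture with smaller yellow and blue subregions, matching the description of Fig. 6.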
Figure 7 shows some examples of the texture mapped views of the three classes: HC (red), MCI (yellow), and AD (blue). The colors of the edges also represent the blended RYB color values, based on the average edge weights in the three classes. From the resulting images, we can identify a specific ROI that exhibits bias toward one or two base colors. This can be a potential indication that this ROI may be a good candidate for further analysis as a potential imaging phenotypic biomarker.
Spherical volume rendering (SVR)
In the previous sections, we mapped attributes onto the ROI surfaces. However, each rendering shows only one perspective, and subcortical structures remain unseen, so it does not provide an overall view of the complete structure. In this section, we develop a spherical volume rendering algorithm that produces a single 2D map of the entire brain volume, providing better support for global visual evaluation and feature selection for analysis purposes.
Traditional volume rendering projects voxels to a 2D screen defined in a specific viewing direction. Each new viewing direction will require a new rendering. Therefore, users need to continuously rotate and transform the volumetric object to generate different views, but never have the complete view in one image. Spherical volume rendering employs a spherical camera with a spherical screen. Thus, the projection process only happens once, providing a complete image from all angles.
Spherical ray casting
A spherical ray casting approach is taken to produce a rendering image on a spherical surface. A map projection will then be applied to construct a planar image (volume map). The algorithm includes three main steps:
Define a globe as a sphere containing the volume. The center and radius of the sphere may be predefined or adjusted interactively.
Apply spherical ray casting to produce an image on the globe’s spherical surface (ray casting algorithm).
Apply a map projection to unwrap the spherical surface onto a planar image (similar to the world map).
Rays are cast toward the center of the globe from each latitude–longitude grid point on the sphere's surface. In brain applications, the center of the globe needs to be carefully defined so that the resulting image preserves proper symmetry, as shown in Fig. 8.
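The three steps can be sketched in a minimal CPU, grayscale form. Nearest-neighbor sampling and the reuse of voxel intensity as both color and opacity are simplifications of the real transfer function and GPU implementation.

```python
import numpy as np

def spherical_volume_map(volume, center, radius, n_lat=90, n_lon=180, step=0.5):
    """Minimal spherical ray casting sketch (front-to-back compositing).

    A ray is cast from every latitude-longitude grid point on the globe
    toward its center. Returns an (n_lat, n_lon) grayscale spherical image.
    """
    lats = np.linspace(-np.pi / 2, np.pi / 2, n_lat)
    lons = np.linspace(-np.pi, np.pi, n_lon)
    img = np.zeros((n_lat, n_lon))
    for i, lat in enumerate(lats):
        for j, lon in enumerate(lons):
            d = np.array([np.cos(lat) * np.cos(lon),
                          np.cos(lat) * np.sin(lon),
                          np.sin(lat)])
            pos = center + radius * d                   # start on the sphere surface
            color, alpha = 0.0, 0.0
            for t in np.arange(0.0, radius + step, step):   # march toward the center
                p = np.round(pos - t * d).astype(int)       # nearest-neighbor sample
                if np.all(p >= 0) and np.all(p < volume.shape):
                    s = volume[tuple(p)]
                    a = s * (1 - alpha)                     # front-to-back blending
                    color += a * s
                    alpha += a
                    if alpha > 0.99:                        # early ray termination
                        break
            img[i, j] = color
    return img
```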
Along each ray, the sampling, shading, and blending process is very similar to the regular ray casting algorithm [33, 36]. The image produced by this ray casting process on the spherical surface will be mapped to a planar image using a map projection transformation, which projects each latitude–longitude grid point on the spherical surface into a location on a planar image. There are many types of map projections, each preserving some properties while tolerating some distortions. For our application, we choose to use Hammer–Aitoff Projection, which preserves areas but not angles. Details of this map projection can be found in [57].
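For reference, the Hammer–Aitoff forward projection used to flatten the spherical image can be written directly from its standard formula:

```python
import numpy as np

def hammer_projection(lat, lon):
    """Hammer-Aitoff projection: latitude/longitude (radians) to planar (x, y).

    Equal-area but not conformal, so relative ROI areas on the spherical
    image are preserved in the planar volume map.
    """
    denom = np.sqrt(1.0 + np.cos(lat) * np.cos(lon / 2.0))
    x = 2.0 * np.sqrt(2.0) * np.cos(lat) * np.sin(lon / 2.0) / denom
    y = np.sqrt(2.0) * np.sin(lat) / denom
    return x, y
```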
Layered rendering
Volume rendering often cannot clearly show the deep interior structures. One remedy is to use layered rendering. When objects within the volume are labeled (e.g., segmented brain regions), we can first sort the objects in the spherical viewing direction (i.e., along the radius of the sphere) and then render one layer at a time.
The spherical viewing order can usually be established by the ray casting process itself as the rays travel through the first layer of objects first, and then the second layer, etc. If we record the orders in which rays travel through these objects, we can construct a directed graph based on their occlusion relationships, as shown in Fig. 9. Applying a topological sorting on the nodes of this graph will lead to the correct viewing order.
Since the shapes of these labeled objects may not be regular or even convex, the occlusion orders recorded by the rays may contradict each other (e.g., cyclic occlusions). Our solution is to define the weight of each directed edge as the number of rays that recorded this occlusion relationship. During the topological sorting, the node with the minimum combined incoming edge weight is picked each time. This way, incorrect occlusion relationships are kept to a minimum.
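The weighted topological sort can be sketched as follows, assuming each ray's traversal has been recorded as a sequence of object labels:

```python
import numpy as np

def occlusion_order(n_objects, ray_records):
    """Order labeled objects from the outermost layer inward.

    ray_records: list of object-index sequences, each giving the order in
    which one ray passed through objects. Edge weights count how many rays
    observed each pairwise occlusion; at every step the node with minimum
    total incoming weight (among unplaced nodes) is emitted, so
    contradictory (cyclic) records are violated as little as possible.
    """
    w = np.zeros((n_objects, n_objects))        # w[a, b]: rays seeing a before b
    for rec in ray_records:
        for a, b in zip(rec, rec[1:]):
            w[a, b] += 1
    remaining = set(range(n_objects))
    order = []
    while remaining:
        # Consider incoming weight from still-unplaced nodes only.
        nxt = min(remaining, key=lambda b: sum(w[a, b] for a in remaining))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```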
Brain map by SVR
Using the spherical volume rendering algorithm, we can generate a 2D brain map that contains all the ROIs in one image. This allows users to clearly view the relationships between different ROIs and the global distributions of network attributes and measurements for feature selection and comparison.
Figure 10a shows a brain map generated by SVR without any ROI labeling. Figure 10b shows the same brain map with color coded ROI labels.
Layered rendering was also applied to the brain ROIs. With opacity set to 1, Fig. 10 shows the first layer of ROIs, and Fig. 11 shows all the layers. Different scaling factors are applied to the layers to adjust their relative sizes. This is necessary because spherical ray casting enlarges internal ROIs, much as perspective projection makes closer objects larger, except that here the order is reversed.
In the following two subsections, we demonstrate two approaches to overlay additional information on top of the brain map: (1) encoding attribute information onto a texture image and then mapping the texture to the ROI surface; (2) drawing network edges directly over the brain map. Below, we apply the first approach to an application of visualizing discriminative patterns among multiple classes. In addition, we combine both approaches to visualize fMRI data and the corresponding functional connectivity network.
Visualizing fMRI data and discriminative pattern
Figure 12 shows the fMRI-textured brain map for the first two layers, and Fig. 13 shows the multi-class discriminative texture. Figure 14 shows the network edges across multiple layers for both the time series and multi-class textures.
User interface and interaction
Compared with traditional volume rendering in the native 3D space, this approach views the brain from its center. On the one hand, this reduces the volume depth each ray sees through; on the other hand, it renders ROIs in a polar fashion and arranges them more effectively over a larger space. With more space available, it is easier to map attributes onto the ROIs and to plot the brain networks among them. Compared with the traditional 2D image slice view, this approach can render the entire brain using far fewer layers. The user interface (Fig. 15) is flexible enough for users to adjust camera locations and viewing directions. Users can conveniently place the camera at an ideal location to obtain an optimized view. They can also easily navigate both inside and outside the brain volume to focus on the structures of their interest or view the brain from a unique angle (Fig. 16).
Implementation, performance, and evaluation
An overview of the architecture of the prototype software BECA is illustrated in Fig. 17. The user interface of BECA is built with the Qt library [58]. The fiber tracts are rendered as polylines by the VTK library [59]. The surfaces of brain structures are extracted from MRI scans by the vtkMarchingCubes filter and then rendered as vtkPolyData in VTK. fMRI textures are then generated and mapped onto the mesh as vtkTexture. The SVR algorithm is implemented on the GPU with OpenCL [60] on an NVIDIA GeForce GTX 970 graphics card with 4 GB memory. We pass the volume data to the kernel function as image3d_t objects in OpenCL in order to make use of hardware-accelerated trilinear interpolation when sampling along each ray. The normal of each voxel, which is required by the Blinn–Phong shading model, is pre-calculated on the CPU when the MRI volume is loaded. The normals are also stored as a color image3d_t object in OpenCL, which saves considerable interpolation time. We make each ray one OpenCL work-item in order to render all pixels in parallel; the global work-item size is the size of the viewport. The performance depends on the output image size, as shown in Table 1. With an 800 × 600 viewport, the performance is around 29.41 frames per second.
Table 1.
| Output resolution | Avg. fps |
|---|---|
| 640 × 480 | 45.45 |
| 800 × 600 | 29.41 |
| 1024 × 768 | 11.76 |
| 1600 × 1200 | 7.04 |
We have developed tools using the Qt framework and VTK to allow users to interact with the 2D map. Users can drag the spherical camera around in the 3D view, and the 2D map will update in real time. A screenshot of the user interface is shown in Fig. 15. The upper half is the brain in a 3D perspective view, while the lower half is the 2D brain map generated by the SVR algorithm. When the user moves the position of the spherical camera (the intersection of the white lines in Fig. 15) in the 3D view, the 2D map changes accordingly. The software enables users to navigate in the 3D brain and builds a visual correspondence between the 3D and 2D representations. We also provide users with a switch to reverse the direction of the rays. As shown in Fig. 16a, rays travel outward and we see the exterior of the brain; when we reverse the ray direction, as in Fig. 16b, we see the interior structures of the brain.
We demonstrated our prototype system and the resulting visualizations to domain experts at the IU Center for Neuroimaging. Their evaluation comments are summarized below.
Evaluation on the visualization of the discriminative pattern
The discriminative pattern shown in Fig. 13 shows promise for guiding further detailed analyses to identify disease-relevant network biomarkers. For example, in a recent Nature Reviews Neuroscience paper [35], C. Stam reviewed modern network science findings in neurological disorders including Alzheimer’s disease. The most consistent pattern the author identified is the disruption of hub nodes in the temporal, parietal, and frontal regions. In Fig. 13, red regions in the superior and inferior temporal gyri indicate that these regions have higher connectivity in HC than in MCI and AD, in accordance with the findings reported in [35]. In addition, in Fig. 13, the left rostral middle frontal gyrus shows higher connectivity in HC (i.e., red color), while the right rostral middle frontal gyrus shows higher connectivity in AD (i.e., blue color). This also matches the pattern shown in figure 3 of [35], where hubs at the left middle frontal gyrus (MFG) were reported in controls and hubs at the right MFG were reported in AD patients. These encouraging observations demonstrate that our visual discriminative patterns have the potential to guide subsequent analyses.
Evaluation on the visualization of fMRI data and functional network
It is helpful to see all the fMRI signals on the entire brain in a single 2D image (Fig. 14). Drawing a functional network directly on the flattened spherical volume rendering image (Fig. 14) offers an alternative and effective strategy for presenting brain networks. Compared with the traditional approach of rendering directly in the 3D brain space, this new approach provides more spatial room for an attractive network visualization on a background of interpretable brain anatomy, while still maintaining an intuitive, anatomically meaningful spatial arrangement. The network plot on a multi-layer visualization (Fig. 14) renders the brain connectivity data more clearly and effectively.
Evaluation on the user interface and interaction
Compared with traditional volume rendering in the native 3D space, this approach views the brain from its center. On the one hand, this reduces the volume depth each ray must traverse; on the other hand, it renders ROIs in a polar fashion and arranges them more effectively in a larger space. With more space available, it is easier to map attributes onto the ROIs and to plot the brain networks among them. Compared with the traditional 2D slice view, this approach can render the entire brain with far fewer layers (4 in our case) than the number of image slices (e.g., 256 slices in a conformed 1 mm³ isotropic brain volume). The user interface (Fig. 15) is flexible enough for users to adjust camera locations and viewing directions. Users can conveniently place the camera at an ideal location to obtain an optimized view, and can easily navigate both inside and outside the brain volume to focus on structures of interest or to view the brain from a unique angle.
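The paper does not detail how its four layers are extracted; one plausible scheme, sketched below in Python under that assumption, is to record the first few isosurface crossings along each ray, so each crossing populates one layer image and a handful of layers covers the whole brain instead of hundreds of slices.

```python
def ray_layers(samples, iso, max_layers=4):
    """Collect the sample indices of the first few isosurface crossings
    along one ray.  Each crossing goes into a separate layer, so the
    entire brain can be shown with a handful of layers (4 in the paper)
    rather than the full stack of 2D image slices."""
    layers = []
    inside = samples[0] >= iso
    for i in range(1, len(samples)):
        now_inside = samples[i] >= iso
        if now_inside != inside:          # the ray crossed the isosurface
            layers.append(i)
            if len(layers) == max_layers:
                break
        inside = now_inside
    return layers
```

Running this per ray yields, for every 2D map pixel, up to `max_layers` depths at which anatomy is encountered, which is the sense in which four layers can stand in for 256 slices.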
Conclusions
We have presented an integrated visualization solution for human brain connectome data involving multiple imaging modalities: MRI, DTI, and fMRI. Our focus is on integrating analytical properties of the connectome networks into the anatomical brain structures. We apply a surface texture-based approach to encode network properties and attributes onto the surfaces of the brain structures, establishing visual connections and context. Surface texture is an effective way to integrate information visualization and scientific visualization, since scientific data typically have spatial structures containing surface areas that can be exploited for visual encoding.
In the future, we would like to continue developing the integrated visualization tool for public domain distribution. Currently, a prototype BECA software tool is available at http://www.iu.edu/~beca/, and we will continue improving it. We would also like to study interesting visual analytic topics to compare multiple networks from different network construction procedures, in particular, between structural networks and functional networks.
Acknowledgements
This work was supported by NIH R01 EB022574, R01 LM011360, U01 AG024904, RC2 AG036535, R01 AG19771, P30 AG10133, UL1 TR001108, R01 AG 042437, and R01 AG046171; DOD W81XWH-14-2-0151, W81XWH-13-1-0259, and W81XWH-12-2-0012; NCAA 14132004; and IUPUI ITDP Program. Data collection and sharing for this project were funded by the Alzheimer’s Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense Award Number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: Abbott, Alzheimer’s Association; Alzheimer’s Drug Discovery Foundation; Araclon Biotech; BioClinica, Inc.; Biogen; Bristol-Myers Squibb Company; CereSpir, Inc.; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; EuroImmun; F. Hoffmann-La Roche Ltd and its affiliated company Genentech, Inc.; Fujirebio; GE Healthcare; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research & Development, LLC.; Johnson & Johnson Pharmaceutical Research & Development LLC.; Lumosity; Lundbeck; Merck & Co., Inc.; Meso Scale Diagnostics, LLC.; NeuroRx Research; Neurotrack Technologies; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Takeda Pharmaceutical Company; and Transition Therapeutics. The Canadian Institutes of Health Research is providing funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer’s Disease Cooperative Study at the University of California, San Diego. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California.
Biographies
Huang Li
holds a B.S. and M.S. degree in Computer Science from Sun Yat-Sen University. He is pursuing his Ph.D. degree in Computer Science at Indiana University Purdue University Indianapolis, and is a research assistant in Center for Neuroimaging at Indiana University School of Medicine. His research interests include visualization, visual analytics, brain imaging, and imaging genomics.
Shiaofen Fang
holds a B.S. degree in Mathematics and an M.S. degree in Applied Mathematics, both from Zhejiang University, China. He received his Ph.D. degree in Computer Science from the University of Utah. He is a Professor and the Department Chair of Computer and Information Science at Indiana University Purdue University Indianapolis. His research interests include data visualization, biomedical image analysis, visual analytics, and volume graphics.
Joey A. Contreras
holds a B.S. degree in Cognitive Psychology from the University of California, Irvine and an M.S. degree in Biology from California State University, Los Angeles. She is pursuing her Ph.D. degree in Medical Neuroscience at Indiana University School of Medicine, and is a research assistant in the Center for Neuroimaging at Indiana University School of Medicine. Her research interests include using fMRI and DWI imaging data to characterize and identify functional and structural changes in early stages of Alzheimer’s Disease within a brain connectomic framework.
John D. West
holds a B.S. degree from the College of New Jersey, and an M.S. degree in Engineering Science from Dartmouth College. John is Systems Administrator and lead Imaging Analyst for the Center for Neuroimaging (CfN) at Indiana University School of Medicine. He is in charge of keeping all CfN servers and workstations up and running. He also does analyses for various studies using image processing packages such as SPM, Freesurfer, FSL, and AFNI. John also creates and maintains various programming scripts in Matlab for automatic and batch preprocessing of various imaging datasets.
Shannon L. Risacher
holds a B.S. degree in Psychology from Indiana University Purdue University Indianapolis, and a Ph.D. degree in Medical Neuroscience from Indiana University School of Medicine. She is an Assistant Professor of Radiology and Imaging Sciences at Indiana University School of Medicine. Her main research interests involve evaluating imaging and non-imaging biomarkers of Alzheimer’s disease (AD) for utility in early detection and diagnosis. In particular, she is interested in evaluating which biomarkers are most sensitive in the earliest stages of disease, both for detecting pathophysiological changes and for predicting future clinical outcomes. She is primarily focused on structural, functional and molecular imaging biomarkers of AD, but has additional interest in novel biomarkers such as sensory and perceptual tests.
Yang Wang
holds a B.S. degree in Medical Science from Tongji Medical University, an M.D. degree in Human Medicine from Tongji Medical University and a Ph.D. degree in Imaging Science from University of Ulm. He is an Associate Professor of Radiology and Biophysics at Medical College of Wisconsin. His research interests include neuroimaging of structural and functional pathways in clinical translational applications.
Olaf Sporns
holds a B.S. degree in Biochemistry from Universität Tübingen, and a Ph.D. degree in Neuroscience from Rockefeller University. He is Distinguished Professor, Provost Professor, and Robert H. Shaffer Chair in the Department of Psychological and Brain Sciences at Indiana University Bloomington. His research interests include computational and cognitive neuroscience, connectomics, network models of the human brain, brain networks across the lifespan, computational models of brain dynamics, and embodied cognitive science.
Andrew J. Saykin
holds a B.A. degree in Psychology from the University of Massachusetts Amherst, and an M.S. degree in Clinical Psychology and a Psy.D. degree in Clinical Neuropsychology from Hahnemann Medical College. He is the Raymond C. Beeler Professor of Radiology and Professor of Medical and Molecular Genetics at Indiana University School of Medicine. His expertise is in the areas of multimodal neuroimaging research, human genetics, and neuropsychology/cognitive neuroscience. He has a longstanding interest in the structural, functional and molecular substrates of cognitive deficits in Alzheimer’s disease, cancer, brain injury, schizophrenia and other neurological and neuropsychiatric disorders. The major thrust of his current research program is on integrating advanced brain imaging and genomic data to enhance the understanding of disorders affecting memory.
Joaquín Goñi
holds a B.S. degree in Computer Engineering from University of the Basque Country, an M.S. degree in Computer Science from University of the Basque Country, and a Ph.D. degree from University of Navarra. He is an Assistant Professor of Industrial Engineering and Biomedical Engineering at Purdue University. His research interests include complex systems, graph theory and network science, information theory, and neuroimaging and brain connectomics.
Li Shen
holds a B.S. degree from Xi’an Jiao Tong University, an M.S. degree from Shanghai Jiao Tong University, and a Ph.D. degree from Dartmouth College, all in Computer Science. He is an Associate Professor of Radiology and Imaging Sciences at Indiana University School of Medicine. His research interests include medical image computing, bioinformatics, data mining, network science, systems biology, brain imaging genomics, and brain connectomics.
Footnotes
Data used in preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at: http://adni.loni.usc.edu/wpcontent/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf
References
- 1.Behrens TE, Sporns O. Human connectomics. Curr Opin Neurobiol. 2012;22(1):144–153. doi: 10.1016/j.conb.2011.08.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.Petrovic V, et al. Visualizing whole-brain DTI tractography with GPU-based tuboids and LoD management. IEEE Trans Vis Comput Graph. 2007;13:1488–1495. doi: 10.1109/TVCG.2007.70532. [DOI] [PubMed] [Google Scholar]
- 3.Stoll C et al (2005) Visualization with stylized line primitives. In: IEEE Visualization, pp 695–702
- 4.Merhof D, et al. Hybrid visualization for white matter tracts using triangle strips and point sprites. IEEE Trans Vis Comput Graph. 2006;12:1181–1188. doi: 10.1109/TVCG.2006.151. [DOI] [PubMed] [Google Scholar]
- 5.Peeters T et al (2006) Visualization of DTI fibers using hair-rendering techniques. In: Proceedings of ASCI, pp 66–73
- 6.Parker GJ, et al. A framework for a streamline-based probabilistic index of connectivity (PICo) using a structural interpretation of MRI diffusion measurements. J Magn Reson Imaging. 2003;18:242–254. doi: 10.1002/jmri.10350. [DOI] [PubMed] [Google Scholar]
- 7.Kapri AV et al (2010) Evaluating a visualization of uncertainty in probabilistic tractography. In: Proceedings of SPIE medical imaging visualization image-guided procedures and modeling, p 7625
- 8.Achard S, et al. A resilient, low frequency, small-world human brain functional network with highly connected association cortical hubs. J Neurosci. 2006;26:63–72. doi: 10.1523/JNEUROSCI.3874-05.2006. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Salvador R, Suckling J, Schwarzbauer C, Bullmore E. Undirected graphs of frequency-dependent functional connectivity in whole brain networks. Philos Trans R Soc Lond B Biol Sci. 2005;360:937–946. doi: 10.1098/rstb.2005.1645. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Schwarz AJ, McGonigle J. Negative edges and soft thresholding in complex network analysis of resting state functional connectivity data. NeuroImage. 2011;55:1132–1146. doi: 10.1016/j.neuroimage.2010.12.047. [DOI] [PubMed] [Google Scholar]
- 11.Van Horn JD, Irimia A, Torgerson CM, Chambers MC, Kikinis R, Toga AW. Mapping connectivity damage in the case of Phineas Gage. PLoS ONE. 2012;7(5):e37454. doi: 10.1371/journal.pone.0037454. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Schurade R et al (2010) Visualizing white matter fiber tracts with optimally fitted curved dissection surfaces. In: Eurographics workshop on visual computing for biology and medicine, pp 41–48
- 13.Eichelbaum S, et al. LineAO: improved three dimensional line rendering. IEEE Trans Vis Comput Graph. 2013;19:433–445. doi: 10.1109/TVCG.2012.142. [DOI] [PubMed] [Google Scholar]
- 14.Svetachov P et al (2010) DTI in context: illustrating brain fiber tracts in situ. In: EuroVis, pp 1023–1032
- 15.Alper B, Bach B, Riche NH, Isenberg T, Fekete J (2013) Weighted graph comparison techniques for brain connectivity analysis. In: Proceedings of the SIGCHI conference on human factors in computing systems, pp 483–492
- 16.Yang X, Shi L, Daianu M, Tong H, Liu Q, Thompson P. Blockwise human brain network visual comparison using NodeTrix representation. IEEE Trans Vis Comput Graph. 2017;23(1):181–190. doi: 10.1109/TVCG.2016.2598472. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.Foucher JR, Vidailhet P, Chanraud S, Gounot D, Grucker D, Pins D, Damsa C, Danion J-M. Functional integration in schizophrenia: too little or too much? Preliminary results on fMRI data. NeuroImage. 2005;26:374–388. doi: 10.1016/j.neuroimage.2005.01.042. [DOI] [PubMed] [Google Scholar]
- 18.Worsley KJ, Chen J-I, Lerch J, Evans AC. Comparing functional connectivity via thresholding correlations and singular value decomposition. Philos. Trans R Soc Lond B Biol Sci. 2005;360:913–920. doi: 10.1098/rstb.2005.1637. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Zuo XN, Ehmke R, Mennes M, Imperati D, Castellanos FX, Sporns O, Milham MP. Network centrality in the human functional connectome. Cereb Cortex. 2012;22:1862–1875. doi: 10.1093/cercor/bhr269. [DOI] [PubMed] [Google Scholar]
- 20.Tufte Edward R. The visual display of quantitative information. Cheshire: Graphics Press; 1983. [Google Scholar]
- 21.Weber M, Alexa M, Müller W (2001) Visualizing time-series on spirals. In: IEEE information visualization, pp 7–13
- 22.Havre S, Richland WA, Hetzler B (2000) ThemeRiver: visualizing theme changes over time. In: IEEE information visualization, pp 115–123
- 23.Cui W, Liu S, Tan L, Shi C, Song Y, Gao Z, Qu H, Tong X. Textflow: towards better understanding of evolving topics in text. IEEE Trans Vis Comput Graph. 2011;17(12):2412–2421. doi: 10.1109/TVCG.2011.239. [DOI] [PubMed] [Google Scholar]
- 24.Fox MD, Snyder AZ, Vincent JL, Corbetta M, Van Essen DC, Raichle ME. The human brain is intrinsically organized into dynamic, anticorrelated functional networks. Proc Natl Acad Sci USA. 2005;102:9673–9678. doi: 10.1073/pnas.0504136102. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Chang C, Glover GH. Time-frequency dynamics of resting-state brain connectivity measured with fMRI. NeuroImage. 2010;50:81–98. doi: 10.1016/j.neuroimage.2009.12.011. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26.Handwerker DA, Roopchansingh V, Gonzalez-Castillo J, Bandettini PA. Periodic changes in fMRI connectivity. NeuroImage. 2012;63:1712–1719. doi: 10.1016/j.neuroimage.2012.06.078. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Allen EA, Damaraju E, Plis SM, Erhardt EB, Eichele T, Calhoun VD. Tracking whole-brain connectivity dynamics in the resting state. Cereb Cortex. 2013;24(3):663–676. doi: 10.1093/cercor/bhs352. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Cabral B, Leedom C (1993) Imaging vector fields using line integral convolution. In: Proceedings of ACM SIGGRAPH, Annual Conference Series, pp 263–272
- 29.Stalling D, Hege H (1995) Fast and resolution independent line integral convolution. In: Proceedings of ACM SIGGRAPH, Annual Conference Series, pp 249–256
- 30.Laramee RS, Hauser H, Doleisch H, Post FH, Vrolijk B, Weiskopf D. The state of the art in flow visualization: dense and texture-based techniques. Comput Graph Forum. 2004;3(2):203–221. doi: 10.1111/j.1467-8659.2004.00753.x. [DOI] [Google Scholar]
- 31.McGraw T, Nadar M (2007) Fast texture-based tensor field visualization for DT-MRI. In: 4th IEEE international symposium on biomedical imaging: macro to nano, pp 760–763
- 32.Auer C, Stripf C, Kratz A, Hotz I (2012) Glyph- and texture-based visualization of segmented tensor fields. In: Proceedings of international conference on information visualization theory and applications, pp 670–677
- 33.Levoy M. Display of surfaces from volume data. IEEE Comput Graph Appl. 1988;8(3):29–37. doi: 10.1109/38.511. [DOI] [Google Scholar]
- 34.Kreeger K, Bitter I, Dachille F, Chen B, Kaufman A (1998) Adaptive perspective ray casting. In: Volume visualization symposium, pp 55–62
- 35.Knoll A, Hijazi Y, Westerteiger R, Schott M, Hansen C. Volume ray casting with peak finding and differential sampling. IEEE Trans Vis Graph. 2009;15(6):1571–1578. doi: 10.1109/TVCG.2009.204. [DOI] [PubMed] [Google Scholar]
- 36.Levoy M. Efficient ray tracing of volume data. ACM Trans Comput Graph. 1990;9(3):245–261. doi: 10.1145/78964.78965. [DOI] [Google Scholar]
- 37.Pfister H, Hardenbergh J, Knittel J, Lauer H, Seiler L (1999) The VolumePro real-time raycasting system. In: Proceedings of SIGGRAPH, pp 251–260
- 38.Mueller K, Yagel R (1996) Fast perspective volume rendering with splatting by using a ray-driven approach. In: Proceedings of IEEE visualization, pp 65–72
- 39.Westover L (1990) Footprint evaluation for volume rendering. In: Proceedings of SIGGRAPH, pp 367–376
- 40.Westover L (1989) Interactive volume rendering. In: Chapel hill volume visualization workshop, pp 9–16
- 41.Westover L (1991) SPLATTING: a parallel, feed-forward volume rendering algorithm. PhD Dissert., UNC-Chapel Hill
- 42.Zwicker M, Pfister H, Baar J, Gross M (2001) EWA volume splatting. In: Proceedings of IEEE visualization, pp 29–538
- 43.Lacroute P, Levoy M (1994) Fast volume rendering using a shear-warp factorization of the viewing transformation. In: SIGGRAPH, pp 451–458
- 44.Krüger J, Westermann R (2003) Acceleration techniques for GPU-based volume rendering. In: Proceedings of IEEE visualization, pp 287–292
- 45.Röttger S, Guthe S, Weiskopf D, Ertl T, Strasser W (2003) Smart hardware-accelerated volume rendering. In: Proceedings of the symposium on data visualization, pp 231–238
- 46.Lorensen WE, Cline HE (1987) Marching cubes: a high resolution 3D surface construction algorithm. In: Proceedings of SIGGRAPH, pp 163–169
- 47.Hadwiger M, Sigg C, Scharsach H, Bühler K, Gross M. Real time ray-casting and advanced shading of discrete isosurfaces. Comput Graph Forum. 2005;24(3):303–312. doi: 10.1111/j.1467-8659.2005.00855.x. [DOI] [Google Scholar]
- 48.Sramek M (1994) Fast surface rendering from raster data by voxel traversal using-chessboard distance. In: Proceedings of IEEE visualization, pp 188–195
- 49.Hagmann P, et al. Mapping human whole-brain structural networks with diffusion MRI. PLoS One. 2007;2:7. doi: 10.1371/journal.pone.0000597. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50.Power JD, et al. Methods to detect, characterize, and remove motion artifact in resting state fMRI. Neuroimage. 2014;84:320–341. doi: 10.1016/j.neuroimage.2013.08.048. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 51.Rubinov M, Sporns O. Complex network measures of brain connectivity: uses and interpretations. Neuroimage. 2010;52(3):1059–1069. doi: 10.1016/j.neuroimage.2009.10.003. [DOI] [PubMed] [Google Scholar]
- 52.Biswal BB. Resting state fMRI: a personal history. Neuroimage. 2012;62(2):938–944. doi: 10.1016/j.neuroimage.2012.01.090. [DOI] [PubMed] [Google Scholar]
- 53.Rosenfeld A, Kak AC. Digital picture processing. New York: Academic Press; 1982. [Google Scholar]
- 54.Gossett N, Chen B (2004) Paint inspired color mixing and compositing for visualization. In: IEEE symposium on information visualization, pp 113–118
- 55.Liang Y et al (2014) Brain connectome visualization for feature classification. In: Proceedings of IEEE visualization
- 56.Perlin K (1985) An image synthesizer. In: Proceedings of SIGGRAPH85, pp 287–296
- 57.Snyder JP. Flattening the earth: two thousand years of map projections. Chicago: University of Chicago Press; 1993. [Google Scholar]
- 58.The Qt Company (2017) Qt Framework. https://www.qt.io. Accessed 30 July 2017
- 59.Kitware Inc (2017) The Visualization Toolkit. http://www.vtk.org/. Accessed 30 July 2017
- 60.Munshi A (2012) The OpenCL Specification Version 1.2. https://www.khronos.org/registry/cl/specs/opencl-1.2.pdf. Accessed 30 July 2017