Parallel Comput. 2015;55:9–16. doi: 10.1016/j.parco.2015.10.016

Visualizing multiphysics, fluid-structure interaction phenomena in intracranial aneurysms

Paris Perdikaris, Joseph A. Insley, Leopold Grinberg, Yue Yu, Michael E. Papka, George Em Karniadakis

Abstract

This work presents recent advances in visualizing multi-physics, fluid-structure interaction (FSI) phenomena in cerebral aneurysms. Realistic FSI simulations produce very large and complex data sets, which necessitate parallel data processing and visualization. Here we present our efforts to develop an interactive visualization tool that enables the visualization of such FSI simulation data. Specifically, we present a ParaView–NekTar interface that couples the ParaView visualization engine with NekTar's parallel libraries, which are employed for the calculation of derived fields in both the fluid and solid domains with spectral accuracy. This interface allows the flexibility of independently choosing the resolution for visualizing the volume data and the surface data from each of the solid and fluid domains, which significantly facilitates the visualization of complex structures under large deformations. The animation of the fluid and structure data is synchronized in time, while the ParaView–NekTar interface enables different fields to be superimposed, e.g., the fluid jet and the structural stress, to better understand the interactions in this multi-physics environment. Such visualizations are key to elucidating important biophysical interactions in health and disease, as well as to disseminating the insight gained from our simulations and further engaging the medical community in this effort of bringing computational science to the bedside.

Keywords: Fluid-structure interactions, Blood flow, Cerebral aneurysms, High performance computing, Parallel visualization

1. Introduction

The cardiovascular system of the human body is the envy of every engineer. In just one minute the average heart beats about 70 times, pumping the entire blood supply of 5 liters through 60,000 miles of vessels. That is one-fourth the distance between the Moon and the Earth. The brain, in particular, although less than 2% of a human's body weight, receives about 20% of the resting cardiac output of blood and 25% of the body's oxygen supply.

Interactions of blood flow in the human brain occur between different scales, determined by flow features in the large arteries (diameter of 0.5 mm or larger), the smaller arteries and arterioles (500 μm to 10 μm), and the capillaries (mean diameter of 5 μm), all of which are coupled to cellular and sub-cellular biological processes. While many biological aspects have been studied systematically, surprisingly little effort has been put into studying blood flow patterns within the brain; a matter of particular relevance to the clinical evolution of life-threatening conditions such as cerebral aneurysms.

Cerebral aneurysms (CAs) [1] occur in up to 5% of the general population, leading to strokes for over 40,000 Americans each year. They are manifested as a pathological dilation of the vascular wall, due to the congenital or acquired structural weakening of the wall media. A potential rupture of a CA can cause subarachnoid hemorrhage, a devastating event that leaves over 60% of the afflicted patients dead or disabled, with less than 50% of survivors being able to return to their former jobs or level of activity.

Currently, there exists no formal framework for the diagnosis of CAs, and most cerebral aneurysms go unnoticed until they rupture or are detected by brain imaging that may have been obtained for another condition. Moreover, assessing the risk of a potential rupture of a diagnosed CA is mostly empirical, as the biomechanics governing the time-evolution of CAs are far from fully understood. However, it is believed that factors such as location, geometry, blood flow patterns and the resulting mechanical stresses within the aneurysm play a prominent role.

Recent advances in computational science now allow for high-fidelity numerical simulations of flow-structure interactions (FSI) in compliant CAs. The ability to simulate and visualize such cases enables us to study in detail the flow patterns and the resulting mechanical stress field in the aneurysm, shedding new light on the conditions for its eventual rupture. Here, we demonstrate the potential of this pipeline by presenting visualization results for flow-structure interactions in a patient-specific aneurysm. The aneurysm is located in the cavernous segment of the right internal carotid artery and its detailed geometry was extracted using Magnetic Resonance Imaging (MRI) (see Fig. 1).

Fig. 1. Flow-structure interactions in an aneurysm. Shown on the left is the patient-specific cranial arterial network where the spectral element equations are solved. Shown in the top inset is the structure domain representing the elastic tissue of the aneurysm wall. Shown in the bottom inset is the local stress field in the aneurysm wall due to interactions with recirculating blood flow.

This paper is structured as follows. In Section 2 we briefly describe the multi-physics FSI computational framework. In Section 3 we discuss the visualization components that were developed to enable the interactive visual exploration of the resulting FSI data. These comprise a custom ParaView–NekTar plug-in for parallel data processing and visualization. ParaView is leveraged for performing standard parallel visualization algorithms and rendering. The key contribution here is reading the NekTar data in its native simulation format, applying transformations for visualizing moving unstructured meshes subject to large displacements, and calculating derived quantities from high-order volume data, such as the stress tensor inside solid domains. Finally, in Section 4 we present results for a representative case study involving fluid-structure interactions in a patient-specific brain aneurysm.

2. FSI simulation modeling

Fluid-structure interaction phenomena are ubiquitous in nature. Their accurate resolution is of critical importance in applications spanning a broad spectrum of scientific, engineering and medical disciplines, e.g., the fluttering of aircraft wings, the deflection of wind turbines, blood flow dynamics in aneurysms, etc. Simulations of such nonlinear multi-physics interactions require a code-and-model coupling approach that raises a number of challenges, including stable interface conditions, data communication between the coupled solvers, in-situ data analytics, etc.

Modeling flow-structure interaction (FSI) in biological systems, where elastic tissues and fluids have almost identical density (low mass ratio problem), is one of the most challenging problems in computational mechanics. One of the major challenges in FSI simulations at low mass ratio is the stability of semi-implicit coupled fluid-structure solvers. Based on recent advances in developing stable numerical algorithms we have developed a solver capable of tackling problems with mass ratios approaching zero, enabling high-resolution FSI simulations of biological systems, such as blood flow in compliant arteries [5,6] .

Here, we follow a partitioned strategy in which the FSI solver is based on coupling the open source parallel codes NekTar [2] and StressNekTar [3,4], both of which are based on the spectral/hp element method. The fluid, governed by the incompressible Navier–Stokes equations, is assumed to be Newtonian. The response of the solid domain is described by the elasto-dynamics equations [6]. To account for moving boundaries we employ the Arbitrary Lagrangian–Eulerian (ALE) framework. The strong coupling of the two models is achieved by using a sub-iterative approach with convergence accelerated by Aitken relaxation [6]. Finally, the formulation is closed by appropriate boundary, initial and fluid-solid interface coupling conditions [5,6].
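To make the coupling strategy concrete, the following minimal sketch shows one strongly coupled time step as a fixed-point iteration on the interface displacement with Aitken dynamic under-relaxation. The functions fluid_solve and solid_solve are hypothetical stand-ins for the NekTar and StressNekTar solves; the sketch illustrates only the structure of the sub-iteration, not the actual solver interfaces.

```python
import numpy as np

def fsi_subiterate(fluid_solve, solid_solve, d0, tol=1e-6, max_iter=50, omega0=0.5):
    """Strongly coupled FSI step: fixed-point iteration on the interface
    displacement d, accelerated by Aitken dynamic under-relaxation.
    fluid_solve(d) returns interface tractions for a given wall displacement;
    solid_solve(t) returns the wall displacement produced by those tractions.
    Both are hypothetical placeholders for the NekTar / StressNekTar solves."""
    d = d0.copy()
    omega = omega0
    r_prev = None
    for k in range(max_iter):
        traction = fluid_solve(d)          # fluid solve with the current wall position
        d_tilde = solid_solve(traction)    # structural response to the fluid load
        r = d_tilde - d                    # interface residual
        if np.linalg.norm(r) < tol:
            return d_tilde, k
        if r_prev is not None:
            dr = r - r_prev
            # Aitken update: omega_k = -omega_{k-1} * (r_{k-1} . dr) / |dr|^2
            omega = -omega * np.dot(r_prev, dr) / np.dot(dr, dr)
        d = d + omega * r                  # relaxed interface update
        r_prev = r
    return d, max_iter
```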

The main challenge here is to tackle cases where the fluid and structure densities are close and convergence of the coupled solver becomes problematic due to the so-called added mass effect [5] . The aforementioned modeling framework has enabled us for the first time to study FSI in a patient-specific model of brain vasculature, where arterial wall dynamics are simulated at very high resolution, and relatively large structural deformations are possible.

Both fluid and solid domains are discretized in space using the spectral/hp element method (SEM) [2], according to which the computational domain $\Omega$ is decomposed into a set of polymorphic non-overlapping elements $\Omega^e_i \subset \Omega$, $i = 1, \ldots, N_{el}$, as depicted in Fig. 2.

Fig. 2. Illustration of the unstructured surface grid and the polynomial basis employed in NekTar. The solution domain is decomposed into non-overlapping elements. Within each element the solution is approximated by vertex, edge, face and (in 3D) interior modes. The shape functions associated with the vertex, edge and face modes for a fourth-order polynomial expansion defined on triangular and quadrilateral elements are shown in color. (For interpretation of the references to colour in this figure, the reader is referred to the web version of this article).

Within each element, the solution is approximated as a linear combination of hierarchical, mixed-order, semi-orthogonal Jacobi polynomial expansions. This hierarchical structure consists of separate vertex (linear term), edge, face and interior (or bubble) modes. According to this decomposition, the polynomial representation of a field $u^e(t, \mathbf{x})$ within an element is given by the linear combination of the basis functions multiplied by the corresponding modal amplitudes:

$$u^e(t, \mathbf{x}) = \sum_k \bar{u}^e_k(t)\, \phi^e_k(\mathbf{x}), \qquad (1)$$

where $\bar{u}^e_k$ is the amplitude of the $k$-th shape function $\phi^e_k(\mathbf{x})$ in element $e$.

The solution computed by NekTar is saved in modal space, i.e., only the values of $\bar{u}^e_k$ are stored for each field variable. The polynomial expansion representing the solution in each spectral element is exploited by the visualization software to obtain the solution (or derived quantities) on a given number of points (vertices) within each spectral element.
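As a simple illustration of this modal-to-physical transform, the sketch below evaluates a one-dimensional modal expansion of the form of Eq. (1) at an arbitrary set of points. A Legendre basis is used purely as a stand-in for NekTar's hierarchical Jacobi-polynomial basis; the point is that the same cached modal amplitudes can be re-sampled at any visualization resolution.

```python
import numpy as np
from numpy.polynomial import legendre

def eval_modal_expansion(u_hat, xi):
    """Evaluate u(xi) = sum_k u_hat[k] * phi_k(xi) at points xi in [-1, 1].

    Here phi_k are Legendre polynomials, used only as a placeholder for
    NekTar's hierarchical vertex/edge/face/interior modes; the sketch shows
    the structure of the modal-to-physical transform, not the actual basis."""
    u = np.zeros_like(xi)
    for k, coeff in enumerate(u_hat):
        phi_k = legendre.Legendre.basis(k)(xi)  # k-th basis polynomial evaluated at xi
        u += coeff * phi_k
    return u

# Example: re-sample the same modal solution at two visualization resolutions
u_hat = np.array([1.0, 0.3, -0.1, 0.05])        # modal amplitudes of a third-order expansion
coarse = eval_modal_expansion(u_hat, np.linspace(-1, 1, 3))
fine   = eval_modal_expansion(u_hat, np.linspace(-1, 1, 11))
```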

3. ParaView–NekTar Interface

Leveraging ParaView’s plug-in interface, and building on previous work reported in [7] , we enable the visualization of fluid-structure interaction data with high-order spectral accuracy.

NekTar's geometry and solution data are stored separately in MESH and SOLUTION files, respectively. The MESH file contains the initial coordinates of the elements of the continuum mesh, the connectivity of the elements, boundary condition information, as well as information on the curved boundaries. The SOLUTION files contain the calculated values, which can be used both for restart and for data analysis. The size of the MESH and SOLUTION files depends on the number of elements in the mesh, the spectral resolution, and the number of field variables considered in the computation, i.e., fluid velocity, pressure, mesh displacement, etc. There is a separate MESH file and set of SOLUTION files for each fluid and solid portion of the domain.

3.1. Plugin overview

Processing of NekTar’s data is accomplished in two steps. In the first step the MESH data (a connected list of polymorphic elements) is read and the computational domain is partitioned across processes (with METIS [8] ).

The plugin analyzes the SOLUTION file to determine which field variables are included in the data and populates the ParaView GUI based on this information. Along with the variables present in the SOLUTION file, interfaces for calculating derived fields (with high-order accuracy) from the stored variables are added. Separate interfaces are provided for the solid and fluid domains.

In the second step each process reads and stores the solution corresponding to the elements included in its partition. The modal values of the solution are cached in memory and are used to transform the solution from modal to physical space. Using the GUI, the user specifies what data should be made available and at what resolution. The resolution for the continuum mesh and the boundary surface, which can optionally be extracted from the fluid continuum mesh, can be set independently. These two steps are applied to process the data for each domain. The second step is repeated for each time step. The task of loading field variables, transforming the solution from modal to physical space, computing derived properties and interpolating data onto the vertices of the linear elements representing the visualization mesh is performed independently in each partition, resulting in linear scaling. Processing of NekTar's data is performed with the custom reader using the functionality provided by the NekTar library. To reduce data redundancy at the interface between elements, duplicate vertices shared by adjacent elements are removed using the VTK filter vtkCleanToGrid. Finally, the cleaned vtkUnstructuredGrid is passed on to the ParaView pipeline, where all of the standard visualization algorithms can be applied to it.
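The effect of the vertex-cleaning step can be illustrated with a small, self-contained sketch that merges duplicate vertices at element interfaces. The plugin itself relies on the VTK filter mentioned above; the function below is only a conceptual numpy stand-in.

```python
import numpy as np

def merge_shared_vertices(points, cells, tol=1e-12):
    """Merge duplicate vertices shared by adjacent elements, mimicking the
    effect of the 'clean to grid' step described above (illustrative only).

    points : (n, 3) vertex coordinates, with duplicates at element interfaces
    cells  : (m, k) per-element vertex indices into `points`
    """
    # Quantize coordinates so nearly identical vertices map to the same key
    keys = np.round(points / tol).astype(np.int64)
    _, unique_idx, inverse = np.unique(keys, axis=0,
                                       return_index=True, return_inverse=True)
    # Keep one copy of each vertex and remap the element connectivity
    return points[unique_idx], inverse[cells]

# Two tetrahedra sharing a face: 8 stored vertices reduce to 5 unique ones
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=float)
cells = np.array([[0, 1, 2, 3], [4, 5, 6, 7]])
unique_pts, new_cells = merge_shared_vertices(pts, cells)
```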

3.2. Processing high-order spectral element data from NekTar

When computing derived quantities it is important to take into account that the high spatial resolution achieved by p-refinement (i.e., using a higher polynomial order) allows for the use of relatively large elements; derived quantities should therefore be computed with high-order operators.

The data computed by NekTar are processed with high-order spectral accuracy, i.e., the interpolation, integration and differentiation are performed on the quadrature grid consistent with the simulation resolution. The C0 spatial discretization in NekTar guarantees solution continuity across element boundaries, while it allows discontinuities in the derivatives. To obtain continuity of derived fields based on solution derivatives (e.g., vorticity, stress tensor) the user can turn on the "high-order projection calculation". Under the hood of the "projection calculator" is a high-order parallel projection solver that uses an $L_2$ projection to transform a discontinuous field into a C0-continuous one (see Fig. 3). The global projection uses a parallel conjugate gradient solver, whose scalability has been tuned to tens of thousands of processors.
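The idea behind the global $L_2$ projection can be summarized in a drastically simplified, one-dimensional sketch: the continuous field is obtained by solving the assembled mass-matrix system $M u = b$ with a conjugate gradient solver. The sketch below uses linear elements and SciPy's serial CG, whereas the actual projection operates on 3D spectral/hp elements with a parallel CG solver; it illustrates the technique, not the plugin's implementation.

```python
import numpy as np
from scipy.sparse.linalg import cg

def l2_project_1d(x, u_elem):
    """L2-project an element-wise discontinuous field onto a C0 linear finite
    element space on the 1D mesh with nodes `x`, by solving M u = b with CG.

    x      : (n+1,) node coordinates
    u_elem : (n, 2) left/right values of the discontinuous field on each element
    """
    n = len(x) - 1
    M = np.zeros((n + 1, n + 1))   # assembled global mass matrix (dense here for brevity)
    b = np.zeros(n + 1)
    for e in range(n):
        h = x[e + 1] - x[e]
        Me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])  # linear-element mass matrix
        M[e:e + 2, e:e + 2] += Me
        b[e:e + 2] += Me @ u_elem[e]   # exact if the field is linear on each element
    u, info = cg(M, b)                 # the real solver uses a parallel CG
    return u

# Example: a field that jumps between elements becomes C0 after projection
x = np.linspace(0.0, 1.0, 6)
u_disc = np.column_stack([np.sin(4 * x[:-1]), np.sin(4 * x[:-1])])  # piecewise constant
u_cont = l2_project_1d(x, u_disc)
```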

Fig. 3. Lambda2 calculated without projection (A) and with projection (B).

Besides calculating vorticity and Lambda2 for the fluid mesh, users can additionally choose to extract the boundary surface from the continuum mesh and calculate the wall shear stress (WSS). For solid mesh solutions, which contain the solid mesh displacement, the stress tensor can also be calculated. The scaling performance of this stress tensor calculation can be seen in Fig. 4. This quantity is computed at the element resolution of the stored data, as shown by the calc_time line of the graph. The data is then projected onto the element resolution requested by the user. The graph shows this interpolation time for element resolutions of two and ten, along with the total times (calc_time plus interpolation time) for the two resolutions. In both cases linear strong scaling is achieved.
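As a pointwise illustration of how a stress tensor can be derived from a stored displacement field, the sketch below uses small-strain, isotropic linear elasticity. The actual solid model is the nonlinear elasto-dynamics formulation of [6], and the plugin evaluates the displacement gradient with high-order spectral differentiation, so this is only a simplified stand-in; the Lame parameters and the scalar stress measure are illustrative choices.

```python
import numpy as np

def cauchy_stress_small_strain(grad_u, lam, mu):
    """Pointwise stress from a displacement gradient under small-strain,
    isotropic linear elasticity: sigma = lam * tr(eps) * I + 2 * mu * eps.
    (A simplified stand-in for the nonlinear solid model used by the solver.)

    grad_u : (3, 3) displacement gradient du_i/dx_j at a point
    lam,mu : Lame parameters of the wall material (illustrative)
    """
    eps = 0.5 * (grad_u + grad_u.T)                  # symmetric strain tensor
    return lam * np.trace(eps) * np.eye(3) + 2.0 * mu * eps

def stress_magnitude(sigma):
    """Von Mises-type scalar, of the kind used to color the wall surface."""
    dev = sigma - np.trace(sigma) / 3.0 * np.eye(3)  # deviatoric part
    return np.sqrt(1.5 * np.sum(dev * dev))
```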

Fig. 4. Strong scaling performance of the stress tensor calculation for solid domains in the ParaView–NekTar reader plug-in.

At any time the user can choose (through the GUI) to change the number of vertices where data is visualized. Changes in the resolution parameters automatically propagate to NekTar's functions, which use the solution data already cached in memory at its original resolution to perform interpolation onto a different set of points and update the vtkUnstructuredGrid. Similarly, when the user animates through the time steps of the data set, the geometry of the elements and the information on their connectivity are reused, and only the data from the new SOLUTION file needs to be read from disk, transformed to physical space and interpolated. As new time steps are read in, the cleaned vtkUnstructuredGrid objects for previous time steps are cached in memory so that they do not need to be re-read from disk each time they are to be displayed.

3.3. Visualizing fluid-structure interactions

The FSI calculations we consider here include deformation of the elements of the mesh over time. These deformations are stored in the SOLUTION files as displacement values for each coordinate of the mesh. If the user chooses to display the dynamic mesh, via a control on the GUI, the original coordinates of the mesh are first backed up in memory, so that the original mesh can be restored without needing to re-read it from disk should the user decide to turn off the dynamic mesh. For each time step the mesh displacement values from the SOLUTION file are used to update the coordinates of the continuum mesh. The coordinates are then interpolated onto the requested element resolution and used to update the vtkUnstructuredGrid. The results of this process can be seen in Fig. 5. Tile (A) shows the original mesh, while tile (B) shows the same structure with the deformation applied. In tile (C) we display the original mesh in white together with the deformed mesh in gold to highlight the differences.

Fig. 5. Illustration of the application of mesh displacement. (A) The original mesh. (B) The deformed mesh due to fluid flow. (C) Both meshes shown together to highlight their differences. (D) The displacement values have been scaled to accentuate the effect.

In cases where the displacements are small, the user can accentuate the differences by applying a scaling factor set on the GUI. This is illustrated in tile (D) of Fig. 5, where the deformation is exaggerated to illustrate the effect of the fluid on the structure.
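The coordinate update itself is simple and can be sketched as follows; the arrays below are random placeholders rather than real NekTar data, and the scaling factor plays the role of the GUI control described above.

```python
import numpy as np

def apply_displacement(original_coords, displacement, scale=1.0):
    """Update visualization-mesh coordinates from the displacement field stored
    in the SOLUTION file. scale > 1 exaggerates small deformations, as in
    tile (D) of Fig. 5; scale = 1 reproduces the physical deformation."""
    return original_coords + scale * displacement

# The original coordinates are kept in memory, so toggling the dynamic mesh
# off simply restores `coords0` without re-reading the MESH file from disk.
coords0 = np.random.rand(100, 3)        # placeholder vertex coordinates
disp = 1e-3 * np.random.rand(100, 3)    # placeholder displacement field
deformed = apply_displacement(coords0, disp, scale=20.0)
```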

4. Aneurysm case study

We present here visualization results for fluid-structure interactions in the patient-specific aneurysm introduced earlier. The aneurysm is located in the cavernous segment of the right internal carotid artery and its detailed geometry was extracted using Magnetic Resonance Imaging (MRI). The fluid-solid mass ratio is 0.5, with the fluid mesh (artery and aneurysm) consisting of 42,065 tetrahedral elements, while its solid counterpart (arterial wall) contains 16,912 prismatic elements. The data output of the coupled NekTar solver comprises the corresponding solution fields for the fluid and structure domains. We employed second-order accurate time integration schemes and third-order polynomial expansions, resulting in about 30 million degrees of freedom. The simulation output consists of thousands of files totaling approximately 100 GB of data, while the computation of two cardiac cycles took about 24 h on 1024 cores of an IBM BG/Q. This is a mid-sized, yet state-of-the-art FSI problem, with larger, more complex simulations planned for the future.

The results loaded into ParaView contain the fluid velocity and pressure, as well as the mesh displacement, velocity and acceleration in each of the domains. Using the ParaView–NekTar interface we compute the fluid vorticity, wall shear stress on the arterial wall boundary surface, and the full volume representation of the stress tensor in the arterial wall.

Visualization of the blood flow streamlines and vorticity iso-contours can reveal the complex flow patterns in the aneurysm, while rendering the stress tensor on the arterial walls highlights the local stresses in the aneurysm due to the multi-physics, flow-structure interactions. Looping through animation frames offers invaluable information about the evolving dynamics in a global setting, such as tracing and tracking possible hydrodynamic instabilities, detecting oscillatory responses, and identifying the effective dimensionality of the system. Furthermore, in a local setting, visualization allows researchers to zoom in and extract high-resolution spatio-temporal analytics of the flow field and derived quantities that may be of vital importance to the medical practitioner. For instance, visual identification and monitoring of the location of maximum wall shear stress within the aneurysm sac during a full heartbeat could ultimately provide clear-cut evidence towards the assessment of a potential treatment.

Although visualization offers a flexible platform for debugging, post-processing, and data analytics, its true power lies in the fact that it can often transform a perplexing simulation into a lucid story, accessible to a very broad audience. In the particular case of studying CAs, visualization can deliver content of both educational and practical value, with the audience ranging from aspiring biomedical engineers to seasoned neurosurgeons.

A manifestation of the capabilities and functionality of the ParaView–NekTar plugin is presented in Figs. 6 and 7. Fig. 6 depicts the blood streamlines at the systolic peak of the cardiac cycle. These velocity streamlines are colored by the velocity magnitude and include arrow glyphs which indicate the direction of the flow. Fig. 7 contains vorticity isocontours that are colored by pressure, the gradient of which is the driving force of the flow. Swirling flow inside the aneurysm sac induces deformation of the arterial wall due to mechanical stress. By rendering the derived stress tensor data, visible on the wall of the artery in both figures, we can provide a clear visual identification of the regions of high stress, and hence the regions susceptible to rupture. The visualizations clearly establish the correlation between complex flow patterns and stress on the vascular wall and provide insight into this fundamental biomechanical interplay.
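Visualizations like those in Figs. 6 and 7 can also be scripted through ParaView's Python interface once the reader plugin has produced a dataset. The snippet below is a hedged sketch of such a script: the file name and array names ('velocity', 'vorticity_magnitude', 'pressure') are placeholder assumptions about what the reader exposes, the isosurface value is arbitrary, and property names may differ between ParaView versions.

```python
# Sketch of a ParaView Python (paraview.simple) script; the file and array
# names are placeholders, and property names may vary across ParaView versions.
from paraview.simple import (OpenDataFile, StreamTracer, Contour, Show,
                             ColorBy, Render, GetActiveViewOrCreate)

fluid = OpenDataFile('aneurysm_fluid.fld')      # hypothetical file read by the plugin
view = GetActiveViewOrCreate('RenderView')

# Streamlines colored by velocity magnitude, as in Fig. 6
streams = StreamTracer(Input=fluid, Vectors=['POINTS', 'velocity'])
streams_display = Show(streams, view)
ColorBy(streams_display, ('POINTS', 'velocity', 'Magnitude'))

# Vorticity-magnitude isocontours colored by pressure, as in Fig. 7
contours = Contour(Input=fluid, ContourBy=['POINTS', 'vorticity_magnitude'],
                   Isosurfaces=[50.0])          # arbitrary placeholder level
contours_display = Show(contours, view)
ColorBy(contours_display, ('POINTS', 'pressure'))

Render()
```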

Fig. 6. Blood flow in a compliant cerebral aneurysm, showing blood flow streamlines (colored by velocity magnitude) and the stress tensor magnitude on the walls of the artery. Arrow glyphs along the streamlines indicate the direction of the flow. (For interpretation of the references to colour in this figure, the reader is referred to the web version of this article).

Fig. 7. Blood flow in a compliant cerebral aneurysm, showing blood flow streamlines (colored by velocity magnitude) and the stress tensor magnitude on the walls of the artery. Isocontours of the vorticity magnitude, which indicate areas of swirling flow, are colored by the pressure. (For interpretation of the references to colour in this figure, the reader is referred to the web version of this article).

5. Discussion

Computational resources continue to grow at an ever-increasing pace, while developments in sophisticated models and algorithms open new avenues for elucidating the mechanisms that drive many physical and biological systems. This promising loop has been increasingly drawing the interest of scientists and engineers across a broad range of disciplines, leading to an unprecedented volume of simulation data output. Designing efficient, application-driven tools for accurately analyzing, visualizing and interacting with these complex datasets is of crucial importance for gaining a better understanding of the underlying processes [9,10].

Here, we presented visualization results using a new NekTar reader plug-in designed to process and visualize fluid-structure interaction data computed by a coupled high-order spectral/hp element solver. The software was used to study the interaction between unsteady blood flow and the resulting stresses in the elastic tissue of a brain aneurysm, revealing the intricate interactions within this pathology. The generated spatio-temporal visualizations provide a key enabling technology towards not only elucidating aspects of the underlying physics, but also potentially providing invaluable input on whether and how a clinical practitioner should intervene.

Acknowledgments

An award of computer time was provided by the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357. This work also received partial support by the Air Force Office of Scientific Research under Grant no. FA9550-12-1-0463, and the National Institutes of Health under Grant no. 1U01HL116323-01. Last but not least, we thank Kitware Inc. for support in developing and distributing the NekTar–ParaView plug-in.

References

1. Cerebral Aneurysms, National Institute of Neurological Disorders and Stroke (NINDS), 2015. http://www.ninds.nih.gov/disorders/cerebral_aneurysm/cerebral_aneurysms.htm
2. Karniadakis G, Sherwin SJ. Spectral/hp Element Methods for CFD. 2nd ed. Oxford University Press; 2005.
3. Dong S, Yosibash Z. A parallel spectral element method for dynamic three-dimensional nonlinear elasticity problems. Comput Struct. 2009;87:59–72.
4. Yu Y, Baek H, Bittencourt ML, Karniadakis GE. Mixed spectral/hp element formulation for nonlinear elasticity. Comput Methods Appl Mech Eng. 2012;213:42–57.
5. Yu Y, Baek H, Karniadakis GE. Generalized fictitious methods for fluid-structure interactions: analysis and simulations. J Comput Phys. 2013;245:317–346.
6. Baek H, Karniadakis GE. Sub-iteration leads to accuracy and stability enhancements of semi-implicit schemes for the Navier–Stokes equations. J Comput Phys. 2011;230:4384–4402.
7. Insley J, Grinberg L, Papka M. Visualizing multiscale, multiphysics simulation data: brain blood flow. In: Large Data Analysis and Visualization (LDAV), IEEE Symposium on; 2011. pp. 3–7.
8. Schloegel K, Karypis G, Kumar V. Parallel static and dynamic multi-constraint graph partitioning. Concurr Comput: Pract Exp. 2002;14:219–240.
9. Ma KL, Wang C, Yu H, Moreland K, Huang J, Ross R. Next-generation visualization technologies: enabling discoveries at extreme scale. SciDAC Rev. 2009;12:12–21.
10. Fuchs R, Hauser H. Visualization of multi-variate scientific data. Comput Graph Forum. 2009;28.
