Abstract
Web-based molecular graphics have transformed the interactive visualization of molecular data, leveraging modern web technologies that provide GPU acceleration, optimized JavaScript engines, and seamless access across devices without additional software installation.
We present the graphics engine at the core of the Mol* toolkit, a high-performance, open-source framework that is widely adopted in academia and industry, including by the Protein Data Bank, UniProt, EMDB, and AlphaFold DB.
The engine combines a comprehensive styling system with a suite of optimized rendering primitives, including real-time surface generation, to deliver both flexibility and visual fidelity. Efficient handling of large-scale molecular scenes is achieved through level-of-detail management, GPU instancing, spatial acceleration structures, and frustum/occlusion culling. A screen-space global illumination model provides scalable, high-quality lighting, while integrated AR/VR support enables immersive molecular exploration.
Together, these capabilities enable engaging, real-time, high-fidelity visualization of molecular systems across a wide range of scales, from single atoms to billion-atom mesoscale assemblies, demonstrating the strengths of a bespoke web-native rendering engine for molecular graphics, available at https://molstar.org.
Introduction
Molecular data from biology, chemistry, and materials science is inherently 3D and often requires interactive visualization for analysis, exploration, and communication. Increasingly, this is done on the web, providing wide access, sharing, and collaboration. Modern web platforms enable this through GPU access via WebGL [Khronos] and high-performance JavaScript [TC39] engines, delivering speeds comparable to compiled languages on desktops, laptops, and mobile devices without additional software.
Mol* is a high-performance, open-source molecular visualization toolkit built on these capabilities. It is used in academia and industry, including RCSB PDB, PDBe, UniProt, AlphaFold DB, and numerous biotech and pharmaceutical applications. Mol* is MIT-licensed, openly developed, and freely available (https://molstar.org).
The Mol* Viewer [Sehnal 2021] continues the lineage of NGL [Rose 2015] and LiteMol [Sehnal 2017], and joins other web-based molecular graphics tools, including 3Dmol.js [Rego 2015], ChemDoodle Web Components [iChemLabs 2025], Molmil [Bekker 2016], iCn3D [Wang 2019], Miew [EPAM 2025], and JSmol [Hanson 2025], as well as specialized viewers like Speck [Terrell 2015], Simularium Viewer [Lyons 2022], Vol-E [Allen Institute 2025], and MolecularWebXR [Rodríguez 2025]. Most rely on WebGL, except JSmol, which primarily uses a slower software renderer. Feature support varies: sphere and cylinder impostors are common, and direct-volume rendering is available in 3Dmol.js, Miew, Vol-E, and Mol*, with Vol-E also supporting multi-channel volumes and path tracing. The Simularium Viewer and Mol* implement a level-of-detail (LOD) system for efficient rendering of large scenes. MolecularWebXR and Mol* offer Augmented Reality (AR) and Virtual Reality (VR) experiences, which the former implements with Three.js (https://threejs.org/).
Molecular data typically includes structural data, volumetric maps, and spatial transformations. Structural data may be represented with spheres and cylinders for atoms and bonds, more abstract cartoons for proteins, carbohydrate symbols, or molecular surfaces. Volumetric maps, such as electron densities, are rendered as isosurfaces, slices, or via direct volume rendering. Spatial transformations arise naturally, as in crystal symmetry, or for simplification, as in periodic simulation boxes or rigid particle models. Molecular dynamics uses periodic boundaries to reduce edge artifacts. Large complexes, like viral capsids, exploit symmetry, while mesoscale models pack biomolecules as rigid particles with position and rotation variations, mimicking repeated transformations. Finally, illustrative rendering styles [Goodsell 2019] are often employed to emphasize molecular shape, clarify spatial relationships, and reduce visual clutter in complex structures.
The Mol* graphics engine is designed to support these use cases across a wide range of scales, from single atoms to billions of atoms, encompassing both structural and volumetric data, with interactive performance at each scale. In this paper, we describe the architecture and techniques of the engine, detailing how it achieves high-performance web-based molecular visualization across diverse scientific domains.
Capabilities
Key capabilities of our engine for web molecular graphics include: versatile geometry rendering and styling, GPU-accelerated surface computation, scalable handling of large models, realistic lighting, and immersive AR/VR support.
Versatile Visuals
At its core, the engine combines optimized geometry primitives, a unified styling system, and expressive post-processing effects, enabling high-framerate web visualization of molecular data with excellent image quality and extensive customization. These techniques are designed not only to produce publication-quality images, but also to improve depth perception, spatial comprehension, and visual separation during interactive exploration of complex molecular scenes.
The engine provides the following geometry primitives (Fig. 1): meshes, spheres & cylinders, points & lines, text, and volumes & images. Triangle meshes represent arbitrary surfaces, with input data from attributes or GPU data textures (e.g., for isosurface extraction). Spheres and cylinders use ray-casted impostors for minimal memory use and fast, high-quality rendering. Points & lines have fixed pixel widths for fast exploration of large datasets. Direct volumes are raymarched 3D scalar fields, while images interpolate field slices. Text uses signed-distance fields for scalable, pixel-perfect rendering.
Figure 1.

a) All geometry primitives, clockwise: sphere impostors, cylinder impostors, direct-volume, mesh (from texture), lines & points, mesh, and the chain labels are SDF text. b) Materials, clockwise: metallic golden protein and red bumpy water, plastic purple protein and red bumpy water, matte green protein and red bumpy water.
Color assignment granularity: Per c) object, d) instance, and e) group.
Backfaces of transparent surfaces: f) Omitted, g) transparent, and h) opaque. Small molecule with surface: i) X-ray effect reveals internal details while preserving an object’s silhouette, and j) outline supported for transparent objects.
Geometry instancing is supported for all primitives, sharing geometry while allowing per-instance position and rotation. This enables efficient visualization of symmetric structures like virus capsids or particle-based mesoscale models.
The unified styling system controls color, transparency, emissivity, and material properties via IDs or 3D volumes, allowing consistent theming across any representation.
For transparency, the engine supports three methods: Weighted Blended Order Independent Transparency (WBOIT) for balanced quality and performance, simple blending for speed at the cost of overlap artifacts, and Depth Peeling OIT (DPOIT) for maximum accuracy at higher cost. Configurable backface transparency (Fig. 1f, g, h) and an X-ray effect (Fig. 1i) are also available. Transparency is integrated with outlines (Fig. 1j) and ambient occlusion.
Post-processing effects include ambient occlusion, outlines, bloom, depth of field, local shadows, and flexible backgrounds.
To reduce aliasing, the engine provides standard image- and sample-based anti-aliasing, suitable for dense, varying molecular geometry where uniform colors make jagged edges more noticeable. Techniques can be combined to balance hardware performance and visual quality.
Real-time Surfaces
The renderer supports three GPU compute functions: Gaussian-density accumulation, marching-cubes isosurface extraction, and volumetric smoothing of styling properties (Fig. 2). The functions work independently but can be chained together to keep all data GPU-resident and avoid expensive data transfer to the CPU.
Figure 2.

a) GAIN domain tethered agonist exposure [Beliu 2020]. Whole protein and inset with closeup of the cleavage site showing per-pixel transparency and smooth coloring. b) Isosurface at sigma 2 threshold extracted from the electron density map for PDB entry 4V5A with inset showing details.
Our volumetric smoothing approach can handle any styling property (color, transparency, etc.) and only requires a set of points with associated styling properties, usually vertices of a mesh. To control quality and performance we provide two parameters: grid resolution and vertex sample stride. Since this is a volumetric approach, any spatially close properties are averaged, even if they originally belong to different surfaces. With appropriate grid resolutions, we have not seen this to be a problem in practice.
Running the whole Gaussian-surface pipeline on the GPU is about an order of magnitude faster than on the CPU. The pipeline steps are 1) calculate Gaussian density, 2) extract isosurface, 3) smooth styling properties, and 4) render mesh. Isosurface extraction alone shows a similar order-of-magnitude speedup on the GPU, excluding the time to upload the data to the GPU; when upload time is included, the gain is much smaller, so GPU extraction is most beneficial when multiple isosurfaces are extracted from the same data.
Memory available to the GPU is usually more limited than what is available to the CPU. Hence, Mol* applications implement an automatic fallback to calculating surfaces on the CPU whenever the required memory likely exceeds what is available to the GPU.
Large Scenes
The engine scales to mesoscale molecular models from viruses to cell organelles, containing hundreds of thousands of protein instances and billions of atoms represented as spheres (Fig. 3). This is achieved through custom optimizations beyond standard culling and level-of-detail (LOD). A two-level spatial acceleration structure accelerates LOD selection, frustum, and occlusion culling within web constraints. The sphere-specific LOD system adds no extra memory and requires only automatic attribute reordering. Multi-scale ambient occlusion (AO) extends standard AO to capture both fine crevices and large cavities, enhancing depth perception. Together, these features enable interactive visualization of billion-atom models with greatly reduced rendering cost and memory usage.
Figure 3.

Top: Graphics presets for a large scene, the presynaptic bouton [Rammner 2022, Wilhelm 2014]. Full views with inset details. a) Quality with SSGI, b) Quality, c) Balanced, d) Performance. Bottom: Details of the same scene rendered with only ambient light, outlines, and the following three different levels of ambient occlusion. e) None, f) Standard, and g) Multi-scale. For interactive examples visit https://molstar.org/me/.
The Mesoscale Explorer [Rose 2024] showcases these capabilities in a web application for large-scale models. It employs LOD, culling, and multi-scale AO for scalable performance across desktops and mobile devices. Default settings enable frustum and occlusion culling and limit styling to instance level for efficiency. Quality presets adjust LOD thresholds, resolution, and sphere impostor approximations, illustrating how the engine’s features integrate in an end-user tool.
The LOD system also improves usability for smaller scenes. An automatic LOD mode shows more detailed representations near the camera while fading out coarser ones using stochastic transparency. For example, the Mol* Viewer implements this as a preset (Automatic Detail) with Gaussian surface, cartoon, and ball-and-stick representations for intuitive macromolecular exploration.
Interactive Illumination
We integrated a custom screen-space global illumination (SSGI) solution for high-quality lighting of opaque geometry, with transparent elements blended over. Unlike standard lighting, SSGI treats emissive geometry as light sources, adding ambience and enabling emissive-lit scenes (Fig. 4). Compared to traditional lighting with or without AO, SSGI produces brighter, more balanced results, improving the visibility of binding pockets, cavities, and spatial relationships that are important for molecular interpretation during interactive exploration. Light bounces illuminate surfaces naturally, while enclosed regions remain appropriately shaded without AO’s overdarkening (Fig. 4a, b, c).
Figure 4.

Top: Protein with ligand scene showing PDB entry 6AU3. a) Plain looks flat, with little depth information conveyed. b) SSAO has crevices easily distinguishable, but overly dark. c) SSGI has light bouncing around for a more natural-looking appearance. Bottom: Dramatic renderings with SSGI and emissive objects. d) Luciferase is lit only by red glowing luciferin. e) Cloverite with added DoF, and bloom. f) CellPack model of mature ISG [Autin 2022] with inset showing ATPase. For interactive examples visit https://molstar.org/illumination/.
Because illumination is computed in screen space, performance is independent of scene complexity, scaling from single molecules to mesoscale models.
True thickness information is missing in screen-space lighting because the renderer knows only the pixel positions on the screen, not the actual depth or extent of the objects. We address this by (1) estimating a base thickness from front- and back-face depth, and (2) applying per-object density factors based on representation type (e.g., consistent cylinder thickness in stick models), providing accurate, automatic thickness estimation without user tuning.
SSGI supports interactive exploration: images appear noisy during motion but converge as samples accumulate. Illumination quality and sample count adjust automatically for responsiveness, and users can toggle SSGI on or off instantly. For screenshots, fixed high-quality settings ensure reproducible images across devices.
Immersive AR/VR
The engine provides unobtrusive AR/VR integration via the WebXR API, requiring no application changes. Users can enter or exit AR/VR at any time with minimal impact on the scene (Fig. 5a). For performance, post-processing effects are disabled by default, and view-dependent clipping is turned off to avoid disorientation because depth perception in AR/VR makes it unnecessary. Users can scale molecular scenes with simple gestures to fit their environment. With suitable quality presets, even large molecules and mesoscale models can be explored immersively (Fig. 5b). The system also supports animated molecular data, including molecular dynamics and authored stories [Slaninakova 2025], in fully immersive environments (Fig. 5c).
Figure 5.

Stills from example AR sessions available at https://molstar.org/xr/. a) Checking the fit density of a Rhodopsin crystal structure (PDB entry 3PQR). b) A quick look at the CellPack HIV model [Johnson 2014]. c) Playing through the MolViewStories Kinase story.
Discussion
By combining broad feature support with high visual fidelity, the Mol* graphics engine provides a flexible foundation for web-based molecular visualization across many scientific applications.
The Mol* Viewer is widely customized and integrated across diverse molecular visualization applications, demonstrating the versatility of the underlying graphics engine. Major resources such as RCSB PDB and PDBe use tailored viewers for 3D structures, electron density maps, validation reports, and assembly symmetries. AlphaFold DB displays predicted models with integrated quality metrics [Fleming 2025]. Other examples include PePr2Vis for protein protrusions [Reuter Lab 2022], CaverWeb for tunnel visualization [Marques 2025], PDBTM for transmembrane regions [Dobson 2024], RNAspider for RNA entanglements [Luwanski 2022], DNATCO for nucleic acid annotation [Černý 2026], and Iambic’s Envision for molecular orbital rendering [Iambic 2023].
The Mesoscale Explorer showcases the engine’s scalability, enabling whole-cell visualization directly in the web browser. This was previously possible only in desktop tools like cellVIEW [Le Muzic 2015]. Our approach similarly employs occlusion culling, custom sphere LODs, and illustrative shading, but is adapted to web constraints. Due to WebGL limitations, CPU-based culling replaces GPU methods, making it costlier and less precise because occlusion data lags by a few frames.
Our screen-space global illumination (SSGI) provides high-quality, natural-looking lighting for any molecular data, including illustrations (Fig. 6). It produces imagery comparable to specialized tools like Speck and QuteMol [Tarini 2006] while supporting all geometry types and scene sizes.
Figure 6.

Reimagination of illustrations from the 111th Molecule of the Month: Hydrogenase [Goodsell 2009] using a) SSGI to convey depth and b) monolayer transparency to reveal the interior.
Finally, immersive AR/VR support enhances spatial perception and engagement. Thanks to WebXR, such experiences are instantly accessible across compatible headsets without requiring separate native applications or distribution via device-specific app stores.
Future
The evolving web platform continues to open new possibilities. WebGPU, a next-generation graphics API with GPU compute capabilities, will enable faster, GPU-based calculations for tasks like molecular surface generation and culling, further expanding the capabilities available to web-based molecular visualization engines. WebXR support remains incomplete across browsers (e.g., Safari, Firefox), but broader adoption could justify integrating support for building immersive user interfaces directly at the engine level. While user interfaces are generally not part of the graphics engine, in AR/VR they are part of the 3D scene and warrant integration.
Methods
To support the capabilities outlined above, we implemented a set of integrated rendering and computation methods. Here, we detail the design and operation of the Mol* graphics engine, from scene organization to GPU-based rendering and AR/VR integration.
Scene
The scene and rendering data are organized in a flat, non-hierarchical layout, with all renderable objects stored in a single list. Each object is self-contained, encapsulating all data needed for rendering (geometry, colors, etc.). Spatial organization, visibility, level-of-detail, and transformations are handled by the host application, which has deeper knowledge of the molecular data. This design minimizes coupling between the renderer and host, enabling independent development and maintenance.
Each renderable object represents a single primitive type for efficient GPU sorting and dispatch. Instancing is handled via ID lists and transformation matrices. Supported primitive types include triangle meshes, ray-casted impostor spheres and cylinders [Sigg 2006], rasterized points and lines, signed-distance-field text, ray-marched volumes, and per-pixel interpolated images (Fig. 1a).
Every primitive type has a dedicated shader program that shares a common set of inputs. These include object, instance, group IDs, transformation matrices, and styling parameters such as color and transparency, along with any type-specific data. This unified interface allows extensive shader code reuse across different rendering methods.
Styling properties (color, transparency, etc.) are stored in data textures accessed by instance and group IDs, decoupling geometry from appearance. This avoids redundant styling data, e.g., all triangles representing a protein residue can share a single color value. The same IDs also support GPU picking, which provides pixel-perfect selection without additional acceleration structures. Picking works by rendering IDs instead of colors, allowing each pixel to uniquely identify the corresponding primitive component.
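The ID-based picking can be illustrated with a minimal sketch (the byte layout below is an assumption for illustration, not necessarily Mol*'s exact encoding): the picking pass writes an integer ID packed into RGB instead of a color, and a readback on click decodes it.

```typescript
/** Pack a 24-bit ID into normalized RGB components for a picking render target. */
function packIdToRgb(id: number): [number, number, number] {
  return [
    (id & 0xff) / 255,          // low byte in red
    ((id >> 8) & 0xff) / 255,   // middle byte in green
    ((id >> 16) & 0xff) / 255,  // high byte in blue
  ];
}

/** Decode the ID back from bytes read with gl.readPixels (each 0..255). */
function unpackIdFromBytes(r: number, g: number, b: number): number {
  return r | (g << 8) | (b << 16);
}
```

Because the ID round-trips losslessly through the render target, every pixel uniquely identifies the primitive component it covers, without any auxiliary picking structure.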
Styling
Appearance
The appearance of geometry primitives is defined by color, transparency, substance, and emissivity. Substance includes three parameters: metalness, roughness, and bumpiness [Mikkelsen 2010]. Together, they enable a wide range of surface finishes from glossy metals to matte polymers (Fig. 1b). A size property applies to spheres, cylinders, points, lines, and text.
Properties can be specified at the object, instance, group, or instanced-group level. Object-level properties are provided as uniforms, while others use textures indexed by group and instance IDs, minimizing data use. For example, coloring by symmetry operator uses instance granularity, allowing large protein chains to share color data per symmetry copy (Fig. 1c, d, e). This ID-based texture access replaces UV mapping, a common graphics technique in which 2D texture coordinates are assigned to surface vertices. Avoiding UV mapping simplifies geometry generation for complex molecular surfaces and enables a unified styling approach across all primitive types.
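As a sketch of the ID-indexed lookup that replaces UV mapping (the flat row-major layout and names below are assumptions, not Mol*'s exact scheme), an (instance, group) pair maps to one texel in a square data texture:

```typescript
// Hypothetical addressing: one texel per instanced group, laid out row-major
// in a square texture of side texDim. The shader computes the same mapping
// from its instance and group IDs instead of reading UV attributes.
function texelCoord(
  instanceId: number, groupId: number,
  groupCount: number, texDim: number
): [number, number] {
  const flat = instanceId * groupCount + groupId; // flat index of the pair
  return [flat % texDim, Math.floor(flat / texDim)];
}
```

Coarser granularities (per instance or per object) simply use fewer texels or a uniform, so the same lookup pattern covers all levels.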
Styling can also be defined volumetrically via a 3D texture sampled in world space, for example, by coloring a molecular surface by a 3D electrostatic potential map. Finally, the renderer supports per-object effects: no-light for stylized looks and flat shading, which derives normals from partial derivatives for a cleaner style on low-resolution geometry.
Transparency
For transparent geometry, the draw order affects correctness: objects that are farther from the camera must be rendered first. However, sorting them precisely is too costly in real time, so approximate transparency methods are commonly used [Kakkar 2025]. The engine provides three such methods (Fig. 7). Weighted Blended OIT (WBOIT) [McGuire 2013] avoids sorting by accumulating color and alpha in weighted buffers before compositing. Dual Depth Peeling OIT (DPOIT) [Bavoil & Myers 2008] reconstructs correct depth order by iteratively peeling layers from front and back; a variant of DPOIT, monolayer transparency, resolves a single transparent layer accurately at lower computational cost. Blended transparency forgoes sorting and simply blends transparent geometry, using separate passes for front and back faces to reduce artifacts.
Figure 7.

Pros and cons of transparency methods. Pros, top row: a) Blended: Fast and produces good results in scenes with low alpha values and few overlapping transparent elements. b) WBOIT: Fast and effectively handles multiple overlapping elements as long as alpha values remain low. c) DPOIT: Performs well for low and high alpha values, and manages medium number of overlapping elements. d) Monolayer: Fast and produces clean outputs, as it renders only the first transparency layer. Cons, bottom row: e) Blended: Can be confusing and prone to artifacts when many overlapping elements are present, especially at high alpha values. f) WBOIT: Produces poor results when high alpha values are used. g) DPOIT: Generally slow, with higher-quality results achieved at the cost of performance; artifacts may also occur if too few layers are peeled. h) Monolayer: Ineffective in scenes with multiple overlapping transparent objects that must remain visible.
An X-ray effect (and its inverse) is also available, modulating transparency by the angle between the camera direction and surface normal (Fig. 1i).
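The WBOIT accumulation and resolve can be sketched per pixel on the CPU; the depth-based weight below is one of the variants suggested in [McGuire 2013], not necessarily the one the engine ships:

```typescript
// One pixel's transparent fragments; z is normalized depth in (0, 1].
interface Frag { rgb: [number, number, number]; a: number; z: number }

// Weighted blended resolve: sums and products commute, so the result is
// independent of fragment order.
function wboitResolve(frags: Frag[], bg: [number, number, number]): [number, number, number] {
  const accum = [0, 0, 0];
  let accumA = 0, revealage = 1;
  for (const f of frags) {
    const w = f.a * Math.max(1e-2, 3e3 * Math.pow(1 - f.z, 3)); // depth-based weight
    for (let c = 0; c < 3; c++) accum[c] += f.rgb[c] * f.a * w;
    accumA += f.a * w;
    revealage *= 1 - f.a; // product of transmittances, order independent
  }
  const avg = accum.map(c => c / Math.max(accumA, 1e-5));
  return [0, 1, 2].map(c => avg[c] * (1 - revealage) + bg[c] * revealage) as [number, number, number];
}
```

Since only sums and a product are accumulated, fragments can arrive in any order, which is exactly what makes the method cheap but approximate at high alpha values (Fig. 7f).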
Post-processing
Ambient occlusion uses a screen-space technique [Filion & McNaughton 2008]. Randomly rotated sample points are scaled to a given radius, projected into view space, and used to sample the depth buffer for occlusion accumulation. Our multi-scale extension evaluates multiple radii and keeps the maximum occlusion per pixel, capturing effects across scales (Fig. 3e, f, g). For transparent samples, occlusion is scaled by opacity.
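The multi-scale combination step can be sketched on a simplified 1D depth buffer (an assumption for illustration; the real pass samples a 2D depth texture with randomly rotated kernels):

```typescript
// Fraction of neighbors within `radius` that lie in front of the center
// sample; a crude stand-in for the screen-space occlusion estimator.
function occlusionAt(depth: number[], i: number, radius: number): number {
  let occluded = 0, total = 0;
  for (let d = -radius; d <= radius; d++) {
    if (d === 0) continue;
    const j = i + d;
    if (j < 0 || j >= depth.length) continue;
    total++;
    if (depth[j] < depth[i]) occluded++; // neighbor in front occludes the center
  }
  return total > 0 ? occluded / total : 0;
}

// Multi-scale extension: evaluate several radii, keep the per-pixel maximum
// so both fine crevices and large cavities register.
function multiScaleOcclusion(depth: number[], i: number, radii: number[]): number {
  return radii.reduce((m, r) => Math.max(m, occlusionAt(depth, i, r)), 0);
}
```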
Outlines are generated via Sobel edge detection, with opaque and transparent objects processed separately, using the depth of the first transparent layer and the combined opacity of all layers (Fig. 1j).
Bloom extracts bright regions from either image luminance or an emissivity material property, blurs them, and adds the result back as a glow.
Depth-of-field applies a configurable blur to out-of-focus areas [Flick 2018], allowing control over the size, position, and shape of the focus region.
Local shadows are computed in screen space by ray-marching toward the light direction [Karabelas 2022]. Occlusion along the ray produces shadowing; step count and ray length are capped for performance, and an edge-fade term reduces border artifacts.
The background can be a static color, a skybox, a responsive image that adapts to canvas size, or a radial/linear gradient.
Anti-aliasing
The engine supports FXAA [Lottes 2009], SMAA [Jimenez 2012], and a temporal multi-sampling method. The latter blends successive, sub-pixel–jittered frames of a static scene and camera, using either a few samples per animation frame for interactivity or all samples in one frame for high-quality output.
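The temporal multi-sampling amounts to a running average of jittered frames; a minimal sketch with illustrative names:

```typescript
// Blend the newest sub-pixel-jittered frame into the running average.
// After n samples the average equals the mean of all frames, so a static
// scene converges to the fully anti-aliased result.
function accumulate(average: number[], frame: number[], sampleIndex: number): number[] {
  const t = 1 / (sampleIndex + 1); // weight of the newest sample
  return average.map((a, i) => a * (1 - t) + frame[i] * t);
}
```

Interactive use spreads the samples over several animation frames; high-quality output runs all of them in one frame.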
Compute
The engine provides three GPU compute functions: Gaussian-density accumulation [Krone 2012], marching-cubes isosurface extraction [Dyken 2008], and our own volumetric smoothing of styling properties. These functions can run independently or be chained to keep data GPU-resident and avoid costly CPU transfers.
Volumetric smoothing starts by iterating over mesh vertices, accumulating styling properties (color, transparency, etc.) and counts into a 3D volume. In a second pass, counts are used to average the properties. Grid resolution and vertex sample stride control quality and performance. The averaged values are then sampled by vertex or fragment shaders for any primitive, producing smooth, spatially averaged styling (Fig. 8). A CPU fallback implementation is also available.
Figure 8.

Gaussian surface for PDB entry 1CRN colored by chemical elements (carbon grey, oxygen red, nitrogen blue). a) Colored by vertex color. b) Colored by smoothed volume. c) Smoothed volume grid slices of color values accumulated and averaged from mesh vertices.
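The two-pass smoothing described above can be sketched on the CPU (the engine performs this in shaders against a 3D texture; the flat grid layout and single property channel here are simplifications):

```typescript
// Pass 1 accumulates property values and counts per grid cell; pass 2
// averages them. Shaders then sample the resulting volume in world space.
function smoothProperty(
  positions: [number, number, number][], values: number[],
  dim: number, cellSize: number, stride = 1
): Float32Array {
  const sum = new Float32Array(dim * dim * dim);
  const count = new Float32Array(dim * dim * dim);
  // pass 1: accumulate into cells; stride skips vertices for performance
  for (let i = 0; i < positions.length; i += stride) {
    const [x, y, z] = positions[i];
    const idx = Math.floor(x / cellSize)
      + Math.floor(y / cellSize) * dim
      + Math.floor(z / cellSize) * dim * dim;
    sum[idx] += values[i];
    count[idx] += 1;
  }
  // pass 2: average; spatially close properties blend even across surfaces
  for (let i = 0; i < sum.length; i++) if (count[i] > 0) sum[i] /= count[i];
  return sum;
}
```

The averaging across cells is what produces the smooth transitions of Fig. 8b, and also why properties from different nearby surfaces can mix at coarse grid resolutions.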
Scaling
Scaling rendering to hundreds of thousands of geometry instances, such as protein chains in whole-cell models, requires strategies beyond brute force. We reduce rendering cost using culling and level-of-detail (LOD) selection [Luebke 2002]. Instances outside the camera frustum or fully occluded do not contribute to the image and are removed by applying frustum and occlusion culling [Greene 1993]. Additionally, distant instances occupy fewer pixels, so selecting an appropriate LOD based on camera distance significantly reduces GPU workload.
Instance Grid
Culling and LOD selection require spatial grouping of instances. Iterating over each instance every frame is costly: larger groups reduce CPU work but make culling and LOD less effective, while smaller groups increase CPU overhead. To balance CPU and GPU load, we use a two-level grid based on instance bounding spheres (Fig. 9a). Instances are first grouped in a bottom-level grid with small cells, then in a top-level grid with larger cells. During culling and LOD selection, the top-level grid is checked first to avoid unnecessary bottom-level processing (Fig. 9b).
Figure 9.

a) Schematic depiction of our two-level grid. Instance bounding spheres (orange) are first grouped (black circles) in a bottom-level grid (black) with a smaller cell size and then in a top-level grid (blue) with larger cell size. b) Frustum culling example. The camera frustum (black trapezoid) only overlaps with top-level cell 3, and within it, only with bottom-level cells a, c, d. All instances in other cells are culled. c) Schematic depiction of sphere LODs. At level 0 all spheres are drawn (black). At level 1, spheres 2, 4, and 7 are drawn enlarged (green). At level 2, sphere 4 is drawn further enlarged (blue). d) Sphere LODs reordering example. Spheres are ordered by the level they first appear in.
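The two-level traversal can be sketched as follows (data layout simplified; `visible` stands in for the frustum/occlusion test against a bounding sphere):

```typescript
interface Sphere { center: [number, number, number]; radius: number }
interface BottomCell { bounds: Sphere; instances: number[] }
interface TopCell { bounds: Sphere; cells: BottomCell[] }

// Check top-level cells first; only when a top cell passes is its
// bottom-level content tested, avoiding per-instance work for culled regions.
function cullInstances(grid: TopCell[], visible: (s: Sphere) => boolean): number[] {
  const result: number[] = [];
  for (const top of grid) {
    if (!visible(top.bounds)) continue; // skip the whole top cell and children
    for (const cell of top.cells) {
      if (!visible(cell.bounds)) continue;
      for (const id of cell.instances) result.push(id);
    }
  }
  return result;
}
```

The same traversal drives LOD selection, with the visibility predicate replaced by a distance test per cell.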
Sphere LODs
Spheres efficiently represent molecular systems from atoms to whole-cell models. When only spheres are used, a bespoke LOD system is more efficient than the generic approach. Unlike the generic system, which maintains separate geometries per LOD, our bespoke system creates a hierarchy within a single sphere geometry. Lower-detail levels replace groups of spheres with representative ones. Existing spheres are reused, typically every nth sphere, since protein chains trace compact, space-filling paths (Fig. 9c). Attribute buffers are reordered so lower-detail spheres come first, allowing LOD selection by draw range and a scaling uniform (Fig. 9d). This approach shares almost all GPU resources between LODs and reduces memory use by roughly half compared to separate-geometry LODs.
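Selection over the reordered attributes can be sketched as a prefix range plus a radius scale (the per-level counts, scales, and distance thresholds below are assumed inputs produced during reordering, not the engine's actual values):

```typescript
// Spheres surviving to coarser levels are stored first, so a level is drawn
// as the first `count` spheres with their radii scaled up to cover the
// spheres they replace.
interface SphereLods { countPerLevel: number[]; scalePerLevel: number[] }

function selectLod(
  lods: SphereLods, distance: number, thresholds: number[]
): { count: number; scale: number } {
  let level = 0;
  while (level < thresholds.length && distance > thresholds[level]) level++;
  return { count: lods.countPerLevel[level], scale: lods.scalePerLevel[level] };
}
```

Because only the draw range and one uniform change between levels, switching LODs requires no buffer updates at all.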
SSGI
To calculate screen-space global illumination [Ritschel 2009], we use a backward Monte Carlo approach, tracing rays from the camera through each pixel. Rays hitting nothing return the background color; rays hitting a surface spawn a cosine-weighted reflected ray over the hemisphere. Rays accumulate color: the first hit contributes the shaded color, subsequent hits add diffuse color, and emissive surfaces contribute directly. Rays terminate when they miss all surfaces or after a configurable bounce limit, preventing stuck rays from contributing.
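The cosine-weighted reflected rays can be generated with the standard disk-projection construction (a sketch in the local frame where the surface normal is +Z; u1 and u2 are uniform random numbers in [0, 1)):

```typescript
// Sample a point on the unit disk and project it up to the hemisphere;
// the resulting direction density is proportional to cos(theta), matching
// the diffuse term so no extra weighting is needed.
function cosineSampleHemisphere(u1: number, u2: number): [number, number, number] {
  const r = Math.sqrt(u1);                  // radius on the unit disk
  const phi = 2 * Math.PI * u2;
  const x = r * Math.cos(phi);
  const y = r * Math.sin(phi);
  const z = Math.sqrt(Math.max(0, 1 - u1)); // lift the disk sample to the hemisphere
  return [x, y, z];
}
```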
Because we trace in screen space, thickness behind surfaces must be estimated to avoid overly dark results (Fig. 10). We compute a conservative base thickness from front- and back-face depth and scale it by a per-object density factor, with an optional fixed density for manual control.
Figure 10.

Thickness calculation for SSGI. a) Depiction of depth for an image slice. Front (solid) and back (dashed) depth samples are shown next to scene geometry (grey disks). Using only front depth, the tracing rays miss the gap between spheres (solid arrows) but find it with back depth (dashed arrow). b) Scene traced with front depth only. c) Scene traced with front and back depth.
To balance quality and interactivity, samples are traced iteratively: early results are noisy but improve over time. Adaptive denoising applies stronger blur when fewer samples are available, and the sample count is adjusted automatically based on the previous frame's render time, optimizing performance across GPUs. Screenshots always use high-quality tracing with multi-sample antialiasing: camera jittering is applied each iteration, and the precomputed color and normal buffers are recalculated per frame and blended to produce antialiased edges.
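The automatic sample-count adjustment can be sketched as a simple feedback controller (thresholds and parameter names are illustrative, not the engine's actual values):

```typescript
// Nudge the number of traces per frame up or down so the measured render
// time tracks a target budget, within configured bounds.
function adaptSampleCount(
  current: number, lastFrameMs: number, targetMs: number,
  min = 1, max = 16
): number {
  if (lastFrameMs > targetMs * 1.2) return Math.max(min, current - 1);
  if (lastFrameMs < targetMs * 0.8) return Math.min(max, current + 1);
  return current; // within tolerance: keep the current count
}
```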
WebXR
Using the WebXR API, the engine supports AR and VR devices by rendering the scene twice, once per eye, using headset pose data. To ensure seamless entry and exit from XR mode, several integrations are required.
First, the molecular scene is scaled to fit real-world space, e.g., within a 30 cm sphere in front of the user. A scaling factor is computed for the target size, applied once the molecular data is fully under the engine’s control, so the application remains unaware of AR/VR scaling details.
Second, head movement must be handled carefully. Text is normally rendered as camera-facing billboards, but in XR this can be disorienting when it follows head movement, so the text remains aligned to the initial camera orientation. Backgrounds, however, keep the skybox fixed relative to head movement for immersion.
Finally, GPU picking switches from screen-based to ray-based picking, casting rays from XR input devices. This is achieved by rendering a narrow view from the ray’s origin and direction to a small off-screen target for precise intersection testing.
Performance
Performance measurements were taken with Google Chrome (v141) on a desktop (AMD 7900X CPU, Nvidia 4070 Ti GPU) at 2560×1440 resolution and a laptop (Intel i7-1065G7 CPU with integrated Iris Plus GPU) at 1920×1200 resolution.
Calculating the colored Gaussian surface of the GAIN domain with 2838 atoms (Fig. 2a) is about an order of magnitude faster on the GPU (desktop: w/ smoothing ~12ms/~280ms on GPU/CPU and w/o smoothing ~9ms/~170ms; laptop: w/ smoothing ~230ms/~1100ms on GPU/CPU and w/o smoothing ~100ms/~750ms).
Extracting an isosurface from the 128×128×128 volume of the electron density map for PDB entry 4v5a (Fig. 2b) is an order of magnitude faster on the GPU (desktop: ~40 ms on the GPU vs. ~550 ms on the CPU; laptop: ~70 ms vs. ~1400 ms).
The presynaptic bouton scene with >3 billion atoms in >577 thousand instances (Fig. 3) can be viewed in real time on the desktop (quality preset: ~14ms zoomed-out, ~23ms zoomed-in) and interactively on the laptop (performance preset: ~70ms zoomed-out, ~70ms zoomed-in) using the graphics presets from the Mesoscale Explorer [Rose 2024]. Without any LOD, rendering slows by an order of magnitude on the desktop (~370ms zoomed-out and ~240ms zoomed-in).
The protein-with-ligand scene (Fig. 4a, b, c) renders a single SSGI iteration in real time on the desktop, both while in motion (~7 ms, including 4 traces) and while converging (~14 ms, including 7 traces). On the laptop, a single iteration renders interactively, both in motion and while converging (each ~48 ms, including 1 trace).
Supplementary Material
The file “figure-sessions.pdf” contains links to interactive Mol* sessions corresponding to each figure, allowing exploration of the associated 3D structures.
Acknowledgments
A.S.R. was supported until June 2020 by grants for RCSB PDB core operations jointly funded by the US National Science Foundation (DBI-2321666, PI: S.K. Burley), the US Department of Energy (DE-SC0019749, PI: S.K. Burley), and the National Cancer Institute, the National Institute of Allergy and Infectious Diseases, and the National Institute of General Medical Sciences of the National Institutes of Health (R01GM157729, PI: S.K. Burley). D.S. acknowledges funding from the Grant Agency of the Czech Republic JuniorStar project (22-30571M). L.A. is supported by NIH grants GM120604 and 5U54AI170855.
References
- Khronos. WebGL: Low-level 3D graphics API based on OpenGL ES. https://www.khronos.org/webgl/.
- TC39. Specifying JavaScript. https://tc39.es/.
- Sehnal D, Bittrich S, Deshpande M, Svobodova R, Berka K, Bazgier V, et al. Mol* Viewer: modern web app for 3D visualization and analysis of large biomolecular structures. Nucleic Acids Res. 2021;49:W431–7. 10.1093/nar/gkab314.
- Rose AS, Hildebrand PW. NGL Viewer: a web application for molecular visualization. Nucleic Acids Res. 2015;43(W1):W576–W579. 10.1093/nar/gkv402.
- Sehnal D, Deshpande M, Varekova RS, Mir S, Berka K, Midlik A, et al. LiteMol suite: interactive web-based visualization of large-scale macromolecular structure data. Nat Methods. 2017;14:1121–2. 10.1038/nmeth.4499.
- Rego N, Koes D. 3Dmol.js: molecular visualization with WebGL. Bioinformatics. 2015;31(8):1322–1324. 10.1093/bioinformatics/btu829.
- iChemLabs. ChemDoodle Web Components, v11.0.0, 2025. https://web.chemdoodle.com/.
- Bekker GJ, Nakamura H, Kinjo AR. Molmil: a molecular viewer for the PDB and beyond. J Cheminform. 2016;8:42. 10.1186/s13321-016-0155-1.
- Wang J, Youkharibache P, Zhang D, Lanczycki CJ, Geer RC, Madej T, Phan L, Ward M, Lu S, Marchler GH, Wang Y, Bryant SH, Geer LY, Marchler-Bauer A. iCn3D, a web-based 3D viewer for sharing 1D/2D/3D representations of biomolecular structures. Bioinformatics. 2020;36(1):131–135. 10.1093/bioinformatics/btz502.
- EPAM. Miew, v0.11.1, 2025. https://lifescience.opensource.epam.com/miew/.
- Jmol: an open-source Java viewer for chemical structures in 3D. http://www.jmol.org/.
- Terrell R. Speck, 2015. https://wwwtyro.github.io/speck/.
- Lyons B, Isaac E, Choi NH, Do TP, Domingus J, Iwasa J, et al. The Simularium Viewer: an interactive online tool for sharing spatiotemporal biological models. Nat Methods. 2022;19:513–5. 10.1038/s41592-022-01442-1.
- Allen Institute. Vol-E, v2.15.0, 2025. https://vol-e.allencell.org/.
- Rodríguez FJC, Frattini G, Phloi-Montri S, Meireles FTP, Terrien DA, Cruz-León S, Dal Peraro M, Schier E, Lindorff-Larsen K, Limpanuparb T, Moreno DM, Abriata LA. MolecularWebXR: Multiuser discussions in chemistry and biology through immersive and inclusive augmented and virtual reality. J Mol Graph Model. 2025;135:108932. 10.1016/j.jmgm.2024.108932.
- Goodsell DS, Austin L, Olson AJ. Illustrate: Software for Biomolecular Illustration. Structure. 2019;27(11):1716–1720.e1. 10.1016/j.str.2019.08.011.
- Rose A, Sehnal D, Goodsell DS, Autin L. Mesoscale Explorer: Visual exploration of large-scale molecular models. Protein Sci. 2024;33(10):e5177. 10.1002/pro.5177.
- Slaninakova T, Charlop-Powers Z, Doshchenko V, Rose AS, Midlik A, Sekuła A, Forseca N, Fleming J, Vallat B, Autin L, Sehnal D. MolViewStories: Interactive Molecular Storytelling. https://molstar.org/mol-view-stories/.
- Fleming J, Magana P, Nair S, Tsenkov M, Bertoni D, Pidruchna I, Afonso MQL, Midlik A, Paramval U, Žídek A, Laydon A, Kovalevskiy O, Pan J, Cheng J, Avsec Z, Bycroft C, Wong LH, Last M, Mirdita M, Steinegger M, Kohli P, Váradi M, Velankar S. AlphaFold Protein Structure Database and 3D-Beacons: New Data and Capabilities. J Mol Biol. 2025;437(15). 10.1016/j.jmb.2025.168967.
- Reuter Lab. PePr2Vis: Peripheral Protein Protrusion Visualisation, v1.3, 2022. https://reuter-group.github.io/peprmint/pepr2vis/.
- Marques SM, Borko S, Vavra O, Dvorsky J, Kohout P, Kabourek P, Hejtmanek L, Damborsky J, Bednar D. Caver Web 2.0: analysis of tunnels and ligand transport in dynamic ensembles of proteins. Nucleic Acids Res. 2025;53(W1):W132–W142. 10.1093/nar/gkaf399.
- Dobson L, Gerdán C, Tusnády S, Szekeres L, Kuffa K, Langó T, Zeke A, Tusnády GE. UniTmp: unified resources for transmembrane proteins. Nucleic Acids Res. 2024;52(D1):D572–D578. 10.1093/nar/gkad897.
- Luwanski K, Hlushchenko V, Popenda M, Zok T, Sarzynska J, Martsich D, Szachniuk M, Antczak M. RNAspider: a webserver to analyze entanglements in RNA 3D structures. Nucleic Acids Res. 2022;50(W1):W663–W669. 10.1093/nar/gkac218.
- Černý J, Malý M, Božíková P, Prchalová T, Svoboda J, Biedermannová L, Schneider B. DNATCO v5.0: integrated web platform for 3D nucleic acid structure analysis. Nucleic Acids Res. 2026;54(1):gkaf1491. 10.1093/nar/gkaf1491.
- Iambic. Envision, 2023. https://www.iambic-envision.com/.
- Le Muzic M, Autin L, Parulek J, Viola I. cellVIEW: a tool for illustrative and multi-scale rendering of large biomolecular datasets. Eurographics Workshop on Visual Computing for Biology and Medicine. 2015:61–70. 10.5555/2853955.2853964.
- Tarini M, Cignoni P, Montani C. Ambient Occlusion and Edge Cueing for Enhancing Real Time Molecular Visualization. IEEE Trans Vis Comput Graph. 2006;12(5). 10.1109/TVCG.2006.115.
- Sigg C, Weyrich T, Botsch M, Gross M. GPU-Based Ray-Casting of Quadratic Surfaces. The Eurographics Association, 2006. ISSN 1811-7813. 10.2312/SPBG/SPBG06/059-065.
- Mikkelsen M. Bump Mapping Unparametrized Surfaces on the GPU. Journal of Graphics, GPU, and Game Tools. 2010;15(1):49–61. 10.1080/2151237X.2010.10390651.
- Kakkar P, Rao SK, Maurer M, Mane V. Advancements in Order Independent Transparency: A Survey for Real-Time Rendering Practitioners. 2025 7th International Conference on Software Engineering and Computer Science (CSECS), Taicang, China, 2025, pp. 1–7. 10.1109/CSECS64665.2025.11009764.
- McGuire M, Bavoil L. Weighted Blended Order-Independent Transparency. Journal of Computer Graphics Techniques (JCGT). 2013;2(2):122–141. https://jcgt.org/published/0002/02/09/.
- Bavoil L, Myers K. Order Independent Transparency with Dual Depth Peeling. NVIDIA OpenGL SDK 10, 2008. https://developer.download.nvidia.com/SDK/10/opengl/src/dual_depth_peeling/doc/DualDepthPeeling.pdf.
- Filion D, McNaughton R. Effects & techniques. ACM SIGGRAPH 2008 Games. New York, NY, USA: ACM; 2008. p. 133–64. 10.1145/1404435.1404441.
- Flick J. Depth of Field, 2018. https://catlikecoding.com/unity/tutorials/advanced-rendering/depth-of-field/.
- Karabelas P. Screen space shadows, 2020. https://panoskarabelas.com/posts/screen_space_shadows/.
- Lottes T. FXAA. Tech. rep., NVIDIA, 2011. https://developer.download.nvidia.com/assets/gamedev/files/sdk/11/FXAA_WhitePaper.pdf.
- Jimenez J, Echevarria JI, Sousa T, Gutierrez D. SMAA: Enhanced Subpixel Morphological Antialiasing. Computer Graphics Forum. 2012;31:355–364. 10.1111/j.1467-8659.2012.03014.x.
- Luebke D, Reddy M, Cohen J, Varshney A, Watson B, Huebner R. Level of Detail for 3D Graphics. San Francisco: Morgan Kaufmann; 2002.
- Greene N, Kass M, Miller G. Hierarchical Z-buffer visibility. In Proceedings of SIGGRAPH '93. New York, NY, USA: ACM; 1993. p. 231–238. 10.1145/166117.166147.
- Krone M, Stone J, Ertl T, Schulten K. Fast Visualization of Gaussian Density Surfaces for Molecular Dynamics and Particle System Trajectories. EuroVis – Short Papers, 2012. 10.2312/PE/EuroVisShort/EuroVisShort2012/067-071.
- Dyken C, Ziegler G, Theobalt C, Seidel HP. High-speed Marching Cubes using HistoPyramids. Computer Graphics Forum. 2008;27:2028–2039. 10.1111/j.1467-8659.2008.01182.x.
- Ritschel T, Grosch T, Seidel HP. Approximating dynamic global illumination in image space. In Proceedings of I3D '09. New York, NY, USA: ACM; 2009. p. 75–82. 10.1145/1507149.1507161.
- Beliu G, Altrichter S, Guixà-González R, Hemberger M, Brauer I, Dahse AK, Scholz N, Wieduwild R, Kuhlemann A, Batebi H, Seufert F, Pérez-Hernández G, Hildebrand PW, Sauer M, Langenhan T. Tethered agonist exposure in intact adhesion/class B2 GPCRs through intrinsic structural flexibility of the GAIN domain. Mol Cell. 2021;81(5):905–921.e5. 10.1016/j.molcel.2020.12.042.
- Rammner B, Ozvoldik K, Krieger E. Model of the presynaptic bouton, 2022. http://download.yasara.org/petworld/presynapse/index.html.
- Wilhelm BG, Mandad S, Truckenbrodt S, Kröhnert K, Schäfer C, Rammner B, Koo SJ, Claßen GA, Krauss M, Haucke V, Urlaub H, Rizzoli SO. Composition of isolated synaptic boutons reveals the amounts of vesicle trafficking proteins. Science. 2014;344:1023–1028. 10.1126/science.1252884.
- Autin L, Barbaro BA, Jewett AI, et al. Integrative structural modelling and visualisation of a cellular organelle. QRB Discovery. 2022;3:e11. 10.1017/qrd.2022.10.
- Johnson GT, Goodsell DS, Autin L, Forli S, Sanner MF, Olson AJ. 3D molecular models of whole HIV-1 virions generated with cellPACK. Faraday Discuss. 2014;169:23–44. 10.1039/C4FD00017J.
- Goodsell DS. Molecule of the Month: Hydrogenase. March 2009. 10.2210/rcsb_pdb/mom_2009_3.
