PLOS One. 2020 Nov 18;15(11):e0242265. doi: 10.1371/journal.pone.0242265

Adaptive volumetric light and atmospheric scattering

Tan Shihan 1,*, Zhang Jianwei 1, Lin Yi 1, Liu Hong 1, Yang Menglong 1, Ge Wenyi 1
Editor: Gulistan Raja
PMCID: PMC7673549  PMID: 33206695

Abstract

An adaptive sampling-based atmospheric scattering and volumetric light framework for flight simulators (FS) is proposed to enhance immersion and realism in real time. The framework comprises epipolar sampling (ES), visible factor culling (VFC), and interactive participating media density estimation (IPMDE). The proposed architecture works as follows: the scene is divided into two levels according to the distance from the camera. In the high-level pipeline, the layer close to the camera, more samples and a smaller sampling step size are used to improve image quality. In addition, the IPMDE method enhances realism by supporting interaction between the participating media and multiple light sources. Further optimization is performed with a lookup table and 3D volumetric textures, which allow the density of the participating media and the scattering coefficient to be computed in parallel. In the low-level pipeline, samples far from the camera contribute less to the final output, so fewer samples and a larger sampling step size are used. The improved ES method further reduces the number of samples involved in ray marching by exploiting epipolar geometry, and generates global light effects and shadows of distant terrain. The VFC method uses an acceleration structure to quickly find the lit segments, eliminating samples blocked by obstacles. The experimental results demonstrate that our architecture achieves a better sense of reality in real time and is well suited to FS.

Introduction

This paper presents adaptive volumetric light and atmospheric scattering, a technique developed for FS. We propose a new adaptive volumetric light model and an improved ES algorithm based on simpler and faster 1D min-max mipmaps, which lets us incorporate volumetric light into the scattering integral while maintaining real-time performance. We observe that with LOD (level of detail) and adaptive sampling, a fixed sampling interval is not necessary when the camera frustum is zoomed out, which allows us to use fewer samples without sacrificing image quality. The framework comprises epipolar sampling (ES), visible factor culling (VFC), and interactive participating media density estimation (IPMDE). Of these, the adaptive sampling framework and IPMDE are proposed in this paper; ES and VFC are inherited from other researchers [1, 2], with targeted modifications to adapt them to FS and other outdoor open-space applications. The workflow of the proposed architecture is as follows. In the low-level pipeline (far from the camera), the improved ES marches along the epipolar lines instead of performing an expensive pixel-by-pixel traversal. Then, VFC eliminates samples blocked by terrain or other obstacles, which further promotes real-time performance. At the core of the VFC algorithm is an acceleration structure (a 1D min-max binary tree), which allows us to quickly find the lit segments for all pixels in an epipolar slice in parallel. Finally, in the high-level pipeline, the IPMDE algorithm supports multiple light sources and volumetric fog near the camera, achieving better immersion and interaction through a series of optimizations and a reasonable sampling strategy.

The proposed framework renders atmospheric scattering and volumetric light, which are essential for producing compelling virtual scenes. However, simulating all scattering events is prohibitively expensive, especially for real-time applications. The most important advantage of the proposed method is the introduction of adaptive sampling for volumetric light, which assigns samples involved in ray marching according to their distance from the camera and their weight in the final output. Moreover, a series of subsequent optimizations can be performed on top of this framework. In short, our original contributions are as follows:

  • a)

    Our technical contribution is to eliminate avoidable samples, replacing them with larger sampling intervals according to the distance from the camera, so that all remaining samples contribute to the final result. After adaptive sampling, a smooth transition between the two pipelines is applied to accumulate color and transparency.

  • b)

    We observe that by applying epipolar geometry to the shadow map, each camera ray travels through only a single row of the shadow map, which allows us to find the visible factor by considering only 1D depth fields. We then perform all visibility tests within a 1D depth map without rectification, which avoids the singular values close to the epipolar line.

  • c)

    A participating media density estimation algorithm is proposed to enhance distance perception by introducing variable participating media density and multiple light sources. To achieve good performance, we use 3D volumetric textures for hardware acceleration, which allows us to compute the participating media density and the scattering coefficient in parallel.

Related works

The atmospheric scattering model

An atmospheric stratification model was proposed [3] and summarized [4], in which the atmosphere was divided into parallel planes assuming a uniform density within each layer. Kaneda et al. [5] conceived a new algorithm [6, 7] treating the atmosphere as a spherical shell in which the density of air particles decreases exponentially with altitude. The Rayleigh scattering model was studied [8] using an empirical function. To accurately simulate atmospheric scattering under different weather conditions, a more complete model was developed [9]. A qualitative and quantitative evaluation was given [10] for a clear sky. Later contributions were made by Zuliang et al., who introduced a sky illumination model with multiple scattering [11]. The papers above were dedicated to solving the scattering model and its integral. However, these methods did not take shadowing into account, which is often required for realism and is the effect on which we concentrate.

Pre-calculation and lookup table

Nishita et al. described how to optimize the optical depth integral using lookup tables [8]. A 3D lookup table was then proposed [12] for volumetric clouds, introducing the depth value as the third dimension. Further, Neyret et al. designed a 4D parameter table [13] that simulated multiple scattering and reflection at the same time. Later contributions were made by Klehm et al. using a prefiltering method.

Epipolar sampling

The epipolar sampling method was proposed [1], which samples sparsely along epipolar lines and interpolates between samples. The scattered radiance at all other points is interpolated along these lines, which can cause temporally varying artifacts.

Volumetric light

Rendering volumetric light resembles obtaining 2D medical images from 3D magnetic resonance imaging (MRI) or computed tomography (CT) data. The primary previous studies on volumetric light are found in References [14–19]. A later improvement was the volumetric shadow method, which can be combined with epipolar sampling for further acceleration [20–22]. Single scattering in inhomogeneous media was studied [23] using a shadow map-based polygon mesh that took the interaction with inhomogeneous media into account. Later contributions to this approach were made [24] to generate volumetric fog effects. Monte Carlo algorithms and neural networks have also been used [25] to render volumetric clouds and light. Still, these methods were expensive, and we did not observe a significant improvement in realism.

1D min-max binary tree

Previous studies of data structures emphasize their potential advantages for visibility tests, such as the 1D min-max mipmaps of Chen et al. [2]. However, that implementation does not support all camera ray directions; compared with their method, we use a more efficient approach without epipolar rectification. Tevs et al. [26] also benefit greatly from 1D min-max mipmaps, but their work focuses mainly on large-scale height fields in terrain systems.

Voxelized shadow volumes

Using a similar epipolar-space sampling, voxelized shadow volumes [27] played an important role in improving performance. Later contributions were made by the same team with an improved VSV [28], by which the approach could be scaled to many lights. However, this implementation aliases near the singularity and requires extra attention for robustness. Compared with their method, we use a shared GPU component, which allows us to benefit from sharing intermediate results.

Image-based methods

Image-based methods [29, 30] used fewer sampling points to obtain output of the same quality through anti-aliasing and interpolation. More contributions to this approach were proposed to reduce samples and deformation while marching along the ray [31]. Further improvements for soft filtered shadows were studied [32–35] using mipmaps. Other atypical methods were proposed [36–38], including radiance transfer, compressed light fields, and ray-box intersection. However, these methods cannot handle object boundaries and shadows very well.

Overview

Atmospheric scattering effects and volumetric light are paramount to creating realistic outdoor scenes. They also generate a clear sky as well as the optical phenomena at different times of day. Crepuscular rays, or god rays, are produced by a similar optical model in the presence of obstacles. Facing the massive computation and complexity of the nested integral, a series of simplified models have been studied, such as analytical exponential fog, screen space-based solutions, methods based on artistic configuration, and particle systems. However, only simplified light models are used in real-time rendering, which has many disadvantages. Computer graphics researchers try to use more precise models to generate these effects, which can not only significantly enhance the realism of the scene but also establish a visual distance perception between objects and lights. For real-time rendering applications such as FS, it is usually necessary to simplify and approximate the model. As introduced in the related work of the previous section, the rendering of atmospheric scattering and volumetric light includes image-based methods, particle and billboard methods driven by artistic effects, and modern ray marching-based solutions. Each of these methods has its advantages and disadvantages. However, we find that ray marching-based approaches follow the complete physical model strictly and can be executed by modern GPUs in parallel. With hardware developments, more programmable pipelines are available, which prompts researchers to develop intelligent approaches to the existing problems. Consequently, an improved ray marching-based approach for approximating the optical model is an indispensable part and a strong support of the proposed architecture.

Studies of ES approaches suggest that by applying epipolar geometry, sampling can be performed only along the epipolar lines instead of per pixel, while maintaining the same visual quality; the remaining pixels are obtained by interpolation. However, this implementation does not support multiple lights. In contrast, the proposed IPMDE approach interacts with a variable density of participating media lit by many light sources.

The 1D min-max mipmaps method [2] can significantly improve the efficiency of the visibility test and prevent occluded sampling points from participating in expensive ray marching. The 1D min-max mipmap in this paper is significantly improved by adaptive sampling. Moreover, to avoid the rectification that requires careful attention to the area near the epipolar line, we implement it in a general and efficient way.

The work of Chris Wyman [27] used a very similar epipolar-space sampling, which aligns the samples of ray marching in memory; many lights are supported by their subsequent work [28]. In the VSV approaches, epipolar-space sampling also plays an important role in performance, as described in the previous section. However, we find that some samples near the singular point are not handled correctly. In this paper, we achieve a significant improvement by using a universal component shared by many stages, which allows us to prevent aliasing near the singular point.

The interaction between the air medium and volumetric light plays a paramount role in this paper. However, we find that the existing approaches only support a uniform density of participating media. In our IPMDE proposal, by introducing a variable density model of the media instead of volumetric Perlin noise [39] or Gaussian noise [40], we enhance immersion and realism, which allows us to construct a visual distance perception. Moreover, pseudo-spheroids and the Gaussian blob model are introduced to avoid repeated calculations when air particles overlap. By applying pseudo-spheroids with a radius, we can easily account for their contribution to the shadow.

At the beginning of our study, we tried to use existing volumetric light and ray marching methods for FS. However, as mentioned above, existing methods have inherent limitations, which we summarize as follows:

  • a)

    Based on existing LOD approaches, the rendering of terrain and 3D models can easily manage scene complexity by reducing the number of triangle strips far from the camera when the camera is zoomed out. Unfortunately, the existing approaches do not apply LOD and adaptive sampling to volumetric light, so maintaining real-time performance remains challenging.

  • b)

    Note that for complex scenes, the existing ES supports only the sun as a single light source, which leads to incorrect interaction between multiple lights and objects.

  • c)

    In the 1D min-max mipmaps approach, epipolar rectification plays an important role in the VFC method. However, it is sophisticated and relies on a singular value decomposition of the scattering term, which requires careful attention to the area near the epipolar line.

  • d)

    From a user's perspective, the density of the participating media must change according to the external environment and user control. However, the existing approaches do not support a variable density model, which leads to a monotonous and unreal scene.

To address the above problems, we propose the adaptive sampling-based framework. The ultimate goal of the proposed system is to use fewer samples along epipolar lines without sacrificing image quality. At the core of our algorithm is the use of the improved ES and 1D min-max mipmaps, which allows us to quickly find the lit segments within each epipolar slice in parallel. Our technical contribution is to eliminate the epipolar rectification used for integration, replacing it with quick min-max mipmaps, which allows all rays to be processed in parallel.

Materials and methods

The proposed framework comprises epipolar sampling (ES), visible factor culling (VFC), and interactive participating media density estimation (IPMDE). First, we start from screen space and accumulate the light intensity reaching each screen pixel. To avoid processing every screen pixel, we exploit the property of epipolar geometry that the intensity of scattered light changes smoothly along an epipolar line. We then project a ray from the camera through each epipolar sample. Next, we convert each view ray to shadow map space and perform a visibility test with the proposed VFC, which prevents occluded ray segments from participating in the calculation. Finally, the proposed IPMDE method achieves more realistic and detailed interactive features by supporting variable participating media density.

As shown in Fig 1, the proposed architecture uses the following per-frame procedure:

Fig 1. Adaptive volumetric light architecture.

Fig 1

  1. According to the distance from the camera, divide the camera perspective space into two levels.

  2. In the low-level pipeline, use fewer samples and a larger sampling step size.

  3. March along the epipolar lines on the screen instead of processing every screen pixel (the ES method).

  4. Eliminate samples blocked by terrain or other obstacles with an acceleration structure (a 1D min-max binary tree).

  5. In the high-level pipeline, use more samples and a smaller sampling step size for ray marching, which provides more details of the volumetric light.

  6. Change the density of the participating media with the model proposed in the IPMDE subsystem.

  7. Let the light from multiple sources interact with the variable density of the participating media.

  8. Speed up the ray marching calculation with 3D volumetric textures and other optimizations.

  9. Perform ray marching along each view ray and accumulate the transparency of each sample; when the accumulated transparency reaches 1, the traversal terminates.

Compared with the traditional method, we design the sampling step size and its distribution according to the distance from the camera, which forms the proposed adaptive sampling framework described in Algorithm 1.

Algorithm 1 Adaptive sampling of the proposed framework

// TN: total number of samples; RatioL1: ratio of samples in the L1 level
T1 = TN * RatioL1
// T1: number of samples in L1; T2: number of samples in L2
T2 = TN − TN * RatioL1
// NCP: near clip plane; FCP: far clip plane; StepL1Base: fixed step of the samples in L1
StepL1Base = (NCP − 0)/T1
StepL2Base = (FCP − NCP)/T2
for i = 0 to T2 − 1 do
 // StepL2[i]: each sample in L2 uses an adaptive step based on StepL2Base
 StepL2[i] = StepL2Base * (i + 1) * 2/(T2 + 1)
end for
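Algorithm 1 can be sketched in plain C++ as follows. The function and variable names are ours, and the exact L2 growth law is our reading of the algorithm: linearly increasing steps whose sum still covers the [NCP, FCP] range exactly, since the weights (i + 1) * 2/(T2 + 1) sum to T2.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch of Algorithm 1 (names and the exact L2 growth law are assumptions):
// L1 samples use a fixed step; L2 steps grow linearly with distance so that
// they still cover exactly the [NCP, FCP] range.
std::vector<float> adaptiveSteps(int TN, float ratioL1, float NCP, float FCP) {
    int T1 = static_cast<int>(TN * ratioL1);   // samples in the near (L1) level
    int T2 = TN - T1;                          // samples in the far (L2) level
    float stepL1 = (NCP - 0.0f) / T1;          // fixed step from camera to NCP
    float stepL2Base = (FCP - NCP) / T2;       // average step in L2
    std::vector<float> steps;
    for (int i = 0; i < T1; ++i) steps.push_back(stepL1);
    // Linearly increasing steps: sum_i (i+1)*2/(T2+1) == T2, so total == FCP-NCP.
    for (int i = 0; i < T2; ++i)
        steps.push_back(stepL2Base * (i + 1) * 2.0f / (T2 + 1));
    return steps;
}
```

With TN = 100, ratioL1 = 0.5, NCP = 10, and FCP = 1000, the 100 steps sum to 1000, and the far steps grow monotonically.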

Epipolar sampling

In practice, two main problems are associated with the existing ES method; they constitute a bottleneck in sample distribution and are addressed by the proposed approach. First, by applying a variable density of participating media, we construct a clear sense of distance. Second, we support multiple light sources instead of the sun as a single light source, which allows volumetric light to interact correctly with the objects in various scenes.

In this section, we describe the proposed sampling strategy that determines where the in-scattering term is computed and where it is interpolated from nearby samples, as summarized in the following steps:

  • a)

    A configuration of epipolar lines is defined; then the initial equidistant sampling along these lines is refined to capture depth discontinuities.

  • b)

    Perform ray marching along the view ray and accumulate the color and transparency.

  • c)

    Interpolation is performed along and between the epipolar line segments.

  • d)

    The color and transparency of multiple light sources are accumulated into a 2D texture which is mapped to the screen space.

The generation of the epipolar lines and samples is the most important process, shown in Fig 2. When the sun is within the screen, we place the samples and the initial point of each line accordingly (Fig 3, bottom left). When the sun is off screen, we place the origin at the intersection of the epipolar line and the screen boundary (Fig 3, top left). The intersections of the epipolar slices (planes) with the image plane are the epipolar lines, along which streaks of lighting appear to emanate. We place the epipolar sampling points evenly along each epipolar line. The results of ray marching depend only on the epipolar samples within the same slice, which implies that we can process the slices in parallel.
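The placement of slice origins and exit points described above can be sketched on the CPU as follows. This is a minimal sketch in normalized [-1, 1] screen coordinates; the perimeter parameterization and the clipping routine are our assumptions, not the paper's code.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Distribute NSlices exit points evenly along the perimeter of the
// [-1,1]^2 screen (one per epipolar slice).
Vec2 sliceExitPoint(int slice, int NSlices) {
    float t = 4.0f * slice / NSlices;                       // perimeter parameter in [0,4)
    if (t < 1.0f) return {-1.0f + 2.0f * t, -1.0f};         // bottom edge
    if (t < 2.0f) return {1.0f, -1.0f + 2.0f * (t - 1.0f)}; // right edge
    if (t < 3.0f) return {1.0f - 2.0f * (t - 2.0f), 1.0f};  // top edge
    return {-1.0f, 1.0f - 2.0f * (t - 3.0f)};               // left edge
}

// When the light is off screen, move the epipolar-line origin to the first
// intersection of the segment light->exit with the [-1,1] box; when the
// light is on screen, the origin is the light position itself.
Vec2 sliceEntryPoint(Vec2 light, Vec2 exit) {
    float tMin = 0.0f;
    float d[2] = {exit.x - light.x, exit.y - light.y};
    float p[2] = {light.x, light.y};
    for (int a = 0; a < 2; ++a) {
        if (std::fabs(d[a]) < 1e-6f) continue;
        float t0 = (-1.0f - p[a]) / d[a], t1 = (1.0f - p[a]) / d[a];
        if (t0 > t1) std::swap(t0, t1);
        tMin = std::max(tMin, t0);          // first entry into the box
    }
    return {light.x + tMin * d[0], light.y + tMin * d[1]};
}
```

For an on-screen light the entry point equals the light position; for a light at (-2, 0) and an exit at (1, 0), the origin is shifted to the left screen border at (-1, 0).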

Fig 2. Ray march sampling and epipolar slice.

Fig 2

Fig 3. Rays from camera through the samples of the epipolar line.

Fig 3

The sample refinement is performed by searching for depth discontinuities using the depth buffer of the scene. After placing and refining the sampling points, the interpolation is performed to reconstruct the in-scattering for each pixel of the final image. Finally, the results are used together with the 2D texture from the IPMDE method to accumulate color and transparency.
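The refinement step above can be sketched as follows. This is a simplified CPU version; the coarse stride, the threshold, and the data layout are our assumptions.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch of the refinement pass: mark a sample as a ray-marching sample if it
// lies on the initial equidistant grid (every coarseStride-th sample) or if
// the depth buffer jumps between it and its neighbor; all other samples are
// later reconstructed by interpolation.
std::vector<int> refineSamples(const std::vector<float>& depthAlongLine,
                               int coarseStride, float threshold) {
    std::vector<int> marchSamples;
    int n = static_cast<int>(depthAlongLine.size());
    for (int i = 0; i < n; ++i) {
        bool coarse = (i % coarseStride) == 0 || i == n - 1;
        bool edge = i + 1 < n &&
                    std::fabs(depthAlongLine[i + 1] - depthAlongLine[i]) > threshold;
        if (coarse || edge) marchSamples.push_back(i);
    }
    return marchSamples;
}
```

For a depth row {1, 1, 5, 5, 5, 5, 5, 5, 5} with stride 4, the discontinuity at index 1 is promoted to a ray-marching sample even though it is not on the coarse grid.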

Our technical contribution is to eliminate the single light source used for integration, replacing it with multiple light sources, which allows all rays to be processed in parallel according to the scene, as shown in Algorithm 2.

Algorithm 2 Ray marching of the improved ES

// Calculate the position of the current sample in world coordinates
WorldPos.xyz = CalculatePos(ThreadID.xyz);
// Calculate the horizontal density at the same height using IPMDE
HorizontalDensity = CalculateHorizontalDensity(WorldPos);
// Calculate the ray direction and step length
RayDir.xyz = (Object.xyz − Camera.xyz)/NStep;
ds = ||RayDir.xyz||;
// Initialize the vertical density, Rayleigh and Mie scattering
VerticalDensity = 0;
for i = 0 to NStep do
 CurrentPos = Object.xyz + RayDir.xyz * i;
 h = |CurrentPos − EarthCenter| − EarthRadius;
 // Calculate the final density by combining height and horizontal density changes
 VerticalDensity = e^(−h/H);
 // Look up the density from the surface to the top of the atmosphere
 cosφ = dot(NEarth.xyz, SunDir.xyz);
 VerticalDensityAP = D[h, cosφ];
 // Accumulate the participating media density
 VerticalDensityPC += VerticalDensity * ds;
 VerticalDensityAPC = VerticalDensityAP + VerticalDensityPC;
 if (CurrentPos.z < Threshold)
  // Accumulate horizontal density changes near the ground
  FinalDensity = VerticalDensityAPC + HorizontalDensity;
 // Calculate the optical depth
 OpticalDepth = FinalDensity * βRMe.xyz;
 Attenuation = e^(−(TR.xyz + TM.xyz));
 DiffScattering = FinalDensity * βRMs.rgb * Attenuation.rgb * ds;
 FinalScattering += DiffScattering * Visibility;
end for
// Calculate the contribution of multiple light sources
SumLight = GetSunLight(WorldPos) * PhaseFunction(RayDir, SunDir);
for j = 0 to NumLights do
 SumLight += GetLocalLight(j, WorldPos) * PhaseFunction(RayDir, LightDir);
end for
FinalOutput = vec4(SumLight * FinalScattering.rgb, Attenuation);

Visible factor culling

At this point, we have selected the sampling points on each epipolar line and computed the scattering integral while taking shadows into account. In VFC, the 1D min-max mipmaps introduced by Chen et al. play an important role in quickly finding the lit segments for all view rays. However, their method is rather sophisticated and depends on a singular-term decomposition of the scattering term, which requires extra attention to the region close to the epipolar line. Our main contribution is to eliminate the epipolar rectification, replacing it with a static and simple scheme, which allows for massive parallelism. The VFC pipeline is as follows. First, we project the view rays to the shadow map space, where camera rays are cast through the samples on the epipolar lines. Second, the view rays in the shadow map are defined by an initial point and a direction. Finally, the min-max binary tree is constructed and traversed.

We traverse every sample by casting a ray, transforming its entry and exit positions to camera space, then sampling and calculating in-scattering within the shadow-map coordinate system. We apply a 2D lookup table to compare the depth of the samples with that of the occluders. Only the lit samples (where the depth of the current sample is less than that of the obstacle) are accumulated along the ray. In contrast, shadowed samples are prevented from participating in ray marching, as shown in Fig 3.

We take samples in the shadow-map coordinate system along the epipolar line, resulting in a 1D depth map. Every camera ray within the epipolar slice is converted into this space. To perform visible factor culling, we need to detect whether the current samples along the camera ray are under the depth map or above it. As shown in Fig 4, the blue lines are view rays, while the black segments are in shadow, which means the current sample is above the depth map (depth and height are inversely proportional).

Fig 4. 1D depth map for detecting lit and shadowed ray sections.

Fig 4

To construct the acceleration structure, the intersection between the epipolar slice and the shadow map needs to be defined. View rays within the epipolar slice are projected to the epipolar line by taking any two samples along the ray and converting them to the shadow-map coordinate system. The camera position is regarded as the origin, and its projected location Ouv represents the zeroth sample of the one-dimensional depth map. The exit point of the ray, cast through the end sample of the slice, is converted into uv space. The direction Duv is calculated from the start point to the end point and then normalized to fit subsequent calculations. If Ouv falls outside the boundary, we shift the initial point to the first intersection with the boundary (see Fig 5).

Fig 5. Defining the view ray (projected from the camera through the epipolar sample) by origin and direction in shadow map space.

Fig 5

At this point, we know the location Ouv of the zeroth point in the one-dimensional depth map and its direction Duv. We can deduce the position of the ith sample as Ouv + i * Duv. For every epipolar slice, we store its depth as depth[i]. By comparing the ray depth at Ouv + i * Duv with depth[i], we can determine whether the current sample is lit or shadowed.

The next layer of the binary tree is constructed by calculating the min-max value of every pair of 2ith and (2i+1)th sampling points. The whole tree is then constructed by propagating these min/max nodes upwards (see Fig 6a).
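The bottom-up construction can be sketched as follows. This is a CPU sketch; the level layout (level 0 = raw depth row, each higher level holding the min/max of two children) is an assumption about the data structure, not the paper's code.

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

using MinMax = std::pair<float, float>;

// Build a 1D min-max binary tree over an epipolar depth row. levels[0] is
// the raw depth (min == max per sample); levels[k][i] stores the min and max
// of its two children at level k-1. An odd tail node is carried up unchanged.
std::vector<std::vector<MinMax>> buildMinMaxTree(const std::vector<float>& depth) {
    std::vector<std::vector<MinMax>> levels(1);
    for (float d : depth) levels[0].push_back({d, d});
    while (levels.back().size() > 1) {
        const std::vector<MinMax>& cur = levels.back();
        std::vector<MinMax> next;
        for (size_t i = 0; i + 1 < cur.size(); i += 2)
            next.push_back({std::min(cur[i].first, cur[i + 1].first),
                            std::max(cur[i].second, cur[i + 1].second)});
        if (cur.size() % 2) next.push_back(cur.back());  // carry odd tail up
        levels.push_back(next);
    }
    return levels;
}
```

For an 8-sample depth row the tree has 4 levels, and the root stores the global min and max of the row.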

Fig 6. The binary tree for detecting the visible factor.

Fig 6

At this point, we have constructed the acceleration structure and need to perform the visibility test. As shown in Fig 6, for the current ray section, we extract its maximum and minimum values as thresholds. If its maximum value is below the minimum depth stored in the 1D min-max tree, the ray segment is completely lit. If its minimum value is greater than the maximum depth stored in the tree, the ray segment is shadowed. Otherwise, when the segment is neither totally lit nor shadowed, we descend step by step to the lower layers. Nodes that are completely in shadow are shown in dark blue, nodes that are completely lit are white, and nodes that contain visibility boundaries are gray, as shown in Fig 6c.
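The traversal above can be sketched as a recursive CPU routine (the real version runs in parallel on the GPU, and the power-of-two layout and names below are our assumptions). Because the ray depth is linear along the segment, its min and max over a subtree are attained at the endpoints.

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <utility>
#include <vector>

using MinMax = std::pair<float, float>;

// Build levels for a power-of-two depth row (level 0 = raw samples).
std::vector<std::vector<MinMax>> buildLevels(const std::vector<float>& depth) {
    std::vector<std::vector<MinMax>> levels(1);
    for (float d : depth) levels[0].push_back({d, d});
    while (levels.back().size() > 1) {
        const std::vector<MinMax>& cur = levels.back();
        std::vector<MinMax> next;
        for (size_t i = 0; i + 1 < cur.size(); i += 2)
            next.push_back({std::min(cur[i].first, cur[i + 1].first),
                            std::max(cur[i].second, cur[i + 1].second)});
        levels.push_back(next);
    }
    return levels;
}

// Count lit samples under `node` of `level`, pruning whole subtrees: fully
// lit when the ray's max depth is below the stored min, fully shadowed when
// the ray's min depth is above the stored max; otherwise descend. rayDepth(i)
// is the (linear) view-ray depth at sample i; a sample is lit when it is
// closer to the light than the depth map value.
int countLit(const std::vector<std::vector<MinMax>>& levels,
             const std::function<float(int)>& rayDepth, int level, int node) {
    int span = 1 << level;
    int begin = node * span;
    float r0 = rayDepth(begin), r1 = rayDepth(begin + span - 1);
    float rMin = std::min(r0, r1), rMax = std::max(r0, r1);
    const MinMax& mm = levels[level][node];
    if (rMax < mm.first) return span;   // entirely lit
    if (rMin > mm.second) return 0;     // entirely shadowed
    if (level == 0) return rayDepth(begin) < mm.first ? 1 : 0;
    return countLit(levels, rayDepth, level - 1, 2 * node)
         + countLit(levels, rayDepth, level - 1, 2 * node + 1);
}
```

With a depth row {5, 5, 5, 5, 1, 1, 1, 1} and a ray depth 2 + 0.5i, both halves are classified with a single node test each: the near half is entirely lit (4 samples), the far half entirely shadowed.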

Interactive participating media density estimating

Up to this point, we efficiently render atmospheric scattering and volumetric light with minimal but sufficient samples in the low-level pipeline. To realize the interactions with participating media and multiple light sources, we first need to calculate the density of the participating media at every sample of a volumetric texture. There are various methods to generate the air medium, from Gaussian blur to the physically driven methods used in offline renderers. There are also real-time solutions for more complex scattering functions [41]; this theoretical result enables the analytical computation of exact solutions to complex scattering phenomena and achieves an impressive interaction between light and participating media. However, these implementations target closed spaces rather than outdoor open space and, compared with the proposed approach, are far from real-time performance.

The proposed approach implies a LOD improvement by adjusting the air density according to the distance between the center of an air particle and the view ray. For every sample, the ray is traced to generate noise, with a threshold φ that serves as a random component for the details of the fog or cloud surface, as shown in the formula below:

φ ≤ e^(−‖pos − center‖ / (r((1 − ratio) + 2 * ratio * f(x, y, z)))) (1)

The formula filters the normalized density and forms a sphere by scaling the radius by ratio, where r is the radius of the sphere, pos is the point on the ray, and the function f normalizes the random variable with Perlin noise [39].
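The exact form of Eq (1) is hard to recover from the source, so the sketch below only models the stated idea: the effective radius of a pseudo-spheroid is the base radius r modulated by a normalized noise value f through (1 − ratio) + 2 * ratio * f, and a sample contributes when the density term reaches the threshold φ. All names are ours.

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch of the thresholded density test (assumed form, not the
// paper's exact equation): density falls off exponentially with the distance
// from the particle center, normalized by a noise-modulated radius.
bool insideNoisySphere(float distToCenter, float r, float ratio, float noiseF,
                       float phi) {
    // Noise modulation: f in [0,1] scales the radius in [1-ratio, 1+ratio].
    float effRadius = r * ((1.0f - ratio) + 2.0f * ratio * noiseF);
    float density = std::exp(-distToCenter / effRadius);
    return density >= phi;   // sample contributes when above the threshold
}
```

With f = 0.5 the modulation factor is exactly 1: a sample at the center always passes for φ < 1, while a sample ten radii away does not.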

Further, a 3D Gaussian distribution is used to calculate the positions of the pseudo-spheroids instead of the particle system of Huang et al. [40]. We use fewer primitives to generate more details of the cloud or fog boundary, as follows:

Posx = Mx + U(αx, βx)
Posy = My + U(αy, βy)
Posz = Mz + U(αz, βz)
Ri = ε(|Posx| + |Posy| + 1.0) (2)

where Pos is the position of the pseudo-spheroid, M is the center of the fog, U represents a random variable with normal distribution, α is the mean value, β the standard deviation, Ri the distance from Pos to the center, and ε is applied to compute the magnitude of the radius.
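Eq (2) can be sketched as follows, reading U(α, β) as a normal distribution with mean α and standard deviation β (our interpretation; the struct and function names are ours):

```cpp
#include <cassert>
#include <cmath>
#include <random>
#include <vector>

struct Spheroid { float x, y, z, radius; };

// Scatter pseudo-spheroids around a fog center M with normally distributed
// offsets, and give each one a radius that grows with horizontal offset, as
// in Eq (2). A single (alpha, beta) pair is used for all three axes here.
std::vector<Spheroid> makeSpheroids(int count, float Mx, float My, float Mz,
                                    float alpha, float beta, float eps,
                                    unsigned seed) {
    std::mt19937 rng(seed);
    std::normal_distribution<float> U(alpha, beta);
    std::vector<Spheroid> out;
    for (int i = 0; i < count; ++i) {
        float px = Mx + U(rng), py = My + U(rng), pz = Mz + U(rng);
        float r = eps * (std::fabs(px) + std::fabs(py) + 1.0f);
        out.push_back({px, py, pz, r});
    }
    return out;
}
```

Because of the "+ 1.0" term, every spheroid has a radius of at least ε, so primitives never degenerate to points.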

The interaction of multiple light sources significantly improves realism, but at the same time greatly increases the performance cost. Numerical integration and a lookup table are designed to solve this issue. We observe that the optical depth depends on two variables: the height h and the angle φ between the gravity direction and the sun rays. Therefore, the optical depth T(A→P) can be pre-calculated and saved to a lookup table as a texture in the GPU pipeline. Further, βRe and βMe can be represented as 3D vectors, and a 2D texture is used to store the density of the participating media from the ground to the top of the atmosphere. Finally, the optical depth integral is rewritten as follows:

T(A→P) = βRe.xyz * T[h, cos(φ)].x + βMe.xyz * T[h, cos(φ)].y (3)
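The precomputation behind Eq (3) can be sketched as follows. The grid sizes, the scale height, and the flat-atmosphere integration below are simplifying assumptions for illustration; the paper's table is a GPU texture with hardware filtering.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Tabulate the integrated particle density from height h to the atmosphere
// top along a direction with cosine c, so the shader can replace the density
// integral by a single table fetch (one such table per particle type).
struct DensityLUT {
    int nH, nC;
    float topH, scaleH;
    std::vector<float> data;   // nH * nC entries, row-major in h

    DensityLUT(int nH_, int nC_, float topH_, float scaleH_)
        : nH(nH_), nC(nC_), topH(topH_), scaleH(scaleH_), data(nH_ * nC_) {
        const int steps = 256;
        for (int ih = 0; ih < nH; ++ih) {
            float h0 = topH * ih / (nH - 1);
            for (int ic = 0; ic < nC; ++ic) {
                float c = 0.05f + 0.95f * ic / (nC - 1);   // avoid grazing rays
                float len = (topH - h0) / c, ds = len / steps, sum = 0.0f;
                for (int s = 0; s < steps; ++s) {
                    float h = h0 + c * ds * (s + 0.5f);    // height at midpoint
                    sum += std::exp(-h / scaleH) * ds;     // exponential density
                }
                data[ih * nC + ic] = sum;
            }
        }
    }
    float lookup(float h, float c) const {   // nearest-neighbor fetch
        int ih = static_cast<int>(std::round((nH - 1) * h / topH));
        int ic = static_cast<int>(std::round((nC - 1) * (c - 0.05f) / 0.95f));
        return data[ih * nC + ic];
    }
};
```

Two sanity properties follow directly from the integral: less atmosphere remains above a higher starting point, and a grazing path (small cosine) accumulates more density than a vertical one.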

However, the above optimization alone cannot absorb the additional cost of calculating multiple light sources. Further optimization is performed with 3D volumetric textures that have four channels of 16-bit floating-point type. By introducing a compute shader, the participating media density pass and the light scattering pass can be integrated, which avoids the overhead of frequently reading memory. Moreover, all scattering is calculated locally, saving an intermediate density buffer, which is beneficial when the participating media density varies.

We march along the camera ray and accumulate the in-scattering coefficients and the participating media density in parallel using a 3D volumetric texture. Take the calculation of the in-scattering coefficient as an example; the 3D texture is processed as an array of 2D textures, as shown in Fig 7.

Fig 7. 3D texture for accumulating scattering coefficient and participating media density.

Fig 7

a) At slice i (from 0 to N−1), read the in-scattering and transmittance coefficients. b) Add these coefficients to the accumulating texture. c) Write the accumulated in-scattering and transmittance to another volumetric texture at the same position. d) Increase i and return to step a.
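The slice loop a)-d) can be sketched on the CPU as follows (the real version runs in a compute shader over the 3D texture; the struct names and the front-to-back compositing formula are our assumptions):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct SliceSample { float inscatter; float extinction; };
struct Accumulated { float inscatter; float transmittance; };

// Walk the slices front to back: attenuate each slice's in-scattering by the
// transmittance accumulated so far, update the transmittance with the slice's
// extinction, and write the running totals out per slice (steps a-d).
std::vector<Accumulated> accumulateSlices(const std::vector<SliceSample>& slices,
                                          float ds) {
    std::vector<Accumulated> out;
    float T = 1.0f, S = 0.0f;
    for (const SliceSample& s : slices) {
        S += T * s.inscatter * ds;             // add this slice's contribution
        T *= std::exp(-s.extinction * ds);     // then attenuate what lies behind
        out.push_back({S, T});
    }
    return out;
}
```

With zero extinction the transmittance stays at 1 and the in-scattering grows linearly, which makes the compositing order easy to verify.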

Results

Data description

In this research, we gathered scene data from more than 40 civil airports in China, including Beijing and Shanghai. The raw dataset was augmented by filtering and modification to satisfy 3D modeling standards. During data processing, the dataset was divided into three subsets: the elevation, texture, and 3D model datasets, accounting for 43%, 34%, and 23%, respectively. To test the stability of the proposed architecture, we reproduce a two-day flight track from Chengdu airport (ZUUU) in China. We also simulate the time of day (TOD) to check the related algorithms in the proposed architecture.

Qualitative tests

A set of tests is performed on the algorithm suite using an NVIDIA GTX 980, running on a 64-bit Intel Core i5 CPU at 3.3 GHz with 8 GB of random access memory (RAM). The project is implemented entirely in C++, using OpenGL on the host side and the OpenGL Shading Language (GLSL) on the GPU side. The ray-marching step size is determined by the adaptive algorithm. The same algorithm suite performs well at all tested resolutions on the NVIDIA GTX 980. The results of the qualitative experiments are illustrated in Figs 8–11, from which better realism, immersion, and interactivity are observed. The realism of atmospheric scattering is improved by enabling the proposed approach, as shown in Fig 8.

Fig 8. Atmospheric scattering.

Fig 8

(a) Atmospheric scattering off. (b) Atmospheric scattering on.

Fig 11. A quality comparison with equal rendering time.

Fig 11

The test is then performed with the camera far from the ground, near the horizon, where the boundary of the Earth ellipsoid can be observed, as shown in Fig 9a. Moreover, volumetric light interacts with mountains and terrain, generating beams of light known as god rays or crepuscular rays, as shown in Fig 9b.

Fig 9. Atmospheric scattering and volumetric light.

Fig 9

(a) Atmospheric scattering effect near the horizon. (b) Volumetric light and terrain shadow.

The proposed approach also improves atmospheric scattering at different times of day. Blue and violet light, with shorter wavelengths, is scattered away, which leads to an orange or gold horizon in the evening, as shown in Fig 10.

Fig 10. Time of day.

Fig 10

(a) Atmospheric scattering effect during the day. (b) Atmospheric scattering effect in the evening.

The experiments are conducted on different scenarios with equal rendering time. For our proposal, we evaluate the image quality achieved through the anti-aliasing effect. In Fig 11, the experiments are divided into three parts for various scenes, and each row shows the results of the three methods, A (Chen et al. 2011), B (AliHH et al. 2016), and C (the proposed approach), from left to right. Among the different approaches, the proposed approach generates the image with the least aliasing in equal time.

Method B (subsampling-based) outperforms method A (based on 1D mipmaps) because it reduces the number of samples and compensates for the loss of image quality with bilateral filtering. Although B achieves very good performance through subsampling, we realize better performance through adaptive sampling and a series of optimizations while maintaining a complete distribution of sampling points. The rendering time is shown in the upper-left corner of Fig 12. The experimental results show that the best result is obtained by the proposed model, i.e., a 184% and 7.89% improvement over A and B, respectively.

Fig 12. A speed comparison with equal quality.

Fig 12

Fig 13 shows the density change of the participating media and the interaction with multiple light sources. Unlike the two experiments above, here A denotes the approach of Wyman et al. 2013 and B the method of Pegoraro et al. 2010. The results demonstrate the significance of variable density in improving the interactivity between the air media and volumetric light. We can observe different effects and boundaries in the interaction between variable density and volumetric light, which provides clearer distance cues and better immersion.

Fig 13. A quality comparison by variable density of participating media with equal rendering time.

Fig 13

Quantitative tests

In this research, we use frames per second (FPS) and rendering time per frame (RTPF) to evaluate the performance of the adaptive volumetric light framework.

To evaluate the performance of atmospheric scattering and volumetric light framework, several experiments are conducted, the results of which are reported in Tables 1, 2 and 3. From the experimental results, we can draw the following conclusions:

Table 1. Performance (in milliseconds) of the methods at various rendering stages in a single frame.

Methods Shadowmap OpticalDepth Binarytree Multi-light Density Integration Total
Brute force 73.2 23.2 N/A N/A N/A 36.3 132.7
Engelhardt T et al. 2010 19.2 13.1 N/A N/A N/A 15.8 48.1
Chen et al. 2011 6.2 4.2 1.7 N/A N/A 6.7 18.8
AliHH et al. 2016 3.2 1.8 1.2 N/A N/A 1.3 7.5
The proposed method 3.0 1.6 0.9 0.4 0.4 0.9 7.2

Table 2. FPS of methods with different resolutions.

Methods Resolution 1K Resolution 2K Resolution 4K
Brute force 7.5 4.8 2.9
Engelhardt T et al. 2010 20.8 12.8 7.3
Chen et al. 2011 53.2 21.6 14.5
AliHH et al. 2016 133.3 88.5 60.6
The proposed method 138.9 95.2 67.6

Table 3. Number of samples supported by each method at the same FPS.

Methods Samples at level 1 Samples at level 2 Total Samples
Brute force 3 19 22
Engelhardt T et al. 2010 4 23 27
Chen et al. 2011 6 32 36
AliHH et al. 2016 12 54 66
Config1 43 43 86
Config2 30 60 90
Config3 26 70 96
  • a) The Brute force method performs ray marching pixel by pixel and is used as the benchmark for performance comparison. Comparatively, the ES approach by Engelhardt et al. shortens a single frame by 84.6 milliseconds, and the 1D min-max binary tree by Chen et al. saves a further 29.3 milliseconds, as shown in Table 1. The soft bilateral filtering shadows method by AliHH et al. obtains soft shadows with the lowest number of samples and compensates for image quality with bilateral filtering; it thus strikes a good balance between efficiency and image quality, which is why its 7.5 milliseconds is very close to the proposed approach. However, reducing samples for efficiency leads to a sparse ray-marching distribution in which unnatural light beams can appear. The best result is obtained by the proposed architecture, which renders a complete frame in 7.2 milliseconds despite two additional stages (Multi-light and Density).

  • b) 60 FPS is usually used as the real-time performance benchmark for FS applications. Although the 1D min-max binary tree by Chen et al. greatly improves performance over the previous two methods, it still fails to reach real-time performance at 38.8 FPS in 1K resolution. By contrast, the soft bilateral filtering shadows method by AliHH et al. and the proposed approach reach 60.6 and 67.6 FPS at 4K resolution, respectively.

  • c) Config1, Config2, and Config3 represent different distributions of samples. The results demonstrate the significance of adaptive sampling in improving realism and interactivity. Further, we characterize the final image quality of the various methods by quantifying the number of samples. Compared with the previous four methods, the proposed approach improves the maximum number of samples by 336%, 255%, 166%, and 45% at the same frame rate, respectively (figures that correspond to the 96-sample total of Config3 in Table 3). Further, the maximum number of samples improves by approximately 4.65% from Config1 to Config2 and by 6.67% from Config2 to Config3. The results demonstrate the significant potential to achieve better image quality and efficiency with different sample-distribution configurations.

Discussion

From the experimental results, we can see that the proposed framework obtains a better sense of reality in real-time. At the same time, our adaptive sampling-based methods can be well adapted to applications where the complexity and scope of the scene change dramatically.

To improve performance, some assumptions are made about the scattering. Only single scattering is taken into account to achieve an interactive rate. For smoothly varying media, only a small number of samples are calculated and interpolated within transparent or translucent media. We also tested the proposed methods with more light sources: when the number of light sources exceeds 10, performance drops sharply, especially when translucent media are taken into account. Like all shadow-map-based algorithms, performance is also limited by the shadow map's resolution. Although adaptive sampling reduces this impact to a certain extent, further optimization is left for future work. Moreover, the proposed method realizes real-time interaction between volumetric light and semi-transparent media, but self-shadowing of semi-transparent media is not yet supported.

Conclusion

In this work, to achieve realism and immersion through volumetric light and atmospheric scattering in real-time, we present a dynamic-range volumetric light architecture by introducing adaptive sampling and a series of optimizations. According to the distance from the camera, we design different pipelines with targeted sampling step sizes and sampling strategies. To the best of our knowledge, this is the first study to introduce a dynamic-range, adaptive sampling-based architecture for volumetric light using the IPMDE approach. The sampling step size and distribution are chosen by measuring the distance from the camera. The tasks in VFC are accomplished by the faster 1D binary tree, which prevents shadowed samples from participating in expensive calculations. In the IPMDE method, the interaction between participating media and multiple light sources is realized by 3D textures and the lookup table. The experimental results demonstrate that the proposed framework can serve as a significant application for current FS.

Future work

In future work, we plan to consider multiple scattering instead of single scattering in our architecture. More advanced optimizations are expected to be proposed. The shadow of translucent media is also a problem that needs to be addressed in this framework.

Supporting information

S1 Appendix. Basic principles.

(PDF)

S1 Video. Demonstration video.

(MP4)

S1 Data

(TXT)

S1 Dataset

(RAR)

S1 Sourcecode

(RAR)

Data Availability

All relevant data are within the manuscript and its Supporting information files.

Funding Statement

The author(s) received no specific funding for this work.

References

  • 1.Engelhardt T, Dachsbacher C. Epipolar Sampling for Shadows and Crepuscular Rays in Participating Media with Single Scattering. In: Acm Siggraph Symposium on Interactive 3d Graphics & Games; 2010.
  • 2.Chen J, Baran I, Durand F, Jarosz W. Real-time volumetric shadows using 1D min-max mipmaps. In: Symposium on Interactive 3D Graphics and Games; 2011. p. 39–46.
  • 3. Klassen RV. Modeling the Effect of the Atmosphere on Light. Acm Trans Graphics. 1987;6(3):215–237. 10.1145/35068.35071 [DOI] [Google Scholar]
  • 4.Cerezo E, Perez F, Pueyo X, Seron FJ, Sillion FX. A survey on participating media rendering techniques; 2005.
  • 5. Kaneda K, Okamoto T, Nakamae E, Nishita T. Photorealistic image synthesis for outdoor scenery under various atmospheric conditions. Visual Computer. 1991;7(5-6):247–258. 10.1007/BF01905690 [DOI] [Google Scholar]
  • 6. Max NL. Atmospheric Illumination and Shadows. Acm Siggraph Computer Graphics. 1986;20(4). 10.1145/15886.15899 [DOI] [Google Scholar]
  • 7. Whitted Turner. An improved illumination model for shaded display. Acm Siggraph Computer Graphics;13(2):14. [Google Scholar]
  • 8.Nishita T, Sirai T, Tadamura K, Nakamae E. Display of The Earth Taking into Account Atmospheric Scattering. In: Conference on Computer Graphics & Interactive Techniques; 1993.
  • 9. Jackel D, Walter B. Modeling and Rendering of the Atmosphere Using Mie-Scattering. Computer Graphics Forum. 2010;16(4):201–210. [Google Scholar]
  • 10. Bruneton E. A Qualitative and Quantitative Evaluation of 8 Clear Sky Models. IEEE Transactions on Visualization & Computer Graphics. 2016;PP(99):1–1. [DOI] [PubMed] [Google Scholar]
  • 11. Ai Z, Zhang L, Yu W. Modeling and Real-time Rendering of Sky Illumination Effects Taking Account of Multiple Scattering. Journal of Graphics. 2014;35(2):181–187. [Google Scholar]
  • 12. Feng YK, Zhou SC, Chun-Yong MA, Han Y, Chen G. Simulation of Earth Atmosphere and 3D Volumetric Clouds Based on Graphics Processing Unit. Computer Engineering. 2012;38(19):218–221,225. [Google Scholar]
  • 13. Bruneton E, Neyret F. Precomputed Atmospheric Scattering. Computer Graphics Forum. 2010;27(4):1079–1086. [Google Scholar]
  • 14.Dobashi Y, Yamamoto T, Nishita T. Interactive rendering method for displaying shafts of light; 2000.
  • 15.Li S, Wang G, Wu E. Unified Volumes for Light Shaft and Shadow with Scattering. In: Computer-Aided Design and Computer Graphics, 2007 10th IEEE International Conference on; 2007.
  • 16. Lin HY, Chang CC, Tsai YT, Way DL. Adaptive sampling approach for volumetric shadows in dynamic scenes. Iet Image Processing;7(8):762–767. [Google Scholar]
  • 17. Nowrouzezahrai D, Johnson J, Selle A, Lacewell D, Kaschalk M, Jarosz W. A Programmable System for Artistic Volumetric Lighting. Acm Transactions on Graphics;30(4):p.29.1–29.8. [Google Scholar]
  • 18. Levoy M. Display of surfaces from volume data. IEEE Computer Graphics & Applications;8(3):0–37. [Google Scholar]
  • 19. Lipus B, Guid N. A new implicit blending technique for volumetric modelling. Visual Computer;21(1-2):p. 83–91. [Google Scholar]
  • 20. Baran I, Chen J, Ragan-Kelley J, Durand F, Lehtinen J. A Hierarchical Volumetric Shadow Algorithm for Single Scattering. Acm Transactions on Graphics;29(6CD):p.178.1–178.9. [Google Scholar]
  • 21.Billeter M, Sintorn E, Assarsson U. Real Time Volumetric Shadows using Polygonal Light Volumes. In: Acm Siggraph/eurographics Conference on High Performance Graphics; 2010.
  • 22.Chen S, Sheng L, Wang G. Real-Time Rendering of Light Shafts on GPU. In: Advances in Visual Computing, Second International Symposium, ISVC 2006, Lake Tahoe, NV, USA, November 6-8, 2006 Proceedings, Part I; 2006.
  • 23.Wang DL, Li S, Yang LP, Hao AM. Real-time volumetric lighting using polygonal light volume. In: International Conference on Information Science; 2014.
  • 24.Brown SA, Samavati F. Real-time panorama maps. 2017;.
  • 25. Kallweit S, Muller T, Mcwilliams B, Gross M, Novak J, Kallweit S, et al. Deep Scattering: Rendering Atmospheric Clouds with Radiance-Predicting Neural Networks. Acm Transactions on Graphics. 2017;36(6). 10.1145/3130800.3130880 [DOI] [Google Scholar]
  • 26.Tevs A, Ihrke I, Seidel HP. Maximum mipmaps for fast, accurate, and scalable dynamic height field rendering. In: Symposium on Interactive 3D Graphics and Games; 2008. p. 183–190.
  • 27.Wyman C. Voxelized shadow volumes. In: High Performance Graphics; 2011. p. 33–40.
  • 28.Wyman C, Dai Z. Imperfect voxelized shadow volumes. In: High Performance Graphics; 2013. p. 45–52.
  • 29.Tomasi C, Manduchi R. Bilateral Filtering for Gray and Color Images. In: International Conference on Computer Vision; 1998.
  • 30. Imagire T, Johan H, Tamura N, Nishita T. Anti-aliased and real-time rendering of scenes with light scattering effects. Visual Computer;23(9-11):935–944. [Google Scholar]
  • 31. Ali HH, Shahrizal SM, Hoshang K, Yudong Z. Soft bilateral filtering volumetric shadows using cube shadow maps. Plos One;12(6):e0178415–. 10.1371/journal.pone.0178415 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Chen J, Baran I, Durand F, Jarosz W. Rendering images with volumetric shadows using rectified height maps for independence in processing camera rays; 2016.
  • 33. Ju M, Gu Z, Zhang D. Single Image Haze Removal Based on the Improved Atmospheric Scattering Model. Neurocomputing. 2017;260:S0925231217307051. [Google Scholar]
  • 34. Ali HH, Kolivand H, Sunar MS. Soft bilateral filtering shadows using multiple image-based algorithms. Multimedia Tools & Applications. 2016;76(2):2591–2608. [Google Scholar]
  • 35. Ament M, Sadlo F, Dachsbacher C, Weiskopf D. Low-Pass Filtered Volumetric Shadows. Visualization & Computer Graphics IEEE Transactions on;20(12):2437–2446. [DOI] [PubMed] [Google Scholar]
  • 36. Sloan PP, Kautz J, Snyder J. Precom-puted Radiance Transfer for Real-Time Rendering in Dynamic. Acm Transactions on Graphics. 2017;21(3):527–536. [Google Scholar]
  • 37. Williams A, Barrus S, Morley RK, Shirley P. An Efficient and Robust Ray-Box Intersection Algorithm. Journal of Graphics Gpu & Game Tools;10(1):p.49–54. [Google Scholar]
  • 38.Kai L, Baer A, Saalfeld P, Preim B. Comparative Evaluation of Feature Line Techniques for Shape Depiction. In: Vision Modeling & Visualization; 2014.
  • 39.Simon Green. Implementing Improved Perlin Noise. In GPU Gems2, edited by Matt Farr, pp. 409–416.
  • 40.Huang B, Chen J, Wan W, Cui B, Yao J. Study and Implement About Rendering of Clouds in Flight Simulation. In: International Conference on Audio, Language and Image Processing; 2008. p. 1250–1254.
  • 41. Pegoraro V, Schott M, Parker SG. A closed-form solution to single scattering for general phase functions and light distributions. Computer Graphics Forum. 2010;29(4):1365–1374. [Google Scholar]

Decision Letter 0

Gulistan Raja

21 Aug 2020

PONE-D-20-19144

Adaptive volumetric light and atmospheric scattering

PLOS ONE

Dear Dr. ShiHan,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

Based on the comments received by reviewers, the editor’s decision is “major revision” primarily due to the following reasons:

  1.  More experimentation in different scenarios is needed to demonstrate the effectiveness of proposed method.

  2. Some closely-related papers are not cited.

  3. Lack of clarity in devising problem statement for proposed research from existing literature.

Please revise the paper by incorporating all reviewer’s comments.

==============================

Please submit your revised manuscript by Oct 05 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Gulistan Raja

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please ensure that you refer to Figure 7 in your text as, if accepted, production will need this reference to link the reader to the figure.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: No

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: N/A

Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: No

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: In this manuscript, the authors present a methodology for improving the visual representation (graphics) of a flight simulator, by simulating volumetric light and atmospheric scattering in a realistic and computationally efficient way. The methodology seems reasonable, and the results look very good. I have suggestions for some minor revisions, as follows.

1. The section "Related works" should be incorporated into the Introduction, and the Introduction should then end by making clear the distinction between what was accomplished in the related works and the specific methodology of the current work.

2. Points 7 and 9 on p. 4 are a bit vague, although the meaning becomes clearer once the reader gets to the detailed description of the algorithm.

3. Step b in the subsection "Epipolar sampling" on p. 4 is also vague. Specifically, what constitutes a sample, and what does "are marched" mean? Are the samples values at grid points where calculations are performed? Does their spacing determine the spacial resolution, or are calculations also performed as the samples are propagated between the grid points shown in the figures?

4. p. 5: "its direction D_uv" - It seems that D_uv should be a vector with dimensions of length and not simply a direction. Is that correct?

5. Fig. 6c is not entirely clear, specifically the arrows indicating transitions from lower levels to higher levels, which are not explained in the paragraph that references Fig. 6c in the text.

6. In Fig. 8, are (a) and (b) switched (or are their descriptions switched in the text)?

7. In Table 1, I presume the slashes indicate categories that are not applicable. Perhaps "N/A" would be clearer than just a slash.

8. The section "Discussion" seems to be too short as a section unto itself.

9. Overall, the writing is decent, but there are some English mistakes here and there. After editing for content, the manuscript should be proofread again to catch such English mistakes. Some examples are:

p. 1 "slice of disadvantages"

p. 2 "the sampling strategy and steps need to redesign"

p. 2 "And the reasonable..."

p. 2 "And then..."

p. 3 "While further..."

p. 4 "Near clipping plan" and "Far Clipping plan"

p. 6 "visible vator"

p. 6 "by introduced"

p. 6 "density of participating media density"

p. 7 "is taxonomically into three subsets"

p. 7 "two-days flight"

p. 7 "results of Qualitative experiments"

p. 7, "Besides, ..."

p. 9 "mearing distance"

in general, the use of capitalization in the figures

Reviewer #2: Q1: This manuscript describes a real-time software system for rendering atmospheric effects in large-scale outdoor scenes, in the context of a flight simulator. It builds on the existing techniques of Epipolar Sampling. The paper's results appear quite reasonable and look very nice, and the strategies being described seem to be valid approaches to solving the problem. The results are described reasonably and report the performance (timing) measurements that are needed.

However, I would describe the description of the system as rather cursory and incomplete. It would not be possible to reproduce the results from the level of detail that appears in the paper. The precision and completeness with which the method is explained and derived is markedly lower than other work in the area, e.g. refs 12 and 19 and the following paper that is not cited:

J. Chen et al. "Real-time volumetric shadows using 1D min-max mipmaps," I3D 2011, doi:10.1145/1944745.1944752

The results of the system are not compared to any ground-truth references (e.g. by slow path tracing methods) to show how accurate they are, nor is the accuracy compared to previous methods (only the performance, and only to reference 12).

Because of this it is difficult to authoritatively judge correctness. Though it must be said that in the real time context, correctness in practice often means that it produces good-looking images fast enough, which this system does seem to.

Q3: As I read the expectations of PLOS ONE for data availability, I think the appropriate interpretation for a graphics paper is that the authors release an open source implementation of the method, together with the models required to replicate the key results of the paper. The authors simply state that all required data is in the paper, but I don't think this meets the spirit of making the data available that is needed to evaluate the paper's claims about its method.

Q4: The English is not very standard, but I did not have any trouble understanding it, so I would not think this should be a barrier for this manuscript.

On the whole, though it's not obvious to me exactly how PLOS's criteria apply to a computer graphics paper, I think it is safe to say that the paper does not meet the expectations for completeness, validation, and comparison against prior art that would be expected by a journal or peer-reviewed conference in computer graphics. The paper does not clearly state what technology is new and what is adopted from prior art, and it provides no comparisons to illustrate the benefits of the new components introduced in this manuscript. Also, it fails to cite the paper noted above (though it does cite a related one) which seems to overlap closely with the min/max tree methods used here for adaptive integration along epipolar lines.

I have made the recommendation "major revision" because I do believe a publishable paper can be written about this system. It needs to include a complete and clear description of the system and the algorithms it uses, clearly discuss which of those algorithms are new, demonstrate the benefits provided by the new methods, validate the accuracy of the images being produced, and (if my interpretation of the data policy is correct) include code and data that can be used to replicate the results.

Reviewer #3: Volumetric lighting is a hard problem in real-time rendering. This submission presents a technique to add it to a flight simulator, which has more rigid performance constraints than many real-time applications. This paper presents the engineering decisions made for a particular implementation, which is a reasonable publishable contribution.

However, there are some problems with the submitted work.

The first major problem is that it is unclear how the authors want it evaluated. The text suggests it is a novel research algorithm that solves what the authors claim were previously unachievable effects. The video and images suggest it should be evaluated as a simulation that faithfully and accurately reproduces the images seen from an airplane cockpit, without worrying about specific complex scattering and shadowing. As a reviewer, I read these differently. The first is a research paper, the second is an engineering / systems paper. Unfortunately, because of this lack of clarity, in the current state, I couldn't recommend it for acceptance in either category.

The second major problem is that the paper skips citing many closely-related papers. For instance, the work on "Voxelized Shadow Volumes" by Chris Wyman (from High Performance Graphics 2011) and the follow-up "Imperfect Voxelized Shadow Volumes" from HPG 2013 use a very similar epipolar space sampling technique that aligns ray marching samples with the alignment of samples in memory, and this scales to many lights (at least in the "imperfect" case). While other work by Chen et al. is cited, the paper "Real-Time Volumetric Shadows using 1D Min-Max Mipmaps" from Symp. on Interactive 3D Graphics 2011 is not cited, and their collective work renders volumetric shadows on rectified shadow maps using 1D mipmaps / binary trees (similar to the approach described in the text). Citing the Tevs et al. I3D 2008 paper "Maximum mipmaps for fast, accurate, and scalable dynamic height field rendering" would also be appropriate given the use of 1D trees for traversal. Also, it's not clear why a Gaussian blob model was chosen for heterogeneous media. There are a variety of models that can be used for media ranging from Gaussian radial basis functions up to complex simulation-driven formats used in film renderers. I would have hoped to see some discussion of the rationale and citations of the relevant related work. Certainly, the real-time work typically relies on homogeneous media, but there is more exploration of more complex alternatives in the offline rendering literature. And there are various real-time solutions for more complex scattering functions (e.g., Pegoraro et al.'s work "A closed-form solution to single scattering for general phase functions and light distributions" from Computer Graphics Forum 29(4)).

The third problem with the paper is its fairly simplistic evaluation. One of the key features of this work, as far as I can understand, is the handling of shadowing and visibility inside the media. The only image in the paper that clearly shows shadows is Figure 9b. And this is a scene where a radial image-space filter emanating from the sun would give nice results (this is frequently what game developers use for volumetric shadows today). It is important to see how the proposed approach from the paper works in a variety of scenarios and not just fairly simple situations. Additionally, one key feature of the paper is that it applies to not just one light source, but to many light sources. Only one example is provided in the submission (in Figure 11), and as far as I can tell this image does not have any shadows. Essentially: the results are unclear on whether this algorithm works for providing shadows from many lights. (Or if "shadows" and "many light sources" are two independent improvements that do not work together.) Additionally, as far as I can tell, the only change in media density is the fairly simplistic decrease in density with altitude (shown in Figure 9a). It is unclear if the proposed approach in Equations 1, 2, and 3 is evaluated. Certainly, the texture ping-pong / ray-marching approach described in lines 219-223 is probably unnecessary with media density that changes so smoothly. The video isn't particularly useful -- it shows only simplistic, relatively homogeneous media, as far as I can tell, without any shadowing, and without any ground truth comparisons. Ideally, for a flight simulator, you could compare with actual photos in similar situations.

A fourth problem with this submission is that it skips the most important aspect of a research paper (in my opinion): describing why the work is needed. Typically, one would use a known, existing algorithm unless it does not solve a problem required by your application (i.e., your flight simulator). There is no need to develop a new algorithm that may have new and unknown problems without a good reason. These reasons should be clearly specified, and the choices made by a novel algorithm should derive from the functionality desired. This paper does not tie together the algorithm with the rationale; it simply specifies the algorithmic steps in a bulleted list without describing why these choices are better than any possible alternatives. Once you motivate each step in the new algorithm, it is much easier to judge whether the proposed approach will significantly improve over prior art. (Also, it often makes it easier to design evaluation images that clearly show the improvement.)

As far as clarity of the text, I would provide the following suggestions:

1) Provide an overview section that summarizes the new approach and why this new approach addresses the failures of prior work

2) Revisit the enumerated contributions (currently on lines 47-60). Make sure each one is measurable. Make sure these measurements are provided in the evaluation.

3) For each algorithmic section, provide some reminder of what the next step in the algorithm needs to solve, and discuss which aspects are new and which reuse ideas from prior work.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Carynelisa Haspel

Reviewer #2: No

Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Decision Letter 1

Gulistan Raja

6 Oct 2020

PONE-D-20-19144R1

Adaptive volumetric light and atmospheric scattering

PLOS ONE

Dear Dr. ShiHan,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Reviewer 2 found that all comments raised by him in the last cycle of review have been addressed. He was of the view that the manuscript is near the right condition for a minor-revisions accept, subject to fulfillment of some observations raised in his review recommending major revisions. Reviewer 1, on the other hand, is satisfied with the revised version of the manuscript and recommended minor revisions.

After thorough consideration of comments of both reviewers, my decision is minor revisions. Please incorporate all the comments raised by both reviewers.

Please submit your revised manuscript by Nov 20 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Gulistan Raja

Academic Editor

PLOS ONE

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: N/A

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No

Reviewer #2: No

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This is a revision of manuscript PONE-D-20-19144. From the reviews of manuscript PONE-D-20-19144, the reviewers, including myself, made suggestions that were based mostly on properly framing the work that the authors have completed in comparison to previous work, providing more and clearer detail regarding the authors’ algorithm (including providing the source code itself), and providing more demonstrations of the success of the authors’ algorithm. The authors have made numerous changes to the manuscript in response to those suggestions. In the revised manuscript, the reader now has a better perspective on the novelty of the authors’ work in comparison to previous work, and some important references to previous work that were missing in the original submission have now been included. The subsections titled “Related works” and “Overview” are a little unconventional, and the Introduction could have been structured differently while including the same information, but the necessary content is now there. The description of the method is now more detailed and clearer. The results from the previous submission looked good, and the additional results that the authors have included in the current submission, including comparisons to ground-truth references, also look good.

I think the paper could still use some minor revisions in terms of editing for English, especially in the newly added sections. Just a couple of examples include:

(1) The use of the word “finally” is incorrect in a number of places in the text.

(2) There are mistakes in capitalization, such as “chen” on line 277, “Transversed” on line 285, and “Textures” on line 357.

Reviewer #2: This revision is commendably responsive to the reviews. The manuscript is much improved in terms of detailed description of the contributions and proposed methods, and in evaluation of the results of those methods. Good discussion of the contributions of previous work relative to the current paper is included, and detailed pseudocode is provided to clarify the details for those who wish to implement the method. For me the level of novelty and the improvements in performance shown here are sufficient for a publication, and subject to the caveats below the paper appears to be near the right condition for a minor-revisions accept. But some of the below involve enough uncertainty that I'm still at "major revision".

Some things still missing:

* The source of the implementations of Chen et al. and Ali et al. is not mentioned. It should be clarified that it's the authors' own reimplementation (if it is), any differences with exactly what was proposed in those papers should be described, and these reimplementations should be included in the released source code.

* The source code contains comments in Chinese, many of which seem to be translations of the English but some of which seem to include new information; for publication everything should be in English.

* The text mentions a compute shader which I do not see in the source code.

* The understandability of the English has become worse in this revision, and I think it now needs some work to bring it up to the standard of "intelligible" everywhere. Really the paper should get a thorough edit by someone with more experience in written English.

It's worth pointing out that the source code does not meet the standard of replicability, since it cannot be compiled without the rest of the system from which it comes. But to the extent it contains the full implementation of the proposed method, I think it does meet the standard of revealing exactly what was used to compute the results. Including the implementations of the competing methods would make this more complete.

Some things that have me worried:

* The timing results appear to be from the same experiments as before, but with the methods relabeled. It's good that they are now labeled in a way that credits the pieces to earlier papers, but I am a little concerned that the Chen et al and Ali et al papers, which were not previously referenced, are now the labels for results that were included before. Were these methods implemented before but not cited? Or is the implementation that was made without referencing these papers now being used to represent those methods in the comparison? Both interpretations are problematic.

* The paper by Ali et al. (formerly just called "subsampling" without any explanation) is included in the comparison but not discussed at all in the prior work. It also seems to produce the closest results to the new method. The differences between and relative merits of the methods need to be discussed.

* The authors have borrowed two complete sentences of text from the reviews and incorporated them into the discussion of prior work: "...a very similar epipolar space sampling technique that aligns ray marching samples with the alignment of samples in memory, and this scales to many lights..." and "There are a variety of models that can be used for media ranging from Gaussian radial basis functions up to complex simulation-driven formats used in film renderers...". This text is not the authors' work; reviewers are not here to write papers for authors. I'm not sure what PLOS's policies say about this.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Carynelisa Haspel

Reviewer #2: No


PLoS One. 2020 Nov 18;15(11):e0242265. doi: 10.1371/journal.pone.0242265.r004

Author response to Decision Letter 1


28 Oct 2020

Cover Letter

Dear Editor and reviewers,

Thank you very much for giving us an opportunity to revise our manuscript. We appreciate the editor and reviewers very much for their constructive comments and suggestions on our manuscript entitled “Adaptive volumetric light and atmospheric scattering” (ID: PONE-D-20-19144R1).

We have studied the reviewers’ comments carefully. According to the reviewers’ detailed suggestions, we have made a careful revision of the original manuscript. All revised portions are marked in highlight in the revised manuscript, which we would like to submit for your kind consideration.

Kind regards,

Tan Shihan

E-mail: 530916232@qq.com

Rebuttal Letter

Dear Editor and reviewers:

Thank you for your letter and the reviewers’ comments on our manuscript entitled “Adaptive volumetric light and atmospheric scattering” (ID: PONE-D-20-19144R1). Those comments are very helpful for revising and improving our paper and have important guiding significance for other research. We have studied the comments carefully and made corrections which we hope meet with approval. The main corrections are in the manuscript, and the responses to the reviewers’ comments are as follows (the replies are highlighted in blue).

Replies to the reviewers’ comments:

Reviewer #1:

Q1. This is a revision of manuscript PONE-D-20-19144. From the reviews of manuscript PONE-D-20-19144, the reviewers, including myself, made suggestions that were based mostly on properly framing the work that the authors have completed in comparison to previous work, providing more and clearer detail regarding the authors’ algorithm (including providing the source code itself), and providing more demonstrations of the success of the authors’ algorithm. The authors have made numerous changes to the manuscript in response to those suggestions. In the revised manuscript, the reader now has a better perspective on the novelty of the authors’ work in comparison to previous work, and some important references to previous work that were missing in the original submission have now been included. The subsections titled “Related works” and “Overview” are a little unconventional, and the Introduction could have been structured differently while including the same information, but the necessary content is now there. The description of the method is now more detailed and clearer. The results from the previous submission looked good, and the additional results that the authors have included in the current submission, including comparisons to ground-truth references, also look good.

I think the paper could still use some minor revisions in terms of editing for English, especially in the newly added sections. Just a couple of examples include:

(1) The use of the word “finally” is incorrect in a number of places in the text.

(2) There are mistakes in capitalization, such as “chen” on line 277, “Transversed” on line 285, and “Textures” on line 357.

Response: (1) We have revised the use of conjunctions, including but not limited to “finally”; see the highlighted part for details.

(2) We have corrected these mistakes in capitalization: “chen” -> “Chen” on line 246, “Traversed” -> “traversed” on line 255, and “Textures” -> “texture” on line 333.

Reviewer #2:

Q1. This revision is commendably responsive to the reviews. The manuscript is much improved in terms of detailed description of the contributions and proposed methods, and in evaluation of the results of those methods. Good discussion of the contributions of previous work relative to the current paper is included, and detailed pseudocode is provided to clarify the details for those who wish to implement the method. For me the level of novelty and the improvements in performance shown here are sufficient for a publication, and subject to the caveats below the paper appears to be near the right condition for a minor-revisions accept. But some of the below involve enough uncertainty that I'm still at "major revision".

* The source of the implementations of Chen et al. and Ali et al. is not mentioned. It should be clarified that it's the authors' own reimplementation (if it is), any differences with exactly what was proposed in those papers should be described, and these reimplementations should be included in the released source code.

Response: Unfortunately, the source code of Chen et al. could not be found. We re-implemented it on the basis of other 1D min/max mipmaps source code on GitHub, and modified it step by step according to the original paper. For the approach by Ali et al., we found its implementation. The source code for the above two articles has been added to this submission.

Q2. * The source code contains comments in Chinese, many of which seem to be translations of the English but some of which seem to include new information; for publication everything should be in English.

Response: Yes, the Chinese descriptions in the source code have been changed to English for publication; see the new version of the source code for details.

Q3. * The text mentions a compute shader which I do not see in the source code.

Response: In a historical version, some components did not support the compute shader, so we put all files of this type in a separate folder, which resulted in the compute shader part being missing from the last version. This submission includes it.

Q4. * The understandability of the English has become worse in this revision, and I think it now needs some work to bring it up to the standard of "intelligible" everywhere. Really the paper should get a thorough edit by someone with more experience in written English.

Response: Yes, an American student in this field was invited to standardize and improve the language; see the highlighted part for details.

Q5. It's worth pointing out that the source code does not meet the standard of replicability, since it cannot be compiled without the rest of the system from which it comes. But to the extent it contains the full implementation of the proposed method, I think it does meet the standard of revealing exactly what was used to compute the results. Including the implementations of the competing methods would make this more complete.

Response: Yes, we have added the implementations of the competing methods to the source code files, namely the soft bilateral filtering shadows by Ali HH et al. and the 1D min-max mipmaps approach by Chen et al.

Q6. * The timing results appear to be from the same experiments as before, but with the methods relabeled. It's good that they are now labeled in a way that credits the pieces to earlier papers, but I am a little concerned that the Chen et al and Ali et al papers, which were not previously referenced, are now the labels for results that were included before. Were these methods implemented before but not cited? Or is the implementation that was made without referencing these papers now being used to represent those methods in the comparison? Both interpretations are problematic.

Response: We are sorry that the previous description was not clear enough. In the previous version of the manuscript, the experimental comparison was divided into several categories according to the methods, without specifying which paper was used as the reference. Indeed, for the “ES+BT” type, the paper we actually referred to at the time was “Chen J, Baran I, Durand F, Jarosz W. Rendering images with volumetric shadows using rectified height maps for independence in processing camera rays; 2016.” (number 28 in the references in our manuscript). This paper is the subsequent version of “Real-time volumetric shadows using 1D min-max mipmaps” by the same authors, in which ES and the 1D min-max acceleration data structure for a rectified shadow map are also the core algorithm. Unfortunately, the source code of these two papers could not be found. We re-implemented it on the basis of other 1D min/max mipmaps source code on GitHub, and modified it step by step according to the original paper. Since the implementation of the two versions of the experimental reference paper is essentially the same version we implemented, only the name has been modified, and the results of the experiment remain unchanged. Similarly, the approach by Ali et al. was called “ES+SS” in the previous version of the manuscript, which is actually the same work by Ali et al. in this version. Therefore, the experimental results have not changed, and the source code will be submitted in this version.

Q7. * The paper by Ali et al. (formerly just called "subsampling" without any explanation) is included in the comparison but not discussed at all in the prior work. It also seems to produce the closest results to the new method. The differences between and relative merits of the methods need to be discussed.

Response: Yes, we have discussed the paper by Ali et al. as follows: This study introduces the soft bilateral filtering shadows method of image-based shadows. Its main contribution is to obtain soft shadows with the lowest number of samples, with the image quality compensated by bilateral filtering. Compared with the proposed approach, first and foremost, both methods achieve a good balance between efficiency and image quality. This is why the method comes very close to the proposed approach. However, this method reduces the number of samples for efficiency, which leads to a sparse distribution of ray marching, where some unnatural light beams can appear. See the highlighted part for details on lines 408-414.
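To make the bilateral-filtering compensation discussed in this exchange concrete, here is a generic 1D sketch (illustrative only; the parameter names and values are invented, and this is not the code of Ali et al.): sparsely ray-marched shadow values are smoothed with weights that combine spatial distance and depth similarity, so the blur does not leak across depth edges.

```python
import math

# Illustrative 1D bilateral filter: smooth sparse shadow samples
# while preserving depth discontinuities (edge-aware upsampling).

def bilateral_filter_1d(values, depths, radius=2,
                        sigma_spatial=2.0, sigma_depth=0.1):
    out = []
    for i in range(len(values)):
        w_sum = v_sum = 0.0
        for j in range(max(0, i - radius),
                       min(len(values), i + radius + 1)):
            w_s = math.exp(-((j - i) ** 2) / (2 * sigma_spatial ** 2))
            w_d = math.exp(-((depths[j] - depths[i]) ** 2)
                           / (2 * sigma_depth ** 2))
            w = w_s * w_d  # spatial closeness x depth similarity
            w_sum += w
            v_sum += w * values[j]
        out.append(v_sum / w_sum)
    return out
```

The depth term is what distinguishes this from a plain Gaussian blur: across a large depth discontinuity the weight collapses toward zero, so a shadow edge stays sharp even though the shadow values themselves were computed at a low sample count.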

Q8. * The authors have borrowed two complete sentences of text from the reviews and incorporated them into the discussion of prior work: "...a very similar epipolar space sampling technique that aligns ray marching samples with the alignment of samples in memory, and this scales to many lights..." and "There are a variety of models that can be used for media ranging from Gaussian radial basis functions up to complex simulation-driven formats used in film renderers...". This text is not the authors' work; reviewers are not here to write papers for authors. I'm not sure what PLOS's policies say about this.

Response: Yes, we have rewritten the related descriptions.

"...a very similar epipolar space sampling technique that aligns ray marching samples with the alignment of samples in memory, and this scales to many lights..." has been changed to "… a very similar approach by using epipolar space sampling, which aligns the samples of ray marching in memory. Then, many lights are supported by their subsequent works" on lines 135-137.

"There are a variety of models that can be used for media ranging from Gaussian radial basis functions up to complex simulation-driven formats used in film renderers..." has been changed to "There are various methods to generate air media, from Gaussian blur to physically driven methods used in offline renderers" on lines 302-303.

--------------------------------------------------End of Reply---------------------------------------------

Once again, thank you very much for your constructive comments and suggestions, which help us both in English and in depth to improve the quality of the paper.

Kind regards,

Tan Shihan

E-mail: 530916232@qq.com

Attachment

Submitted filename: Response to Reviewers.pdf

Decision Letter 2

Gulistan Raja

30 Oct 2020

Adaptive volumetric light and atmospheric scattering

PONE-D-20-19144R2

Dear Dr. ShiHan,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Gulistan Raja

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Gulistan Raja

6 Nov 2020

PONE-D-20-19144R2

Adaptive volumetric light and atmospheric scattering 

Dear Dr. ShiHan:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Gulistan Raja

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Appendix. Basic principles.

    (PDF)

    S1 Video. Demonstration video.

    (MP4)

    S1 Data

    (TXT)

    S1 Dataset

    (RAR)

    S1 Sourcecode

    (RAR)

    Attachment

    Submitted filename: Rebuttal letter.pdf

    Attachment

    Submitted filename: Response to Reviewers.pdf

    Data Availability Statement

    All relevant data are within the manuscript and its Supporting information files.


    Articles from PLoS ONE are provided here courtesy of PLOS

    RESOURCES