Published in final edited form as: IEEE Trans Vis Comput Graph. 2020 Mar 13;26(6):2156–2167. doi: 10.1109/TVCG.2020.2970522

Photographic High-Dynamic-Range Scalar Visualization

Liang Zhou 1, Marc Rivinius 2, Chris R Johnson 3, Daniel Weiskopf 4
PMCID: PMC8500312  NIHMSID: NIHMS1741467  PMID: 32175863

Abstract

We propose a photographic method to show scalar values of high dynamic range (HDR) by color mapping for 2D visualization. We combine (1) tone-mapping operators that transform the data to the display range of the monitor while preserving perceptually important features, based on a systematic evaluation, and (2) simulated glares that highlight high-value regions. Simulated glares are effective for highlighting small areas (of a few pixels) that may not be visible with conventional visualizations; through a controlled perception study, we confirm that glare is preattentive. The usefulness of our overall photographic HDR visualization is validated through the feedback of expert users.

Keywords: Tone mapping, glare, high dynamic range visualization, 2D diagrams

1. Introduction

We refer to HDR data as scalar data containing values of very large, often even unbounded, ranges on 2D diagrams. HDR data may occur in many different plots: scatter plots [1], scatter plot matrices [2], parallel coordinates [3], [4], trajectory plots [5], and node-link diagrams [6]. These plots are used across all areas of scientific and information visualization. In this paper, we propose a photographic method for the visualization of scalar HDR data; our method makes important features perceptually pronounced and highlights high-value regions of small area. The method is intended for rapidly communicating important information in the data and can be easily and intuitively understood even by non-expert users.

HDR data are typically visualized by global transformations, e.g., linear, logarithmic, or gamma functions, to the display range of a monitor, followed by color mapping. Such global mappings can mislead visualization users: important values may be washed out as they are mapped to values similar to their surroundings and cannot be perceived due to insufficient contrast, causing a loss of important details. Visualizing all data points while identifying high-value outliers in color-mapped HDR visualizations with these transformations is difficult, if not impossible, even with laborious color map editing. In cases where high-value regions span only a few pixels, even carefully designed transformations and color maps cannot address the problem, as there is not enough “energy” on the monitor to clearly show just a few pixels.

Such a case is illustrated in Figure 1, where an example of edge-bundled HDR flight trajectory data is shown with a color map that goes from full black to full white. Figures 1(b)–(d) result from typical global transformations: most data, except for a few high-valued pixels, are invisible with linear mapping (Figure 1(b)), whereas the high-valued pixels are hard to see due to ineffective use of the full color range with gamma mapping (Figure 1(c)) or logarithmic mapping (Figure 1(d)). In all cases, the majority of the color range is concentrated in areas as small as a few pixels that can only be seen with zoom-ins. In fact, the problem here cannot be addressed by existing methods, as it is inherent in the data itself: the areas containing most of the dynamic range are simply too small. This is a principal problem, and no change of color maps can solve it. In contrast, our method (Figure 1(a)) clearly shows all data points and highlights the high-valued regions with glares on top of the tone-mapped visualization, without the need for zooming in.

Fig. 1.

(a) Photographic HDR scalar visualization of high-dynamic-range densities of bundled flight routes within the United States. Conventional visualizations with transformed color maps for comparison: (b) linear, (c) gamma 1/2.2, and (d) logarithmic mappings. All images use the full range of the same color map—details are shown in the zoom-ins. A gray line is drawn at every 20% of the data value range in each color map to indicate the transformation. For the HDR visualization, glare is applied to pixels with values greater than or equal to 40% of the value range. Our new photographic technique (a) highlights regions with large scalar values even if they cover only a few pixels, which are easily missed by any of the existing color mapping approaches. At the same time, we still show the full dynamic range of data values.

We propose an adaptive photographic HDR scalar data visualization method that combines perceptually driven HDR image processing with overlaid glares for highlighting. Photographic methods are well known in computer graphics, for example, for artistic image editing, color grading in the film industry, or in computer games. However, photographic effects have been largely ignored in visualization. We argue that such effects should be introduced to visualization as well, to generate easy-to-perceive images.

HDR image processing is achieved by using two carefully chosen tone-mapping operators that balance the perception of the overall structure of the data while preserving fine details. Simulated glares, which we confirm to be preattentive through a perception study, are used to make high-value regions (sometimes as small as a few pixels) more pronounced. We evaluate tone-mapping operators in the context of the specific characteristics of visualization images, which differ substantially from the natural images used to devise tone-mapping operators originally. The overall photographic HDR method is assessed through a qualitative study with expert users. Our contributions are as follows:

  • An adapted method that combines tone mapping and glare for HDR data visualization.

  • A global tone-mapping operator and a local tone-mapping operator suitable for the characteristics of visualization images, based on a quantitative comparison of existing operators and a modification of the best candidate operators.

  • A controlled perception study that confirms that glare is a preattentive visual cue.

  • A set of interaction techniques that allow users to gain deeper knowledge of the data.

Our technique is different from computer graphics methods that also combine HDR and glare simulation. First, our goal is very different because visualization aims to communicate data values and structures. Second, our method addresses generic scalar HDR data in visualization, which could potentially be abstract images that contain high spatial frequencies, whereas computer graphics concerns natural/real-world scenes. The third difference is that our method works on both luminance and color-mapped images, whereas computer graphics methods work directly on RGB images.

2. Related Work

HDR image processing is a critical and extensively studied field in computer graphics [7]. There, RGB photographs of natural scenes or photorealistic renderings are compressed into low-dynamic-range (LDR) images that fit the display range of monitors, using tone-mapping operators.

Among the many tone-mapping operators, the tone-curve method by Mai et al. [8] and the contrast perception method by Mantiuk et al. [9] are of particular interest to us, as they yield good results in our evaluation (Section 3) and have a strong perceptual basis. Mai et al. [8] optimize a tone curve based on a statistical model that approximates the distortion of the visual response between the HDR image and the LDR representation. In contrast, the method by Mantiuk et al. [9] is based on human perception of contrast: the contrast of the image is converted into a linear perceptual response space for easy manipulation in terms of just-noticeable differences; the image is then transformed from the response space back to a displayable image by solving an optimization problem. Our technique extends these methods with color mapping and GPU acceleration and applies them to HDR data in visualization.

HDR image processing has been mostly ignored in visualization research. One of the few counterexamples is Color Lens [10], an interactive visualization method that uses the dynamic range of user-specified lenses to optimize local color maps to gain insights into 2D HDR data. In contrast, our HDR method is intended to generate static overviews of HDR data with little interaction; we use the full dynamic range of the whole data, and we use the “magic lens” [11] for “seeing through” only. Other examples come from volume visualization: Yuan et al. [12] employ a tone-mapping operator to generate displayable images that preserve details of high-precision volume rendering on regular monitors. Vollrath et al. [13] include a global tone-mapping operator [14] to map overly bright highlights to the displayable range. In these cases, the datasets to be visualized are objects with natural spatial configurations and rather smooth density distributions. In comparison, our technique concerns visualizations that generate more abstract images containing arbitrary density distributions and higher spatial frequencies.

However, regardless of which tone-mapping operator is chosen, perceived contrast is reduced in bright regions due to the luminance discrimination characteristics of the human visual system [15]. Measured data suggest that the perception of high luminance roughly follows the Weber-Fechner law, i.e., the ratio of the threshold luminance ∆L to the luminance L is a constant (∆L/L = c). Glare can be used to make bright pixels in images appear brighter [16], [17] and helps generate dramatic photographic effects. Furthermore, glare can highlight very small regions of a few pixels in an image. The structure of glare can be divided into bloom and flare [16]: the bloom is a glow that reduces the contrast around bright objects, whereas the flare is composed of the lenticular halo, i.e., the concentric colored rings, and the ciliary corona—the rays or needles radiating from bright objects. We base our glare simulation on previous work by Ritschel et al. [17], which has a wave-optics-based foundation and a flexible texture-based aperture configuration, and can be effectively accelerated on a GPU. In contrast to manually marking pixels for glare, as in computer graphics, our new glare simulation pipeline automatically finds high-value regions.

Non-photorealistic techniques in illustrative visualization are also relevant. Such techniques have been proposed to visualize static scalar volume data [18], time-varying scalar volumes [19], and time-varying flow data [20]. Our photographic HDR visualization addresses problems in a different context but likewise generates striking illustrative results.

Research on preattentive processing has been applied to visualization and visual analysis [21]. Relevant to the appearance of glare, features like lighting direction, 3D depth, and certain shapes are confirmed to be preattentive [21], but the preattentiveness of glare has not been studied in visualization. In this work, we confirm that glare is preattentive and use it for highlighting in visualization.

3. Method

The photographic HDR visualization comprises an HDR processing pipeline and a glare simulation pipeline, followed by (optional) lens-based user interaction. Figure 2 illustrates the workflow of our method: the input HDR data I(x,y) is processed by the HDR pipeline, which generates a color-mapped image C(x,y) after applying the tone-mapping operator; the glare pipeline generates a glare overlay G(x,y) by convolving the glare filter t(x,y) with the high-value regions Ih(x,y); the color-mapped image and the glare overlay are blended with weight a to create the final visualization P(x,y). These steps are also given as pseudocode in Algorithm 1.

Fig. 2.

The workflow of the photographic HDR visualization.

Algorithm 1 Photographic HDR Visualization

     1: procedure PhotoHDR(I(x, y))
     2:     L(x, y) ← tmo(I(x, y))                        ▷ L(x, y): LDR luminance
     3:     C(x, y) ← colormap(L(x, y))
     4:     Ih(x, y) ← blobThres(I(x, y))                 ▷ Ih(x, y): high-value pixels
     5:     G(x, y) ← glare(Ih(x, y))
     6:     P(x, y) ← C(x, y) + a · G(x, y)               ▷ a: glare intensity
     7:     return P(x, y)
     8:
     9: blobThres(I(x, y)):                               ▷ blob detection and thresholding
    10:     S(x, y, σ) ← g(x, y, σ) * I(x, y)             ▷ S(x, y, σ): scale space
    11:     blobMask(x, y) ← find extrema of σ²ΔS(x, y, σ)
    12:     Ih(x, y) ← I(x, y) · [I(x, y) ≥ vt] · blobMask(x, y)   ▷ vt: threshold
    13:
    14: glare(Ih(x, y)):                                  ▷ glare overlay generation
    15:     t(x, y) ← glare simulation                    ▷ t(x, y): glare filter
    16:     G(x, y) ← Ih(x, y) * t(x, y)
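
The data flow of Algorithm 1 can be sketched in a few lines of NumPy. The sketch below substitutes deliberately simple stand-ins for each stage (a logarithmic tone map instead of Mai11 [8]/Man06 [9], a plain threshold instead of blob detection, and a Gaussian bloom instead of the wave-optics glare filter [17]), so it illustrates only the composition of the pipeline, not our actual operators:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def photo_hdr(I, a=0.4, v_t=0.4):
        """Data flow of Algorithm 1 with simple stand-in stages."""
        # Stand-in global tone map (the paper uses Mai11/Man06, Section 3.1).
        L = np.log1p(I) / np.log1p(I.max())
        # Stand-in grayscale color map: C(x, y) has shape (H, W, 3).
        C = np.repeat(L[..., None], 3, axis=2)
        # Stand-in high-value selection (the paper uses blob detection, Section 3.2).
        I_h = np.where(L >= v_t, L, 0.0)
        # Stand-in glare overlay: a wide Gaussian bloom instead of the
        # wave-optics glare filter of Ritschel et al. [17].
        G = gaussian_filter(I_h, sigma=8.0)[..., None]
        # Additive blending with glare intensity a (Section 3.3).
        return np.clip(C + a * G, 0.0, 1.0)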

3.1. HDR Pipeline

The HDR pipeline compresses the HDR data I(x,y) to an LDR luminance image L(x,y) in which all pixels with data are visible while important features can be perceived. Therefore, all features can be seen after color mapping without the need for manually editing color maps.

Tone-mapping operators can be classified into two groups: global operators (spatially invariant) and local operators (spatially varying). We argue that both types of operators are useful and needed in data visualization—global operators keep a one-to-one mapping for consistent color encoding, whereas local operators enhance contrast; our technique integrates both in a non-confusing way through the magic lens discussed in Section 3.4.

Figure 3 summarizes the effects of representative tone-mapping operators on a typical 2D histogram of volume data: gamma mapping fails to show structures other than high-value regions; logarithmic mapping reveals all pixels with non-vanishing values but reduces contrast, resulting in a limited range of colors; the tone-mapping operator by Reinhard et al. [14] makes all non-zero pixels visible through local “dodging-and-burning”, which creates realistic results on natural images, but fails to preserve contrast across the whole image.

Fig. 3.

Our HDR pipeline applies the tone-mapping operators of Mai11 [8] and Man06 [9] followed by color mapping. Effects of other tone-mapping operators are shown for comparison (from left to right): Rei02 [14], global logarithmic, and global gamma functions. The dataset is the 2D histogram of “scalar” versus “gradient magnitude” of a human hand CT scan.
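
For reference, the conventional baselines in this comparison reduce to a few lines of NumPy; for Rei02, the sketch below implements only its global part (the key value of 0.18 is that paper's default, and local dodging-and-burning is omitted):

    import numpy as np

    def tonemap_gamma(I, g=1.0 / 2.2):
        return (I / I.max()) ** g

    def tonemap_log(I):
        return np.log1p(I) / np.log1p(I.max())

    def tonemap_reinhard_global(I, key=0.18, eps=1e-6):
        # Global part of Rei02 [14]: scale by the key value over the
        # log-average luminance, then compress with L / (1 + L).
        L_avg = np.exp(np.mean(np.log(I + eps)))
        L = key * I / L_avg
        return L / (1.0 + L)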

Tone-mapping operators were designed for, and evaluated on, images of natural scenes [22], [23] and videos [24]; however, no evaluation had previously been performed on visualization images, which have quite different characteristics than natural images. Therefore, we conducted a systematic evaluation of popular tone-mapping operators on visualization images. Full details of this evaluation can be found in the supplemental material.

Our evaluation of tone-mapping operators is based on the dynamic-range-independent image quality metric [25], which predicts the perceived appearance of the HDR reference image and compares the prediction against tone-mapped images. This metric has several important benefits: it makes meaningful comparisons between images of different dynamic ranges; it classifies distortions in the tone-mapped image into three categories, giving insights into the tone-mapping algorithms and enabling detailed examinations; and it uses a carefully calibrated model of the human visual system that is statistically validated through lab perception studies using HDR monitors. An objective computational study using this metric is therefore advantageous, as a subjective user-based approach to the same question would have to deal with a very large parameter space.

Our systematic evaluation considers 14 popular tone-mapping operators: linear, gamma 1/2.2 (gamma), logarithmic (log), Man06 [9], Mai11 [8], and [14], [26], [27], [28], [29], [30], [31], [32], [33].

We choose three datasets for the evaluation because they are typical representatives of HDR data containing different types of features: a 2D scatter plot containing only dots; a parallel coordinates plot containing curves and high-value areas of curve crossings; and a less abstract geospatial dataset of flight routes of the world containing both dots and curves. All images are converted to luminance before evaluation as these tone-mapping operators work in different color spaces. Details of the tone-mapping operators, test images, and our systematic comparison can be found in the supplemental material.

The image quality metric [25] classifies distortions in tone mapping into three categories: loss of visible contrast, amplification of invisible contrast, and reversal of visible contrast. For each tone-mapped image, we calculate the mean value of the per-pixel distortion scores averaged over all distortion types. Through an analysis of these mean values and an examination of the distributions of distortions, we conclude that the optimized tone curve method (Mai11 [8]) and the contrast perception-based method (Man06 [9]) are the best-performing global and local tone-mapping operators in our evaluation; they are our choices for the photographic HDR method.
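
Assuming the metric [25] yields three per-pixel distortion maps with values in [0, 1] (our assumed interface), the aggregation is simply an average over distortion types followed by an average over pixels:

    import numpy as np

    def mean_distortion_score(loss, amplification, reversal):
        """Aggregate the three per-pixel distortion maps into one score."""
        per_pixel = (loss + amplification + reversal) / 3.0  # over types
        return float(per_pixel.mean())                       # over pixels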

It can be seen in Figure 3 (Mai11) that the result from Mai et al. [8] reveals all pixels with data while achieving good contrast throughout the image, and the technique by Mantiuk et al. [9] (Figure 3 (Man06)) provides better local contrast, especially for the left arch in the image.

Our systematic comparison allowed us to identify the two tone-mapping operators that are most suitable for the characteristics of visualization images. To obtain even better results, we modify and extend both of them for data visualization. For the tone curve method [8], a small δ value is added to all pixels to avoid errors in the logarithmic mapping for (zero-valued) background pixels, and the output image of the tone curve is normalized to use the full range of a color map. For the local tone-mapping operator [9], the contrast factor is set to 0.4 by default to avoid overly bright regions. For both methods, the tone-mapped luminance images L(x,y) are then used for color mapping to generate color-mapped images C(x,y). Since our tone-mapping operators of choice generate LDR images with good distributions of values, the full range of colors in typical color maps can be used effectively without any further editing (Figure 3 (blue rounded box)).
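
A minimal sketch of these visualization-specific adaptations, with the operator itself left abstract (tone_curve stands in for the optimized tone curve of Mai11 [8]; the δ value shown is illustrative, not the exact constant we use):

    import numpy as np

    def tonemap_for_vis(I, tone_curve, delta=1e-6):
        """Wrap a global tone-mapping operator for visualization images."""
        # delta keeps zero-valued background pixels out of log singularities.
        L = tone_curve(I + delta)
        # Normalize so that the full range of the color map is used.
        return (L - L.min()) / (L.max() - L.min())

For instance, tonemap_for_vis(I, np.log) applies a plain logarithmic curve with the same safeguards.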

3.2. Glare Pipeline

The glare pipeline adopts a state-of-the-art glare simulation—the wave-optics-based glare simulation [17]—for glare filter generation. In our case, we simulate static glares as seen by human eyes: the aperture is simulated once, without movement or dynamic changes, and a fixed pupil diameter is used, as the luminance of the HDR data does not correspond to physical luminance. The glare filter is then approximated as the diffraction pattern at the retina caused by the aperture. By default, all wavelengths of visible light are used for glare filter generation; however, we also expose the range of wavelengths as a user-tunable parameter, so that the color of the glare filter can be fine-tuned. An example of glares with a modified range of wavelengths is shown in Figure 12(b), where better contrast is achieved for the glares over the parallel coordinates.

Fig. 12.

Parallel coordinates of the wine quality dataset: (a) global logarithmic mapping and (b) photographic HDR visualization.
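
A coarse, single-wavelength sketch of the filter construction: under the Fraunhofer far-field approximation, the diffraction pattern of the pupil aperture is proportional to the squared magnitude of its Fourier transform. The full simulation of Ritschel et al. [17] additionally models the eye's scattering structures and accumulates rescaled patterns over many visible wavelengths, which produces the colored halo:

    import numpy as np

    def glare_filter(aperture):
        """Single-wavelength Fraunhofer approximation of the glare filter.

        aperture: 2D float array, the rasterized pupil (with obstructions).
        """
        field = np.fft.fftshift(np.fft.fft2(aperture))
        psf = np.abs(field) ** 2
        # Normalize so that convolving with the filter preserves energy.
        return psf / psf.sum()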

Next, the glare filter is convolved with only the high-value pixels Ih(x,y) of the HDR data to generate the glare overlay G(x,y) (function glare() in Algorithm 1). Figure 4 (glare overlay) shows a glare overlay generated by convolving a small blob with the glare filter: the bloom at the center and the flare, composed of the lenticular halo (colorful rings) and the ciliary corona (radiating rays), are clearly visible. High-value pixels Ih(x,y) are detected automatically with blob detection [34] and thresholding (function blobThres() in Algorithm 1). A discrete 3D Gaussian scale space is generated from I(x,y) using scales that satisfy the scale selection rule [34]. Blobs are detected using the normalized Laplacian response over the 26 neighbors of a pixel in the scale space and then reconstructed to create the blob mask. Figure 4 (blob mask) shows the reconstructed blobs (in orange) for a synthetic dataset containing blobs of various radii.

Fig. 4.

The glare pipeline illustrated using a synthetic dataset of blobs with various radii.

The blob mask captures high-value and low-value regions alike; therefore, we combine the blob detection with a user-tunable threshold vt to discard low-value blobs. The effect of thresholding on the blob mask is shown in Figure 4 (high-value pixels). The threshold can be used flexibly to trade off between highlighting global and local high-value regions in the visualization—in general, a smaller threshold value introduces more points for glaring.
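
A SciPy sketch of blobThres() along these lines (the paper reconstructs blob regions from the detected (x, y, σ) extrema, whereas this simplification keeps only the extremum pixels themselves; the scale list is illustrative):

    import numpy as np
    from scipy.ndimage import gaussian_laplace, maximum_filter

    def blob_thres(I, v_t, sigmas=(1, 2, 4, 8, 16)):
        """Scale-normalized LoG blob detection [34] plus thresholding."""
        I = np.asarray(I, dtype=float)
        # Scale-normalized Laplacian responses, negated so bright blobs
        # appear as maxima in the (scale, x, y) stack.
        log_stack = np.stack([-s**2 * gaussian_laplace(I, float(s))
                              for s in sigmas])
        # Local maxima over the 26-neighborhood in scale space.
        local_max = log_stack == maximum_filter(log_stack, size=3)
        blob_mask = (local_max & (log_stack > 0)).any(axis=0)
        # Discard low-value blobs with the user-tunable threshold v_t.
        return np.where(blob_mask & (I >= v_t), I, 0.0)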

To support the magic lens that can “see through” glares (Section 3.4), connected component labeling is performed after the high-value regions are found. Individual glare images are generated by convolving the glare filter with the individual images of the connected components; one sees through individual glares by subtracting the glare(s) of interest from the glare overlay. The number of connected components is usually no more than 10 in typical usage scenarios. However, if a low threshold is applied, a large number of connected components can emerge; in that case, i.e., when the number of connected components is greater than 10, we cluster neighboring pixels (not necessarily connected) using k-means.
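
A sketch of this grouping step (the component limit of 10 follows the text; the fallback cluster count k and the use of SciPy's k-means are our choices):

    import numpy as np
    from scipy.ndimage import label
    from scipy.cluster.vq import kmeans2

    def group_glare_sources(I_h, max_components=10, k=10):
        """Label connected high-value regions; fall back to k-means."""
        labels, n = label(I_h > 0)
        if n <= max_components:
            return labels, n
        # Too many components: cluster the high-value pixel coordinates,
        # which need not be connected.
        coords = np.argwhere(I_h > 0).astype(float)
        _, assignment = kmeans2(coords, k, minit='++', seed=1)
        clustered = np.zeros_like(labels)
        clustered[tuple(coords.astype(int).T)] = assignment + 1
        return clustered, k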

3.3. Composition

The color-mapped image C(x,y) and the glare overlay G(x,y) are additively blended with a weight a that controls the glare intensity to obtain the final image P(x,y). Different glare intensity settings can greatly impact the appearance of the visualization, as shown in Figure 5. Smaller glare intensities lead to a smaller bloom with barely visible flare, yielding better contrast around the glare and better visibility of the underlying image, whereas larger glare intensities generate a larger bloom and more visible flares and, therefore, result in a more photographic look.

Fig. 5.

Visualizations using a glare intensity a of 0.2 (a), 0.4 (b), 0.8 (c), 1.0 (d), and 1.2 (e) on the world flight routes dataset.

3.4. User Interaction

The photographic HDR visualization can be explored with user interaction tools—the magic lens and the glare switch—for more insights. The glares provide an impressive photographic look but also occlude the underlying structure; the interaction tools address this issue by helping the user “see through” the glares that triggered their attention.

We adopt the magic lens concept [10], [11] for its intuitiveness. We use a disk-shaped, semi-transparent lens that overlays additional images on the contextual photographic HDR visualization (Figure 6(a)); the lens follows simple hovering operations of the mouse pointer, and different modes are engaged by pressing keys. The reveal mode shows the underlying HDR visualization by removing the glares within the lens. As shown in Figure 6(b), it allows the user to see the structures underneath the large glare area, where a high-density bundle shows the major east-west air route along the East Coast of the USA. Furthermore, the highest pixel value of the glare at the center is shown as semi-transparent text next to the lens.

Fig. 6.

The magic lens allows the user to look through the visualization (a) with three different modes: (b) the reveal mode, (c) the bright pixel mode, and (d) the contrast enhanced mode.

The bright pixels mode overlays the bright pixels that are responsible for the glares on the HDR visualization (Figure 6(c)). Here, we clearly see the pixels of highest density that generate the glares within the region.

With the contrast enhancement mode, the HDR visualization from the contrast-based tone-mapping operator [9] is shown within the lens (Figure 6(d)). This mode allows the user to better examine relative differences within the lens, thanks to the contrast enhanced by the local tone-mapping operator; for example, the high-density region is more pronounced in Figure 6(d) than in the global HDR visualization [8] in Figure 6(b). The usefulness of this mode is further seen in the taxi data example in Section 5, where the Metropolitan Museum is identified as having a higher intensity than its surroundings, which cannot be seen with the global HDR visualization.

A simple switch is used to toggle all glares on and off—adopting the user interaction employed in security checks with X-ray scanners, e.g., those used in airports, where the security agent quickly toggles between different color maps to identify suspicious objects of different materials. The usefulness of the interactions in our tool is demonstrated in Section 6.
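
As an illustration, the reveal mode amounts to a per-pixel compositing decision inside the lens disk (a minimal sketch; array layout and coordinate conventions are ours):

    import numpy as np

    def lens_reveal(P, C, center, radius):
        """Inside the lens, show the glare-free image C instead of P.

        P, C: (H, W, 3) images; center: (x, y) in pixels.
        """
        h, w = P.shape[:2]
        yy, xx = np.mgrid[0:h, 0:w]
        inside = (xx - center[0])**2 + (yy - center[1])**2 < radius**2
        out = P.copy()
        out[inside] = C[inside]
        return out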

3.5. Implementation

Our method is implemented using C++ and DirectX, and is accelerated by the GPU. We achieved interactivity with our implementation on a laptop with a 2.3 GHz Intel Core i5 CPU, 8 GB main memory, and an Intel Iris Plus Graphics 655 GPU.

4. Perception Study

Glare is an important component of our overall method. Based on our initial observations, we expected it to be a preattentive visual cue. Therefore, we performed a perception study to test whether glare is preattentive. Our study follows the typical design of state-of-the-art preattentiveness studies, for example, the one used for Deadeye [35], [36].

Our hypothesis is that glare is a preattentive visual cue. For the hypothesis to hold, the accuracy has to be high, and the error rate has to be sufficiently low and remain constant across different set sizes and types of distractors.

The main factor is the set size, i.e., the number of distractors, with two levels: 16 and 32; the secondary factor is the type of distractors, with three levels: disks, rectangles, and a mixture of half disks and half rectangles. The stimuli are generated by randomly placing objects in a 6 × 6 grid with jittered positions. The distractors are colored with the average color of the glare, so that they are close to the glare in hue and luminance. Examples of the stimuli used in the study are shown in Figure 7; here, we only show stimuli that contain glares. In fact, the study is a stress test—in real applications, far fewer distractors that look far less similar to the glares are expected. Each set consists of 48 images in randomized order, half of them containing a glare. A head rest is used, and the viewing distance is set such that the horizontal extent of the test image is about 15°.

Fig. 7.

Example stimuli with 32 objects used in the perception study on preattentiveness. Here, each stimulus contains one glare.
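
Such jittered-grid layouts can be generated along the following lines (a sketch; the cell size and jitter fraction are our assumptions, as the exact values are not reported here):

    import numpy as np

    def stimulus_positions(n_objects, grid=6, cell=100, jitter=0.25, seed=None):
        """Place n_objects in distinct cells of a grid x grid lattice,
        jittering each object around its cell center."""
        rng = np.random.default_rng(seed)
        cells = rng.choice(grid * grid, size=n_objects, replace=False)
        rows, cols = np.divmod(cells, grid)
        dx, dy = rng.uniform(-jitter, jitter, size=(2, n_objects)) * cell
        x = (cols + 0.5) * cell + dx
        y = (rows + 0.5) * cell + dy
        return np.stack([x, y], axis=1)  # pixel coordinates of the objects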

4.1. Task

A test image on a black background, either with or without a glare, is shown for 250 ms, and the participants are asked to press the “Y” key if they think a glare was present and the “N” key otherwise. Before each test image, a white crosshair on the same black background is shown for 2500 ms, and the participants are asked to focus on the white crosshair.

4.2. Study Procedure

The study took place in a laboratory at our university, with the room lights switched off and the curtains shut to minimize reflections on the screen. Volunteers were recruited as participants. We informed the participants about the study’s procedure and provided a formal consent form. After obtaining their consent, we asked them to fill out a demographic questionnaire including gender and age.

Detailed instructions for the experiment followed, and then participants were asked to complete a tutorial. The tutorial used the same types of distractor settings as the actual experiment, but with only 4 objects.

4.3. Results

A total of 10 subjects (3 female, 7 male), aged 26 to 67 (M = 35, SD = 11.4), participated in the study; all reported normal or corrected-to-normal acuity and no vision defects.

Similar to other studies on low-level perception, and based on our pilot study, we expected little variance between subjects and, therefore, did not recruit a large number of participants. The following results are in line with this expectation. The results of the study are shown in Figure 8: the average accuracies (16 disks: M = 0.998, SEM = 0.002; 16 rectangles: M = 0.975, SEM = 0.007; 16 mixed objects: M = 0.983, SEM = 0.006; 32 disks: M = 0.988, SEM = 0.05; 32 rectangles: M = 0.996, SEM = 0.003; 32 mixed objects: M = 0.996, SEM = 0.003) are similar to those of other preattentive visual cues. A two-way ANOVA is applied with set size and type of distractors as within-subject variables: the results show no significant difference in accuracy between set sizes (F(1,9) = 1.678, p = 0.227) or between types of distractors (F(2,18) = 1.316, p = 0.293), but there is a crossover interaction (F(2,18) = 3.983, p = 0.037).

Fig. 8.

The average accuracies of the perception study. Error bars show standard error of the mean (SEM).

Therefore, we confirm that glare is, in fact, a preattentive visual cue.
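
For reference, the two-way repeated-measures analysis can be reproduced along these lines, assuming per-condition accuracies are collected in a long-format table (the file and column names are ours, not part of the study materials):

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # One accuracy value per subject x set size x distractor type;
    # columns: subject, set_size, distractor, accuracy.
    df = pd.read_csv("accuracy.csv")
    res = AnovaRM(df, depvar="accuracy", subject="subject",
                  within=["set_size", "distractor"]).fit()
    print(res)  # F and p values for both main effects and their interaction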

5. Examples

We demonstrate the usefulness of our method with examples of a wide range of typical visualizations: trajectories, spectral imaging, geographical visualization, and parallel coordinates.

Figure 1 shows force-directed bundled [37] flight route data within the continental United States. With conventional visualizations (Figure 1(b)–(d)), not all flight routes are visible, and the upper half of the color map is used only for a few pixels that are invisible without zooming in; it is impossible to determine busy airports or flight routes (see the full-resolution images in the supplemental material). Our photographic HDR visualization (Figure 1(a)) reveals all flight routes and makes use of the full color map; glares clearly highlight the bundled east-west flight routes and the complex flight route networks in the east; important airports, for example, Seattle, Houston, Chicago, and New York City, are clearly marked by glares.

Photographic HDR visualizations of world flight routes are shown in Figure 5. The HDR visualization makes all paths visible while retaining contrast. Important long-haul intercontinental flights that are low-value outliers can be clearly seen. Meanwhile, the glares highlight three clusters of air-traffic hotspots in the world: East Asia, Europe, and North America. One cannot gain such insights from conventional visualizations, since they are not able to highlight high values while preserving low-value outliers. Note that this example uses a white-to-black color map; our method is still effective with a white background, demonstrating that it is flexible and versatile regarding the choice of color map. Applying the same tone-mapping operator in the conventional way—to the RGB image after color mapping—would lead to a very different result: the low-luminance regions would not be well preserved, as only a few luminance levels would be used there. That is, the direct application of HDR methods from computer graphics would completely fail for such visualization problems because it would emphasize the background.

Figure 9 shows the pickup locations of taxis in New York City [38]. With global linear mapping (Figure 9(c)) or gamma mapping (Figure 9(e)), the majority of the image cannot be seen; with logarithmic mapping (Figure 9(g)), larger parts of the image can be seen, but details of Queens and the Bronx are not well shown and the color range is not well used. The photographic HDR visualization, shown in Figure 9(a), makes better use of the range of the color map and reveals all pickup locations: low-value regions in Queens and the Bronx can be clearly seen.

Fig. 9.

Visualizations of the New York City taxi data: (a) photographic HDR visualization, (c) linear mapping, (e) gamma mapping, and (g) logarithmic mapping. The corresponding transformed color maps with marks on every 20% are shown in (b), (d), (f), and (h), respectively. The Metropolitan Museum is identified with the contrast-enhanced mode of the magic lens (the bottom right zoom-in in (a)), and four individual hotspots are revealed under the glare over Penn Station with the bright pixel mode (the top left zoom-in in (a)).

The glares provide more insights into the data by highlighting regions of high value. Exploring the visualization using the magic lens with the contrast-based tone-mapping operator lets us identify a relatively high-value spot near Central Park (the bottom right zoom-in in Figure 9(a))—the Metropolitan Museum of Art. Hovering over the largest glare in Manhattan (the top left zoom-in in Figure 9(a)) reveals four individual bright clusters: (from south to north) three entrances to Penn Station and the 42 St–Port Authority Bus Terminal. Checking the map shows that the glares in Queens are the terminals of La Guardia airport, which in fact have the highest densities in the image.

Visualization is also used in signal processing for Fast Fourier Transform (FFT) images, where high-value regions are of interest. In our example, the FFT power spectrum of a test image contaminated with a sine signal (Figure 10(a)) is visualized using logarithmic mapping (Figure 10(b)), where the relevant high-value areas are not visible. With the photographic HDR visualization in Figure 10(c), the four high-value points of the artifact are clearly seen with glares, in addition to the prominent direct current (DC) component at the center.

Fig. 10.

The power spectrum of a noisy image (a) is visualized with (b) logarithmic mapping and (c) photographic HDR visualization.
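
The HDR input of this example is simply the power spectrum of the image, whose DC term exceeds the artifact peaks by orders of magnitude (a minimal sketch):

    import numpy as np

    def power_spectrum(img):
        """2D FFT power spectrum with the DC component shifted to the center."""
        F = np.fft.fftshift(np.fft.fft2(img))
        return np.abs(F) ** 2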

Figure 11 shows choropleth maps visualizing the per-person GDP data of Germany, France, and the United Kingdom for the year 2008. Our method (Figure 11(a)) highlights the municipal areas with the highest per-person GDP: London, Paris, Berlin, Hamburg, and Munich; it also makes good use of the range of the color map. The global logarithmic mapping (Figure 11(g)) transforms high-value regions to similar reds that are difficult to distinguish, whereas global linear (Figure 11(c)) and gamma (Figure 11(e)) mappings fail to use the range of the color map effectively.

Fig. 11.

The GDP per-person data of Germany, France, and the United Kingdom of the year 2008 is visualized as choropleth maps with (a) photographic HDR visualization, (c) linear mapping, (e) gamma mapping, and (g) logarithmic mapping. The associated transformed color maps are shown in (b), (d), (f), and (h), respectively.

Parallel coordinates are a typical source of HDR datasets due to overplotting and the intrinsic structure introduced by the point-line duality. The white wine quality data [39] are visualized with parallel coordinates in Figure 12. In Figure 12(a), a global logarithmic transformation generates a large area of high-value regions in dark gray/black with similar luminance, making it difficult to perceive details inside; it is hard to see high-density regions other than those covering a large area between axes. The photographic HDR visualization (Figure 12(b)) shows all polylines and highlights high-density regions with pronounced glares—even those small high-density regions on the axes that are not visible with the logarithmic mapping.

6. Feedback of Expert Users

We evaluated the visualization tool that implements the proposed method by collecting feedback from five visualization experts, using think-aloud protocol analysis and questionnaires. All participants were experienced visualization researchers with 3 to 10 years of experience in the area. After an introduction to the proposed method, participants were shown three static examples to get familiar with our visualization; then they were asked to explore the New York taxi data with our tool while an observer watched and talked with them. Afterward, they were asked to fill out a questionnaire. Further details can be found in the supplemental material.

Common points of praise from the participants are as follows. First, they found that the glares effectively draw their attention to high-value areas. Second, they liked the interactive techniques that reveal the underlying image beneath the glares. Third, they found combining global and local tone-mapped images in the magic lens useful and not confusing.

In terms of criticisms, one participant with a photography background did not initially understand that the glares were meant for highlighting—in photography, glares are mostly used for stylistic and aesthetic purposes—but the confusion was resolved after an explanation. Another participant found it more difficult to map colors to numbers with our HDR method than with logarithmic mapping, as the logarithmic function is conceptually simple, whereas the tone-mapping operators are black boxes. This problem was resolved after value range bars were drawn on the color maps (Figures 1 and 9).

We summarize the mean responses to the questionnaire on a 1–5 Likert scale (1 for not at all/strongly disagree, 5 for excellent/strongly agree), given after interacting with the visualization tool: the insight gained into the data with the new method was rated 4.4, the effectiveness of the interaction methods 4.7, and how misleading the new method is 2.3.

Overall, participants agreed that the proposed method was able to improve insights gained from the dataset, and was effective for qualitative analysis of the dataset. They agreed that the glares steered their attention to interact with the high-value regions.

7. Discussion, Conclusion, and Future Work

In conclusion, we have introduced photographic HDR scalar data visualization: an HDR visualization method that requires little to no user interaction. It is designed to serve non-expert and expert users alike, aiming to be easy to understand and intuitive. The best use case of our method is visualizing data whose high dynamic range is concentrated in small spatial areas, especially areas as small as just a few pixels. Conventional methods cannot address such cases without zooming-in and careful color map design, as shown by the examples in this paper.

Our method is intuitive: glares preattentively steer attention, and people understand that glares are intended to highlight features of high value. We found that this mental mapping works naturally on dark-to-bright color maps with dark backgrounds, and it also works on bright-to-dark color maps with bright backgrounds. However, our method will not be as effective if areas of high value are rather large. For example, glares may not be suitable for a CT body scan with large areas of high values such as bones. Also, glare is not appropriate for datasets where high values are not meaningful, except for indicating outliers. Furthermore, a low-contrast surrounding of the glare region, e.g., where neighboring pixels have only slightly lower data values than the glare region, will decrease the effectiveness of our method.

Based on the feedback from the user studies and our own observations, the following are the limitations of our method. The biggest one is that glares occlude the underlying visualization. However, this is hardly avoidable if features of only a few pixels are to be seen—they have to cover a large enough display area. Furthermore, it is an intrinsic characteristic of human visual perception that small bright regions mask darker areas in their neighborhood—HDR monitors take advantage of this by using a small number of large, bright backlights behind a panel of much higher pixel resolution [40]. To address the occlusion problem, we provide the magic lens to show the structures underneath the glares. The second issue is that global and local tone-mapping operators have their own limitations. On the one hand, the global tone-mapping operator [8] provides a consistent mapping from the HDR to the LDR image, which makes colors comparable after color mapping. However, there is no absolute and reliable perception of color, i.e., no one-to-one perceptual mapping from color/luminance back to data values; one such mechanism is the simultaneous contrast effect—the same color appears different on different backgrounds. On the other hand, the local tone-mapping operator [9] enhances contrast and makes important features perceptually more significant; but it is a one-to-many mapping, which can be misleading after color mapping, especially with a multi-hue color map. We address these issues and make the best of both methods through user interaction: the two techniques are combined in a non-confusing way—the global HDR visualization is shown as an overview, whereas the local HDR visualization is shown through the isolated magic lens, whose small area does not interfere with the whole image.

In the future, we would like to use an automatic, data-dependent method, e.g., histogram analysis, to choose good thresholds for glaring. The method could also be augmented with additional information about the data, e.g., the global value range, the local value range and histogram within the lens, and brushing-and-linking for qualitative analysis.

Beyond the specific techniques and studies that we have presented in this paper, we think that there is a more general message: We are convinced that glare—as a preattentive visual cue—could play an important role in other visualization examples that need intuitive highlighting. Similarly, HDR techniques from computer graphics might be adopted for other visualization uses than the one presented in this paper.

Supplementary Material

Supplemental material

Acknowledgments

This work is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – SFB 716 and Project-ID 251654672 – TRR 161 (project B01), by the National Institute of General Medical Sciences of the National Institutes of Health under grant number P41 GM103545-18, and by the Intel Graphics and Visualization Institutes. The GPU used for this research was donated by the Nvidia Corporation.

Biographies


Liang Zhou is a research associate at the Scientific Computing and Imaging (SCI) Institute at the University of Utah. Prior to this, he was a postdoc researcher at the Visualization Research Center (VISUS) of the University of Stuttgart, Germany. He received his PhD degree in computing from the University of Utah in 2014. His research interests include scientific and information visualization, visual analytics, and visual perception.


Marc Rivinius is an MSc computer science student at the University of Stuttgart, Germany, where he received his BSc degree in 2017. His interests include visual computing and information security.


Chris R. Johnson is a Distinguished Professor of Computer Science and founding director of the Scientific Computing and Imaging (SCI) Institute at the University of Utah. He also holds faculty appointments in the Departments of Physics and Bioengineering. His research interests are in the areas of scientific computing and scientific visualization. In 1992, Dr. Johnson founded the SCI research group, now the SCI Institute, which has grown to employ over 200 faculty, staff and students. Professor Johnson is a Fellow of AIMBE (2004), AAAS (2005), SIAM (2009), and IEEE (2014). He has received a number of awards, including a NSF Presidential Faculty Fellow award from President Clinton, the Governor’s Medal for Science, the IEEE Visualization Career Award, the IEEE IPDPS Charles Babbage Award, the IEEE Sidney Fernbach Award, and the Rosenblatt Prize.


Daniel Weiskopf is a professor at the Visualization Research Center (VISUS) of the University of Stuttgart, Germany. He received his Dr. rer. nat. (PhD) degree in physics from the University of Tübingen, Germany (2001), and the Habilitation degree in computer science at the University of Stuttgart, Germany (2005). His research interests include information and scientific visualization, visual analytics, eye tracking, computer graphics, and special and general relativity. He is a member of the IEEE Computer Society, ACM SIGGRAPH, Eurographics, and the Gesellschaft für Informatik.

Contributor Information

Liang Zhou, SCI Institute, University of Utah, Salt Lake City, UT 84112, USA.

Marc Rivinius, Visualization Research Center (VISUS), University of Stuttgart, 70569 Stuttgart, Germany.

Chris R. Johnson, SCI Institute, University of Utah, Salt Lake City, UT 84112, USA.

Daniel Weiskopf, Visualization Research Center (VISUS), University of Stuttgart, 70569 Stuttgart, Germany.

References

  • [1] Chambers JM, Graphical Methods for Data Analysis. Wadsworth International Group, 1983.
  • [2] Elmqvist N, Dragicevic P, and Fekete JD, “Rolling the dice: Multidimensional visual exploration using scatterplot matrix navigation,” IEEE Transactions on Visualization and Computer Graphics, vol. 14, no. 6, pp. 1539–1148, 2008.
  • [3] Novotny M and Hauser H, “Outlier-preserving focus+context visualization in parallel coordinates,” IEEE Transactions on Visualization and Computer Graphics, vol. 12, no. 5, pp. 893–900, 2006.
  • [4] Heinrich J and Weiskopf D, “Continuous parallel coordinates,” IEEE Transactions on Visualization and Computer Graphics, vol. 15, no. 6, pp. 1531–1538, 2009.
  • [5] Scheepens R, Willems N, van de Wetering H, Andrienko G, Andrienko N, and van Wijk JJ, “Composite density maps for multivariate trajectories,” IEEE Transactions on Visualization and Computer Graphics, vol. 17, no. 12, pp. 2518–2527, 2011.
  • [6] Branke J, Dynamic Graph Drawing. Berlin, Heidelberg: Springer, 2001, pp. 228–246.
  • [7] Reinhard E, Heidrich W, Debevec P, Pattanaik S, Ward G, and Myszkowski K, High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting, 2nd ed. Morgan Kaufmann Publishers Inc., 2010.
  • [8] Mai Z, Mansour H, Mantiuk R, Nasiopoulos P, Ward R, and Heidrich W, “Optimizing a tone curve for backward-compatible high dynamic range image and video compression,” IEEE Transactions on Image Processing, vol. 20, no. 6, pp. 1558–1571, 2011.
  • [9] Mantiuk R, Myszkowski K, and Seidel H-P, “A perceptual framework for contrast processing of high dynamic range images,” ACM Transactions on Applied Perception, vol. 3, no. 3, pp. 286–308, 2006.
  • [10] Elmqvist N, Dragicevic P, and Fekete J, “Color Lens: Adaptive color scale optimization for visual exploration,” IEEE Transactions on Visualization and Computer Graphics, vol. 17, no. 6, pp. 795–807, 2011.
  • [11] Bier EA, Stone MC, Pier K, Buxton W, and DeRose TD, “Toolglass and magic lenses: The see-through interface,” in Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, ser. SIGGRAPH ’93, 1993, pp. 73–80.
  • [12] Yuan X, Nguyen MX, Chen B, and Porter DH, “HDR VolVis: High dynamic range volume visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 12, no. 4, pp. 433–445, 2006.
  • [13] Vollrath JE, Weiskopf D, and Ertl T, “A generic software framework for the GPU volume rendering pipeline,” in Proceedings of Vision, Modeling, and Visualization. Berlin: Akademische Verlagsgesellschaft, 2005, pp. 391–398.
  • [14] Reinhard E, Stark M, Shirley P, and Ferwerda J, “Photographic tone reproduction for digital images,” ACM Transactions on Graphics, vol. 21, no. 3, pp. 267–276, 2002.
  • [15] Whittle P, “Increments and decrements: Luminance discrimination,” Vision Research, vol. 26, no. 10, pp. 1677–1691, 1986.
  • [16] Spencer G, Shirley P, Zimmerman K, and Greenberg DP, “Physically-based glare effects for digital images,” in Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, ser. SIGGRAPH ’95, 1995, pp. 325–334.
  • [17] Ritschel T, Ihrke M, Frisvad JR, Coppens J, Myszkowski K, and Seidel H-P, “Temporal glare: Real-time dynamic simulation of the scattering in the human eye,” Computer Graphics Forum, vol. 28, no. 2, pp. 183–192, 2009.
  • [18] Rheingans P and Ebert D, “Volume illustration: Nonphotorealistic rendering of volume models,” IEEE Transactions on Visualization and Computer Graphics, vol. 7, no. 3, pp. 253–264, 2001.
  • [19] Joshi A and Rheingans P, “Illustration-inspired techniques for visualizing time-varying data,” in IEEE Visualization 2005, 2005, pp. 679–686.
  • [20] Svakhine NA, Jang Y, Ebert D, and Gaither K, “Illustration and photography inspired visualization of flows and volumes,” in IEEE Visualization 2005, 2005, pp. 687–694.
  • [21] Healey C and Enns J, “Attention and visual memory in visualization and computer graphics,” IEEE Transactions on Visualization and Computer Graphics, vol. 18, no. 7, pp. 1170–1188, 2012.
  • [22] Ledda P, Chalmers A, Troscianko T, and Seetzen H, “Evaluation of tone mapping operators using a high dynamic range display,” ACM Transactions on Graphics, vol. 24, no. 3, pp. 640–648, 2005.
  • [23] Yoshida A, Blanz V, Myszkowski K, and Seidel H-P, “Perceptual evaluation of tone mapping operators with real-world scenes,” in Human Vision and Electronic Imaging X, ser. Proceedings of SPIE, vol. 5666, 2005.
  • [24] Eilertsen G, Wanat R, Mantiuk RK, and Unger J, “Evaluation of tone mapping operators for HDR-video,” Computer Graphics Forum, vol. 32, no. 7, pp. 275–284, 2013.
  • [25] Aydin TO, Mantiuk R, Myszkowski K, and Seidel H-P, “Dynamic range independent image quality assessment,” ACM Transactions on Graphics, vol. 27, no. 3, pp. 69:1–69:10, 2008.
  • [26] Ashikhmin M, “A tone mapping algorithm for high contrast images,” in Proceedings of the 2002 Eurographics Workshop on Rendering, 2002, pp. 145–156.
  • [27] Pattanaik SN, Tumblin J, Yee H, and Greenberg DP, “Time-dependent visual adaptation for fast realistic image display,” in Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, ser. SIGGRAPH ’00, 2000, pp. 47–54.
  • [28] Mantiuk R, Daly S, and Kerofsky L, “Display adaptive tone mapping,” ACM Transactions on Graphics, vol. 27, no. 3, pp. 68:1–68:10, 2008.
  • [29] Ferradans S, Bertalmio M, Provenzi E, and Caselles V, “An analysis of visual adaptation and contrast perception for tone mapping,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 10, pp. 2002–2012, 2011.
  • [30] Fattal R, Lischinski D, and Werman M, “Gradient domain high dynamic range compression,” ACM Transactions on Graphics, vol. 21, no. 3, pp. 249–256, 2002.
  • [31] Drago F, Myszkowski K, Annen T, and Chiba N, “Adaptive logarithmic mapping for displaying high contrast scenes,” Computer Graphics Forum, vol. 22, no. 3, pp. 419–426, 2003.
  • [32] Durand F and Dorsey J, “Fast bilateral filtering for the display of high-dynamic-range images,” ACM Transactions on Graphics, vol. 21, no. 3, pp. 257–266, 2002.
  • [33] Reinhard E and Devlin K, “Dynamic range reduction inspired by photoreceptor physiology,” IEEE Transactions on Visualization and Computer Graphics, vol. 11, no. 1, pp. 13–24, 2005.
  • [34] Lindeberg T, “Feature detection with automatic scale selection,” International Journal of Computer Vision, vol. 30, no. 2, pp. 79–116, 1998.
  • [35] Krekhov A and Krüger J, “Deadeye: A novel preattentive visualization technique based on dichoptic presentation,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 1, pp. 936–945, 2019.
  • [36] Krekhov A, Cmentowski S, Waschk A, and Krüger J, “Deadeye visualization revisited: Investigation of preattentiveness and applicability in virtual environments,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 1, pp. 547–557, 2020.
  • [37] Holten D and van Wijk JJ, “Force-directed edge bundling for graph visualization,” Computer Graphics Forum, vol. 28, no. 3, pp. 983–990, 2009.
  • [38] NYC Taxi & Limousine Commission, “TLC Trip Record Data,” 2016. http://www.nyc.gov/html/tlc/html/about/trip_record_data.shtml
  • [39] Cortez P, Cerdeira A, Almeida F, Matos T, and Reis J, “Modeling wine preferences by data mining from physicochemical properties,” Decision Support Systems, vol. 47, no. 4, pp. 547–553, 2009.
  • [40] Richards G, “BrightSide DR37-P HDR display,” 2005. https://www.bit-tech.net/reviews/tech/brightside_hdr_edr/6/
