bioRxiv [Preprint]. 2024 Jun 15:2024.04.11.589087. Originally published 2024 Apr 15. [Version 2] doi: 10.1101/2024.04.11.589087

psudo: Exploring Multi-Channel Biomedical Image Data with Spatially and Perceptually Optimized Pseudocoloring

Simon Warchol 1,4,5, Jakob Troidl 1,4, Jeremy Muhlich 2,4, Robert Krueger 5, John Hoffer 2,5, Tica Lin 1,4, Johanna Beyer 1,4, Elena Glassman 1, Peter K Sorger 2,5, Hanspeter Pfister 1,4,5
PMCID: PMC11042212  PMID: 38659870

Abstract

Over the past century, multichannel fluorescence imaging has been pivotal in myriad scientific breakthroughs by enabling the spatial visualization of proteins within a biological sample. With the shift to digital methods and visualization software, experts can now flexibly pseudocolor and combine image channels, each corresponding to a different protein, to explore their spatial relationships. We thus propose psudo, an interactive system that allows users to create optimal color palettes for multichannel spatial data. In psudo, a novel optimization method generates palettes that maximize the perceptual differences between channels while mitigating confusing color blending in overlapping channels. We integrate this method into a system that allows users to explore multi-channel image data and compare and evaluate color palettes for their data. An interactive lensing approach provides on-demand feedback on channel overlap and a color confusion metric while giving context to the underlying channel values. Color palettes can be applied globally or, using the lens, to local regions of interest. We evaluate our palette optimization approach using three graphical perception tasks in a crowdsourced user study with 150 participants, showing that users are more accurate at discerning and comparing the underlying data using our approach. Additionally, we showcase psudo in a case study exploring the complex immune responses in cancer tissue data with a biologist.

Introduction

The discovery of fluorescent biomarkers has dramatically enhanced our understanding of how cells function and interact [Ren13, LSP03] by visualizing how proteins are expressed within cells. Indeed, the 2008 Nobel Prize in Chemistry was awarded to scientists who first identified and isolated green fluorescent protein within jellyfish [Wei08]. Such fluorescent proteins can now be fused to other targeted proteins, allowing biomedical experts to distinguish and investigate cells of different types and states [CTE*94, Ren13, LSP03]. For instance, cancer biologists use immunofluorescence microscopy to investigate tumor growth, immune response, and the impact of specific therapies [TNC*20]. Advances in multiplexed imaging [LIW*18] now allow experts to digitally analyze 50+ biomarkers within the same specimen. Here, pseudocoloring, or the mapping of color to individual image channels, followed by the additive mixing of these channels into a composite visual encoding, serves as a digital twin to traditional analysis methods and is critical for exploring tissues and communicating findings.

However, pseudocoloring has several limitations. First, the blending of pseudocolored channels can make it hard to infer each variable in isolation and to compare these variables [GG14b]. Moreover, visualizing more than three pseudocolored channels simultaneously leads to a phenomenon known as metamerism, where various combinations of colors within the palette produce identical visual outputs, making it impossible for humans to distinguish which specific channels are being expressed. Second, while color models and spaces that attempt to match human perception have been extensively researched [Col04, Ott20], the standard RGB (sRGB) color space, which is typically used to pseudocolor channels and blend them into a composite visualization, is not perceptually uniform, thus limiting graphical perception. Third, the spatial properties of multi-channel data influence these visualizations in that the overlap of highly correlated variables exacerbates these visual limitations. This blending of overlapping channels through additive mixing often yields colors outside of the sRGB gamut, thus further misrepresenting the underlying data. However, no existing approach to palette assignment and visualization accounts for both visual perception and the spatial relationships between channels, nor do systems exist to evaluate and compare palettes on real data.

To address these needs, we propose a novel method for color palette assignment integrated into an interactive system for the visualization of multi-channel imaging data (Fig. 1). We make the following contributions: (1) A method for assigning optimized color palettes to multi-channel imaging data and visualizing the result. Our method considers perceptual differences between colors in the palette as well as the distinctiveness of color names in the palette; colormaps containing a greater number of uniquely named colors are more effective for graphical perception tasks [LH18, RSGP21, RS21]. In addition, our method considers the spatial overlap of channels to address ambiguous and potentially confusing color blending. Users can further assign or exclude certain colors or color names in the optimization process. (2) psudo, an interactive system for assigning color palettes to and visualizing multi-channel data. Based on a user’s data and input, we automatically assign an optimal palette and preview the composite visual encoding. psudo supports quick and interactive refinement of a suggested palette by offering interactive visualizations on channel overlap, the presence of out-of-gamut colors, and feedback on a channel confusion score. We further offer a lensing approach to apply a color palette either globally or to a local region of interest. (3) Evaluation of psudo and our palette assignment method. We conducted a user study to compare psudo to existing standards for palette assignment and visualization of multi-channel imaging data. Participants performed three tasks inspired by biomedical image analysis: estimate values from a multi-channel visualization, search for regions of interest, and compare channels spatially. We demonstrate that psudo improves graphical perception, particularly when more than two channels are visualized concurrently. We further demonstrate the utility of psudo in an actual usage scenario through a case study with a cancer biologist.

Figure 1: In psudo, domain experts can analyze (a) multichannel biomedical images through (b) pseudocoloring and (c) additive blending into a single visualization. (d) Using a novel color palette assignment method, users can generate perceptually and spatially optimal palettes and (e) iterate on these palettes based on focus & context exploration of the visualization and their specific constraints.

2. Related Work

Modeling Color.

Seminal work quantifying human perception of color established the CIEXYZ color space [SG31], which models the colors humans perceive in terms of three primary colors (X, Y, Z). This mirrors trichromatic theory, which states that the three types of photoreceptor cone cells in our eyes are sensitive to red, green, and blue light, forming the basis for modern color science [WSK68]. While CIEXYZ models how the eye perceives the addition of colored light, it is not perceptually uniform in that distances in this space do not correspond to perceived changes in color. The standard RGB (sRGB) [AMCS96] space, meanwhile, represents colors as a function of the three primary base colors red, green, and blue and is gamma corrected [AMCS96] to account for human perception. However, while this gamma correction is inspired by perception, sRGB is not perceptually uniform and only covers a subset of the full CIEXYZ gamut. Moreover, visualization systems using sRGB typically assume image data to be already in a gamma-corrected space, which is not the case when image intensity is a linear function of the measured underlying physical process. In contrast, the CIELAB [Col04] space is designed to reflect human perception by aligning color distances with perceived differences. Yet recent research has identified flaws in the perceptual uniformity of this space, necessitating new distance functions to accurately measure the perceptual distance between colors [SWD05, GC12]. More recently, the OKLab [Ott20] space uses CIEDE2000 [SWD05] for color difference calculation and improves hue preservation while blending colors. OKLab has been used to calculate color distance [YVK*23], to create color gradients [SJ21], and to model lightness and chroma when blending [Lev21]. Broadly, the key differences between these spaces are the positions of and distances between colors and how those distances align with human perception. When pseudocoloring a channel, and thus interpolating between black and a given color, different spaces yield drastically different gradients. We use OKLab to pseudocolor channels and compute color differences based on its hue-preserving properties when blending colors [Bri23, Lil23] and its empirical behavior. However, other spaces can easily be substituted into our overall method.

Color perception is influenced by the specific terms used to describe colors, which vary across languages and cultures [HLX*19, TAW*09]. Heer and Stone [HS12] create a probabilistic model to quantify the nameability and salience of colors’ names in English. Using this model, one can calculate the distance between colors based on naming patterns, which we use as one aspect of our novel palette optimization method.

Color Palette Selection.

Best practices for using color palettes differ based on the visualized data types [Bre94, SSSM11, Sza18]. For categorical data, ColorBrewer [HB03] and Tableau both offer a set of color palettes that are designed to be perceptually distinct and have been integrated into many popular visualization tools [Wic16, Was21, Hun07]. Other approaches consider color name distinctiveness to create palettes [VM16] and evaluate both categorical [HS12] and quantitative colormaps [LH18]. Methods for interactive palette assignment for categorical data have gained popularity in recent years, including systems that maximize perceptual color differences while also considering user constraints and the intended analysis task [Mit15, FWD*17]. Gramazio et al. [GLS17] integrate the color name distance into palette generation and allow users to build palettes iteratively. Wang et al. [WCG*19] and Lu et al. [LFC*21] introduce data-aware approaches that assign colors to classes in scatterplots based on class overlap. Whereas these methods operate on categorical data, multi-channel imaging data is both categorical (the color used to pseudocolor each channel) and continuous (color transfer within a channel) and is additionally complex as these colors are additively mixed. Thus, in our optimization method and in contrast to previous methods, we evaluate the spatial relationships between and overlap of channels to create optimal palettes.

Visualizing Spatial Data Using Color.

The assignment and blending of colors have been extensively researched to optimally visualize spatial 2D [Rob88, HSKIH07, GG14a, LXL21, BTS*18] and volumetric data [SDB*17, KGZ*12, CWM09]. Levkowitz et al., when investigating linear color scales, find that greyscale can outperform color scales traditionally thought to be perceptually linear [LH92], motivating the development of perceptually uniform spaces such as OKLab. Rogowitz et al. [RT98, RKPC99] highlight the limitations of the rainbow colormap for spatial data and propose colormaps that vary saturation or luminance depending on the spatial frequency of the data. The standard technique when pseudocoloring biomedical data is to scale luminance for each channel, motivating our visualization approach. Reda et al. [RNAK18] find colormaps with many hues to be effective for quantity estimation, while divergent colormaps are best for pattern perception tasks. Subsequent work evaluates gradient perception [RP19] and the role of nameability within colormaps [RSGP21] and finds that colormaps with salient colors are better at emphasizing global features, whereas less colorful colormaps better communicate local features [Red23]. We thus consider the distinctiveness of color names when optimizing palettes while also considering the added complexity of channel blending.

Some volume visualization approaches similarly use perception research and color differences to visualize differences in neural pathways [ZSZ*06] or use harmonic color maps to create more aesthetically pleasing visualizations of volumetric data [WGM*08]. Kuhne et al. [KGZ*12] emphasize hue preservation and build a machine-learning model to omit false colors from blending. Finally, Kumar et al. [KZX*23] propose an interactive radial color map that maps multi-variate data to voxel color and opacity. However, while these approaches use alpha blending, combining multi-channel spatial imaging data requires additive mixing, such that each individual channel value is consistently visualized independent of the number of channels. Thus, we use OKLab to pseudocolor channels before combining these channels in CIEXYZ to ensure the underlying values are preserved.

Most similar to our approach is work investigating the visualization of multi-channel imaging data; Dunn et al. [DKM11] note that the spatial relationships between variables impact the effectiveness of a visualization; whereas co-occurring variables necessitate color mixing, correlated variables that exist in proximity but do not directly overlap require different considerations. Zhou et al. [ZAZH20] use kernel density and co-localization estimation to visualize pairs of variables. Liu et al. [LWB15] use dimensionality reduction to transform high-dimensional data into RGB values. In our case, individual channels must be preserved and visualized simultaneously. Finally, Cheng et al. [CXM19] embed data samples and assign colors in a perceptually uniform space. However, the blending they propose is not additive and does not consider color names.

3. Visualizing Multi-Channel Biomedical Images

The visual analysis of multichannel images is critical across many domains, but we are specifically motivated by our collaboration with cancer researchers; these pathologists, biologists, and oncologists rely on whole-slide multiplexed tissue images generated through methods such as CycIF [LIW*18], which can capture the spatial expression of 50+ protein targets across regions up to 10 cm² containing millions of cells at subcellular resolution. Investigating these images and analyzing the tumor microenvironment at unprecedented detail [NMV*22] is essential for cancer diagnosis and therapy evaluation [AWK22, HCLA17]. Our past work building visual analytics approaches for such experts [KBJ*20, JKW*22, WKN*23] emphasizes the essential role that visual inspection of the underlying data plays, both to steer supplementary analysis and to validate results. To do so, domain experts often visualize as many as eight channels simultaneously. However, per trichromatic theory, humans can only fully perceive three color channels simultaneously [WSK68]. Additionally, the tools that these experts frequently use [AMR04, SLE*22, ABM*12, MGP*22] visualize data by pseudocoloring channels and then additively mixing them in sRGB, and they suggest palettes containing the primary and secondary colors. This approach results in the likely presence of out-of-gamut colors and is influenced by the non-linearities of the sRGB color space. We thus propose a visualization and color palette assignment approach that better aligns with perception and minimizes the ambiguous blending of colors, which is inevitable with more than three channels. We also emphasize the highly exploratory nature of expert analysis and the importance of toggling channels on and off to accurately perceive the underlying image data, and we add features to help users identify overlap and channel expression.

psudo Visualization Pipeline.

Motivated by prior work on perception [Ott20, SJ21, Lev21], in psudo, we modify the default visualization pipeline to use perceptual color spaces (Fig. 2). Each channel is assigned a color from an overall palette and thus pseudocolored by linearly interpolating between black and the corresponding color in OKLab [Ott20], which models perceived hue, brightness, and chroma when blending colors. This blending, however, differs from additive mixing, which combines channels instead of interpolating between them; we use the CIEXYZ space, which specifically models human perception when adding light of different colors, to perform this additive mixing. We find this pipeline produces the clearest visual encodings and avoids both out-of-gamut colors and “washed-out” composite encodings.

Figure 2: Multi-channel Image Data Visualization: (a) Each data channel is (b) assigned a color, (c) pseudocolored by linearly interpolating between black and the color, and (d) additively mixed.

4. Color Palette Optimization

psudo’s core component is a novel palette optimization method for multi-channel images that enables interactive palette recommendations based on the visualized data, the spatial relationships between channels, and perceptual considerations (Fig. 3).

Figure 3: Objective Function Components: L1 and L2 distribute colors perceptually and linguistically, respectively. L3 considers the spatial overlap of channels.

Optimization Components.

We generate color palettes through an optimization method with the following components: First, we consider the perceptual difference between colors in the palette to ensure the colors are well distributed throughout the gamut. Second, we evaluate the semantic distance of color names for all pairs of colors in our palette [HS12], which has been shown to improve graphical perception in colormaps [LH18, RSGP21, RS21]. Third, we consider the color confusion between channels (i.e., when different combinations of channels and their intensities create similar-looking colors). For example, if two highly overlapping channels are pseudocolored red and green, mapping yellow to a third channel creates ambiguity and confusion. As such, we try to optimize for a palette in which distinct linear combinations of channels produce distinct colors. Additionally, through constrained optimization, certain colors can be explicitly included or omitted.

Objective Function.

We integrate these three components into an objective function and optimize palettes through stochastic global optimization. We experimented with the basin-hopping technique [LS87] and dual annealing [XSFG97] to find suitable minima, but found simulated annealing [KGV83] to provide the most consistent convergence across our experiments. We found that, especially for a low number of channels, many roughly equivalent minima exist and that an initial temperature of 15 best explores the search space. Our overall optimization method operates on multi-channel imaging data $I$ with $n$ channels and a color palette $P$ of $n$ colors, and generates an optimal palette $P^*$ by minimizing an objective function $L(P_n, I_n)$.

$P^* = \arg\min_P L(P_n, I_n)$  (1)

More specifically, the objective function, L, is defined as the weighted sum of the following subfunctions, which map directly to the aforementioned three optimization components.

$L(P, I) = \omega_1 L_1(P) + \omega_2 L_2(P) + \omega_3 L_3(P, I)$  (2)
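For concreteness, the following is a minimal sketch of how a palette could be optimized against Eq. 2 with simulated annealing [KGV83]; only the initial temperature of 15 comes from the paper, so the cooling schedule, step size, and iteration count below are assumptions:

```python
import numpy as np

def simulated_annealing(loss, p0, n_iter=5000, t0=15.0, step=0.05, seed=0):
    """Minimize a palette loss (e.g., Eq. 2) by simulated annealing.
    `p0` is the palette flattened to sRGB components in [0, 1]."""
    rng = np.random.default_rng(seed)
    p, e = p0.copy(), loss(p0)
    best, e_best = p.copy(), e
    for i in range(n_iter):
        t = t0 / (1 + i)                     # simple 1/i cooling schedule
        cand = np.clip(p + rng.normal(0.0, step, p.shape), 0.0, 1.0)
        e_cand = loss(cand)
        # Always accept downhill moves; accept uphill moves with
        # Boltzmann probability to escape local minima.
        if e_cand < e or rng.random() < np.exp((e - e_cand) / t):
            p, e = cand, e_cand
            if e < e_best:
                best, e_best = p.copy(), e
    return best
```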

We abstract the pseudocoloring and blending of channels in the objective function with the following functions. $\text{color}$ pseudocolors the $i$-th channel's imaging data $I_i$ by converting an sRGB color $P_i^{(R,G,B)}$ to OKLab, $P_i^{(L,a,b)}$, and then interpolating between black and the color.

$\text{color}(P_i^{(R,G,B)}, I_i) = I_i \cdot \text{OKLab}(P_i^{(R,G,B)})$  (3)

Next, to additively mix image channels $I_n$, we first pseudocolor each channel with the corresponding color from palette $P_n$, convert each channel to CIEXYZ, take the sum across all channels, and convert the result back to OKLab.

$\text{mix}(P_n^{(R,G,B)}, I_n) = \text{OKLab}\left(\sum_{i=1}^{n} \text{CIEXYZ}\left(\text{color}(P_i^{(R,G,B)}, I_i)\right)\right)$  (4)
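As an illustration, the pipeline of Eqs. 3 and 4 can be sketched in a few lines of NumPy. The OKLab matrices below are transcribed from Ottosson's published definition of the space [Ott20] and should be verified against that reference; since black sits at the origin of OKLab, the interpolation of Eq. 3 reduces to scaling the channel's OKLab color by its intensity:

```python
import numpy as np

# XYZ -> LMS (M1) and LMS' -> Lab (M2) matrices from the OKLab definition
# [Ott20]; the inverses are computed numerically rather than hard-coded.
M1 = np.array([[0.8189330101, 0.3618667424, -0.1288597137],
               [0.0329845436, 0.9293118715,  0.0361456387],
               [0.0482003018, 0.2643662691,  0.6338517070]])
M2 = np.array([[0.2104542553,  0.7936177850, -0.0040720468],
               [1.9779984951, -2.4285922050,  0.4505937099],
               [0.0259040371,  0.7827717662, -0.8086757660]])

def xyz_to_oklab(xyz):            # xyz: (..., 3)
    return np.cbrt(xyz @ M1.T) @ M2.T

def oklab_to_xyz(lab):            # lab: (..., 3)
    return (lab @ np.linalg.inv(M2).T) ** 3 @ np.linalg.inv(M1).T

def mix(palette_oklab, channels):
    """Eq. 3: pseudocolor each channel by scaling its OKLab color by the
    intensity (interpolation from black). Eq. 4: sum the pseudocolored
    channels in CIEXYZ and convert the composite back to OKLab."""
    total_xyz = np.zeros(channels[0].shape + (3,))
    for color, intensity in zip(palette_oklab, channels):
        total_xyz += oklab_to_xyz(intensity[..., None] * color)
    return xyz_to_oklab(total_xyz)
```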

Next, we describe the components of our optimization in detail.

4.1. Maximizing Perceptual Differences Within the Palette

L1 aims to maximize the perceptual differences between colors in a palette (Fig. 3). Using Euclidean distance to calculate large color differences is problematic, as perceptually uniform spaces are derived from just-noticeable differences [AATF20]. Inspired by existing approaches that use OKLab or CIEDE2000 to calculate small color distances [YVK*23, GLS17, LFC*21], we maximize the minimum distance between any two colors in the palette, as calculated using Euclidean distance in OKLab.

$L_1(P) = -\min_{1 \le i \ne j \le n} \left\lVert \text{OKLab}(P_i) - \text{OKLab}(P_j) \right\rVert$  (5)
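A minimal sketch of Eq. 5 follows; the negation makes minimizing the loss equivalent to maximizing palette separation:

```python
import numpy as np
from itertools import combinations

def perceptual_loss(palette_oklab):
    """L1 (Eq. 5): negated minimum pairwise Euclidean distance in OKLab."""
    return -min(np.linalg.norm(a - b)
                for a, b in combinations(palette_oklab, 2))
```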

4.2. Maximizing Color Name Distinctiveness

Our ability to distinguish between colors is intrinsically linked to the names we ascribe to each color, the distinctiveness of these names, and their distance from one another [HLX*19, TAW*09, HS12]. This, in turn, impacts our graphical perception of spatial data [RSGP21, Red22]. In L2, we attempt to improve graphical perception by evaluating the distinctiveness of color names in a palette (Fig. 3); when assigning a color palette, we consider the difference between a pair of colors $a, b$ using the name cosine distance $D(a, b)$. This metric is derived from a survey with over 3 million entries [HS12] in which participants identified colors by name, and it quantifies the distance between colors in terms of these responses. We compute the average name distance $D$ between all pairs of the $n$ colors in the palette $P$.

$L_2 = -\frac{1}{\binom{n}{2}} \sum_{i=1}^{n} \sum_{j=1}^{i} D(P_i, P_j)$  (6)

As with L1, this component of our overall loss operates strictly on the palette. Thus, we must consider a third term in order to avoid out-of-gamut colors and prevent overlapping channels from blending to form a color already in the palette.
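Before turning to that third term, here is a sketch of Eq. 6. `name_cosine_distance` is a hypothetical stand-in for the C3 name distance $D(a, b)$ (the pyc3 package from Sec. 6 provides the real model), so the stub below only illustrates the structure of the loss:

```python
from itertools import combinations

def name_cosine_distance(a, b):
    # Hypothetical placeholder for Heer & Stone's D(a, b) in [0, 1]; a real
    # implementation would query the survey-derived C3 model (e.g., pyc3).
    return 0.0 if a == b else 1.0

def name_distance_loss(palette_hex):
    """L2 (Eq. 6): negated mean pairwise name distance, so that minimizing
    the loss maximizes the distinctiveness of color names in the palette."""
    pairs = list(combinations(palette_hex, 2))
    return -sum(name_cosine_distance(a, b) for a, b in pairs) / len(pairs)
```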

4.3. Minimizing Color Blending Confusion

The use of pseudocolored composite images to visualize the spatial distribution of multiple variables faces significant challenges due to human perceptual limitations in discerning blended colors [GG14a] and the possibility that different combinations of channel values can yield identical colors, making it impossible for a viewer to identify which channels produced a given color. Given human trichromatic vision, such confusion and metamerism are inevitable when visualizing more than three channels. However, given the inseparable role this visualization method plays in the work of our domain collaborators, we attempt to reduce this ambiguity in our optimization method (Fig. 3). In the Supplementary Material, we further demonstrate how our approach reduces potential metamerism when compared to baseline colormaps, as well as how our system provides context for ambiguous regions. To quantify this confusion, we consider the input channel intensity values and resulting colors in the composite encoding and use multivariate multiple regression to evaluate how well these intensity values can predict the resulting color. Such a model will perform worse on a dataset and palette that contain multiple combinations of markers yielding similar colors. Additionally, this approach penalizes the presence of out-of-gamut colors: if channel intensity values increase without a corresponding color change, the data is not being effectively encoded. We thus fit a model using ordinary least squares regression to predict a given pixel color in OKLab in terms of the $n$ channel intensity values $I$: $Y_{L,a,b} = \beta_0 + \beta_1 I_1 + \dots + \beta_n I_n$. We then calculate the root mean square error of this model relative to the colors that result from pseudocoloring and mixing the imaging data $I$ with palette $P$.

$L_3 = \text{RMSE}\left(\text{mix}(P, I), \hat{Y}(I)\right)$  (7)

To make this approach scalable, we fit and predict on a 5,000-pixel subsample of the original image, omitting any pixels outside of the established contrast limits of a channel (see Sec 5.1), ensuring there is meaningful marker expression at these points. We use the resulting RMSE score as our confusion metric that indicates how much color blending confusion remains in our optimized palette.
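The confusion metric can be sketched with an ordinary least-squares fit on a pixel subsample, assuming the contrast-limit filtering has already been applied:

```python
import numpy as np

def confusion_loss(channels, composite_oklab, n_samples=5000, seed=0):
    """L3 (Eq. 7): fit Y_{L,a,b} = beta_0 + beta_1*I_1 + ... + beta_n*I_n by
    OLS on a pixel subsample and return the RMSE between the predicted and
    actual composite colors."""
    rng = np.random.default_rng(seed)
    X = np.stack([c.ravel() for c in channels], axis=1)    # (pixels, n)
    Y = composite_oklab.reshape(-1, 3)                     # (pixels, 3)
    idx = rng.choice(len(X), size=min(n_samples, len(X)), replace=False)
    A = np.column_stack([np.ones(len(idx)), X[idx]])       # add intercept
    beta, *_ = np.linalg.lstsq(A, Y[idx], rcond=None)
    return float(np.sqrt(np.mean((A @ beta - Y[idx]) ** 2)))
```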

4.4. Evaluating Our Optimization Method

In addition to the user study and case study detailed in Sec. 7 and Sec. 8, respectively, we also perform small-scale evaluations of the individual components of our optimization method and on the ability of our method to reduce metamerism.

Evaluating Model Components

Based on Reda et al.'s finding that nameability is comparable to perceptual difference in evaluating a colormap [RNAK18], we weigh the contributions of both equally as $\omega_1 = \omega_2 = 1$. Many local minima maximize the perceptual distance between colors; we find that by combining these two loss components, we select one of these minima that does not include two colors with the same name. We weight L3 relative to the average loss across 100 random palettes, $\bar{x}$, such that $\omega_3 = \bar{x}^{-1}$, normalizing in a manner similar to Lu et al. [LFC*21]. In a small-scale ablation study (see Supplementary Material), in which 30 users estimated 30 values in multi-channel biomedical images (following Sec. 7's estimate task), users were more accurate using the full objective function (74.6%) than when we omitted L1, L2, or L3 from the optimization (70.4%, 69.6%, and 71.5%, respectively). While this supports the respective orders of magnitude for $\omega_1$, $\omega_2$, and $\omega_3$, a larger-scale study is needed to quantify the benefit of each component in more depth.

Evaluating Metamerism Reduction

A key motivation for this project was to minimize color confusion, or the presence of metamers, while also providing interactive ways for users to identify potentially problematic regions and discern the relative contribution of different channels in those regions. We use an example dataset of partially overlapping circles (see Supplementary Material) to demonstrate the presence of such artifacts when using a naive RGB blending approach and highlight how psudo’s optimization method and interface help users build a more accurate understanding of the underlying data. We compare this dataset when visualized using sRGB and pseudocolored with RGB primary/secondary colors to the same dataset with an optimized palette and visualized with our approach. On a per-pixel basis, we can then perform non-negative least-squares regression to determine the relative contribution of each color in the palette that could result in a pixel of that color (e.g., 1*R + 1*G = Yellow Pixel). We perform this regression on each possible combination of channels, which, for this four-channel image, leaves 15 solutions per pixel. We then omit all solutions that have near-zero coefficients, as this means that the given color does not meaningfully contribute to the color at that pixel, and additionally omit all solutions whose residual 2-norm exceeds 0.005, as such a solution does not accurately reflect the color shown. Thus, we are left with a list of potential combinations of colors in our palette that could produce the color at that pixel. If every pixel has a single solution, we can roughly say that no metamers exist, whereas if a pixel has multiple solutions, multiple combinations of colors and intensities in the palette could produce its color. In this limited experiment, we find that 15.5% of the pixels in the baseline image are potential metamers, while only 3.5% of the pixels in the optimized image are metamers. For more information about this analysis, please see the Supplementary Material.
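The per-pixel metamer test described above can be sketched with SciPy's non-negative least squares; the residual tolerance mirrors the threshold in the text, while the near-zero coefficient cutoff is an assumption:

```python
from itertools import combinations
import numpy as np
from scipy.optimize import nnls

def n_metamer_solutions(pixel, palette, coef_tol=1e-3, resid_tol=0.005):
    """Count the subsets of palette colors whose non-negative intensity
    combination reproduces this pixel's color; more than one surviving
    solution marks the pixel as a potential metamer."""
    solutions = 0
    for k in range(1, len(palette) + 1):
        for subset in combinations(range(len(palette)), k):
            A = np.array([palette[i] for i in subset]).T   # (3, k)
            coeffs, resid = nnls(A, pixel)
            # Keep only well-fitting solutions with meaningful coefficients.
            if resid <= resid_tol and np.all(coeffs > coef_tol):
                solutions += 1
    return solutions
```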

4.5. Incorporating User Preferences

Based on our domain experts’ need to manually fine-tune palettes or pick certain colors, we support constrained optimization. Across domains, there are expectations that specific channels be pseudocolored in certain colors; in immunofluorescence imaging, DAPI, a biomarker for DNA, is typically colored blue. Users may lock an exact color for a channel, in which case we only optimize the remaining colors. A user may also define a looser constraint by providing a color name for a given channel, in which case our optimization method suggests a color above a user-specified salience threshold. We found an initial salience threshold of 0.6 to balance precision and search-space size for the most common color names. However, a lower threshold is better suited for more obscure colors, prompting the configurability of this value. Salience is calculated from Heer and Stone's model [HS12], where p(c, n) is the probability that a color c is identified by a name n. For example, p(‘#0000FF’, ‘blue’) = 0.817 means that #0000FF is identified as “blue” 81.7% of the time. We further allow users to exclude specific colors from the generated palette; in this case, we omit any color above the specified salience threshold for an excluded name from the palette.
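A sketch of how these name constraints could prune the optimizer's search space follows; `name_probability` is a hypothetical stand-in for the model's p(c, n) lookup, and the only value in its stub table is the example from the text:

```python
def name_probability(color_hex, name):
    # Hypothetical placeholder for p(c, n) from Heer & Stone's model [HS12].
    return {("#0000FF", "blue"): 0.817}.get((color_hex, name), 0.0)

def candidate_colors(colors, required_name=None, excluded_names=(),
                     threshold=0.6):
    """Keep only colors that satisfy the user's name constraints at the
    given salience threshold (default 0.6, per Sec. 4.5)."""
    keep = []
    for c in colors:
        if required_name and name_probability(c, required_name) < threshold:
            continue  # required name not salient enough for this color
        if any(name_probability(c, n) >= threshold for n in excluded_names):
            continue  # color reads too strongly as an excluded name
        keep.append(c)
    return keep
```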

5. psudo: Palette Creation and Visualization System

Palette creation is an iterative, user-driven process. In psudo, users can interactively explore their data, create optimized color palettes, and evaluate and modify palettes in a web-based system (Fig. 4). Users can explore the current palette, specify constraints, and get feedback on palette details such as color confusion and out-of-gamut colors in a lens-based focus-and-context approach.

Figure 4: psudo visualizing lung cancer tissue [SSY*22]. We display (a) the combined visualization and (b) each channel in isolation. (c) Users can change the colors for each channel. They can lock the colors of specific channels and specify color names (d) to generate optimal palettes based on our optimization method. (e) An interactive lens provides on-demand feedback on channel overlap and lets users change the optimization scope. (f) Past palettes are displayed, allowing users to roll back to a previous version and create a palette iteratively. Linked views of (g) palette quality and (h) channel marker expression allow users to compare palettes and explore their data.

5.1. Image Exploration

In psudo’s main view (Fig. 4, a), users can zoom, pan, and toggle channels to explore large multiplexed images. We use a perceptually-based visualization pipeline for blending different image channels into a single view, as described in Sec. 3.

Individual Channel Inspection.

Each channel displayed in the main viewer is also visualized in isolation (Fig. 4, b), allowing a user to investigate features in these channels and their presence in the composite visualization. The single-channel views can be linked to the view state of the main view, or users can zoom and pan within them independently.

Setting Contrast Limits.

When generating fluorescence images, cameras capture data with a much higher dynamic range than is perceptible to humans, and meaningful expression of a particular biomarker often lies within a much smaller range of values [Joh12]. Therefore, experts typically first set contrast limits for each channel [KBJ*20]. Intensity values are clamped to the set minimum and maximum values and linearly stretched across this range to use the bit-depth of the dataset effectively. Users can set contrast limits in two ways: either manually or by using an automatic approach specifically designed for CycIF data. For automatic contrast assignment, we exploit the fact that pixel intensity values in CycIF data are generally log-normally distributed. Hence, by fitting a tri-modal Gaussian mixture model, the meaningful dynamic range of a channel is captured by the greatest of the three Gaussians. When image channels are made visible, we set the contrast limits using this method with respect to the global data distribution. Moreover, users can also set contrast limits locally when focusing on local features in a region of interest (see Sec. 5.3).
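A sketch of this automatic contrast-limit heuristic using scikit-learn follows; how the final limits are derived from the selected Gaussian is not specified in the text, so the mean plus/minus two standard deviations below is an assumption:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def auto_contrast_limits(channel, eps=1e-6):
    """Fit a tri-modal GMM to log-transformed intensities (CycIF intensities
    are roughly log-normal) and take the component with the greatest mean as
    the meaningful dynamic range."""
    x = np.log(channel[channel > 0].astype(float) + eps).reshape(-1, 1)
    gmm = GaussianMixture(n_components=3, random_state=0).fit(x)
    top = int(np.argmax(gmm.means_.ravel()))
    mu = gmm.means_[top, 0]
    sigma = float(np.sqrt(gmm.covariances_[top].ravel()[0]))
    # Assumed mapping from the chosen Gaussian to limits: mu +/- 2 sigma,
    # mapped back from log space.
    return np.exp(mu - 2 * sigma), np.exp(mu + 2 * sigma)
```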

5.2. Dynamic Palette Generation and Refinement

We integrate our palette assignment method into psudo to enable iterative palette refinement within an interactive visualization tool.

Baseline Optimization.

All palettes are optimized relative to the set contrast limits. When loading a dataset, we automatically assign a palette with our optimization method. When additional channels are toggled on (Fig. 4, c), we lock the colors already assigned and assign a new color relative to these existing channels, preventing the palette from switching unexpectedly while ensuring new channels are visualized effectively. Alternatively, users can generate an entirely new palette using unconstrained optimization (Fig. 4, d).

Accommodating User Preferences.

We allow the palette to be further refined based on user input; locking a color to a channel prevents that color from being changed, and the other colors in the palette are optimized relative to locked values (see Sec 4.5). Users can specify a color name if they do not require a specific shade of that color. Our domain collaborators had different visualization preferences. For instance, some preferred to never use white, while others preferred to reserve certain colors for specific channels. Thus, if users want to omit colors from the generated palette, they can add those names to the excluded color list (Fig. 4, d).

Feedback on Optimization Results.

We visualize the three individual components of our objective function and the overall loss (L1, L2, L3, L) as gauge charts that update whenever the palette changes (Fig. 4, g). This helps deter users from creating suboptimal palettes and provides a basis for comparing multiple palettes. In addition, as more channels are visualized simultaneously, this visualization provides context as to how each subsequent channel impacts perception of the composite visualization. To avoid confusion, any visualizations that do not directly show or apply the color palette use greyscale.

Iterative Palette Refinement.

In psudo, users can iteratively refine palettes tailored to their specific requirements. Users can test constraint configurations until a satisfactory outcome is achieved; they can begin with unconstrained palette optimization and, by progressively refining constraints, enhance visual quality and accommodate personal preferences. Alternatively, they may begin with a pre-existing palette and improve it. psudo displays a history of previously generated palettes (see Fig. 4, f), which can be re-activated by mouse click. By juxtaposing the composite visual encoding with the explicit color assignments for each channel, users can quickly assess the quality of individual optimization results.

5.3. Focus & Context Interaction

For our collaborators, effectively visualizing regions of interest within a tissue is often more important than visualizing the entire image. Moreover, these regions of interest often contain markers that were not necessarily expressed throughout the image, indicating, e.g., the presence of a rare cell type or specific immune reaction [NMV*22]. We thus extend our previous lensing approach [JKW*22] to allow users to inspect these regions of interest and perform local palette optimization.

Changing Optimization Scope.

Highlighting global versus local features in the data often requires different color palettes [Red23]. In our system, users can thus change the scope of optimization by zooming and panning throughout the image and focusing an interactive lens on their intended region. Users can re-compute contrast limits and optimize a new palette relative to the spatial expression of channels within the lens (Fig. 4, d). Additionally, the lens can be used to evaluate a palette within a given region; as users navigate the image, the loss gauges update to reflect the color-blending confusion within the lens (see Fig. 5, a).

Figure 5: Interactive Lensing: (a) Users can investigate marker expression and palette quality by dragging a lens over an ROI. Individual channels or combinations of channels can be analyzed in isolation (b, left), and out-of-gamut pixels and co-expression can be shown in the overlap view (b, right).

Close Inspection of the Composite Visualization.

The interactive lens provides two additional features that allow users to inspect the overlap of channels in their data. First, the lens can be used to look at individual channels or user-defined combinations of channels (Fig. 5, b) so that the user can compare these channels to the overall visualization. Using a slider, the user can fade between the two views to understand how these channels are reflected in the composite encoding. Second, the lens shows the overlap of channels and the presence of out-of-gamut colors through a greyscale overlay in which all out-of-gamut pixels are displayed in white (Fig. 5, b). These two features let users explore regions of potential confusion when generating a color palette or identify regions in which the marker expression or overlap differs significantly from the overall image. A linked density plot (Fig. 4, h) visualizes the distribution of marker values within the lens or, if the lens is not enabled, globally.

6. Implementation

To support gigapixel multi-channel image data and a wide array of users, we emphasize scalability and web-based technologies in our implementation. psudo uses a client-only architecture that relies heavily on Rust compiled to WebAssembly [KT22, HRS*17] to perform palette optimization and evaluation on a user’s machine. We build on Viv [MGP*22] and Vitessce [KGM*21] to visualize the data and use custom WebGL shaders to perform the color space conversions necessary to support our visualization pipeline. Data can be loaded locally or directly from a cloud bucket or URL. Additionally, we authored two Python packages on PyPI: colorutil implements optimized and vectorized color space conversions, and pyc3 is a scalable Python implementation of Heer and Stone's [HS12] Categorical Color Components (C3) library. We also authored a Rust implementation of C3 (rust-c3), which can be compiled to WebAssembly and run in the browser. We will make this code open-source upon acceptance of the paper and offer a public demo of psudo for users across domains to use with their data at https://psudo.xyz.

7. User Study

We ran a within-subjects user study to investigate the effectiveness of our color optimization approach. We asked participants to perform graphical perception tasks in two conditions. The first condition was the baseline, where the composite visual encoding is pseudocolored and mixed using sRGB primary and secondary colors, adding orange and white when displaying eight channels, which mirrors the palettes used in popular multi-channel image viewers [ABM*12, MGP*22]. The second condition uses our palette optimization and visualization method. We additionally vary the number of channels to evaluate how this impacts user performance.

7.1. Stimuli

We use cropped regions from a multi-channel CycIF dataset [LIW*18] (40 image channels, 25,808 × 36,857 pixels). The data contains notable regions of tumor-immune interaction and was previously used to study the progression of melanoma [NMV*22]. We crop regions of this image at different levels of resolution to avoid global/local bias and use only channels with significant spatial expression within the cropped region.

7.2. Participants

We ran our crowd-sourced study on Prolific [PS18]. We recruited and prescreened participants through Prolific, selecting those who self-identified as proficient in English, without any visual impairments, and residing in the United States. A desktop computer was required to participate. However, as further discussed in Sec. 9, the inherent variance in screen resolution and lighting does influence results, as does the inherent variance in visual literacy among participants. Participants were compensated at a rate of $20 USD per hour. We excluded participants whose accuracy or time across all tasks fell outside of two standard deviations of the mean accuracy and time across all experiments. We recruited 170 participants and omitted 20 individuals based on our exclusion criteria. Participants took an average of 10.70 seconds per task and, accounting for the tutorial, spent an average of 650.74 seconds on the overall study. Participants identified as 57% female, 41.8% male, and 1.2% other or preferring not to say. They were, on average, 37 years old.

7.3. Tasks

Our study included three tasks motivated by our collaborations with cancer biologists [KBJ*20, JKW*22, WKN*23, GBR*23]. We build upon existing studies that evaluate the visualization of spatial data [Mun15, RNAK18, QR22] and also draw on Munzner’s work [Mun15] on task abstraction for visualization. We specifically focus on the lower-level search and query tasks.

Estimate.

Participants are asked to estimate the value of an individual channel at a specific point in the visualization (Fig. 7). Participants must query the visualization and identify a value at that point. Domain experts perform such value estimation when identifying cell types and morphological features within tissue and while validating computational approaches. This task is also motivated by existing work evaluating inference in visualizations [MKA*11, RNAK18, HDHA10, RSGP21]; we adapt a task proposed by Reda et al. [RNAK18] in which users estimate the value of a channel at a specific point in the visualization. We evaluate accuracy in this task as the absolute distance from the guess to the correct answer relative to the value range. In all tasks, users may not toggle channels and must rely on the composite visualization.

Figure 7: User Study Tasks: Tasks inspired by domain-specific analysis to evaluate perception of multi-channel spatial data.

Compare.

In the second task, participants compare channels in the visualization [RNAK18], which falls under the query action in Munzner’s taxonomy [Mun15]. We superimpose a box on the visualization and ask participants to choose which of the two channels (out of all displayed channels) has a higher average value in the box (Fig. 7). This task mirrors the analysis of our collaborators; cancer biologists compare the relative presence of biomarkers to quantify immune response and tumor growth.

Search.

Participants are asked to search for and identify more complex features and patterns in the visualization. This task falls under the search action category in Munzner’s taxonomy [Mun15]. It is motivated by the need to locate a specific feature present in an individual channel in the composite visualization, such as identifying a notable spatial neighborhood pattern within cancer tissue [WKN*23]. Pattern identification is a key component of human attention and perception [HP07] and related research proposes similar tasks [RNAK18, HDHA10]. In this task, we show participants a small region from an individual channel and ask them to identify the same region in the composite visualization (see Fig. 7). We calculate participant accuracy as the distance from the selected region to the target region, normalized relative to the image size.

7.4. Hypotheses

We have the following hypotheses for this study. These hypotheses and the experimental setup were preregistered with OSF [War23].

  • H1: Participants will have higher accuracy on the graphical perception tasks using psudo when compared to the standard approach (sRGB primary and secondary colors, no optimization).

  • H2: Participant accuracy will decrease as the number of channels in the stimuli visualization increases.

7.5. Procedure

Participants first had to complete a tutorial, which included examples of the three tasks that had to be solved correctly to continue. Text pop-ups explained incorrect choices. Next, participants were asked to complete 20 random tasks, where the dataset, number of channels, data, and visualization approaches were all randomly assigned. Randomly assigning tasks implies the number of times a user completes a given task may vary. While we found no statistically significant correlation between user performance and task count, further user studies may benefit from keeping this count constant to eliminate any anchoring (see Supplementary Material for more information). Each task should take no more than 30 seconds, and a countdown clock encourages users to follow this timeline.

Baseline.

Across our experiments, we compare our approach to a baseline, where channels are pseudocolored and combined in sRGB and palettes are composed of the sRGB primary colors (two channels), the sRGB primary and secondary colors (four and six channels), or the primary and secondary colors plus white and orange (eight channels). We randomly assigned these palettes to the channels in a dataset.

7.6. Results

Overall, we find that our approach outperforms the baseline when four, six, and eight channels are displayed for the estimate and compare tasks. Our approach performs similarly to the baseline when searching or when visualizing only two channels. We hypothesize that when only two channels are visualized, even the baseline approach adequately distinguishes channels. Furthermore, the similar performance in the search task may be explained by the constrained search space users had to evaluate, as they were asked to search a small region of the overall image, which is a simpler use case than many real-world tasks. Moreover, the conditions that help users identify global features vs. local features differ [Red23], further motivating a larger-scale study of behavior at different scales. We summarize our results in Fig. 8 and investigate each task in isolation as well as the overall performance across all tasks, including search, as we find no evidence that our approach is worse than the status quo. We provide our raw data and report in further detail on the parameters of our statistical models in the Supplementary Material.

Figure 8: User Study Results: Our within-subject user study (N=150) indicates that users perform better using psudo as a color palette assignment method when compared to the baseline when estimating and comparing values. Error bands represent 95% confidence intervals.

Estimate.

Participants performed similarly at quantity estimation for two channels, but our approach outperformed the baseline at four and six channels (Fig. 8). To further evaluate the impact of the approach (psudo vs. baseline) on task performance while accounting for the number of channels, we use a two-way analysis of variance (ANOVA). As the approach and number of channels are randomized, we do not expect interaction between these variables. We reject the null hypothesis, finding that both the approach (F(1, 1168) = 9.01, p < 0.0001) and the number of channels (F(1, 1168) = 27.6, p < 0.0001) are significant. The F-test on the underlying multiple-regression model rejects the null hypothesis, showing that this model is significant (F = 18.83, p < 0.000001), while the positive coefficient on the approach (0.0397) and the negative coefficient on the number of channels (−0.0153) suggest that our approach improves graphical perception (H1) and that the number of channels negatively impacts both approaches (H2).

Search.

Participant performance on this task was nearly identical between our approach and the baseline (Fig. 8). We again perform an ANOVA, in which we find that the number of channels does impact performance (F(1, 694) = 14.15, p < 0.0001). However, we find that the approach does not meaningfully impact performance (p > 0.0001). This result thus supports H2 but not H1. In these experiments, palettes were optimized for the entire cropped section as opposed to the ROI directly around the pattern. Further analysis could investigate how modifying palettes to specifically emphasize these local features impacts overall performance.

Compare.

Our approach outperforms the baseline, with the largest difference at four channels. Differences at six and eight channels, meanwhile, are not outside of a 95% confidence interval (Fig. 8). As such, without further analysis, we cannot project these results out to a larger number of channels. By performing an ANOVA, we find that both the approach (F(1, 1195) = 24.61, p < 0.0001) and the number of channels (F(1, 1195) = 39.37, p < 0.0001) are significant factors. The F-test on the underlying multiple-regression model rejects the null hypothesis, showing that the model is significant (F = 32.59, p < 0.005) in predicting accuracy, with a negative coefficient for the number of channels (−0.0372, p < 0.005), supporting H2, and a positive coefficient (0.1316, p < 0.05) for our approach, supporting H1.

Overall Performance.

Finally, we analyze performance across all tasks, which we find decreases as the number of channels increases (H2), and find that psudo's pseudocoloring and palette assignment technique yields better participant accuracy than the baseline (Fig. 8). However, the improvement is most notable at four channels and fails to clear a 95% confidence interval for six and eight channels. We additionally notice that performance decreases at an increasing rate as the number of channels increases. We are interested in building on this study to further quantify this behavior by testing across a wider range of channel counts. Following the same statistical approach taken with each of the individual tasks, we use an ANOVA to determine whether each variable significantly impacts normalized performance and examine the model to quantify this impact. This test indicates that the approach and the number of channels remain significant across tasks (F(1, 3061) = 30.91, p < 0.0001 and F(1, 3061) = 69.04, p < 0.0001, respectively), with, again, a negative coefficient for the number of channels (−0.0225, p < 0.0001) and a positive coefficient for our approach (0.0675, p < 0.0001), further demonstrating that participants do benefit from psudo’s method of palette assignment and multi-channel visualization (H1).
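As a sketch of this analysis (assuming statsmodels and a hypothetical results.csv with one row per completed task; the column names are also assumptions), the per-task ANOVA and coefficient inspection might look like:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical file and columns: normalized accuracy, condition
# (psudo vs. baseline), and channel count per completed task.
df = pd.read_csv("results.csv")
model = ols("accuracy ~ C(approach) + n_channels", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects of approach, channels
print(model.params)                     # coefficient signs, as in Sec. 7.6
```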

8. Case Study: Melanoma Analysis

We demonstrate the utility of our system through a case study with a cancer biologist from Harvard Medical School, following the Pair Analytics [AHKGF11] model. The expert spent one hour investigating a 40-channel section of cancer tissue, 25,808 × 36,857 pixels in size (~ 17 × 24mm). They were interested in analyzing this tissue as they had previously identified regions of interaction between immune cells and the tumor and wanted to understand how these regions correlated with the stages of melanoma progression.

The biologist first investigated a region containing a significant immune population directly adjacent to tumor cells. They were specifically interested in visualizing three channels, each of which corresponds to a different cell population: SOX10 (tumor), CD3 (immune), and CD11C (macrophages). They indicated that blue is generally used to visualize tumor cells and thus selected that color name in the psudo interface. They had no requirements for the other two channels and wanted to draw a stark contrast to the presence of these three populations. Finally, they changed the optimization scope from global to local using the lens (see Sec. 5.3) to accentuate the spatial patterns within this region and generated an optimal color palette. Fig. 9, a shows this region as visualized with the optimal palette. The biologist emphasized that this visualization supports their hypothesis that the presence of macrophages may be suppressing the immune cells as they attempt to combat the tumor. The biologist then investigated this suppression in greater detail and zoomed into a smaller region on this boundary, as emphasized by Fig. 9, a. To show immune suppression, the biologist was interested in visualizing T cells (immune cells that combat tumors) and their disparate states; these states indicate how “exhausted” the T cells are, which impacts their ability to fight the tumor. We kept the SOX10 marker on to show the tumor and additionally added four channels, TIM3, PD1, LAG3, and C8A, which, in this order, visualize low to high levels of T cell exhaustion. The biologist identified that the more exhausted states generally lie closer to the tumor (see Fig. 9). To compare our approach to the sRGB baseline, we show the same data and contrast limits using a palette of primary and secondary colors, as recommended by the cancer biologist’s typical image viewer [ABM*12], in Fig. 9, c. Here, the channels are pseudocolored and blended in sRGB. Distinctions between variables are more apparent with psudo, whereas the sRGB version has less contrast and is harder to discern.

Figure 9: (a) Melanoma tissue visualized with psudo, showing disparate cell populations; a region containing varying levels of immune response, visualized with (b) psudo and (c) the sRGB baseline.

Overall, the biologist stressed the tedious and ad-hoc nature of their previous palette assignment method and emphasized that the degree of objectivity that psudo offers is compelling. Additionally, they had several suggestions to enhance psudo. First, they indicated that sometimes they want overlapping channels to have nearly identical colors when these channels correspond to highly correlated variables and when they want to visualize the channels in tandem without drawing distinctions. Next, when investigating different subpopulations, it would be helpful to specify hierarchical classifications for channels; for instance, they would like to assign “warm” colors to a certain set of channels and “cold” colors to another set without specifying the individual color for each channel. We will consider these suggestions in future work.

9. Limitations and Discussion

Accessibility.

Despite their prevalence, we do not explicitly address visual impairments as part of this study. To our knowledge, there is no perceptually linear color space that models red-green color blindness, though such a space could easily be integrated into our methods. However, existing research has devised ways to simulate red-green color blindness through the transformation of pixel values, which we could use to evaluate our approach further.

Device Variation.

Factors such as lighting, viewing angle, hardware, and device color profiles influence a user’s ability to perceive data on a screen. We attempt to mitigate the impact of these factors by evaluating on a broad swath of users. These results may differ from a more controlled user base (e.g., pathologists using highly specialized hardware). Thus, a wider-scale survey that collects data relating to these viewing factors would better demonstrate generalizability while providing insights into specific use cases.

Data Variance.

We evaluated our approach using biomedical imaging data. However, the spatial properties of this data may differ from data from other domains and modalities. Existing approaches have evaluated the impact of spatial frequency on the perception of single-channel data [RNAK18]. Further research should investigate how the correlation between variables impacts graphical perception for multi-channel data. We plan to evaluate our approach with data from other domains (e.g., environmental science, geography, astronomy) to better understand how data from each field differs and how a holistic approach can best accommodate these discrepancies.

Data Encoding with Blended Colors.

Encoding multiple image channels as different colors and blending them into a combined view is not ideal from a perceptual point of view. Color is a non-separable channel for humans, often resulting in difficulties decoding the individual channel values from the blended visualization [GG14a]. Therefore, blending of data channels should only be used for spatial scientific data, such as measured multi-channel image data, where it is vital to show each pixel’s value in its correct x and y position. Non-spatial high-dimensional data should be handled with alternative encodings, such as polar coordinate plots, small multiples, or dimensionality reduction techniques. Furthermore, users should always have the option to toggle the visibility of individual channels to look at their data one channel at a time.

10. Conclusions and Future Work

The results of our studies suggest that continued development and widespread adoption of our approach could profoundly impact the effectiveness of scientific visualization across domains, allowing for more precise analysis and more effective communication of findings. Additionally, by introducing a degree of objectivity into the process of visualization and data exploration, we hope to lower the barrier to entry for those looking to investigate and understand complex, multi-channel data while enabling experts to better quantify their preferences and workflows when visualizing such data. We see several avenues of future research that further these goals.

Incorporating 3D Data.

Many of the considerations and design decisions behind psudo are relevant to optimally visualizing higher-dimensional data. Specifically, optimizing color assignment when visualizing multi-channel volumetric data adds another level of complexity to our existing approach, as we must also consider the spatial overlap of channels across the z dimension. Our biomedical collaborators have begun collecting such 3D fluorescence imaging data, and we plan to integrate it into our approach and adapt our methods accordingly.
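One plausible volumetric formulation, stated here only as a sketch of the idea rather than a committed design, scores pairwise channel overlap as the cosine similarity of the flattened intensity volumes, so that co-localization anywhere along x, y, or z contributes:

```python
import numpy as np

def pairwise_overlap_3d(volume):
    """volume: (channels, depth, height, width) intensities in [0, 1].
    Returns a (channels, channels) matrix where entry (i, j) is the
    cosine similarity of channels i and j over the whole volume."""
    flat = volume.reshape(volume.shape[0], -1)
    overlap = flat @ flat.T                       # sum of products per pair
    norms = np.linalg.norm(flat, axis=1)
    return overlap / np.maximum(np.outer(norms, norms), 1e-12)  # avoid /0
```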

High Impact Deployment.

Recent studies find that the display systems pathologists use to view tissues affect their perception of clinically relevant features and thus influence clinical performance [TJA*14]. Our results suggest that palette assignment and pseudocoloring may be similarly significant. Thus, by continuing to develop our approach and integrating our methods with existing systems for visualizing biomedical imaging data [MGP*22, SLE*22, ABM*12] and for presenting and narrating these data [RCH*22], this research may help experts identify and disseminate key findings and ultimately improve clinical outcomes.

Supplementary Material

media-1.pdf (1.8 MB, PDF)

Figure 6: User Study Stimuli. Cropped regions of cancer tissue.

CCS Concepts.

  • Human-centered computing → Visualization systems and tools;

Acknowledgments

This work is supported by the Ludwig Center at Harvard Medical School and NIH grant 1U01CA284207.

References

  • [AATF20] Abasi S., Amani Tehran M., Fairchild M. D.: Distance metrics for very large color differences. Color Research & Application 45, 2 (2020), 208–223. doi: 10.1002/col.22451.
  • [ABM*12] Allan C., Burel J.-M., Moore J., Blackburn C., Linkert M., Loynton S., Macdonald D., Moore W. J., Neves C., Patterson A., et al.: OMERO: flexible, model-driven data management for experimental biology. Nature Methods 9, 3 (2012), 245–253.
  • [AHKGF11] Arias-Hernandez R., Kaastra L. T., Green T. M., Fisher B.: Pair Analytics: Capturing Reasoning Processes in Collaborative Visual Analytics. In 2011 44th Hawaii International Conference on System Sciences (2011), pp. 1–10. doi: 10.1109/HICSS.2011.339.
  • [AMCS96] Anderson M., Motta R., Chandrasekar S., Stokes M.: Proposal for a standard default color space for the Internet—sRGB. In Proceedings of the IS&T/SID Fourth Color Imaging Conference: Color Science, Systems and Applications (1996), Society for Imaging Science & Technology, pp. 238–246.
  • [AMR04] Abràmoff M. D., Magalhães P. J., Ram S. J.: Image processing with ImageJ. Biophotonics International 11, 7 (2004), 36–42.
  • [AWK22] Andreou C., Weissleder R., Kircher M. F.: Multiplexed imaging in oncology. Nature Biomedical Engineering 6, 5 (May 2022), 527–540. doi: 10.1038/s41551-022-00891-5.
  • [Bre94] Brewer C. A.: Color Use Guidelines for Mapping and Visualization. In Visualization in Modern Cartography, MacEachren A. M., Taylor D. R. F. (Eds.), vol. 2 of Modern Cartography Series. Academic Press, 1994, pp. 123–147. doi: 10.1016/B978-0-08-042415-6.50014-4.
  • [Bri23] Briggs D. J. C.: The elements of colour II: the attributes of perceived colour. Journal of the International Colour Association 32 (2023).
  • [BTS*18] Bujack R., Turton T. L., Samsel F., Ware C., Rogers D. H., Ahrens J.: The Good, the Bad, and the Ugly: A Theoretical Framework for the Assessment of Continuous Colormaps. IEEE Transactions on Visualization and Computer Graphics 24, 1 (2018), 923–933. doi: 10.1109/TVCG.2017.2743978.
  • [Col04] CIE: Colorimetry, 3rd ed. CIE Publication 15:2004. CIE Central Bureau, Vienna, 2004.
  • [CTE*94] Chalfie M., Tu Y., Euskirchen G., Ward W. W., Prasher D. C.: Green fluorescent protein as a marker for gene expression. Science 263, 5148 (1994), 802–805.
  • [CWM09] Chuang J., Weiskopf D., Möller T.: Hue-Preserving Color Blending. IEEE Transactions on Visualization and Computer Graphics 15, 6 (Nov. 2009), 1275–1282. doi: 10.1109/TVCG.2009.150.
  • [CXM19] Cheng S., Xu W., Mueller K.: ColorMapND: A Data-Driven Approach and Tool for Mapping Multivariate Data to Color. IEEE Transactions on Visualization and Computer Graphics 25, 2 (2019), 1361–1377. doi: 10.1109/TVCG.2018.2808489.
  • [DKM11] Dunn K. W., Kamocka M. M., McDonald J. H.: A practical guide to evaluating colocalization in biological microscopy. American Journal of Physiology: Cell Physiology 300, 4 (Apr. 2011), C723–C742. doi: 10.1152/ajpcell.00462.2010.
  • [FWD*17] Fang H., Walton S., Delahaye E., Harris J., Storchak D. A., Chen M.: Categorical Colormap Optimization with Visualization Case Studies. IEEE Transactions on Visualization and Computer Graphics 23, 1 (Jan. 2017), 871–880. doi: 10.1109/TVCG.2016.2599214.
  • [GBR*23] Gaglia G., Burger M. L., Ritch C. C., Rammos D., Dai Y., Crossland G. E., Tavana S. Z., Warchol S., Jaeger A. M., Naranjo S., et al.: Lymphocyte networks are dynamic cellular communities in the immunoregulatory landscape of lung adenocarcinoma. Cancer Cell 41, 5 (2023), 871–886.
  • [GC12] Sharma G., Rodríguez-Pardo C. E.: The dark side of CIELAB. In Proceedings of SPIE, vol. 8292 (2012). doi: 10.1117/12.909960.
  • [GG14a] Gama S., Gonçalves D.: Studying Color Blending Perception for Data Visualization. In EuroVis Short Papers (2014), The Eurographics Association, pp. 121–125. doi: 10.2312/eurovisshort.20141168.
  • [GG14b] Gama S., Gonçalves D.: Studying the perception of color components’ relative amounts in blended colors. In Proceedings of the 8th Nordic Conference on Human-Computer Interaction (NordiCHI ’14) (2014), ACM, pp. 1015–1018. doi: 10.1145/2639189.2670264.
  • [GLS17] Gramazio C. C., Laidlaw D. H., Schloss K. B.: Colorgorical: creating discriminable and preferable color palettes for information visualization. IEEE Transactions on Visualization and Computer Graphics (2017).
  • [HB03] Harrower M., Brewer C. A.: ColorBrewer.org: An Online Tool for Selecting Colour Schemes for Maps. The Cartographic Journal 40, 1 (June 2003), 27–37. doi: 10.1179/000870403235002042.
  • [HCLA17] Heinzmann K., Carter L. M., Lewis J. S., Aboagye E. O.: Multiplexed imaging for diagnosis and therapy. Nature Biomedical Engineering 1, 9 (Sept. 2017), 697–713. doi: 10.1038/s41551-017-0131-8.
  • [HDHA10] Wickham H., Cook D., Hofmann H., Buja A.: Graphical inference for infovis. IEEE Transactions on Visualization and Computer Graphics 16, 6 (Dec. 2010), 973–979. doi: 10.1109/TVCG.2010.161.
  • [HLX*19] He H., Li J., Xiao Q., Jiang S., Yang Y., Zhi S.: Language and Color Perception: Evidence From Mongolian and Chinese Speakers. Frontiers in Psychology 10 (2019). doi: 10.3389/fpsyg.2019.00551.
  • [HP07] Huang L., Pashler H.: A Boolean map theory of visual attention. Psychological Review 114, 3 (July 2007), 599–631. doi: 10.1037/0033-295X.114.3.599.
  • [HRS*17] Haas A., Rossberg A., Schuff D. L., Titzer B. L., Holman M., Gohman D., Wagner L., Zakai A., Bastien J.: Bringing the Web up to Speed with WebAssembly. SIGPLAN Notices 52, 6 (June 2017), 185–200. doi: 10.1145/3140587.3062363.
  • [HS12] Heer J., Stone M.: Color Naming Models for Color Selection, Image Editing and Palette Design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’12) (2012), ACM, pp. 1007–1016. doi: 10.1145/2207676.2208547.
  • [HSKIH07] Hagh-Shenas H., Kim S., Interrante V., Healey C.: Weaving Versus Blending: a quantitative assessment of the information carrying capacities of two alternative methods for conveying multivariate data with color. IEEE Transactions on Visualization and Computer Graphics 13, 6 (2007), 1270–1277. doi: 10.1109/TVCG.2007.70623.
  • [Hun07] Hunter J. D.: Matplotlib: A 2D graphics environment. Computing in Science & Engineering 9, 3 (2007), 90–95. doi: 10.1109/MCSE.2007.55.
  • [JKW*22] Jessup J., Krueger R., Warchol S., Hoffer J., Muhlich J., Ritch C. C., Gaglia G., Coy S., Chen Y.-A., Lin J.-R., Santagata S., Sorger P. K., Pfister H.: Scope2Screen: Focus+Context Techniques for Pathology Tumor Assessment in Multivariate Image Data. IEEE Transactions on Visualization and Computer Graphics 28, 1 (2022), 259–269. doi: 10.1109/TVCG.2021.3114786.
  • [Joh12] Johnson J.: Not seeing is not believing: improving the visibility of your fluorescence images. Molecular Biology of the Cell 23, 5 (2012), 754–757. doi: 10.1091/mbc.e11-09-0824.
  • [KBJ*20] Krueger R., Beyer J., Jang W.-D., Kim N. W., Sokolov A., Sorger P. K., Pfister H.: Facetto: Combining Unsupervised and Supervised Learning for Hierarchical Phenotype Analysis in Multi-Channel Image Data. IEEE Transactions on Visualization and Computer Graphics 26, 1 (Jan. 2020), 227–237. doi: 10.1109/TVCG.2019.2934547.
  • [KGM*21] Keller M. S., Gold I., McCallum C., Manz T., Kharchenko P. V., Gehlenborg N.: Vitessce: a framework for integrative visualization of multi-modal and spatially-resolved single-cell data. OSF Preprints, Oct. 2021. doi: 10.31219/osf.io/y8thv.
  • [KGV83] Kirkpatrick S., Gelatt C. D., Vecchi M. P.: Optimization by Simulated Annealing. Science 220, 4598 (1983), 671–680. doi: 10.1126/science.220.4598.671.
  • [KGZ*12] Kühne L., Giesen J., Zhang Z., Ha S., Mueller K.: A Data-Driven Approach to Hue-Preserving Color-Blending. IEEE Transactions on Visualization and Computer Graphics 18, 12 (Dec. 2012), 2122–2129. doi: 10.1109/TVCG.2012.186.
  • [KT22] Kyriakou K.-I. D., Tselikas N. D.: Complementing JavaScript in High-Performance Node.js and Web Applications with Rust and WebAssembly. Electronics 11, 19 (2022). doi: 10.3390/electronics11193217.
  • [KZX*23] Kumar A., Zhang X., Xin H. L., Yan H., Huang X., Xu W., Mueller K.: RadVolViz: An Information Display-Inspired Transfer Function Editor for Multivariate Volume Visualization. IEEE Transactions on Visualization and Computer Graphics (Apr. 2023). doi: 10.1109/TVCG.2023.3263856.
  • [Lev21] Levien R.: An interactive review of Oklab, Jan. 2021. URL: https://raphlinus.github.io/color/2021/01/18/oklab-critique.html.
  • [LFC*21] Lu K., Feng M., Chen X., Sedlmair M., Deussen O., Lischinski D., Cheng Z., Wang Y.: Palettailor: Discriminable Colorization for Categorical Data. IEEE Transactions on Visualization and Computer Graphics 27, 2 (2021), 475–484. doi: 10.1109/TVCG.2020.3030406.
  • [LH92] Levkowitz H., Herman G. T.: The design and evaluation of color scales for image data. IEEE Computer Graphics and Applications 12, 1 (1992), 72–80.
  • [LH18] Liu Y., Heer J.: Somewhere Over the Rainbow: An Empirical Assessment of Quantitative Colormaps. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18) (2018), ACM, pp. 1–12. doi: 10.1145/3173574.3174172.
  • [Lil23] Lilley C.: Color on the Web. In Fundamentals and Applications of Colour Engineering. Wiley, Oct. 2023, pp. 271–291. doi: 10.1002/9781119827214.ch16.
  • [LIW*18] Lin J.-R., Izar B., Wang S., Yapp C., Mei S., Shah P. M., Santagata S., Sorger P. K.: Highly multiplexed immunofluorescence imaging of human tissues and tumors using t-CyCIF and conventional optical microscopes. eLife 7 (2018).
  • [LS87] Li Z., Scheraga H. A.: Monte Carlo-minimization approach to the multiple-minima problem in protein folding. Proceedings of the National Academy of Sciences 84, 19 (1987), 6611–6615. doi: 10.1073/pnas.84.19.6611.
  • [LSP03] Lippincott-Schwartz J., Patterson G. H.: Development and use of fluorescent protein markers in living cells. Science 300, 5616 (Apr. 2003), 87–91. doi: 10.1126/science.1082520.
  • [LWB15] Liu D., Wang L., Benediktsson J. A.: An interactive color visualization method with multi-image fusion for hyperspectral imagery. In 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS) (2015), pp. 1088–1091. doi: 10.1109/IGARSS.2015.7325959.
  • [LXL21] Li Q., Xu H., Lan Z.: A Novel Adaptable Sonar Image Pseudo-color Enhancement Method Using CIELab Space. In 2021 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC) (2021), pp. 1–5. doi: 10.1109/ICSPCC52875.2021.9564827.
  • [MGP*22] Manz T., Gold I., Patterson N. H., McCallum C., Keller M. S., Herr B. W., Börner K., Spraggins J. M., Gehlenborg N.: Viv: multiscale visualization of high-resolution multiplexed bioimaging data on the web. Nature Methods 19, 5 (2022), 515–516.
  • [Mit15] Mittelstädt S.: Methods for Effective Color Encoding and the Compensation of Contrast Effects. PhD thesis, Universität Konstanz, Konstanz, 2015.
  • [MKA*11] Borkin M., Gajos K., Peters A., Mitsouras D., Melchionna S., Rybicki F., Feldman C., Pfister H.: Evaluation of Artery Visualizations for Heart Disease Diagnosis. IEEE Transactions on Visualization and Computer Graphics 17, 12 (Dec. 2011), 2479–2488. doi: 10.1109/TVCG.2011.192.
  • [Mun15] Munzner T.: Visualization Analysis & Design, 1st ed. A K Peters Visualization Series. CRC Press, Boca Raton, Florida, 2015.
  • [NMV*22] Nirmal A. J., Maliga Z., Vallius T., Quattrochi B., Chen A. A., Jacobson C. A., Pelletier R. J., Yapp C., Arias-Camison R., Chen Y.-A., Lian C. G., Murphy G. F., Santagata S., Sorger P. K.: The Spatial Landscape of Progression and Immunoediting in Primary Melanoma at Single-Cell Resolution. Cancer Discovery 12, 6 (June 2022), 1518–1541. doi: 10.1158/2159-8290.CD-21-1357.
  • [Ott20] Ottosson B.: A perceptual color space for image processing, Dec. 2020. URL: https://bottosson.github.io/posts/oklab/.
  • [PS18] Palan S., Schitter C.: Prolific.ac—A subject pool for online experiments. Journal of Behavioral and Experimental Finance 17 (Mar. 2018), 22–27. doi: 10.1016/j.jbef.2017.12.004.
  • [QR22] Quadri G. J., Rosen P.: A Survey of Perception-Based Visualization Studies by Task. IEEE Transactions on Visualization and Computer Graphics 28, 12 (Dec. 2022), 5026–5048. doi: 10.1109/TVCG.2021.3098240.
  • [RCH*22] Rashid R., Chen Y.-A., Hoffer J., Muhlich J. L., Lin J.-R., Krueger R., Pfister H., Mitchell R., Santagata S., Sorger P. K.: Narrative online guides for the interpretation of digital pathology images and tissue-atlas data. Nature Biomedical Engineering 6, 5 (May 2022), 515–526. doi: 10.1038/s41551-021-00789-8.
  • [Red22] Reda K.: Rainbow Colormaps: What are they good and bad for? IEEE Transactions on Visualization and Computer Graphics (2022), 1–15. doi: 10.1109/TVCG.2022.3214771.
  • [Red23] Reda K.: Rainbow Colormaps: What are They Good and Bad for? IEEE Transactions on Visualization and Computer Graphics 29, 12 (2023), 5496–5510. doi: 10.1109/TVCG.2022.3214771.
  • [Ren13] Renz M.: Fluorescence microscopy—A historical and technical perspective. Cytometry Part A 83, 9 (Sept. 2013), 767–779. doi: 10.1002/cyto.a.22295.
  • [RKPC99] Rogowitz B. E., Kalvin A. D., Pelah A., Cohen A.: Which trajectories through which perceptually uniform color spaces produce appropriate color scales for interval data? In Color Imaging Conference (1999), pp. 321–326.
  • [RNAK18] Reda K., Nalawade P., Ansah-Koi K.: Graphical Perception of Continuous Quantitative Maps: The Effects of Spatial Frequency and Colormap Design. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18) (2018), ACM, pp. 1–12. doi: 10.1145/3173574.3173846.
  • [Rob88] Robertson P. K.: Visualizing color gamuts: a user interface for the effective use of perceptual color spaces in data displays. IEEE Computer Graphics and Applications 8 (1988), 50–64.
  • [RP19] Reda K., Papka M. E.: Evaluating Gradient Perception in Color-Coded Scalar Fields. In 2019 IEEE Visualization Conference (VIS) (2019), IEEE, pp. 271–275.
  • [RS21] Reda K., Szafir D. A.: Rainbows Revisited: Modeling Effective Colormap Design for Graphical Inference. IEEE Transactions on Visualization and Computer Graphics 27, 2 (2021), 1032–1042. doi: 10.1109/TVCG.2020.3030439.
  • [RSGP21] Reda K., Salvi A. A., Gray J., Papka M. E.: Color Nameability Predicts Inference Accuracy in Spatial Visualizations. Computer Graphics Forum 40, 3 (2021), 49–60. doi: 10.1111/cgf.14288.
  • [RT98] Rogowitz B., Treinish L.: Data visualization: the end of the rainbow. IEEE Spectrum 35, 12 (Dec. 1998), 52–59. doi: 10.1109/6.736450.
  • [SDB*17] Liu S., Maljovec D., Wang B., Bremer P.-T., Pascucci V.: Visualizing High-Dimensional Data: Advances in the Past Decade. IEEE Transactions on Visualization and Computer Graphics 23, 3 (Mar. 2017), 1249–1268. doi: 10.1109/TVCG.2016.2640960.
  • [SG31] Smith T., Guild J.: The C.I.E. colorimetric standards and their use. Transactions of the Optical Society 33, 3 (Jan. 1931), 73. doi: 10.1088/1475-4878/33/3/301.
  • [SJ21] Sochorová Š., Jamriška O.: Practical Pigment Mixing for Digital Painting. ACM Transactions on Graphics 40, 6 (Dec. 2021). doi: 10.1145/3478513.3480549.
  • [SLE*22] Sofroniew N., Lambert T., Evans K., Nunez-Iglesias J., Bokota G., Winston P., Peña-Castellanos G., Yamauchi K., Bussonnier M., Doncila Pop D., Can Solak A., Liu Z., Wadhwa P., Burt A., Buckley G., Sweet A., Migas L., Hilsenstein V., Gaifas L., Bragantini J., Rodríguez-Guerra J., Muñoz H., Freeman J., Boone P., Lowe A., Gohlke C., Royer L., Pierré A., Har-Gil H., McGovern A.: napari: a multi-dimensional image viewer for Python, Nov. 2022. doi: 10.5281/zenodo.7276432.
  • [SSSM11] Silva S., Sousa Santos B., Madeira J.: Using color in visualization: A survey. Computers & Graphics 35, 2 (Apr. 2011), 320–333. doi: 10.1016/j.cag.2010.11.015.
  • [SSY*22] Schapiro D., Sokolov A., Yapp C., Chen Y.-A., Muhlich J. L., Hess J., Creason A. L., Nirmal A. J., Baker G. J., Nariya M. K., Lin J.-R., Maliga Z., Jacobson C. A., Hodgman M. W., Ruokonen J., Farhi S. L., Abbondanza D., McKinley E. T., Persson D., Betts C., Sivagnanam S., Regev A., Goecks J., Coffey R. J., Coussens L. M., Santagata S., Sorger P. K.: MCMICRO: a scalable, modular image-processing pipeline for multiplexed tissue imaging. Nature Methods 19, 3 (Mar. 2022), 311–315. doi: 10.1038/s41592-021-01308-y.
  • [SWD05] Sharma G., Wu W., Dalal E. N.: The CIEDE2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations. Color Research & Application 30, 1 (Feb. 2005), 21–30. doi: 10.1002/col.20070.
  • [Sza18] Szafir D. A.: Modeling Color Difference for Visualization Design. IEEE Transactions on Visualization and Computer Graphics 24, 1 (Jan. 2018), 392–401. doi: 10.1109/TVCG.2017.2744359.
  • [TAW*09] Thierry G., Athanasopoulos P., Wiggett A., Dering B., Kuipers J.-R.: Unconscious effects of language-specific terminology on preattentive color perception. Proceedings of the National Academy of Sciences 106, 11 (Mar. 2009), 4567–4570. doi: 10.1073/pnas.0811155106.
  • [TJA*14] Kimpe T., Rostang J., Avanaki A., Espig K., Xthona A., Cocuranu I., Parwani A. V., Pantanowitz L.: Does the choice of display system influence perception and visibility of clinically relevant features in digital pathology images? In Proceedings of SPIE, vol. 9041 (2014), p. 904109. doi: 10.1117/12.2042771.
  • [TNC*20] Tan W. C. C., Nerurkar S. N., Cai H. Y., Ng H. H. M., Wu D., Wee Y. T. F., Lim J. C. T., Yeong J., Lim T. K. H.: Overview of multiplex immunohistochemistry/immunofluorescence techniques in the era of cancer immunotherapy. Cancer Communications 40, 4 (2020), 135–153.
  • [VM16] Setlur V., Stone M. C.: A Linguistic Approach to Categorical Color Assignment for Data Visualization. IEEE Transactions on Visualization and Computer Graphics 22, 1 (Jan. 2016), 698–707. doi: 10.1109/TVCG.2015.2467471.
  • [War23] Warchol S.: Spatially and Perceptually Aware Pseudocoloring in Multi-Channel Imaging Data, Mar. 2023. doi: 10.17605/OSF.IO/FM6VN.
  • [Was21] Waskom M. L.: seaborn: statistical data visualization. Journal of Open Source Software 6, 60 (2021), 3021. doi: 10.21105/joss.03021.
  • [WCG*19] Wang Y., Chen X., Ge T., Bao C., Sedlmair M., Fu C.-W., Deussen O., Chen B.: Optimizing Color Assignment for Perception of Class Separability in Multiclass Scatterplots. IEEE Transactions on Visualization and Computer Graphics 25, 1 (2019), 820–829. doi: 10.1109/TVCG.2018.2864912.
  • [Wei08] Weiss P. S.: 2008 Nobel Prize in Chemistry: Green Fluorescent Protein, Its Variants and Implications. ACS Nano 2, 10 (Oct. 2008), 1977. doi: 10.1021/nn800671h.
  • [WGM*08] Wang L., Giesen J., McDonnell K. T., Zolliker P., Mueller K.: Color Design for Illustrative Visualization. IEEE Transactions on Visualization and Computer Graphics 14, 6 (Nov. 2008), 1739–1754. doi: 10.1109/TVCG.2008.118.
  • [Wic16] Wickham H.: ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag, New York, 2016. URL: https://ggplot2.tidyverse.org.
  • [WKN*23] Warchol S., Krueger R., Nirmal A. J., Gaglia G., Jessup J., Ritch C. C., Hoffer J., Muhlich J., Burger M. L., Jacks T., Santagata S., Sorger P. K., Pfister H.: Visinity: Visual Spatial Neighborhood Analysis for Multiplexed Tissue Imaging Data. IEEE Transactions on Visualization and Computer Graphics 29, 1 (2023), 106–116. doi: 10.1109/TVCG.2022.3209378.
  • [WSK68] Wyszecki G., Stiles W. S., Kelly K. L.: Color Science: Concepts and Methods, Quantitative Data and Formulas. Physics Today 21, 6 (1968), 83–84. doi: 10.1063/1.3035025.
  • [XSFG97] Xiang Y., Sun D., Fan W., Gong X.: Generalized simulated annealing algorithm and its application to the Thomson model. Physics Letters A 233, 3 (Aug. 1997), 216–220. doi: 10.1016/S0375-9601(97)00474-X.
  • [YVK*23] Yang J., Vining N., Kheradmand S., Carr N., Sigal L., Sheffer A.: Subpixel Deblurring of Anti-Aliased Raster Clip-Art. Computer Graphics Forum 42, 2 (2023), 61–76.
  • [ZAZH20] Zhou M., Ai T., Zhou G., Hu W.: A Visualization Method for Mining Colocation Patterns Constrained by a Road Network. IEEE Access 8 (2020), 51933–51944. doi: 10.1109/ACCESS.2020.2980168.
  • [ZSZ*06] Zhou W., Sibley P. G., Zhang S., Tate D. F., Laidlaw D. H.: Perceptual Coloring and 2D Sketching for Segmentation of Neural Pathways. In ACM SIGGRAPH 2006 Research Posters (2006), ACM. doi: 10.1145/1179622.1179820.
