PLOS Computational Biology. 2021 Nov 1;17(11):e1009503. doi: 10.1371/journal.pcbi.1009503

linus: Conveniently explore, share, and present large-scale biological trajectory data in a web browser

Johannes Waschke 1,2, Mario Hlawitschka 2, Kerim Anlas 3, Vikas Trivedi 3,4, Ingo Roeder 5,6, Jan Huisken 7, Nico Scherf 1,6,8,*
Editor: Manja Marz
PMCID: PMC8584757  PMID: 34723958

Abstract

In biology, we are often confronted with information-rich, large-scale trajectory data, but exploring and communicating patterns in such data can be a cumbersome task. Ideally, the data should be wrapped with an interactive visualisation in one concise packet that makes it straightforward to create and test hypotheses collaboratively. To address these challenges, we have developed a tool, linus, which makes the process of exploring and sharing 3D trajectories as easy as browsing a website. We provide a python script that reads trajectory data, enriches them with additional features such as edge bundling or custom axes, and generates an interactive web-based visualisation that can be shared online. linus facilitates the collaborative discovery of patterns in complex trajectory data.

Author summary

Many of the processes that we study in biology are dynamic or interconnected. We can represent most of them as trajectories, be it connections between neurons in a brain, interactions between species in an ecosystem, or motion traces of animals, cells, or molecules. Modern experiments allow researchers to generate such trajectory data at unprecedented scales: think of the parallel tracking of thousands of cells in a developing embryo over hours or days. However, visualising large-scale trajectory data is a challenge: typical static visualisations result in excessive overplotting and often resemble the infamous hairballs. Simplification and interactivity are crucial strategies to deal with this problem. We present the lightweight tool linus, which enables researchers to explore and share their trajectory data in an engaging way in web browsers on almost any device.


This is a PLOS Computational Biology Software paper.

Introduction

In biology, we often face large-scale trajectory data describing dense spatial pathways, such as brain connectivity obtained from diffusion MRI [1], or tracking data such as cell trajectories [2] or animal trails [3]. Although this type of data is becoming increasingly prominent in biomedical research [4–6], exploring, sharing, and communicating patterns in such data are cumbersome tasks that require a set of different software tools, which can be complex to install, learn, and use. Recently, new tools have become available for efficiently visualising 3D volumetric data [7–9], and some of them allow the user to overlay tracking data to cross-check the quality of the results or to visualise simple predefined features (such as speed or time). However, given their more general-purpose design, these tools are not ideal for efficiently and collaboratively exploring and sharing the visualisations.

Tools like CATMAID [10] or Neuroglancer (https://github.com/google/neuroglancer) have impressively demonstrated the benefit of in-browser 3D visualisations for the collaborative curation and visualisation of neuroimaging data [11]. In contrast to the specialised focus of these tools on volumetric neuroimaging data (e.g. reconstructing and visualising neural morphologies from electron microscopy images), we aimed to build a general-purpose, lightweight, and interactive visualisation tool for generic trajectory data across all fields of biology, data that might otherwise be challenging to visualise in static images (from animal tracks or static brain tractography to cellular or molecular motion). Here, interactive, scriptable, and easily shareable visualisations [12] open up novel ways of communicating and discussing experimental results and findings [13]. The analysis of complex trajectory data and the creation and testing of hypotheses can then be done collaboratively. Importantly, since such bioinformatics tools sit right at the interface of computational and life sciences, they need to be accessible and usable for scientists with little or no background in programming. Ideally, the data should be bundled with a guided, interactive presentation in one concise packet that can be passed to a collaborator. To address these challenges, we have developed our tool linus, which makes it easy to explore 3D trajectory data from any device without a local installation of specialised software. linus creates interactive visualisation packets that can be explored in a web browser while keeping data presentation straightforward and shareable, both offline and online (Fig 1A). In previous work, we explored cell trajectories during zebrafish gastrulation extracted from large-scale fluorescence microscopy datasets [2]. In these experiments, linus allowed us to interactively explore the tracks of around 11,000 cells (initial count) as they moved across the zebrafish embryo over 16 hours. More importantly, it enabled us to share and discuss visualisations with collaborators from different backgrounds and to create figures for the manuscript.

Fig 1. Browser-based exploration and sharing of trajectory visualisations with linus.


(a) Control workflow of linus. (Input data) linus can import tracking data from a variety of formats. (Preprocessing) The Python converter enriches the imported data with additional features (e.g. an edge-bundled version of the data, visual context, or a coordinate system) and prepares the visualisation packet. (Tour setup) The user can open the visualisation in a web browser and create an interactive presentation of the data. (Sharing) These visualisations can be shared via a URL or a QR code and (Exploring) readily presented and explored across various devices. (b) Overview of the graphical user interface (GUI). The data can be visualised and explored in the browser. Different aspects of the data can be interactively highlighted (the zoomed example on the right shows the effect of changing the degree of trajectory bundling).

By sharing this tool with the community, we hope to facilitate novel applications of visualising trajectories across all of biology. We have written this manuscript for a broad audience and thus mainly concentrate on describing how to create, use, and share the visualisations in the Results section from a user perspective. The Design and Implementation section and the S1 Text describe the technical details for readers who need to deploy the tool on their data. Finally, we refer readers interested in contributing new functionality or adapting the existing code (maintainers) to the technical documentation at our repository https://gitlab.com/imb-dev/linus.

Design and implementation

From a technical perspective, our overall goal when developing linus was to create a versatile and lightweight visualisation tool that runs on a wide range of devices. To this end, we based the visualisation part on web technologies, specifically TypeScript (which compiles to JavaScript) and WebGL for rendering. However, a core component of the visualisation process, the data preparation, requires local file access and fast computations, both of which are limited in JavaScript. For that reason, we also created a Python (version 3 or higher) script that handles the computationally demanding parts of data processing and automatically generates the web-based visualisation packets.

The overall workflow for creating visualisations is summarised in Fig 1A. The importer script from linus can read trajectory data from a generic, plain CSV format (see S1 Text) or from a variety of established trajectory formats such as SVF [5], TGMM XML [14], or the community standard biotracks [15], which itself supports import from a wide variety of cell tracking tools such as CellProfiler [16] or TrackMate [17]. During the data conversion, linus can enrich the trajectory data with additional attributes or spatial context. For example, we can declutter dense trajectories by highlighting the major “highways” through edge-bundling (Fig 1B). linus can automatically add generic attributes that are useful in a range of applications, such as the local angle of the trajectories or a timestamp. The user can simply add custom numerical attributes for specific applications by providing these measurements as extra columns in CSV files (see S1 Text). The data attributes form the basis for advanced rendering effects. To highlight the spatial context, linus can generate axes automatically, or users can define custom axes. The result of the preprocessing is a ready-to-use visualisation packet that can be opened in a web browser on any device with WebGL support. The packet is a folder containing HTML, JavaScript, and related files.
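To illustrate the kind of input the importer consumes, the following hedged sketch reshapes a TrackMate-style spot export into a generic per-trajectory CSV and appends one custom numerical attribute as an extra column. The file names and the output layout here are illustrative assumptions only; the exact column conventions expected by the linus importer are specified in S1 Text.

```python
# Hypothetical preprocessing sketch: group TrackMate-style spots into
# trajectories and append a custom per-point attribute (speed).
import pandas as pd

spots = pd.read_csv("trackmate_spots.csv")  # hypothetical input file

# One row per point: track id, frame, and 3D position.
df = spots[["TRACK_ID", "FRAME", "POSITION_X", "POSITION_Y", "POSITION_Z"]]
df = df.sort_values(["TRACK_ID", "FRAME"])

# Per-track displacement between consecutive frames as a toy attribute.
pos = df[["POSITION_X", "POSITION_Y", "POSITION_Z"]]
step = pos.groupby(df["TRACK_ID"]).diff().fillna(0.0)
df = df.assign(speed=(step ** 2).sum(axis=1) ** 0.5)

df.to_csv("tracks_generic.csv", index=False)  # extra column -> extra slider
```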

Once the core visualisation has been configured and created with the Python toolkit, further adjustments are possible within the web browser. Opening the index.html file starts the visualisation and shows the trajectories with baseline render settings (semi-transparent, single-coloured rendering on a grey background). The browser renders an interactive visualisation of the trajectories together with an interface for the user to adapt the presentation to their needs (e.g. colour scales, projections, clipping planes). The user interface itself is adapted to each dataset: the preprocessing script generates a separate property, with corresponding sliders for filtering and colour mapping, for each data attribute. If more than one state is available for the dataset (e.g. an edge-bundled copy of the data or custom projections), the interface automatically offers the functionality to fade between the states. An in-depth discussion of the technical details can be found in the S1 Text and in our online documentation.

Results

Interactive visualisation with configurable filters allows in-depth data exploration for a variety of applications across sciences

Starting from the visualisation packet created with the Python toolkit, the user can carve out patterns from the original "hairball" of lines by setting general visualisation parameters such as shading and colour maps (Fig 2A). For example, the user can directly encode the local movement direction (x, y, z) into RGB values by setting the colour mode to orientation XYZ. To further emphasise movement directions, parts of each trajectory can be gradually faded out with the Opacity mode option, which maps, e.g., the time dimension to the opacity channel. To focus on particular parts of the dataset, the user can filter the data by its various attributes, such as specific time intervals or user-defined numerical properties like marker expression in cell tracking (Fig 2B and 2C). Alternatively, the user can select spatial regions of interest (ROIs), either with cutting planes or with progressively refinable selections (Fig 2D). The visual attributes can then be defined separately for the selected in-focus areas and the (non-selected) context regions (Fig 2D) to create a more focused visualisation. Beyond qualitative visualisation, the selected trajectories can also be downloaded as CSV files for subsequent quantitative analysis (see S1 Text).
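The orientation XYZ colour mode follows the common convention, familiar from diffusion-MRI tractography, of using the absolute components of the normalised local direction as RGB values. A minimal sketch of this mapping, assuming trajectories given as point arrays; the actual in-browser implementation runs in WebGL shaders and may differ in detail:

```python
import numpy as np

def orientation_rgb(points: np.ndarray) -> np.ndarray:
    """Map local movement direction to colour for one trajectory.

    points: (n, 3) array of support points; returns (n-1, 3) RGB in [0, 1].
    """
    d = np.diff(points, axis=0)                             # segment vectors
    d /= np.linalg.norm(d, axis=1, keepdims=True) + 1e-12   # normalise safely
    return np.abs(d)                                        # |x|,|y|,|z| -> R,G,B
```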

Fig 2. Configurable filters allow deep data exploration.


The user can choose from a range of visualisation methods directly in the browser interface to highlight aspects of interest in the data (zebrafish tracking results from [2] as an example). (a) The line data is visualised using a range of options for shading and colour mapping. (b–c) From the full dataset (top), the user can filter the data by specific attributes, such as (b) time intervals (bottom) or (c) a specific range of signals (here, marker expression in cells). (d) The user can further create subselections of the tracks in space using cutting planes or refinable spatial selections. The visual attributes can be defined separately for the selected focus region and the non-selected context region. (e–g) The web interface can blend seamlessly between different states of the data. This feature can be used to map between (e) original tracks and their edge-bundled version, or to visualise planar projections of the 3D data (f) locally on a definable (oblique) plane or (g) globally using a Mercator projection (with definable parameters).

One crucial problem with large-scale trajectory data is the sheer density of tracks, which often leads to extreme visual clutter. To tackle this problem, one prominent feature of linus is the ability to blend seamlessly between different transformations of the data. We provide two main types of transformations out of the box: the user can smoothly transition between the original and the bundled state to focus on major "highways" (Figs 2E and 1B), or between the original (3D Cartesian) view and different 2D projections (e.g. a Mercator map) that provide a global, less cluttered perspective on the trajectories (Fig 2F and 2G). If other application-specific transformations are needed, such as a spatial transformation or any form of trajectory clustering, the user can provide such an alternative state during preprocessing and then interactively blend between the states.
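Conceptually, each such transition is a per-point linear interpolation between two precomputed states of the same trajectory points, with the mixing factor exposed as a slider. A minimal sketch, assuming a standard Mercator projection as the alternative state; linus performs the interpolation on the GPU, but the mathematics is the same:

```python
import numpy as np

def mercator(lon_deg: np.ndarray, lat_deg: np.ndarray):
    """Standard Mercator projection; longitude/latitude in degrees."""
    lon, lat = np.radians(lon_deg), np.radians(lat_deg)
    return lon, np.log(np.tan(np.pi / 4 + lat / 2))

def blend(p_original: np.ndarray, p_alternative: np.ndarray, t: float):
    """Interpolate between two states; t = 0 original, t = 1 alternative."""
    return (1.0 - t) * p_original + t * p_alternative
```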

Data and visualisations are easily shareable with collaborators via interactive visualisation packets

As a straightforward way of sharing results, the user can directly export visualisations from the web view as static images and videos (e.g. S1 Video). But sharing the visualisation of the data can go a step beyond image or video files. The user can conveniently record all visualisation properties directly in the web interface of linus to create information-rich, interactive tours. These tours can be fine-tuned in a timeline-based editor (S1 Fig), in which each action is represented by an icon that can be moved along the time axis to develop a visual storyline. Smooth transitions and precisely timed textual markers facilitate understanding and storytelling. To communicate and distribute new findings, these tours can easily be shared online or offline with the community (colleagues, readers of a manuscript, the audience of a real or virtual presentation). Tours are typically copied into the source code of the visualisation packet; if they consist of a limited number of actions (see S1 Text for details), they can also be shared via a dynamically created URL or a QR code. Fig 3 shows examples of visualisations created with linus, ranging from dynamic trajectories in 2D (Fig 3A) or on surfaces (Fig 3B) to static (Fig 3C) or dynamic (Fig 3D) 3D tracks, across applications from ethology, neuroscience, and developmental biology. An interactive version of each example can be accessed online by scanning the respective QR code in the figure.

Fig 3. Sharable interactive visualisation packets for a multitude of applications ranging across a variety of sciences.


The user can combine the visualisation methods, annotations, and camera motion paths in a scheduled tour that can be shared by a custom URL or QR code generated directly in the browser interface. Panels (a)-(d) demonstrate use cases for real-world datasets with different characteristics and dimensionality. (a) Ant trails (2D+t) from [18]. Bundling and colour coding (spatial orientation by mapping (x,y,z) to (R,G,B) values) indicate the major trails running in opposing directions. (b) GPS animal tracking data for two species (blue whales [19], shown in blue, and Arctic terns [20], shown in red) on a Mercator projection of the earth's surface (2D surface data + t). For better orientation, the outlines of the continents are included as axes in the visualisation and dynamically adapt to projections and viewpoint changes. (c) Brain tractography data showing major white matter connectivity from diffusion MRI (3D). The spatial selection highlights the left hemisphere, while anatomical context is provided by the outline of the entire brain (from mesh data) and the defocused tracts of the right hemisphere. (d) Cell movements during the elongation process of zebrafish blastoderm explants (3D+t) [21]. Bundling, colour coding, and spatial selection highlight collective cell movements as the explant starts elongating, focusing on a subpopulation of cells driving this process. The colour code shows time from early (yellow) to late (red) for selected tracks.

We tested linus visualisation packets on various devices and found that rendering performance is the aspect of the user experience that varies most between them. Desktop computers with mid-range graphics hardware (e.g. the integrated graphics processors built into current CPUs) easily handle more than 10,000 trajectories at smooth frame rates. Mid-range smartphones handle the same data at low frame rates (ca. 10 fps), which is still usable but does not feel as smooth. For virtual reality applications, we also tested linus on the Oculus Go headset. Here, a high frame rate is essential, as the user experience would otherwise be quite discomforting, and we recommend reducing the number of trajectories to about 1,000 for this use case. Given these differences in performance and user experience, we recommend creating dedicated visualisation packets (or tours) for the intended type of output device.

Availability and future directions

Example visualisations are available by scanning the QR codes in Fig 3 directly or by visiting https://imb-dev.gitlab.io/linus-manuscript/. The linus software, including source code and documentation, is freely available at our repository at https://gitlab.com/imb-dev/linus.

In the future, we would like to support further advanced preprocessing options such as trajectory clustering, more generic transforms, or feature extraction. We would also like to extend the visualisation part of linus so that the user can interactively annotate the data. Here, we envision that the user can easily label subsets of trajectories and then use this information for downstream analysis (such as building a trajectory classifier). We also invite the community to contribute further ideas to develop linus or to integrate its functionality as plugins into other tools. To contribute, please see the notes at https://gitlab.com/imb-dev/linus/-/blob/master/CONTRIBUTING.md.

Our experience with linus shows that sharing relatively complex data visualisations in this interactive way makes it much more efficient to collaboratively find patterns in data and to create and discuss figures or videos for presentations and manuscripts. More generally, interactive data sharing is helpful when collaborations, presentations, or teaching occur remotely. At the same time, during an in-person event such as a talk or poster session at a conference, the audience can explore the data instantly on their own computers, tablets, or smartphones. In either case, touch screens or even virtual reality goggles increase immersion through more natural controls and true 3D rendering, helping users grasp the trajectories' spatial relations. With these features, we are convinced that approaches like linus will considerably improve how we collectively explore, communicate, and teach the spatio-temporal patterns in information-rich, multi-dimensional biological data.

Supporting information

S1 Fig. Tour editor.

The tour actions can be organised by drag and drop (reading order: from left to right, top to bottom). Every action can be scheduled with a time delay with respect to the end of the previous action. Some actions use transitions (e.g. camera motions or the adjustment of numeric values) whose duration can be configured as well. Eventually, a URL or a QR code can be created.

(TIFF)

S2 Fig. Overview of data structure.

The coordinate list holds the x/y/z values for each supporting point of the trajectories. For each such point, an arbitrary number (only limited by the graphics card’s capabilities) of attributes can be stored. The attributes must be provided in the same order as the points. To create trajectories from the point set, an index list is provided as well. Each pair of indices describes one segment of a trajectory. The number of such segments is not restricted, as any point (and its respective attributes) can be used multiple times.

(TIFF)
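The segment-based layout described in S2 Fig can be sketched as follows (the values are made up for illustration): a flat coordinate list, parallel per-point attribute lists, and an index list in which each consecutive pair of indices defines one line segment, so shared points are simply referenced twice.

```python
import numpy as np

coords = np.array([[0.0, 0.0, 0.0],   # point 0
                   [1.0, 0.5, 0.0],   # point 1
                   [2.0, 0.7, 0.1]])  # point 2
time = np.array([0.0, 1.0, 2.0])      # one attribute value per point

# One trajectory through points 0 -> 1 -> 2 becomes two segments.
indices = np.array([0, 1,    # segment 0
                    1, 2])   # segment 1
```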

S3 Fig. Overview of settings.

An overview of the different visualisation settings available to the user from the GUI (two screenshots merged). For explanations regarding different settings, see text or documentation at https://gitlab.com/imb-dev/linus.

(TIFF)

S1 Video. A video showing a demo visualisation created and recorded with linus.

The example shows 3D + t trajectories of cells during elongation of zebrafish blastoderm explants (see also Fig 3D).

(MP4)

S1 Text. Supplementary text discussing more technical details of the design and implementation of linus.

(DOCX)

Acknowledgments

The authors are grateful to Gopi Shah and Konstantin Thierbach for sharing data and contributing helpful feedback.

Data Availability

Example visualisations are available by scanning the QR codes in the figures of this manuscript directly or by visiting https://imb-dev.gitlab.io/linus-manuscript/. The software, including source code and documentation, is freely available at our repository at https://gitlab.com/imb-dev/linus.

Funding Statement

J.W. received funding from the International Max Planck Research School on Neuroscience of Communication: Function, Structure, and Plasticity (Leipzig, Germany; https://imprs-neurocom.mpg.de). K.A. and V.T. acknowledge funding from European Molecular Biology Laboratory (EMBL) Barcelona and Mesoscopic Imaging Facility, EMBL Barcelona for help with imaging. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

1. Liu C, Ye FQ, Newman JD, Szczupak D, Tian X, Yen CC-C, et al. A resource for the detailed 3D mapping of white matter pathways in the marmoset brain. Nat Neurosci. 2020;23: 271–280. doi: 10.1038/s41593-019-0575-0
2. Shah G, Thierbach K, Schmid B, Waschke J, Reade A, Hlawitschka M, et al. Multi-scale imaging and analysis identify pan-embryo cell dynamics of germlayer formation in zebrafish. Nat Commun. 2019;10: 5753. doi: 10.1038/s41467-019-13625-0
3. Romero-Ferrero F, Bergomi MG, Hinz RC, Heras FJH, de Polavieja GG. idtracker.ai: Tracking all individuals in small or large collectives of unmarked animals. Nat Methods. 2019;16: 179–182. doi: 10.1038/s41592-018-0295-5
4. Wallingford JB. The 200-year effort to see the embryo. Science. 2019;365: 758–759. doi: 10.1126/science.aaw7565
5. McDole K, Guignard L, Amat F, Berger A, Malandain G, Royer LA, et al. In Toto Imaging and Reconstruction of Post-Implantation Mouse Development at the Single-Cell Level. Cell. 2018. doi: 10.1016/j.cell.2018.09.031
6. Kwok R. Deep learning powers a motion-tracking revolution. Nature. 2019;574: 137–138. doi: 10.1038/d41586-019-02942-5
7. Pietzsch T, Saalfeld S, Preibisch S, Tomancak P. BigDataViewer: visualization and processing for large image data sets. Nat Methods. 2015;12: 481–483. doi: 10.1038/nmeth.3392
8. Royer LA, Weigert M, Günther U, Maghelli N, Jug F, Sbalzarini IF, et al. ClearVolume: open-source live 3D visualization for light-sheet microscopy. Nat Methods. 2015;12: 480–481. doi: 10.1038/nmeth.3372
9. Schmid B, Tripal P, Fraaß T, Kersten C, Ruder B, Grüneboom A, et al. 3Dscript: animating 3D/4D microscopy data using a natural-language-based syntax. Nat Methods. 2019;16: 278–280. doi: 10.1038/s41592-019-0359-1
10. Saalfeld S, Cardona A, Hartenstein V, Tomancak P. CATMAID: collaborative annotation toolkit for massive amounts of image data. Bioinformatics. 2009;25: 1984–1986. doi: 10.1093/bioinformatics/btp266
11. Perlman E. Visualizing and Interacting with Large Imaging Data. Microsc Microanal. 2019;25: 1374–1375.
12. Shneiderman B. The eyes have it: a task by data type taxonomy for information visualizations. Proceedings 1996 IEEE Symposium on Visual Languages. 1996. pp. 336–343.
13. Callaway E. The visualizations transforming biology. Nature News. 2016;535: 187. doi: 10.1038/535187a
14. Amat F, Lemon W, Mossing DP, McDole K, Wan Y, Branson K, et al. Fast, accurate reconstruction of cell lineages from large-scale fluorescence microscopy data. Nat Methods. 2014;11: 951–958. doi: 10.1038/nmeth.3036
15. Gonzalez-Beltran AN, Masuzzo P, Ampe C, Bakker G-J, Besson S, Eibl RH, et al. Community standards for open cell migration data. GigaScience. 2020;9. doi: 10.1093/gigascience/giaa041
16. McQuin C, Goodman A, Chernyshev V, Kamentsky L, Cimini BA, Karhohs KW, et al. CellProfiler 3.0: Next-generation image processing for biology. PLoS Biol. 2018;16: e2005970. doi: 10.1371/journal.pbio.2005970
17. Tinevez J-Y, Perry N, Schindelin J, Hoopes GM, Reynolds GD, Laplantine E, et al. TrackMate: An open and extensible platform for single-particle tracking. Methods. 2016. doi: 10.1016/j.ymeth.2016.09.016
18. Imirzian N, Zhang Y, Kurze C, Loreto RG, Chen DZ, Hughes DP. Automated tracking and analysis of ant trajectories shows variation in forager exploration. Sci Rep. 2019;9: 13246. doi: 10.1038/s41598-019-49655-3
19. Bailey H, Mate BR, Palacios DM, Irvine L. Behavioural estimation of blue whale movements in the Northeast Pacific from state-space model analysis of satellite tracks. Endanger Species Res. 2009. Available: https://www.int-res.com/abstracts/esr/v10/p93-106/
20. Egevang C, Stenhouse IJ, Phillips RA, Petersen A, Fox JW, Silk JRD. Tracking of Arctic terns Sterna paradisaea reveals longest animal migration. Proc Natl Acad Sci U S A. 2010;107: 2078–2081. doi: 10.1073/pnas.0909493107
21. Trivedi V, Fulton T, Attardi A, Anlas K, Dingare C, Martinez-Arias A, et al. Self-organised symmetry breaking in zebrafish reveals feedback from morphogenesis to pattern formation. bioRxiv. 2019. p. 769257. doi: 10.1101/769257
PLoS Comput Biol. doi: 10.1371/journal.pcbi.1009503.r001

Decision Letter 0

Manja Marz

2 May 2021

Dear Dr Scherf,

Thank you very much for submitting your manuscript "linus: Conveniently explore, share, and present large-scale biological trajectory data from a web browser." for consideration at PLOS Computational Biology.

As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. In light of the reviews (below this email), we would like to invite the resubmission of a significantly-revised version that takes into account the reviewers' comments.

We cannot make any decision about publication until we have seen the revised manuscript and your response to the reviewers' comments. Your revised manuscript is also likely to be sent to reviewers for further evaluation.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Please prepare and submit your revised manuscript within 60 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. Please note that revised manuscripts received after the 60-day due date may require evaluation and peer review similar to newly submitted manuscripts.

Thank you again for your submission. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Manja Marz

Software Editor

PLOS Computational Biology


***********************

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: The authors present a new web toolkit for visualizing 3D data. The need for tools that can perform these functionalities is great in neuroscience right now. And they have built something that they seem to be using profitably. However, I have a few serious concerns, having been involved in writing software like this many times before.

1. Writing software like this is difficult; maintaining it is much more difficult. In the last 10 years there have probably been at least 10 published neuroscience visualization toolboxes developed, several of them for local clients, but also several of them browser based. Every single one failed unless it was developed by an institution (such as Google or Janelia). All the ones developed by labs died. The reason is that browser infrastructure and graduate students both change so fast. Any toolkit will fail to work in a browser relatively quickly (say, ~2 years) without active development, because the browsers change. And graduate students graduate, meaning that somebody else must take over the maintenance of the code. But graduate students like developing their own stuff, and also, they should develop their own stuff, because they need to if they want to graduate (they usually do). So, while I have no doubt that this tool is useful within the lab, I have serious evidence-based concerns about its utility more broadly. If (and this is a big if) the authors had a serious commitment (including financial commitment) for at least 5 years with a software engineer (rather than graduate student) who was committed to staying for 5 years, then I'd recommend the authors discuss the level of commitment extensively in the article, as tools without maintenance are not particularly useful.

2. The tool actually does a few different things. First, it is a toolkit to enable visualization of fibers *at all*. Second, it is a set of features/functions/add-ons that enhance the visualization capabilities. It is actually the former that so worries me in my above comments. The latter could be generally useful, and even maintained long-term relatively easily, if they were plug-ins to already widely adopted software. For example, Neuroglancer (which was not cited) is currently used by hundreds of people. While my team is not actively developing Neuroglancer, we were developing our own related tool until Neuroglancer came out, and then we switched. Fortunately, it seems there is a lot of overlap between Neuroglancer's capabilities and your functionality and code. For example, both are written primarily in TypeScript, and both have a Python API. While Neuroglancer was originally developed for electron microscopy data, people certainly visualize tracts using it as well. CATMAID can also render tracts in 3D. There are several other widely used web-browser-based tools with overlapping functionality that are not mentioned at all. So, it is not clear the extent to which the functionality proposed here is novel/unique, nor is it clear how easy it would be to add features/plug-ins to other existing established tools, rather than building yet another neuroscience visualization engine. Here are some links to Neuroglancer that show some relevant functionality:

https://layer23.microns-explorer.org/

https://www.youtube.com/watch?v=zqjJkFmLauE

Much more in-depth justification of a new tool, rather than a fork/feature/add-on to Neuroglancer, CATMAID, or other existing browser-based web-visualizers, would be required.

3. Is it actually useful? There is no evaluation of its practical utility. Who currently uses it? Why? Is there empirical data one can point to? I found it difficult to control and navigate, and I did not understand the controls.

4. There are 3 potential target audiences:

A. People who simply use the website to visualize some data. They need to understand user features, but nothing else.

B. Moderators: these people need to be able to deploy the code as well, ingest data, etc., and therefore understand more of the guts of the code.

C. Maintainers: these people need to be able to modify the code itself to add functionality, etc.

It is not clear to me exactly which of these audiences this manuscript is written for, it feels like it fluctuates between these three groups. Please make it clear to whom you are writing for any given paragraph, so it is clear.

Reviewer #2: J. Waschke et al. present a visualisation tool to investigate two- and three-dimensional trajectory data. The easy-to-use application allows users to examine trajectories from all angles, modify the visual representation, and select data subdomains for more in-depth examination in a web-based interface. The authors nicely place their tool in the context of a growing amount of tracking data and interdisciplinary research that would benefit from quick sharing of results such as visualizations.

Linus is a very useful application and many scientists will profit from this easy-to-use tool. The user interface is well designed, with beautiful visualizations and an intuitive workflow. The paper is well written and easy to understand. The authors clearly state which problems Linus is intended to solve. The documentation of the github repository contains all the relevant details and guides the user well. We found linus easy to apply from the git-repository and it worked as described on Mac on our own data. We particularly like the idea of shareable QR codes to switch between devices seamlessly. Finally, besides sharing visualizations with collaborators, the tool comes in handy to produce beautiful figures for manuscripts and talks.

We strongly support the publication of the tool and suggest addressing the issues below.

* Issues:

** Data handling:

- Data import should be more flexible. From a TrackMate .csv file we got the error "ValueError: could not convert string to float: '88.862;113.354;40'". We solved this by using a comma (",") instead of a semicolon (";") as a separator and converting the numbers. As this might be a recurring error for other users, the authors should make the data loader more flexible and robust and thereby easier to use.

- We were not able to visualize multiple tracks contained in one .csv file as generated from TrackMate (e.g. with columns: Label, ID, TRACK_ID, QUALITY, POSITION_X, POSITION_Y, POSITION_Z, POSITION_T, FRAME). Can you direct the user with a few how-tos / a video tutorial for the major use cases / data formats?

Please add examples of files that can be imported and use cases to the github, which the user can follow. This would be very helpful.

** The tool:

- Selecting and highlighting single tracks would be extremely useful.

- Highlighting track directions between timepoints (in polar, spherical or cartesian coordinates) would be great, so one can easily explore movement patterns.

- Pls comment: How is the software going to be sustained?

* Minor issues:

** Wording:

- We suggest to use ‘often’ only once in the first sentences of abstract and intro

- Not clear what is meant by ‘camera’ when first used on page 8 (please add page numbers)

- British vs American English: e.g. color vs colour, visualization vs visualisation

- Typo in Fig. 2 caption: “(d) The user can further create subselections of the tracks in space using cutting planes or refinable spatial selections.” Should be selections at the end of the sentence?

- Typo in Fig 3: ‘tracts’

- Fig 3a: either ‘ant migration’ or ‘animal tracks’ we suggest.

** Structure:

- Last sentences of the intro (“We began…”) feel a bit misplaced. Use the zebrafish example as the motivation?

- The authors mixed up subfigures in Figure 3: (e) and (f) should be exchanged with each other and changed to (c) and (d).

- “Exemplary visualizations are available by scanning the QR codes in Fig.1” - It should be Fig. 3 here

- paragraph ‘We tested linus…’ is misplaced in ‘Availability and future directions’ we believe. Better separate into two sub-chapters

** Presentation:

- Fig 1b looks great, also for a cover, but maybe change to an example that reminds the reader less of toupee? ;-)

- Caption Fig. 1: Pls describe the steps in words, one by one

- move ‘Furthermore, we cannot provide…’ sentence to a “Limitations of the approach” paragraph?

- Fig 2: Not clear to me what the thin gray lines mean. Am I supposed to read from left to right? c to b?

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code —e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Carsten Marr with Valerio Lupperger and Dominik Waibel

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms etc.. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

PLoS Comput Biol. doi: 10.1371/journal.pcbi.1009503.r003

Decision Letter 1

Manja Marz

30 Sep 2021

Dear Dr Scherf,

We are pleased to inform you that your manuscript 'linus: Conveniently explore, share, and present large-scale biological trajectory data in a web browser.' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. A member of our team will be in touch with a set of requests.

Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology. 

Best regards,

Manja Marz

Software Editor

PLOS Computational Biology

Jason A. Papin

Editor-in-Chief

PLOS Computational Biology

***********************************************************

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #2: The authors addressed all raised issues and improved their work considerably. In particular, they updated the instructions (e.g. the step-by-step tutorial), improved the tool (e.g. opacity that can be coupled with time), fixed the trackmate .csv file import, and addressed tool maintenance.

We support publication of the manuscript.

Minor suggestions:

- Single track highlighting works nicely until settings (e.g. Mercator projections -> Rotation x) are changed. Only after closing the slider (e.g. of Mercator projections) does it work again.

- Selecting a track bundle (from Render->State=1) would probably be helpful and nice to share with collaborators.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code —e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: Yes: Carsten Marr

PLoS Comput Biol. doi: 10.1371/journal.pcbi.1009503.r004

Acceptance letter

Manja Marz

27 Oct 2021

PCOMPBIOL-D-21-00164R1

linus: Conveniently explore, share, and present large-scale biological trajectory data in a web browser.

Dear Dr Scherf,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Livia Horvath

PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol
