Author manuscript; available in PMC: 2022 Aug 1.
Published in final edited form as: Curr Protoc. 2021 Aug;1(8):e204. doi: 10.1002/cpz1.204

New Extensibility and Scripting Tools in the ImageJ Ecosystem

Niklas A Gahm 1,2,3, Curtis T Rueden 1, Edward L Evans III 1,3, Gabriel Selzer 1, Mark C Hiner 1, Jenu V Chacko 1, Dasong Gao 1, Nathan M Sherer 4,5,6, Kevin W Eliceiri 1,2,3,5,7
PMCID: PMC8363112  NIHMSID: NIHMS1715674  PMID: 34370407

Abstract

ImageJ provides a framework for image processing across scientific domains whilst being fully open source. Over the years, ImageJ has been substantially extended to support novel applications in scientific imaging as they emerge, particularly in the area of biological microscopy, with functionality made more accessible via the Fiji distribution of ImageJ. Within this software ecosystem, work has been done to extend the accessibility of ImageJ through scripting, macros, and plugins in a variety of programming scenarios, e.g., from Groovy and Python, in Jupyter notebooks, and in cloud computing. We provide five protocols that demonstrate the extensibility of ImageJ for various workflows in image processing. We focus first on fluorescence lifetime imaging microscopy (FLIM) data, since it requires significant processing to provide quantitative insights into the microenvironments of cells. Second, we show how ImageJ can now be utilized for common image processing techniques, specifically image deconvolution and inversion, while highlighting new built-in features of ImageJ: notably its capacity to run completely headless and the Ops matching feature, which selects the optimal algorithm for a given function and data input, thereby enabling processing speedups. Collectively, these protocols can be used as a basis for automating biological image processing workflows.

Keywords: ImageJ, Fiji, image analysis, scripting, Python, Jython, Ops, SciJava, Lifetime analysis, Deconvolution

Introduction

ImageJ is an open-source software tool for multidimensional image analysis, routinely used in the scientific imaging community (Schneider et al., 2012). It is usable in multiple forms, including as an end-user application, as a suite of software libraries, and as an extensible framework for writing your own image analysis plugins and scripts (Rueden et al., 2017). The ImageJ user and developer community has produced thousands of such plugins and scripts which can be reused and customized in diverse ways (Schindelin et al., 2015; Schroeder et al., 2021).

In the pursuit of extensibility and workflow optimization, ImageJ evolved from a standalone program developed at the National Institutes of Health into the Fiji and ImageJ2 projects co-developed across several organizations and targeting a broader range of technical requirements (Schneider et al., 2012; Schindelin et al., 2012; Rueden et al., 2017). Over the years, these architectural improvements to ImageJ have fostered the growth of a collaborative ecosystem of cross-compatible frameworks and software, enabling ImageJ to be combined more effectively with other tools useful in the field of scientific imaging, including KNIME Analytics Platform, CellProfiler, MATLAB, and others (Dietz et al., 2020; Kamentsky et al., 2011; Hiner et al., 2017; Möller et al., 2016). This reengineering work unified ImageJ’s algorithmic infrastructure under the ImageJ Ops framework, which enables programmers to create multiple versions of an algorithm targeting different specialized inputs in a way transparent to users calling the algorithms from scripts. It also extended ImageJ scripting support, useful for workflow automation, to more languages, as demonstrated in the variety of protocols we present in this manuscript. The scripts presented here illustrate the ability to invoke ImageJ’s algorithmic functionality in contexts other than just the graphical user interface, including: headless mode as part of an interprocess workflow involving multiple analysis tools; via the PyImageJ library enabling the use of ImageJ from a Python environment; and from Jupyter Notebook, a scientific notebook tool useful for encapsulating the steps of data analysis protocols and workflows in a visual, interactive, and reproducible way (Kluyver et al., 2016; Perkel, 2018).

The development of ImageJ’s new features has been driven by the major revolution optical microscopy has experienced in the last 25 years, as the field has advanced from classical brightfield microscopes to highly sophisticated approaches such as super-resolution microscopy (Schermelleh et al., 2019). As these methods are developed, there is a collective need for corresponding image processing and analysis methods. Particularly in a research setting, building on the work of others is critical; an open-source framework such as ImageJ is therefore invaluable for research within this field as it develops. In particular, fluorescence imaging has been significantly expanded and developed since it is capable of correlating phenotypic function and genetics (Blake et al., 2002; Giepmans et al., 2006). This has led to the use of fluorescence microscopy in a wide range of biological research areas, including cancer biology, cell trafficking, and pathogenesis studies (Chinen et al., 2015; Lamm and Ke, 2010; Lagendijk et al., 2010; Gahm, Reinhardt, and Witte, 1984; Giepmans et al., 2006). The core of fluorescence microscopy has been further expanded via the development of fluorescence lifetime imaging microscopy (FLIM), which has seen increased adoption by biologists in recent years due to its ability to quantitatively sense the state of cellular microenvironments both spatially and temporally (Kalinina et al., 2016; Ryder et al., 2001). While fluorescence lifetime has previously been used by chemists to detect the pH of a microenvironment, the binding state of a molecule, and more, FLIM provides imaging-based information on the spatial distribution of these metrics within a sample (Ryder et al., 2001; Lebakken, Hee Chol Kang, and Vogel, 2007).
Given the importance of this method and its inherent computational challenges, we present, as Basic Protocol 1, an open-source protocol for FLIM image processing independent of the underlying acquisition software. This protocol leverages FLIMJ through PyImageJ’s headless ImageJ extensible scripting mode.

In the following protocols, we showcase how to use ImageJ for image data processing, how readily it can be automated for routine data processing, and the ease of extending ImageJ for workflows. We focus on FLIM data processing in Basic Protocol 1: Using PyImageJ for FLIM Data Processing, by leveraging the FLIMJ library through PyImageJ for lifetime component fitting in Jupyter Notebooks without interacting with the ImageJ user interface (Gao et al., 2020). In Alternate Protocol 1: Groovy FLIMJ in Jupyter Notebooks, we show that the same lifetime fitting can be done through the Groovy programming language. Whilst these first two protocols are focused on processing raw data collected via a specific imaging modality, FLIM, the remaining protocols are all centered on using ImageJ for general image processing methods. In Basic Protocol 2: Using ImageJ Ops for Image Deconvolution, we provide an example of how to perform image deconvolution with ImageJ in Jython, demonstrating the polyglot nature of ImageJ.

To deepen the reader's understanding of one of the main features of the newest version of ImageJ, we demonstrate the built-in Ops matching in Support Protocol 1: Using ImageJ Ops Matching Feature for Image Inversion. This protocol demonstrates Ops’ matching feature, which intelligently determines the best algorithm available within its scope for a given function and input type, using a simple example of image inversion. It further illustrates the polyglot nature of ImageJ by being executed as a Groovy script, complementing the Jython used in Basic Protocol 2. Finally, to show how readily the power of ImageJ for image processing can be integrated into larger computing environments (e.g., computing clusters), we show how to use ImageJ without a graphical user interface in Support Protocol 2: Headless ImageJ Deconvolution, which presents a deconvolution script that can be run from the command line.

Basic Protocol 1

Using PyImageJ for FLIM Processing

This protocol uses PyImageJ to run FLIMJ, processing fluorescence lifetime imaging microscopy (FLIM) datasets and extracting individual pixel fluorescence lifetimes within a Jupyter Notebook. Using this protocol, it is possible to estimate fluorescence lifetimes using curve-fitting routines without the need for third-party proprietary software.
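The estimation at the heart of this protocol can be illustrated with a toy mono-exponential fit in plain Python. This is not FLIMJ's implementation (FLIMJ provides RLD, LMA, and other fitters); it only shows what "fitting a lifetime" means for a single pixel's decay histogram, with all numbers synthetic:

```python
import math

def fit_monoexponential(counts, dt):
    """Estimate amplitude and lifetime (tau) of a decay histogram by
    least squares on the log-linearized model ln(y) = ln(A) - t/tau.
    A toy stand-in for FLIMJ's curve-fitting routines."""
    pts = [(i * dt, c) for i, c in enumerate(counts) if c > 0]
    n = len(pts)
    sx = sum(t for t, _ in pts)
    sy = sum(math.log(c) for _, c in pts)
    sxx = sum(t * t for t, _ in pts)
    sxy = sum(t * math.log(c) for t, c in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return math.exp(intercept), -1.0 / slope  # (amplitude A, lifetime tau)

# Synthetic decay: A = 1000 photons, tau = 2.5 ns, 256 time bins of 0.048 ns
dt, tau, amp = 0.048, 2.5, 1000.0
counts = [amp * math.exp(-i * dt / tau) for i in range(256)]
a_fit, tau_fit = fit_monoexponential(counts, dt)
print(round(a_fit), round(tau_fit, 2))  # recovers A = 1000, tau = 2.5
```

On real, noisy photon-counting data this log-linear trick is biased, which is why FLIMJ uses dedicated fitters; the sketch only conveys the model being fitted.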

Materials:

This protocol has been tested with the following configuration:

Protocol Steps:

Setup:

  1. Install FLIMJ Plugin for Fiji
    1. Open the local install of Fiji
    2. Open the ImageJ Updater by selecting “Help → Update…”
    3. Open the Update Site List by selecting “Manage update sites” in the bottom left corner of the window
    4. In the list of update sites, scroll down and check “FLIMJ”
    5. Close the “Manage update sites” window
    6. Select “Apply Changes” in the “ImageJ Updater” window
    7. Restart Fiji
  2. Start Anaconda Navigator

  3. Install PyImageJ Environment
    1. Launch the command terminal from Anaconda Navigator
    2. Type in Commands:
      1. conda create -n pyimagej -c conda-forge pyimagej openjdk=8
      2. conda activate pyimagej
  4. Switch to the PyImageJ Environment in Anaconda Navigator by selecting it from the dropdown menu next to “Applications on”

  5. Install BeakerX and PyWidgets
    1. Launch the command terminal from Anaconda Navigator
    2. Type in Commands:
      1. conda install -c conda-forge ipywidgets beakerx
  6. Install sdtfile module
    1. Launch the command terminal from Anaconda Navigator
    2. Type in Commands:
      1. conda install -c conda-forge sdtfile
  7. On the Jupyter Notebooks Panel select install

  8. Once installed select launch

  9. On the Jupyter Notebooks screen navigate to the directory where BP1_PyImageJ.ipynb and epithelial_human_FLIM.sdt were downloaded

  10. Open BP1_PyImageJ.ipynb

  11. In the User Input code block, change “filename” to the path for the sample data set.

  12. In the User Input code block, change “ij_local_installation” to the full path to your local Fiji installation.

  13. Then run the User Input code block

  14. On the screen select the code block under “Dependencies” and run it.

    This will install all the libraries and dependencies necessary to read in sdt data and perform lifetime fitting in the notebook.

  15. Scroll down to the block of “File Read In” and then sequentially run the two blocks.

    This first loads in a 3-dimensional image file to be processed from the associated testing data. It then analyzes the image file to determine its dimensions and displays the summation of all collected photons.

  16. Scroll down to the code block for “Lifetime Fitting” and then sequentially run the two blocks.

    This performs lifetime fitting on the data and displays how long the fitting process took.

  17. Finally scroll down to the code block for “Plot Fitted Results” and then sequentially run the two blocks.

    This plots the Offset, Amplitude, and Lifetimes of the fitted data.
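Conceptually, the "File Read In" blocks collapse the 3-dimensional (row, column, time-bin) photon histogram into a 2-D intensity image by summing each pixel's decay histogram. A minimal sketch with a synthetic stack (illustrative only, not the notebook's code):

```python
def intensity_image(stack):
    """Collapse a (rows x cols x time-bins) FLIM stack into a 2-D
    intensity image by summing all photons in each pixel's decay
    histogram -- conceptually what the notebook displays after read-in."""
    return [[sum(pixel) for pixel in row] for row in stack]

# 2x2 synthetic stack with 4 time bins per pixel
stack = [[[1, 2, 3, 4], [0, 0, 0, 1]],
         [[5, 5, 0, 0], [2, 2, 2, 2]]]
print(intensity_image(stack))  # [[10, 1], [10, 8]]
```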

Sample Data:

This protocol was designed to operate on fluorescence lifetime data sets and is shown in this protocol running on epithelial_human_FLIM.sdt. The underlying sample is unstained epithelial cells from human mammary gland tissue with fibrocystic disease. The NAD(P)H auto-fluorescence signal was collected with 740 nm multiphoton excitation and a 460/80 nm spectral emission range. The hardware used was a Nikon Apo 40× 1.25 NA water immersion objective and SPC-150 Becker & Hickl time-correlated single photon counting electronics.

Alternate Protocol 1

Groovy FLIMJ in Jupyter Notebooks

This protocol extends the basic protocol for Groovy and FLIMJ to be run through Jupyter Notebooks. This methodology enables Groovy to run FLIMJ to process FLIM datasets and extract individual pixel fluorescence lifetimes in an annotatable graphical interface, thereby improving repeatability and consistency across data processing.

Materials:

This protocol has been tested with the following configuration:

Protocol Steps:

Setup:

  1. Start Anaconda Navigator

  2. Install BeakerX and PyWidgets
    1. Launch the command terminal from Anaconda Navigator
    2. Type in Commands:
      1. conda install -c conda-forge ipywidgets beakerx
  3. On the Jupyter Notebooks Panel select install

  4. Once installed select launch

  5. On the Jupyter Notebooks screen navigate to the directory where AP1_FLIM_Notebook.ipynb and epithelial_human_FLIM.sdt were downloaded

  6. Open AP1_FLIM_Notebook.ipynb

  7. Change the kernel to Groovy by selecting Groovy in “Kernel → Change kernel”

  8. On the screen select the code block under “Dependencies” and run it.

    This will install all the libraries and dependencies necessary to run FLIMJ in the notebook. It is possible this step can cause errors; if so, see Troubleshooting.

  9. Scroll down to the block of “Utility Code” and run it.

    This sets up the displays for the data.

  10. Scroll down to the code block for “Loading Dataset” and then sequentially run the two blocks.

    This first loads in a 3-dimensional image file to be processed from the associated testing data. It then analyzes the image file to determine its dimensions and displays the raw photon counts of time bin 45.

  11. Scroll down to the code block for “Hyperparameter Setup” and run it.

    This calculates the parameters to be used for lifetime fitting.

  12. Finally scroll down to the code block for “Performing Image Fitting” and then sequentially run the blocks to step through the process.

    This is where image fitting actually occurs; it presents two forms of Levenberg-Marquardt algorithm fitting: global fitting and multi-component fitting (1, 2, and 3 components). It further displays two common data processing elements, region of interest selection and binning, both for improved processing speeds in scenarios with large data sets.
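The binning element pools photons from neighboring pixels' decay histograms before fitting. A minimal uniform-kernel sketch in plain Python (FLIMJ also supports weighted kernels; the function below is illustrative, not FLIMJ code):

```python
def bin_decays(image, radius=1):
    """Sum each pixel's decay histogram with its neighbors inside a
    (2*radius+1)^2 square kernel -- a toy stand-in for FLIMJ's binning,
    using uniform weights. `image` is a 2-D grid of per-pixel decay
    histograms (lists of photon counts per time bin)."""
    h, w = len(image), len(image[0])
    nbins = len(image[0][0])
    out = [[[0] * nbins for _ in range(w)] for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        for b in range(nbins):
                            out[y][x][b] += image[yy][xx][b]
    return out

# 3x3 image, 2 time bins, one photon each: the centre pixel accumulates
# all nine neighbours after binning; a corner only its four in-bounds ones.
img = [[[1, 1] for _ in range(3)] for _ in range(3)]
binned = bin_decays(img)
print(binned[1][1])  # [9, 9]
print(binned[0][0])  # [4, 4]
```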

Sample Data:

The AP1_FLIM_Notebook.ipynb script was designed to operate on spectral fluorescence lifetime data sets and is shown in this protocol running on epithelial_human_FLIM.sdt. The underlying sample is unstained epithelial cells from human mammary gland tissue with fibrocystic disease. The NAD(P)H auto-fluorescence signal was collected with 740 nm excitation and 460/80 nm emission. The images were collected with a lab-built multiphoton system, a Nikon Apo 40× 1.25 NA water immersion objective, and a Becker & Hickl time-correlated single photon counting system.

Basic Protocol 2

Using ImageJ Ops for Image Deconvolution

The ImageJ Ops framework enables easy access to a large library of image processing functions. Image deconvolution is a common operation in image processing for microscopy, as it can notably increase the contrast and resolution of a captured data set. This is done by calculating the point spread function (PSF) at each z-level, then using a reconstruction technique to remove the effects of the PSF, specifically aberrations, from the captured image and reduce the light blurring present in the image. In this example we use a script written in Jython, a Python implementation on the Java platform. Jython is able to import and use any Java class, enabling easy access to ImageJ’s Java libraries with a familiar Python language syntax.
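The reconstruction step can be sketched in one dimension. The following is a minimal, unregularized Richardson-Lucy iteration in plain Python; the protocol's script uses Ops' RichardsonLucyTV, which adds a total-variation regularization term that this sketch omits:

```python
def convolve_same(signal, kernel):
    """'Same'-size 1-D convolution with zero padding; odd kernel length."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += signal[idx] * k
        out.append(acc)
    return out

def richardson_lucy(observed, psf, iterations=100):
    """Plain Richardson-Lucy deconvolution (no TV regularization):
    estimate <- estimate * correlate(observed / (estimate (*) psf), psf)."""
    psf_flipped = psf[::-1]
    estimate = [1.0] * len(observed)  # flat non-negative initial guess
    for _ in range(iterations):
        blurred = convolve_same(estimate, psf)
        ratio = [o / b if b > 1e-12 else 0.0 for o, b in zip(observed, blurred)]
        correction = convolve_same(ratio, psf_flipped)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

# A point source blurred by a symmetric 3-tap PSF...
psf = [0.25, 0.5, 0.25]
truth = [0.0, 0.0, 8.0, 0.0, 0.0]
observed = convolve_same(truth, psf)   # [0.0, 2.0, 4.0, 2.0, 0.0]
restored = richardson_lucy(observed, psf)
# ...is iteratively sharpened back towards the point source.
print([round(v, 2) for v in restored])
```

The multiplicative update preserves non-negativity, which is why Richardson-Lucy is popular for photon-counting microscopy data.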

Materials:

This protocol has been tested with the following configuration:

Protocol Steps:

  1. Start Fiji

  2. Load the input image into Fiji using “File → Open” or by drag & drop. The example data set is “HeLa_microtubules.tif”.

  3. Load Decon_richardsonLucyTV.py using “File → Open” or by drag & drop. The script opens within the script editor.

  4. Run the Decon_richardsonLucyTV.py script from the open script editor.

  5. A pop-up with variables will appear; the variables are explained further in the Critical Parameters section. Select “Okay”.

    The script will automatically open two windows: one that displays the deconvolved image stack and one that displays the point spread function across the image stack.

Sample Data:

The Decon_richardsonLucyTV.py script was designed to operate on any z-stack but is shown in this protocol operating on HeLa_microtubules.tif. The test data set is a 41-slice z-stack of microtubules stained with a monoclonal anti-α-Tubulin primary antibody and an Alexa Fluor 568 secondary antibody to visualize the mitotic spindle assembly of a HeLa cell in metaphase. Images were acquired with a Plan Apo 60× 1.4 NA oil objective and epifluorescence microscopy.

Support Protocol 1

Using ImageJ Ops Matching Feature for Image Inversion

Image inversion is a common procedure in segmentation workflows but is often computationally expensive. Inversion algorithms often produce pixel outputs by subtracting the input from the maximum value of the pixel type. Such an implementation struggles with signed pixel values, however, as this subtraction operation often causes overflow to occur. For this reason, ImageJ Ops contains two different image inversion Ops. One inversion algorithm optimizes over the set of pixel data types whose entire range can be represented within double precision math. A second algorithm is designed for data types that cannot be accurately represented with double precision math, gaining accuracy at the cost of performance. Given an image whose pixel type is not known beforehand, the ImageJ Ops matcher can intelligently determine the Op most suitable for the given arguments. This allows scripts to easily adapt to different input sets and increases reusability.
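The matching idea can be illustrated with a toy dispatcher in plain Python. The function names and the 2**53 cutoff below are our own illustrative choices, not Ops internals; only the overall strategy, routing by whether the pixel range is exactly representable in double precision, mirrors the protocol text:

```python
def invert_via_float(pixels, max_val):
    """Fast path: the pixel range fits exactly in a double, so ordinary
    floating-point subtraction is safe (the approach described for the
    smaller data types)."""
    return [max_val - float(p) for p in pixels]

def invert_exact(pixels, max_val):
    """Slow path: arbitrary-precision integer subtraction for types
    (e.g. unsigned 64-bit) whose range exceeds double precision."""
    return [max_val - p for p in pixels]

def invert(pixels, max_val):
    """Toy 'matcher': route to the specialized implementation based on
    whether every value up to max_val is exactly representable as a
    double (true up to 2**53)."""
    if max_val <= 2**53:
        return invert_via_float(pixels, max_val)
    return invert_exact(pixels, max_val)

# 8-bit image: the float path is exact.
print(invert([0, 128, 255], 255))              # [255.0, 127.0, 0.0]
# Unsigned 64-bit image: the float path would round; the exact path does not.
big = 2**64 - 1
print(invert([0, 1, big], big)[1] == big - 1)  # True
```

As in Ops, the caller invokes one conceptual operation (`invert`) and the dispatcher picks the implementation, so no conditional logic leaks into user scripts.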

Materials:

This protocol has been tested with the following configuration:

Protocol Steps:

  1. Start Fiji

  2. Load the input image into Fiji using “File → Open” or by drag & drop. The example data set is SUM_sp2_gag_still_KIP.tif.

  3. Load protocol_subprocedure_1.groovy using “File → Open” or by drag & drop. The script opens within the script editor.

  4. Run the protocol_subprocedure_1.groovy script from the open script editor.

    The script will automatically open two windows: one showing the original unsigned long type image inverted with an Ops function tailored for large data types, and one where the image has been converted to a smaller-bit-depth data type and inverted with a different Ops function tailored for smaller data types. Both are generated using the same call to Ops; Ops intelligently decides which function to use based on the input. Additionally, the Script Editor’s log shows which Ops function was called in each case.

Sample Data:

The sample script was designed to operate on any image, but is shown in this protocol running on SUM_sp2_gag_still_KIP.tif, which is a summation of three imaging channels. Here the data consist of time-lapse fluorescence microscopy of HeLa cells being infected by a modified, replication-deficient HIV-1 NL4-3 strain fluorescent reporter virus (Evans et al., 2018). Upon infection, the reporter virus integrates its viral genome into the host cell’s genome, transcribes viral mRNA, and translates viral proteins. The modified HIV-1 NL4-3 reporter virus encodes two fluorescent proteins, mCherry and mVenus. The HIV-1 nef gene was deleted and replaced with the mCherry fluorescent protein, tracking early viral gene expression. The mVenus fluorescent protein was fused to the gag open reading frame, enabling us to observe nascent HIV-1 assembly and budding at the cell periphery and to track late gene expression. These cells were imaged with a Plan Apo 20× 0.75 NA air objective and epifluorescence microscopy.

Support Protocol 2

Headless ImageJ Deconvolution

So far all the protocols presented require user interaction with the Fiji GUI (via the built-in scripting window running Jython and Groovy scripts) or the Jupyter Notebooks GUI. Herein we show how to call ImageJ completely headless directly from a command line, such that its image processing power can be leveraged by other large computing environments such as high performance computing clusters, where access to a GUI is either undesired or otherwise unavailable.

Materials:

This protocol has been tested with the following configuration:

Protocol Steps:

  1. Open the Decon_richardsonLucyTV_headless.py file in a text editor

  2. Change the path on line 11 to the path for the input file “HeLa_microtubules.tif”

  3. Change the path on line 47 to the path for the output file

  4. Open a terminal/cmd prompt and navigate to the Fiji install.

  5. Run the script by using the appropriate following command and changing the path input to the path for Decon_richardsonLucyTV_headless.py
    1. Windows:
      1. ImageJ-win64.exe --ij2 --headless --console --run "C:/path/to/script.py"
    2. Linux:
      1. ./ImageJ-linux64 --ij2 --headless --run "/path/to/script.py"
    3. MacOS:
      1. Contents/MacOS/ImageJ-macosx --ij2 --headless --run "/path/to/script.py"

    The script will automatically read in the file specified in the input file path, perform deconvolution on it, then save the deconvolved image to the specified output path.
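When driving headless runs from a pipeline, the per-OS commands from step 5 can be assembled programmatically. The helper below is hypothetical, not part of the protocol's files; it simply mirrors the launcher names listed above and is meant to be run from inside the Fiji installation directory:

```python
import platform

def headless_command(script_path, system=None):
    """Assemble the headless invocation from step 5 for the given OS.
    `script_path` must not contain spaces (see Troubleshooting for
    Support Protocol 2). --console is included only on Windows,
    matching the commands as listed in the protocol."""
    system = system or platform.system()
    if system == "Windows":
        return ["ImageJ-win64.exe", "--ij2", "--headless", "--console",
                "--run", script_path]
    if system == "Darwin":
        return ["Contents/MacOS/ImageJ-macosx", "--ij2", "--headless",
                "--run", script_path]
    return ["./ImageJ-linux64", "--ij2", "--headless", "--run", script_path]

cmd = headless_command("Decon_richardsonLucyTV_headless.py", system="Linux")
print(" ".join(cmd))
# ./ImageJ-linux64 --ij2 --headless --run Decon_richardsonLucyTV_headless.py
```

The returned list can be handed directly to `subprocess.run(cmd)` for batch execution on a cluster node.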

Sample Data:

The Decon_richardsonLucyTV_headless.py script was designed to operate on any z-stack, but is shown in this protocol operating on HeLa_microtubules.tif. The test data set is a 41-slice z-stack of microtubules stained with a monoclonal anti-α-Tubulin primary antibody and an Alexa Fluor 568 secondary antibody to visualize the mitotic spindle assembly of a HeLa cell in metaphase. Images were acquired with a Plan Apo 60× 1.4 NA oil objective and epifluorescence microscopy.

Commentary

Background Information

ImageJ offers multiple approaches to customizing image analysis workflows, notably via its scripting framework, which enables users to code processing steps in whichever programming language is most comfortable for them; options include Python, Jython, JavaScript, Ruby, Clojure, Groovy, R, and ImageJ’s own macro language, among others (Schindelin et al., 2012; Rueden et al., 2017). Scripting enables scientists to tailor their analysis to the parameters of their experiments, while providing reusable building blocks accessible to the community via ImageJ’s update site mechanism (Schindelin et al., 2015; Schroeder et al., 2021). All protocols demonstrated in this paper make use of ImageJ scripts and/or plugins in some form.

ImageJ’s structured design for plugins, together with its strict separation of graphical elements from algorithmic functionality, enables ImageJ commands and scripts to be used in a variety of contexts besides only the ImageJ user interface (Rueden et al., 2017; Schindelin et al., 2015; Schroeder et al., 2021). They can be invoked from other SciJava-compatible applications including KNIME Analytics Platform, CellProfiler, OMERO, and others, as well as from user scripts, including executing in headless mode from the command line, without any graphical user interface (Dietz et al., 2020; Kamentsky et al., 2011; Allan et al., 2012; Möller et al., 2016; Ouyang et al., 2019). This is useful in several scenarios, including: for batch analysis across large quantities of data; when combining ImageJ functionality with other systems via standard interprocess interoperability approaches (files and/or pipes); for use within modules of container-based workflow systems; and for distributed execution of analysis pipelines on server clusters (Apeer, n.d.; Krumnikl et al., 2018; Rubens et al., 2020). See Support Protocol 2 for an example of invoking ImageJ headless from the command line.

To further expand its capabilities, ImageJ has been extended with ImageJ Ops, an extensible framework for implementing algorithm plugins, paired with a collection of built-in plugins for common image analysis tasks (Rueden et al., 2017). The Ops framework enables programmers to create multiple versions of an op targeting different specialized inputs, e.g., convolution ops optimized for different kernel sizes. These ops are typically invoked from user scripts specifying the desired algorithm at a conceptual level (e.g., “convolve”), leaving selection of the most appropriate plugin to the framework, which determines the best match situationally based on the given inputs. This approach transparently realizes the extensible use of specialized code, without the need for explicit conditional case logic in either user scripts or within the op implementations themselves. In this way, software developers can improve the performance of routines by writing more specialized plugin implementations as appropriate, and extend the reach of ops to cover new scenarios such as large images, various image storage mechanisms, or new paradigms such as image data accessed from remote databases (i.e., cloud computing). To assist the community in tapping into the power of ImageJ Ops, this manuscript highlights its use as part of Support Protocol 1.

ImageJ is grounded in the Java programming language and ecosystem (Schneider et al., 2012). However, in many cases, scientists benefit from combining ImageJ-based tools with other software ecosystems (Schindelin et al., 2015; Schroeder et al., 2021). One very powerful ecosystem used in cross-domain scientific inquiry is the PyData software stack, written in the Python programming language, including NumPy, SciPy, scikit-image, and scikit-learn, among other libraries (Harris et al., 2020; Pedregosa et al., 2011; van der Walt et al., 2014; Virtanen et al., 2020). Another mature ecosystem for working with scientific images accessible from Python is the C++-based Insight Toolkit (ITK) popular in the medical imaging community (Yoo et al., 2002). To realize a more seamless blending of ImageJ across these and other technology suites, we have created PyImageJ, a software layer implemented in Python that facilitates the use of ImageJ from a Python environment. It contains logic for translating between Python and ImageJ data structures, enabling interchangeable use of NumPy arrays and ImageJ datasets, and image processing routines written for either environment to be combined across language boundaries. To display the power and flexibility of PyImageJ, we highlight it in Basic Protocol 1.

This cross-language, cross-ecosystem compatibility even enables the use of scientific notebooks, a popular paradigm for encapsulating the steps of data analysis protocols and workflows. One notebook software in common use in the data science community is Jupyter Notebook (Kluyver et al., 2016), which makes it easy to showcase workflows in a visual, interactive, and reproducible way: snippets of Groovy or other code are written in so-called notebook cells, which when executed produce numerical and/or visual output. Notebooks allow for experimentation and data mining, by executing and revising cells out of order to explore one’s data, as well as communication of completed workflows with others, by embedding the computation results into the final notebook, which can be made readily accessible online from a web browser (Perkel, 2018). In Basic Protocol 1 and Alternate Protocol 1, we show how to use ImageJ as part of a Jupyter notebook.

The need for improved, accessible, extensible, and open-source image processing has been primarily driven by the explosion of optical microscopy methods in the last 25 years. One such method, which is powerful but requires substantial processing to analyze the collected data, is fluorescence lifetime imaging microscopy (FLIM) (Lakner et al., 2017; O’Connor, 2012; Verveer & Hanley, 2009). Unfortunately, most current FLIM analysis frameworks are highly dependent on the underlying collection hardware and the associated third-party proprietary algorithms from the same manufacturer for data processing (Becker, 2005). Due to the proprietary nature of the algorithms used in such commercial software, they are opaque and typically non-extensible, meaning that users cannot build further analysis into the software nor experiment with the processing pipeline for, e.g., artifact correction. There is a strong need for FLIM processing frameworks that not only function across multiple hardware acquisition platforms but are also extensible and open source. Given the importance of this method, we present, as Basic Protocol 1, an open-source protocol for FLIM image processing independent of the underlying acquisition software that is extensible with the suite of image processing tools available in ImageJ.

Critical Parameters

Protocol 1

This protocol is centered around FLIM data and has been designed to calculate all basic critical parameters from the data. Of particular note is the time resolution, since a mismatch between the value used and the underlying data leads to incorrect lifetime fitting.
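The effect of a time-resolution mismatch can be seen with a toy two-bin lifetime estimate (not FLIMJ's fitting code): the fitted lifetime scales linearly with the assumed bin width, so a wrong bin width gives a proportionally wrong lifetime:

```python
import math

def rld_two_bin_tau(y0, y1, dt):
    """Rapid-lifetime-determination-style estimate from two adjacent
    time bins: tau = dt / ln(y0 / y1). Purely illustrative."""
    return dt / math.log(y0 / y1)

# Decay with true tau = 2.0 ns sampled at a true bin width of 0.1 ns:
y0, y1 = 1000.0, 1000.0 * math.exp(-0.1 / 2.0)
print(round(rld_two_bin_tau(y0, y1, 0.1), 3))  # 2.0 with the correct bin width
print(round(rld_two_bin_tau(y0, y1, 0.2), 3))  # 4.0: doubled bin width doubles tau
```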

Alternate Protocol 1

There is only one user-tunable critical parameter in the main fitting methods: the range of time bins to analyze. This alters the underlying data that the fitting method uses; in particular, it permits removal of data captured before a fluorescence event begins and after it is complete, thereby reducing the amount of data processed, which improves speed and reduces the amount of noise entering the processing. For further data reduction, two more parameters are adjustable. For the region of interest (ROI), the bounds of the ROI mask to be analyzed are defined in image pixels by coordinates within the image. For binning, the kernel used is adjustable to provide different weightings and sizes.

Protocol 2

Within this protocol, there are multiple experiment/microscope-derived inputs that depend on how the data set was collected. These parameters are critical for the correct operation of image deconvolution. NumericalAperture is the NA of the objective used to image the data set. Wavelength is the excitation wavelength used, in nm. RiImmersion is the refractive index of the immersion medium; generally this is oil (~1.5), water (~1.3), or air (~1.0). Similarly, RiSample is an estimate of the refractive index of the sample; for biological purposes this is estimated as the average of the cover slip (normally crown glass at ~1.5) and the sample (since most biological material is primarily water, ~1.3). Within a single field of view, xySpacing is the XY pixel spacing in nm, assuming equal pixel spacing in the X and Y dimensions, and zSpacing is the distance in nm between each Z-plane in the data set. Finally, one parameter affects the algorithm independently of the data collection: NumIterations, the number of iterations of RichardsonLucyTV deconvolution to run, which determines the bulk of how long the protocol takes to run.
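As a sanity check on these parameters, the Abbe resolution rule of thumb relates Wavelength, NumericalAperture, and xySpacing. This check is not performed by the protocol's script; it is a generic optics heuristic, and the spacing value below is illustrative:

```python
def nyquist_check(wavelength_nm, numerical_aperture, xy_spacing_nm):
    """Generic optics rule of thumb: the Abbe lateral resolution limit is
    lambda / (2 * NA), and Nyquist sampling asks for a pixel spacing of
    at most half that. Returns (resolution limit in nm, spacing is OK)."""
    resolution = wavelength_nm / (2.0 * numerical_aperture)
    return resolution, xy_spacing_nm <= resolution / 2.0

# Values in the spirit of Basic Protocol 2's dataset (1.4 NA oil objective,
# Alexa Fluor 568 channel); the 100 nm spacing is purely illustrative.
res, ok = nyquist_check(wavelength_nm=568.0, numerical_aperture=1.4,
                        xy_spacing_nm=100.0)
print(round(res, 1), ok)  # 202.9 True
```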

Support Protocol 1

Users should take note of the data types of the arguments provided during a matching call. To maximize efficiency, these types should be compared to those of each Op that is a potential match.

Support Protocol 2

This protocol is an extension of Protocol 2, so all the critical parameters from Protocol 2 are applicable here as well. Additionally, this protocol requires two file paths, “/path/to/image.tif” and “/path/to/output_image.tif”. Naming within the file paths has to be done carefully, since the parser does not accept spaces or an overarching drive directory such as “D:/”.

Troubleshooting

Protocol 1

The primary issues that arise from this protocol operating on the example data stem from incorrectly setting up the environment. Secondarily, errors will be thrown if the ImageJ initialization code block is re-run multiple times, since an instance of ImageJ is already running in the environment.

Additionally, if this protocol is run on a data set other than the example data, some modification of the data-reading step may be necessary to account for differences in how file formats store the collected data.

Alternate Protocol 1

Due to the nature of the extension, most issues arise from setting up the environment. In particular, importing the FLIMJ Ops in the notebook can run into classpath problems with dependencies. If this causes errors, there is a code block immediately below the Dependencies block that can be uncommented and run to fix the issue.

Additionally, if this protocol is run on a data set other than the example data, some modification of the data-reading and splitting steps may be necessary to account for differences in how file formats store the collected data.

Protocol 2

The main issues that can arise in this protocol center on mismatches between its critical parameters and the data on which it is run. If you are running this protocol on data other than the example data set, make sure the critical parameters are updated to match the data.

Support Protocol 1

Due to the strong typing of the Java language, most Ops matching errors are due to a mismatch between the argument types and the parameter types of the Op(s) of interest. Ensure that all arguments provided in the Op call are assignable to the parameter types of the Op.

Another set of issues can arise when attempting to reuse an Op on a different set of arguments than those used in the matching request. The matching framework uses the arguments it is given to return the optimal Op for those arguments; other sets of arguments may not be accepted by the returned Op. To operate on arguments of different types, it is advised to either create a matching request on the greatest common supertype or to create two separate matching requests.

Support Protocol 2

The primary issues that arise within this protocol are file-path naming issues. In particular, the command line/terminal will throw errors on spaces in path names, and the overarching directory, i.e. “C:\”, cannot be used within the script’s input and output image file paths, since the parser errors on the character “:”. Finally, if run on data other than the example data set, the script is subject to the same parameter-to-data mismatches described in the Protocol 2 troubleshooting.
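A small pre-flight check, sketched below for illustration (it is not part of the published script), can catch both path problems before a headless run is launched:

```python
def check_script_path(path):
    """Return a list of problems that would break the headless script's
    path handling: spaces anywhere in the path, or a Windows drive-root
    prefix such as "C:\\" (the parser errors on the ":" character)."""
    problems = []
    if " " in path:
        problems.append("path contains spaces")
    if len(path) >= 2 and path[1] == ":":
        problems.append('path starts with a drive root such as "C:\\"')
    return problems
```

Running this check on both the input and output paths before invoking the script turns a cryptic parser error into an actionable message.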

Understanding Results

General

Overall, all protocols presented are designed to be clear examples of how readily the image processing power of ImageJ can be integrated into repeatable workflows and data processing for biological applications.

Protocol 1

This protocol exports lifetime-fitted FLIM images. The core element to note is the ability to visualize the lifetime components via the three component images: offset, amplitude, and lifetime estimate (see Figure 1). These images can provide quantitative insights into cellular micro-environments.
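The three component images correspond to the terms of the single-exponential decay model fitted at each pixel. A minimal sketch of that model in standard FLIM notation (illustrative, not code from the protocol):

```python
import math

def single_exp_decay(t, offset, amplitude, tau):
    """Single-exponential FLIM model fitted per pixel:
    I(t) = offset + amplitude * exp(-t / tau),
    where tau is the estimated fluorescence lifetime."""
    return offset + amplitude * math.exp(-t / tau)
```

At t = tau the decaying term falls to 1/e of the amplitude, which is what gives the fitted lifetime its physical meaning as a time constant of the fluorescence decay.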

Figure 1.

In Basic Protocol 1: Using PyImageJ for FLIM Data Processing, (A) the summation of all collected photons before processing, and the Levenberg-Marquardt-fitted NADH lifetime data of unstained epithelial cells from human mammary gland tissue with fibrocystic disease, shown as (B) offset, (C) amplitude, and (D) estimated lifetime. NAD(P)H auto-fluorescence signals were collected with 740 nm excitation and 460/80 nm emission. The hardware used was a Nikon Apo 40x water immersion objective with a 1.25 NA and a Becker & Hickl Single Photon Counting system.

Alternate Protocol 1

There are five lifetime fitting methods and two common data reduction methods presented in this protocol. At their core, all lifetime fitting methods attempt to take advantage of the fluorescence photons’ times of flight to estimate the underlying physiological parameters and use those to generate image contrast (see Figure 2). The two common data reduction methods allow for processing of larger data sets.
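As one concrete illustration of data reduction, spatial binning pools photon counts from neighboring pixels, trading spatial resolution for better fit statistics and a smaller data set. A minimal 2x2 binning sketch (illustrative only; the notebook’s own reduction steps may differ):

```python
def bin2x2(image):
    """Sum photon counts over non-overlapping 2x2 blocks of a 2D image
    (given as a list of equal-length rows); odd trailing rows/columns are
    dropped. Binning trades spatial resolution for better fit statistics."""
    rows, cols = len(image) // 2 * 2, len(image[0]) // 2 * 2
    return [
        [
            image[r][c] + image[r][c + 1] + image[r + 1][c] + image[r + 1][c + 1]
            for c in range(0, cols, 2)
        ]
        for r in range(0, rows, 2)
    ]
```

Each binned pixel accumulates four pixels’ worth of photons, which can make the per-pixel decay fit substantially more robust in low-signal data.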

Figure 2.

In Alternate Protocol 1: Groovy FLIMJ in Jupyter Notebooks, (Top) a time bin from the raw FLIM intensity data and (Bottom) the Levenberg-Marquardt-fitted, pseudocolored NADH lifetimes of unstained epithelial cells from human mammary gland tissue with fibrocystic disease. NAD(P)H auto-fluorescence signals were collected with 740 nm excitation and 460/80 nm emission. The hardware used was a Nikon Apo 40x water immersion objective with a 1.25 NA and a Becker & Hickl Single Photon Counting system.

Protocol 2

The resultant image should be clearer than the input image (see Figure 3), since image deconvolution is designed to remove blur caused by the inherent point spread function (PSF) of the imaging system, as characterized by the critical parameters that primarily determine the PSF. This improves image quality at minimal cost in user time and experimental overhead.
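The principle behind the deconvolution step can be illustrated with the classic Richardson-Lucy update in one dimension. The sketch below is a toy illustration only; the RichardsonLucyTV Op used in the protocol additionally applies total-variation regularization and operates on 3D data:

```python
def convolve(signal, kernel):
    """Same-size 1D convolution with zero padding (kernel assumed symmetric,
    so convolution and correlation coincide)."""
    half = len(kernel) // 2
    return [
        sum(
            kernel[j] * signal[i + half - j]
            for j in range(len(kernel))
            if 0 <= i + half - j < len(signal)
        )
        for i in range(len(signal))
    ]

def richardson_lucy(observed, psf, iterations):
    """Multiplicative Richardson-Lucy update: each iteration re-blurs the
    current estimate, compares it against the observed data, and applies
    the resulting correction factor pixel-wise."""
    estimate = list(observed)  # a common starting point
    for _ in range(iterations):
        blurred = convolve(estimate, psf)
        ratio = [o / b if b > 1e-12 else 0.0 for o, b in zip(observed, blurred)]
        correction = convolve(ratio, psf)  # correlation; psf is symmetric
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate
```

Blurring a point source with a small symmetric PSF and running a few dozen iterations concentrates the estimate back toward the point, which is exactly the sharpening effect visible in Figure 3, and it also shows why NumIterations dominates the run time: each iteration costs two full convolutions.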

Figure 3.

In Basic Protocol 2: Using ImageJ Ops for Deconvolution and Support Protocol 2: Headless ImageJ Deconvolution, the difference in clarity between (A) raw data and (B) deconvolved data of a z-stack of microtubules stained with a monoclonal anti-α-Tubulin primary antibody and an Alexa Fluor 568 secondary antibody to visualize the mitotic spindle assembly of a HeLa cell in metaphase, imaged with a Plan Apo 60x oil objective and epifluorescence.

Support Protocol 1

The Ops returned by each inversion request are of particular importance, because a different Op is returned for each request (see Figure 4). Given the different argument types, the juxtaposition of the two returns highlights the ease of optimization made possible by the matching framework. This feature is present throughout the Ops library wherever it is useful, rather than being restricted to the inversion function.
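Conceptually, the matcher behaves like type-based dispatch: the most specific handler available for the given argument types wins. A loose Python analogy (not ImageJ code; the pixel-type handlers are invented for illustration):

```python
from functools import singledispatch

@singledispatch
def invert_pixel(value):
    # fallback when no handler matches, analogous to a failed Ops match
    raise TypeError(f"no invert op matches {type(value).__name__}")

@invert_pixel.register
def _(value: int):
    # analogous to the Op matched for an unsigned integer pixel type:
    # reflect the value within an 8-bit range
    return 255 - value

@invert_pixel.register
def _(value: bool):
    # analogous to the Op matched for a bit type: a simple logical NOT
    return not value
```

Just as the bool handler wins over the more general int handler for bit-like values here, the Ops matcher selects the most specific, and often fastest, Op for the argument types it is given.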

Figure 4.

In Support Protocol 1: Using ImageJ Ops Matching Feature for Image Inversion, (A) the original image, (B) the inverted unsigned long type image, and (C) the inverted bit type image, with (D) the different Ops used enumerated. The sample imaged is a HeLa cell infected by a vesicular stomatitis virus G (VSV-G)-pseudotyped single-round fluorescent HIV-1 NL4-3 virus. Upon infection, the virus produces mVenus-tagged HIV Gag proteins and free mCherry. These cells were imaged with a Plan Apo 20x air objective and epifluorescence.

Support Protocol 2

As in Protocol 2, the resultant image should be clearer than the input image (see Figure 3); the core element of note is that the processing ran in a completely headless manner, meaning no interaction with the Fiji GUI was necessary.

Time Considerations

Protocol 1

Setting up the environment to run this protocol takes about fifteen minutes, but once complete it does not need to be repeated. Stepping through the protocol is straightforward and takes around five minutes, though the time needed depends on the size of the dataset used.

Alternate Protocol 1

If the environment hasn’t already been set up for Protocol 1, then setting up the environment de novo to run this protocol takes about fifteen minutes. Processing the example dataset takes about ten minutes to step through each component in the Jupyter Notebook interface. When running this protocol on other datasets the time required will scale with the size of the underlying data set.

Protocol 2

Processing of the example dataset takes up to five minutes depending on the computational power of the hardware and the number of iterations used in the protocol. When running this protocol on other datasets the time required will scale with the size of the image and the number of iterations of deconvolution used.

Support Protocol 1

Processing of the example dataset takes only a few seconds (an image conversion and then two inversions) in Fiji. When running this protocol on other datasets the time required will scale with the size of the image, which only makes it more effective as an example of the speed-ups that can be gained from the Ops matching feature.

Support Protocol 2

Processing of the example dataset with this protocol takes just as long in headless mode as it does with the Fiji interface and the time will scale with the size of the data set and the number of iterations of deconvolution used. However, the headless ImageJ mode does allow for cloud computing so that a user can farm out processing of large data sets to more powerful computers and servers than their local machine, thereby decreasing the time it takes to process the data set.

Future Directions

With the readily extensible framework that ImageJ provides, new and improved image processing methods and protocols are constantly being added by users and shared via ImageJ’s update site mechanism, as well as in public repositories in source code and notebook forms. Furthermore, ImageJ has already been extended to be accessible from many scripting languages and environments, and will continue to expand to support more languages used by scientists, such as R and MATLAB. We are particularly excited to continue improving the PyImageJ library, so that users may seamlessly utilize Python-based routines from ImageJ scripts running within the ImageJ user interface, and to further develop ImageJ’s web-oriented capabilities, enabling ImageJ functionality to be combined with JavaScript-based tools such as ImJoy (Ouyang et al., 2019) and expanding its utility to more platforms such as mobile devices. We also look forward to ImageJ continuing to grow in its support for larger and more complex scientific image datasets, driven by the underlying ImgLib2 (Pietzsch et al., 2012) and BigDataViewer (Pietzsch et al., 2015) libraries.

Acknowledgements

We would like to acknowledge funding from the Chan Zuckerberg Initiative (CZI) and NIH P41-GM135019-01. We would also like to thank all ImageJ contributors across the community for helping to create the vibrant ecosystem we have today. We are especially grateful for continuing fruitful collaborations with the groups of Florian Jug and Pavel Tomancak and principal software architect Tobias Pietzsch of the Max Planck Institute for Molecular Cell Biology and Genetics (MPI-CBG) and Center for Systems Biology Dresden (CSBD), as well as the groups of Stephan Saalfeld and Stephan Preibisch at Janelia Research Campus, all of whom have been instrumental in driving forward the state of the art of ImageJ and Fiji, as well as maintaining the ImgLib2 library which provides the foundation of the ImageJ image data model. We are also ever thankful to Wayne Rasband for his development of ImageJ 1.x as well as its continuing maintenance and support.

Literature Cited

  1. Allan C, Burel J-M, Moore J, Blackburn C, Linkert M, Loynton S, MacDonald D, Moore WJ, Neves C, Patterson A, Porter M, Tarkowska A, Loranger B, Avondo J, Lagerstedt I, Lianas L, Leo S, Hands K, Hay RT, … Swedlow JR (2012). OMERO: Flexible, model-driven data management for experimental biology. Nature Methods, 9(3), 245–253. 10.1038/nmeth.1896
  2. Apeer. (n.d.). Apeer. Retrieved March 27, 2021, from https://www.apeer.com/home
  3. Becker W (2005). Advanced Time-Correlated Single Photon Counting Techniques. Springer Science & Business Media.
  4. Dietz C, Rueden CT, Helfrich S, Dobson ETA, Horn M, Eglinger J, Evans ELI, McLean DT, Novitskaya T, Ricke WA, Sherer NM, Zijlstra A, Berthold MR, & Eliceiri KW (2020). Integration of the ImageJ Ecosystem in KNIME Analytics Platform. Frontiers in Computer Science, 2. 10.3389/fcomp.2020.00008
  5. Evans EL, Becker JT, Fricke SL, Patel K, & Sherer NM (2018). HIV-1 Vif’s Capacity To Manipulate the Cell Cycle Is Species Specific. Journal of Virology, 92(7), e02102–17. 10.1128/JVI.02102-17
  6. Gao D, Barber PR, Chacko JV, Sagar MAK, Rueden CT, Grislis AR, Hiner MC, & Eliceiri KW (2020). FLIMJ: An open-source ImageJ toolkit for fluorescence lifetime image data analysis. PLOS ONE, 15(12), e0238327. 10.1371/journal.pone.0238327
  7. Harris CR, Millman KJ, van der Walt SJ, Gommers R, Virtanen P, Cournapeau D, Wieser E, Taylor J, Berg S, Smith NJ, Kern R, Picus M, Hoyer S, van Kerkwijk MH, Brett M, Haldane A, del Río JF, Wiebe M, Peterson P, … Oliphant TE (2020). Array programming with NumPy. Nature, 585(7825), 357–362. 10.1038/s41586-020-2649-2
  8. Hiner MC, Rueden CT, & Eliceiri KW (2017). ImageJ-MATLAB: A bidirectional framework for scientific image analysis interoperability. Bioinformatics, 33(4), 629–630. 10.1093/bioinformatics/btw681
  9. Kamentsky L, Jones TR, Fraser A, Bray M-A, Logan DJ, Madden KL, Ljosa V, Rueden C, Eliceiri KW, & Carpenter AE (2011). Improved structure, function and compatibility for CellProfiler: Modular high-throughput image analysis software. Bioinformatics, 27(8), 1179–1180. 10.1093/bioinformatics/btr095
  10. Kluyver T, Ragan-Kelley B, Pérez F, Granger BE, Bussonnier M, Frederic J, Kelley K, Hamrick JB, Grout J, Corlay S, & et al. (2016). Jupyter Notebooks—a publishing format for reproducible computational workflows. (Vol. 2016).
  11. Krumnikl M, Bainar P, Klímová J, Kožusznik J, Moravec P, Svatoň V, & Tomančák P (2018). SciJava Interface for Parallel Execution in the ImageJ Ecosystem. In Saeed K & Homenda W (Eds.), Computer Information Systems and Industrial Management (pp. 288–299). Springer International Publishing. 10.1007/978-3-319-99954-8_25
  12. Lakner PH, Monaghan MG, Möller Y, Olayioye MA, & Schenke-Layland K (2017). Applying phasor approach analysis of multiphoton FLIM measurements to probe the metabolic activity of three-dimensional in vitro cell culture models. Scientific Reports, 7(1), 42730. 10.1038/srep42730
  13. Möller B, Glaß M, Misiak D, & Posch S (2016). MiToBo—A Toolbox for Image Processing and Analysis. Journal of Open Research Software, 4(1), e17. 10.5334/jors.103
  14. O’Connor D (2012). Time-correlated single photon counting. Academic Press.
  15. Ouyang W, Mueller F, Hjelmare M, Lundberg E, & Zimmer C (2019). ImJoy: An open-source computational platform for the deep learning era. Nature Methods, 16(12), 1199–1200. 10.1038/s41592-019-0627-0
  16. Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V, Vanderplas J, Passos A, Cournapeau D, Brucher M, Perrot M, & Duchesnay É (2011). Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research. https://hal.inria.fr/hal-00650905
  17. Perkel JM (2018). Why Jupyter is data scientists’ computational notebook of choice. Nature, 563(7732), 145–147. 10.1038/d41586-018-07196-1
  18. Pietzsch T, Preibisch S, Tomančák P, & Saalfeld S (2012). ImgLib2—Generic image processing in Java. Bioinformatics, 28(22), 3009–3011. 10.1093/bioinformatics/bts543
  19. Pietzsch T, Saalfeld S, Preibisch S, & Tomancak P (2015). BigDataViewer: Visualization and processing for large image data sets. Nature Methods, 12(6), 481–483. 10.1038/nmeth.3392
  20. Rubens U, Mormont R, Paavolainen L, Bäcker V, Pavie B, Scholz LA, Michiels G, Maška M, Ünay D, Ball G, Hoyoux R, Vandaele R, Golani O, Stanciu SG, Sladoje N, Paul-Gilloteaux P, Marée R, & Tosi S (2020). BIAFLOWS: A Collaborative Framework to Reproducibly Deploy and Benchmark Bioimage Analysis Workflows. Patterns, 1(3). 10.1016/j.patter.2020.100040
  21. Rueden CT, Schindelin J, Hiner MC, DeZonia BE, Walter AE, Arena ET, & Eliceiri KW (2017). ImageJ2: ImageJ for the next generation of scientific image data. BMC Bioinformatics, 18(1), 529. 10.1186/s12859-017-1934-z
  22. Schindelin J, Arganda-Carreras I, Frise E, Kaynig V, Longair M, Pietzsch T, Preibisch S, Rueden C, Saalfeld S, Schmid B, Tinevez J-Y, White DJ, Hartenstein V, Eliceiri K, Tomancak P, & Cardona A (2012). Fiji: An open-source platform for biological-image analysis. Nature Methods, 9(7), 676–682. 10.1038/nmeth.2019
  23. Schindelin J, Rueden CT, Hiner MC, & Eliceiri KW (2015). The ImageJ ecosystem: An open platform for biomedical image analysis. Molecular Reproduction and Development, 82(7–8), 518–529. 10.1002/mrd.22489
  24. Schneider CA, Rasband WS, & Eliceiri KW (2012). NIH Image to ImageJ: 25 years of image analysis. Nature Methods, 9(7), 671–675. 10.1038/nmeth.2089
  25. Schroeder AB, Dobson ETA, Rueden CT, Tomancak P, Jug F, & Eliceiri KW (2021). The ImageJ ecosystem: Open-source software for image visualization, processing, and analysis. Protein Science, 30(1), 234–249. 10.1002/pro.3993
  26. van der Walt S, Schönberger JL, Nunez-Iglesias J, Boulogne F, Warner JD, Yager N, Gouillart E, & Yu T (2014). scikit-image: Image processing in Python. PeerJ, 2, e453. 10.7717/peerj.453
  27. Verveer PJ, & Hanley QS (2009). Chapter 2 Frequency domain FLIM theory, instrumentation, and data analysis. In Laboratory Techniques in Biochemistry and Molecular Biology (Vol. 33, pp. 59–94). Elsevier. 10.1016/S0075-7535(08)00002-8
  28. Virtanen P, Gommers R, Oliphant TE, Haberland M, Reddy T, Cournapeau D, Burovski E, Peterson P, Weckesser W, Bright J, van der Walt SJ, Brett M, Wilson J, Millman KJ, Mayorov N, Nelson ARJ, Jones E, Kern R, Larson E, … van Mulbregt P (2020). SciPy 1.0: Fundamental algorithms for scientific computing in Python. Nature Methods, 17(3), 261–272. 10.1038/s41592-019-0686-2
  29. Yoo TS, Ackerman MJ, Lorensen WE, Schroeder W, Chalana V, Aylward S, Metaxas D, & Whitaker R (2002). Engineering and Algorithm Design for an Image Processing API: A Technical Report on ITK - the Insight Toolkit. Medicine Meets Virtual Reality 02/10, 586–592. 10.3233/978-1-60750-929-5-586

Internet Resources

  1. https://forum.image.sc/ The Scientific Community Image Forum.
  2. https://openbioimageanalysis.org/ The official homepage for The Center for Open Bioimage Analysis (COBA).
  3. https://imagej.net/ The official wiki homepage for the ImageJ Ecosystem, including ImageJ and Fiji.
  4. https://imagej.net/Fiji/Downloads The wiki page for downloading Fiji.
  5. https://imagej.net/FLIMJ The official wiki homepage for the FLIMJ Plugin.
  6. https://imagej.net/ImageJ_Ops The official wiki homepage for ImageJ Ops.
  7. https://scijava.org/ The official homepage for the SciJava collaboration project.
  8. https://www.anaconda.com/ The official homepage for the Anaconda Data Science Platform.
  9. https://jupyter.org/ The official homepage of Project Jupyter, including Jupyter Notebooks.
  10. http://beakerx.com/ The official homepage of BeakerX, a collection of kernels for Jupyter Notebooks.
  11. https://github.com/uw-loci/ScriptingExampleForFLIMJandOps The GitHub repository for all scripts shown in this paper.
  12. 10.5281/zenodo.4642146 The Zenodo repository for all the data files shown in this paper.
