Summary
Here, we present a step-by-step protocol for the implementation of deep-learning-enhanced light-field microscopy enabling 3D imaging of instantaneous biological processes. We first provide the instructions to build a light-field microscope (LFM) capable of capturing optically encoded dynamic signals. Then, we detail the data processing and model training of a view-channel-depth (VCD) neural network, which enables instant 3D image reconstruction from a single 2D light-field snapshot. Finally, we describe VCD-LFM imaging of several model organisms and demonstrate image-based quantitative studies on neural activities and cardio-hemodynamics.
For complete details on the use and execution of this protocol, please refer to Wang et al. (2021).1
Subject areas: Biotechnology and Bioengineering, Computer Sciences, High-Throughput Screening, Microscopy, Model Organisms, Neuroscience
Highlights
• Steps to build an LFM encoding 3D biological dynamics into 2D light-field snapshots
• A VCD-Net for rapid 2D-to-3D light-field decoding at high resolution
• Instructions for light-field imaging of moving C. elegans and a beating zebrafish heart
• Quantitative analysis of behavior-related neural activities and cardio-hemodynamics
Publisher’s note: Undertaking any experimental protocol requires adherence to local institutional guidelines for laboratory safety and ethics.
Before you begin
Installation of software for data processing and network implementation
Timing: ∼1 h
1. Download the VCD-Net code.
a. Click the URL (Zenodo: https://doi.org/10.5281/zenodo.7502869) and download ‘VCD-Net-main.zip’.
b. Extract this file; you will get two folders and several description files.
Note: The folder named ‘vcdnet’ contains the neural network implementation and inference code, which are used to reconstruct 3D stacks from light-field (LF) images. The second folder, named ‘datapre’, includes the data-preprocessing code, which is used for training-pair generation.
2. Install the required software and extra packages.
Note: MATLAB, Python, and additional packages are required to run the data-processing code and the VCD network.
a. Download MATLAB from http://www.mathworks.com/products/matlab and install it.
CRITICAL: The MATLAB version should be R2016a or later. To run the data-preprocessing code, the Parallel Computing Toolbox and the Image Processing Toolbox must be selected when installing MATLAB.
b. Download Anaconda from https://docs.anaconda.com/anaconda/install/ and install it.
Note: Installation guides for different systems (Windows, macOS, and Linux) can be found at the abovementioned website.
c. Create a new environment for running VCD-Net and install the required packages.
i. Run the Anaconda Prompt as an administrator.
ii. Change the default path to the VCD-Net code folder, for example by typing the following command:
> cd /d {your path}/VCD-Net/vcdnet/
iii. Install the required packages for network implementation.
Note: We provide a ‘.yml’ file that lists the name and version of every required package for quick installation. Type one command in the Anaconda Prompt:
> conda env create -f vcd_environment.yml
CRITICAL: We recommend using a GPU with more than 8 GB of graphics memory, such as a 2080 Ti. If your device does not have a GPU compatible with deep learning, install the CPU-only TensorFlow version with the following command:
> conda install tensorflow==1.14.0
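After the environment is created, a quick sanity check helps confirm that the intended TensorFlow build was installed. This is a minimal sketch (assuming the environment created from ‘vcd_environment.yml’ is named ‘vcd-net’, as used later in step 14); run it inside that environment:

```python
# Minimal check of the TensorFlow 1.x install (run inside the activated environment).
import tensorflow as tf

print(tf.__version__)              # expected: 1.14.0
print(tf.test.is_gpu_available())  # True if a CUDA-compatible GPU is usable (TF 1.x API)
```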
d. Test the ‘vcdnet’ code by referring to the Jupyter notebook named ‘predict_demo.ipynb’ in the unzipped folder.
i. Download the VCD example data (see key resources table).
ii. Move the ‘data/’ and ‘checkpoint/’ folders under ‘vcdnet/’.
iii. Run the abovementioned Jupyter notebook step by step; it will navigate users through the pipeline. Troubleshooting 1, 2.
Acquisition of training datasets using a confocal microscope and anesthetized biological samples
Timing: 10 h
This section describes the procedures for acquiring training datasets on static biological samples using laser-scanning confocal microscopy, which permits high-resolution imaging in three dimensions at the cost of speed and phototoxicity. Typically, we acquire 3D datasets from anesthetized samples for network training.
3. Image neurons in static C. elegans using a confocal microscope (FV3000, 40×/0.95 NA objective, Olympus) (see key resources table).
a. Place the worms on agarose gel (1% low-melting agarose solution) (see key resources table).
b. Anesthetize the L4-stage C. elegans by ∼1-min levamisole treatment (2.5 mM levamisole in M9 buffer) (see materials and equipment) for subsequent confocal imaging.
c. Use the confocal microscope to acquire high-resolution 3D stacks (20 samples are recommended for network training).
Note: The C. elegans GCaMP6 reporter is excited by the 488-nm laser at 5% of the maximum laser power and detected at 500–540 nm. The sampling speed is set to 2 μs/pixel. For each worm, we image a volume of 318 × 318 × 31 μm³ with a voxel size of 0.311 × 0.311 × 1 μm³, using a 40×/0.95 NA objective (see key resources table) at a 1-μm z-step, yielding a 3D image stack containing 1024 × 1024 × 31 voxels. To ensure high-quality datasets for network training, it is important that single neurons are clearly distinguishable in the recorded confocal image stacks.
4. Image the zebrafish heart using a confocal microscope (SP8-STED/FLIM/FCS, Leica) (see key resources table).
a. Deeply anesthetize the fish embryos with ∼300 μL tricaine (3-aminobenzoic acid ethyl ester, 0.2 mg/mL, Sigma-Aldrich, MO) (see key resources table) until the hearts are immobilized.
b. Mount the embryos on a cover glass using 1% low-melting agarose.
c. Image the static blood cells and nuclei of the anesthetized fish heart using a 20×/0.75 NA objective (see key resources table) on the confocal microscope.
Note: The nuclei and blood cells of the zebrafish heart are imaged using GFP (EM: 510–550 nm; EX: 460–495 nm) and RFP (EM: 575–625 nm; EX: 530–550 nm) filters, respectively. The laser is set to 5% of maximum power to ensure an SNR sufficiently high for network training. For each zebrafish heart, we image a volume of 581 × 581 × 150 μm³ with a voxel size of 0.568 × 0.568 × 2 μm³, using a 20×/0.75 NA objective at a 2-μm z-step, yielding a 3D image stack containing 1024 × 1024 × 75 voxels. More than 10 fish are recommended for training.
Key resources table
| REAGENT or RESOURCE | SOURCE | IDENTIFIER |
|---|---|---|
| Chemicals, peptides, and recombinant proteins | ||
| Levamisole | Sigma-Aldrich | PHR1798; CAS 16595-80-5 |
| Tricaine | Sigma-Aldrich | E10521; CAS 886-86-2 |
| Low-melting agarose | BBI Life Sciences | A600015-0025; CAS 9012-36-6 |
| Sub-diffraction bead | BaseLine | 7-3-0100 |
| Deposited data | ||
| VCD example data | Wang et al.1 | ZENODO: https://doi.org/10.5281/zenodo.7568424 |
| Experimental models: Organisms/strains | ||
| C.elegans: Strain ZM9128 hpIs595[Pacr-2(s)::GCaMP6(f)::wCherry] | Huazhong University of Science and Technology | N/A |
| Zebrafish: Tg(gata1a: dsRed) | David Geffen School of Medicine at UCLA | ZDB-TGCONSTRCT-070117-164 |
| Zebrafish: Tg(cmlc2:gfp) | David Geffen School of Medicine at UCLA | ZDB-ALT-050809-20 |
| Software and algorithms | ||
| VCD-Net | Wang et al.1 | ZENODO: https://doi.org/10.5281/zenodo.7502869 |
| Retrospective gating algorithm | Mickoleit et al.2 | N/A |
| Light field 3D reconstruction software package | Prevedel et al.3 | https://static-content.springer.com/esm/art%3A10.1038%2Fnmeth.2964/MediaObjects/41592_2014_BFnmeth2964_MOESM189_ESM.zip |
| HCImage Live 4.3.5.1 | Hamamatsu Photonics | https://hcimage.com/hcimage-overview/hcimage-live |
| LABVIEW 2020 | National Instruments Corporation | https://www.ni.com/en-us/shop/labview.html |
| MATLAB | MathWorks | https://www.mathworks.com/products/matlab.html |
| ImageJ | Schneider et al.4 | https://imagej.nih.gov/ij/ |
| LFDisplay | The Board of Trustees of The Leland Stanford Junior University | http://graphics.stanford.edu/software/LFDisplay/ |
| Other | ||
| Confocal microscope | Olympus | FV3000 |
| Leica | SP8-STED/FLIM/FCS | |
| Commercial fluorescence microscope | Olympus | BX51 |
| Objective | Olympus | LUMPlanFLN ×40/NA 0.8 water |
| Nikon | Fluor ×20/0.5 water | |
| Olympus | UPLSAPO 40X2 ×40/NA 0.95 | |
| Leica | HC PL APO CS2 ×20/NA 0.75 Oil | |
| Relay system | Nikon | AF 60 mm 2.8D |
| Relay lens | Thorlabs | AC508-080-A |
| Camera | Hamamatsu | Flash 4.0 V2 |
| Microlens array | OKO Optics | APO-Q-P150-F3.5 (633) |
| Objective scanner | Physik Instrumente (PI) | P-725.4CD |
| Mirror | Thorlabs | PF20-03-P01 |
| Optomechanical components | Thorlabs | KCB2C |
| | | LCP01T |
| | | AC508-080 |
| | | CH1060 |
| | | RS6P |
| Microfluidic chamber | Zhu et al.5 | N/A |
Materials and equipment
M9 buffer
| Reagent | Final concentration | Amount |
|---|---|---|
| Na2HPO4 | 6 g/L | 6 g |
| NaCl | 5 g/L | 5 g |
| KH2PO4 | 3 g/L | 3 g |
| MgSO4·7H2O | 0.25 g/L | 0.25 g |
| dd H2O | N/A | 1 L |
| Total | N/A | 1 L |
Can be stored at 4°C.
Anesthetic for C.elegans
| Reagent | Final concentration | Amount |
|---|---|---|
| Levamisole hydrochloride | 0.602 g/L | 6.02 mg |
| M9 buffer | N/A | 10 mL |
| Total | N/A | 10 mL |
Prepare M9 buffer in advance and store at 4°C.
Step-by-step method details
Implementation of light-field microscopy
Timing: ∼1–4 days
This section describes the construction of a light-field microscope (Figure 1) by augmenting a commercial epifluorescence microscope (Olympus BX51) (see key resources table). The basic principle of LFM is that light arriving at different angles can be re-allocated to different pixels of the camera sensor by a microlens array (MLA) (see key resources table), thereby permitting the recording of both the position and the angular information of 3D signals in a single 2D light-field snapshot. The critical steps of building a light-field microscope are therefore (1) inserting the MLA at the native image plane (NIP) of the microscope to modulate the light of different angles, and (2) correctly positioning the camera sensor at the back focal plane of the MLA to collect the encoded light. Because the focal length of the MLA is usually very short (e.g., 3,500 μm), its back focal plane cannot be placed directly on the camera sensor; a pair of 1:1 relay lenses (RL3, RL4) is therefore required to relay the back focal plane of the MLA onto the camera sensor.
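As a quick back-of-the-envelope check of this geometry, the sketch below derives the sample-plane pixel size and the number of sensor pixels behind each lenslet from values quoted in this protocol (150-μm MLA pitch; a 6.5-μm camera pixel, implied by the 0.1625-μm sample-plane pixel at 40× in step 15). The pixels-per-lenslet figure is derived arithmetic, not a value stated in the protocol:

```python
# Back-of-the-envelope checks for the LFM geometry described above.
mla_pitch_um = 150.0    # microlens pitch (step 7)
cam_pixel_um = 6.5      # sensor pixel implied by 0.1625 um at the sample with 40x
magnification = 40.0

sample_pixel_um = cam_pixel_um / magnification    # 0.1625 um at the sample plane
pixels_per_lenslet = mla_pitch_um / cam_pixel_um  # ~23 sensor pixels per microlens
fov_um = 2048 * sample_pixel_um                   # ~332.8 um lateral FOV (full chip)

print(f"{sample_pixel_um:.4f} um/pixel, {pixels_per_lenslet:.1f} px/lenslet, {fov_um:.1f} um FOV")
```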
1. Mount a water-immersion objective (OBJ1) on the microscope to collect the epifluorescence signals from samples.
2. Place a mirror (M1) to reflect the detection light from the upright microscope, then position a pair of 1:1 relay lenses (RL1, RL2) to relay the reflected light.
3. Before inserting the MLA, build a second pair of 1:1 relay lenses (RL3, RL4) and the camera to relay the image of the NIP, as shown in Figure 2A.
4. Focus the image by turning the focus ring of the microscope until the image in the ocular is as sharp as possible.
Note: The sample is now at the focus of the objective, so you can find the conjugate focal plane with the camera.
5. Adjust the camera along the optical axis until the image on the camera is as sharp as that in the ocular.
6. Insert the MLA at the relayed NIP after the first pair of relay lenses RL1 and RL2 (Figure 2B) and double-check the precise positioning of the MLA.
Note: Figures 2D and 2E show the images obtained when the MLA has or hasn’t been placed precisely at the NIP. When the MLA is at the NIP, the microlenses appear adjacently arranged without any overlap or gap.
CRITICAL: Rotate the camera or MLA so that the microlens grid is aligned with the camera pixels as well as possible. Troubleshooting 3.
7. Check the size of each microlens in the image and make sure the microlens pitch matches the theoretical size (e.g., 150 μm in our setup). Troubleshooting 4.
8. Move RL3 backward to relay the back focal plane of the MLA (e.g., 3.5 mm behind the MLA) instead of the native image plane (Figure 2C).
9. Check the light-field PSF by imaging sub-diffraction fluorescent beads (distributed in a piece of 0.8% low-melting agarose hydrogel) (see key resources table). Make sure that RL3 has been placed at the precise position to relay the back focal plane.
10. As shown in Figure 3A, finely adjust the position of RL3 until the experimental PSF looks as sharp as possible and its distribution at different depths closely matches the theoretical PSFs generated by the forward-projection code in the VCD-Net package (see key resources table). Troubleshooting 5.
Optional: Add another light-field channel for dual-color imaging.
a. Insert a dichroic mirror (DM2) after RL4 to split the detection light.
b. Add an extra camera (CAM2) to simultaneously capture dual-color light-field signals.
c. Adjust the camera’s position until the signals of the two light-field channels are registered.
d. Insert different bandpass filters (BF1, BF2) before the cameras to filter the dual-channel signals, respectively.
Figure 1.
Epi-illumination light-field microscope based on a simple retrofit of an upright fluorescence microscope
(A) The schematic drawing of the epi-illumination mode. Objective: OBJ; dichroic mirrors: DM1, DM2; tube lens: TL; mirrors: M1, M2; relay lenses: RL1–RL4; bandpass filters: BF1, BF2; microlens array: MLA; cameras: CAM1, CAM2.
(B) The off-the-shelf optical and mechanical elements of the LFM are labeled with black and blue annotations, respectively (camera should be labeled in black).
(C) The overview and top view of the epi-illumination light-field imaging prototype.
Figure 2.
The schematic drawing of steps 3–9
(A) Relay the NIP (indicated with dash line) onto camera sensor plane in steps 3–5.
(B) Insert the MLA at the NIP in steps 6 and 7.
(C) Move RL3 to relay the back focal plane of the MLA onto the camera sensor plane in steps 8 and 9.
(D and E) The reference images showing how to visually judge whether the MLA has been correctly inserted at the relayed NIP in step 6.
Figure 3.
Characterization of VCD-LFM system
(A) Comparison between theoretical and experimental light-field PSFs. Scale bar, 5 μm.
(B) The reconstruction results of fluorescent beads. Scale bar, 10 μm.
Pre-processing of training datasets
Timing: 2–3 h
11. Compute the light-field PSF with the data-preprocessing code, setting the optical parameters of your setup (see Table 1 for detailed descriptions of the parameters related to PSF computation).
Note: After PSF computation, users will get the light-field PSF data as ‘.mat’ files stored in the ‘PSFMatrix’ folder. This file is used to convert 3D image stacks (e.g., TIFF images) into 2D LF images.
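As a quick way to inspect the computed PSF file, the following minimal sketch loads it in Python (the filename and the variable names inside the ‘.mat’ file depend on your parameter choices, so they are placeholders here; files saved in MATLAB’s v7.3 format would need h5py instead of SciPy):

```python
from scipy.io import loadmat  # assumption: SciPy is available

# Load a computed light-field PSF and list its variables (hypothetical filename).
psf = loadmat("PSFMatrix/psf_example.mat")
print([k for k in psf if not k.startswith("__")])
```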
12. Generate the training dataset for the VCD network.
a. Open ‘Main.m’ in the folder ‘VCD-Net/datapre/Code’.
b. Set the parameters in the ‘Rectify and Augment HR data’ panel.
c. Rectify the raw images and augment the rectified images by cropping, rotating, and flipping for training.
Note: This program will automatically create a folder (‘VCDNetPre/Data/Substacks’) to save the cropped ‘Substacks’ from the selected original stacks. Troubleshooting 6.
d. Generate the light-field projection.
i. Set the projection parameters and click ‘Projection’. A file-selection window will pop up.
ii. Choose the light-field PSF calculated in step 11. The light-field projections will be saved in the folder (‘./Data/LFforward’).
CRITICAL: Only cropped stacks may exist in the abovementioned folder (‘./Data/Substacks’). Troubleshooting 7.
e. Crop the ‘Substacks’ and light-field projections into training pairs.
i. Click ‘Crop Test’. The sum value and variance of each cropped block from several sub-stacks will be displayed in the MATLAB command window, and these blocks will be saved at ‘./Data/TrainingPair/WF’.
ii. Check these saved images to determine thresholds that decide whether a patch should be kept. Then set the parameters in Section C and click ‘Crop’. The cropped patches will be saved in a folder (‘./Data/TrainingPair’).
CRITICAL: Make sure the depth of the patches matches the ‘StackDepth’ set in step 12a, or choose to discard some slices when the depth is less than ‘StackDepth’ (an error occurs when it is larger than ‘StackDepth’).
f. Check the training dataset of the VCD network (see the patch-selection sketch after Table 2).
Note: After cropping the high-resolution stacks and their corresponding light-field images, users should make a quick check of the datasets, for example by dragging all the LF images into ImageJ to find images with no signal. Also make sure that the lateral sizes and the numbers of training pairs are identical.
Table 1.
Detailed descriptions of parameters related to light-field PSF computation
| Parameters | Description |
|---|---|
| Magnification | Magnification of the objective lens |
| NA | Numerical aperture of the objective lens |
| Fml | Focal distance of the microlens array |
| ML Pitch | Pitch of microlens array |
| n | Refractive index of the immersion material |
| wavelength | Wavelength of emission light |
| OSR | Spatial oversampling ratio for computing PSF |
| z-spacing | The spacing between two adjacent planes |
| z-max | The distance between the lowest plane and the focal plane |
| z-min | The distance between the highest plane and the focal plane |
| Nnum | The number of virtual pixels behind one microlens |
Figure 4.
Screenshot of light field projection module
Table 2.
Detailed descriptions of parameters related to light-field forward projection
| Parameters | Description |
|---|---|
| Section A: Rectify and augment high-resolution 3D images | |
| Stack depth | The depth (number of z-slices) of each cropped sub-stack used for training |
| Axial Overlap | The overlap between adjacent sub-stacks along the axial direction |
| Axial sampling | The axial sampling interval (z-step) of the input stacks |
| dx & Nnum | The lateral sampling size of the input stacks and the number of virtual pixels behind one microlens |
| Brightness | The brightness adjustment applied to the rectified images |
| Rotation Step | The angular step used when rotating images for augmentation |
| Rectify image | Whether to rectify the raw image stacks |
| Rotate | Whether to augment the data by rotation |
| Complement Insufficient Stacks | Whether to automatically pad stacks shallower than ‘Stack depth’ (see troubleshooting 6) |
| Section B: Forward projection | |
| Brightness Adjust | A parameter to change the intensity of the light-field projection images |
| Poisson Noise | Whether to add Poisson noise to the synthetic light-field image |
| Poisson Noise sigma | The sigma of the Gaussian noise added to the synthetic light-field image |
| Use GPU | Enable GPU processing |
| Section C: Crop training pairs | |
| Patch size | The size of each block to generate; the three numbers represent height, width, and depth, respectively |
| SumThreshold | A threshold deciding whether to discard cropped patches: if the sum of pixel values of a block is below this threshold, the block is not saved |
| VarThreshold | Like ‘SumThreshold’, but based on the variance of a block |
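To make the ‘SumThreshold’/‘VarThreshold’ rule in Section C concrete, here is a minimal Python sketch of the selection logic (not the actual MATLAB implementation; the threshold values are illustrative):

```python
import numpy as np

def keep_patch(patch, sum_threshold, var_threshold):
    """Section C selection rule: a patch is kept only if both its pixel-value
    sum and its variance exceed the chosen thresholds."""
    return patch.sum() >= sum_threshold and patch.var() >= var_threshold

# Illustrative check on a random 176 x 176 x 31 patch (sizes quoted in this protocol).
patch = np.random.rand(176, 176, 31).astype(np.float32)
print(keep_patch(patch, sum_threshold=1e4, var_threshold=1e-3))
```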
Training a VCD model
Timing: 2–5 h
13. Configure the training settings in ‘config.py’ (e.g., the paths of the training pairs, the model ‘label’, and the model settings), as illustrated in Figures 5A and 5B.
14. Train the VCD model.
a. Run ‘train.py’ to start model training, as illustrated in Figure 5C. For example, type the following commands in the command prompt of Anaconda. Troubleshooting 8.
> activate vcd-net
> python train.py
b. Check the performance of the training model.
i. Check the loss printed in the console panel to determine whether the model is properly optimized. Troubleshooting 9.
ii. Open the folder ‘./sample/{label}’, which contains the intermediate outputs of the network, and compare the quality of the reconstructed stacks with the corresponding ground truths. Troubleshooting 10.
Optional: If the quality of the network inference is not satisfactory, users can stop the training procedure early, then adjust the data-processing function (e.g., normalization), the hyper-parameters (e.g., learning rate, batch size, etc.), and the model structure (e.g., non-linear activation function, the number of layers, base blocks, features, etc.) in ‘config.py’ to enhance the fitting ability of the model, allowing the training loss to converge further (a hedged illustration of such edits is sketched below).
Note: This step can be time-consuming, typically 4–8 h for 1,500–3,000 training pairs with a lateral size of 176 × 176 (∼1.4 GB–2.8 GB), varying with the size of the data and the performance of the computer’s processors (CPU or GPU) and memory.
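The exact variable names in ‘config.py’ depend on the code version; the sketch below only illustrates the kinds of edits meant in the Optional step above. Among the names shown, only ‘label’ and ‘act_function’ are confirmed elsewhere in this protocol; the rest are hypothetical placeholders:

```python
# Illustrative training-setting overrides ('label' and 'act_function' are
# confirmed config.py names in this protocol; the others are placeholders).
overrides = {
    "label": "worm_neurons_vcd",  # model tag; must match at inference (step 18)
    "act_function": "LeakyReLU",  # e.g., ReLU -> LeakyReLU (see troubleshooting 10)
    "learning_rate": 1e-4,        # placeholder: reduce if the loss diverges
    "batch_size": 1,              # placeholder: reduce if GPU memory overflows
}
for name, value in overrides.items():
    print(f"{name} = {value!r}")
```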
Figure 5.
Initiation of VCD model training
(A) The VCD network package is available from the official code source (see key resources table).
(B) Setting the parameters of VCD training (basic information of the training patches and model settings).
(C) Starting training via Anaconda Prompt.
Light-field imaging of dynamic biological samples
Timing: 1–5 min
15. Image freely moving C. elegans using the light-field microscope.
a. Load the awake L4-stage worms into a microfluidic chamber.
Note: To allow the worms to move freely within the field of view (FOV) of a 40× objective (LUMPlanFLN 40×/0.8 NA, Olympus), the chamber is about the same size as the FOV, namely about 300 × 300 × 50 μm³.
b. Mount the microfluidic chamber on the stage of the microscope.
c. Move the sample stage of the microscope to position the microfluidic chamber at the center of the FOV.
CRITICAL: The focus ring of the microscope needs to be adjusted before imaging so that the imaging plane is at the desired plane, which in most cases is the center plane of the whole sample.
Note: When the imaging depth of the light-field system is much larger than the thickness of the sample (e.g., C. elegans is about 30 μm thick, and the imaging depth of our system with the 40×/0.8 NA objective is more than 60 μm), the imaging plane can be focused on the top of the sample to acquire more angular information.
d. To allow removal of motion noise, record the calcium (GCaMP6(f)) and red fluorescent protein (RFP) signals simultaneously at a volume rate of 100 Hz through the dual-channel light-field path.
Note: The live C. elegans labeled with GFP and RFP are excited by a mercury lamp at 10%–30% of the maximum power and imaged by two cameras equipped with the corresponding emission filters (510–550 nm for GFP, 575–625 nm for RFP). The exposure time is set to 2 ms to avoid motion blur. With the 40× objective, the pixel size of the captured 2048 × 2048 LF images is 0.1625 μm, yielding a FOV of 332.8 μm × 332.8 μm.
16. Image fast dynamics in the beating zebrafish heart using the light-field microscope.
a. Slightly anesthetize the zebrafish larvae with ∼100 μL tricaine (3-aminobenzoic acid ethyl ester, 0.1 mg/mL, Sigma-Aldrich, MO) while keeping the heart beating.
b. Mount the fish on a cover glass or in a culture dish (e.g., 500 μL) using 1% low-melting agarose.
Note: Dilute the agarose with 0.1 mg/mL tricaine to prevent the zebrafish larvae from reviving.
c. Move the sample with the sample stage of the microscope so that the signals of interest are positioned within the FOV of the detection objective (Fluor 20×/0.5 NA water, Olympus).
CRITICAL: Before cardiac imaging, the focal plane should be focused at the center of the heart by turning the focus ring of the microscope.
d. Record light-field videos of the RBCs, myocyte nuclei, or myocardium of the beating zebrafish heart.
Note: The exposure time is set to 5 ms, enabling 200 Hz high-speed volumetric imaging. The frame size is 768 × 768, corresponding to a lateral FOV of around 250 × 250 μm². About 450 frames should be recorded, covering 4–5 cardiac cycles (see the arithmetic sketch below).
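The recording length quoted in the note can be sanity-checked with simple arithmetic; in this sketch, the ∼2 Hz larval heart rate is an assumption rather than a value from this protocol:

```python
# Sanity check of the cardiac recording length described above.
frame_rate_hz = 200   # 5-ms exposure -> 200 light-field frames per second
n_frames = 450        # recommended number of frames
heart_rate_hz = 2.0   # assumed beat rate of a zebrafish larva

duration_s = n_frames / frame_rate_hz
print(f"{duration_s:.2f} s recorded, ~{duration_s * heart_rate_hz:.1f} cardiac cycles")
```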
Model forward inference
Timing: typically 0.05 s per light-field image patch (varies according to the size of the data, as well as the performance of the computer’s processors (CPU or GPU) and memory)
17. Preprocess the experimental data (Figure 6A).
a. Get the rectification parameters for subsequent light-field image calibration.
Note: To accurately extract the different views from one light-field image, we use LFDisplay (see key resources table) to manually identify the center position of each lenslet. The rectification settings are then exported as a text file. More detailed guidance can be found in the LFDisplay manual.
b. Open the ‘Image rectification’ section in the light-field 3D reconstruction software package (see key resources table) and input the text file from step 17a.
c. Click ‘Run Rectification’; the results will be saved at ‘./Data/02_Rectified’.
d. Subtract the average background value from the rectified light-field image (a minimal sketch follows below).
Note: The pixel value of the even background can be estimated by selecting an area of background (usually located at a corner of the image) in ImageJ.
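The background subtraction in step 17d can also be scripted; this is a minimal sketch (assuming the rectified images are TIFF files readable by the tifffile package; the filename and the corner-patch size are illustrative):

```python
import numpy as np
import tifffile  # assumption: rectified LF images are stored as TIFF

lf = tifffile.imread("Data/02_Rectified/lf_0001.tif").astype(np.float32)  # hypothetical name
bg = lf[:64, :64].mean()              # estimate the even background from a corner patch
lf_bgsub = np.clip(lf - bg, 0, None)  # subtract and clip negative values to zero
tifffile.imwrite("lf_0001_bgsub.tif", lf_bgsub)
```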
18. Load the trained model and implement network inference (Figure 6B).
a. Change the following inference settings in ‘config.py’:
LF2D_path: the location of the light-field images.
Saving_path: the location of the reconstructed stacks (if this folder doesn’t exist, it will be created automatically).
b. Run the following command in the command prompt of Anaconda:
> python eval.py
Figure 6.
Data preprocessing of experimental LF and VCD model inference
(A) Rectification and background subtraction of raw LF measurement.
(B) Parameters of VCD model inference and model inference via Anaconda Prompt.
The reconstruction results will be saved in the ‘Saving_path’ or the default folder ‘./results’.
Note: For convenience, an ImageJ plugin that implements the inference stage of VCD-Net is contained in the VCD-Net package (located at ‘VCD-Net/vcdnet/ImageJ’). We provide guidance on how to install this plugin and load the model at https://github.com/feilab-hust/VCD-Net/tree/main/vcdnet/ImageJ. When the inference is completed, the reconstruction results will be opened in ImageJ, which is convenient for subsequent analysis.
CRITICAL: Make sure that ‘label’ in ‘config.py’ is the same as the trained model’s name at the inference stage. Troubleshooting 11.
Data analysis
Timing: 1–3 h
19. Quantitatively analyze the neuronal activity and behavior of moving C. elegans.
a. Map the calcium dynamics based on the reconstruction results of VCD-Net.
i. Perform semi-automated tracking of the intensity fluctuation of each individual neuron using the TrackMate Fiji plugin.6 Troubleshooting 12.
ii. Average all the pixels within the ROI of each detected neuron to generate the neuron’s signal intensity.
Note: The calculated intensities of GCaMP and RFP are denoted Fg and Fr, respectively.
iii. Calculate the real calcium dynamics by measuring (F − F0)/F0 (see the sketch below).
Note: F = Fg/Fr is the ratio of the GCaMP fluorescence Fg to the RFP fluorescence Fr, and F0 is the neuron-specific baseline, taken as the average of the lowest 100 values of F.
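The ratiometric calculation in step 19a-iii can be expressed compactly; below is a minimal sketch that follows the definitions given above (the synthetic traces are purely illustrative):

```python
import numpy as np

def dff(fg, fr, n_baseline=100):
    """Ratiometric calcium trace: F = Fg/Fr, with the baseline F0 taken as
    the mean of the lowest `n_baseline` values of F, and dF/F = (F - F0)/F0."""
    f = np.asarray(fg, dtype=float) / np.asarray(fr, dtype=float)
    f0 = np.sort(f)[:n_baseline].mean()
    return (f - f0) / f0

# Illustrative traces: 6,000 time points (1 min at 100 Hz, as in this protocol).
t = np.arange(6000)
fg = 1.0 + 0.3 * np.exp(-((t - 3000) / 200.0) ** 2)  # synthetic GCaMP transient
fr = np.ones_like(fg)                                # synthetic, motion-free RFP reference
print(dff(fg, fr).max())  # peak dF/F of the synthetic transient
```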
b. Analyze the behavior of moving C. elegans based on the reconstruction results of VCD-Net.
i. Highlight the weak worm outline by applying histogram equalization to the MIPs of the reconstructed volumes.
ii. Use a standard U-Net7 to automatically segment the shape of the worm body.
CRITICAL: First manually annotate the worm body to build a training dataset.
Note: The number of base filters and the depth of the U-Net are 32 and 4, respectively.
iii. Compute the edge of the worm body from the segmented images using a Canny filter.
iv. Calculate the Euclidean distance transform inside the worm body on the edge image; the ridge of the distance transform indicates the center line of the worm (see the sketch after this list).
v. Calculate the changing curvatures and motion velocities of the worm based on the segmented center lines at different time points.
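Steps 19b-iii and 19b-iv map naturally onto standard image-processing routines; the sketch below uses SciPy and scikit-image as stand-ins (the toy mask is illustrative, and skeletonization is used here as a convenient approximation of the ridge of the distance transform):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.feature import canny          # assumption: scikit-image is available
from skimage.morphology import skeletonize

# `mask` stands in for a binary U-Net segmentation of the worm body.
mask = np.zeros((256, 256), dtype=bool)
mask[100:120, 40:220] = True               # toy worm-shaped region

edges = canny(mask.astype(float))          # body outline (Canny filter, step 19b-iii)
dist = distance_transform_edt(mask)        # Euclidean distance to the body edge
centerline = skeletonize(mask)             # ~ridge of the distance map (center line)
print(dist.max(), centerline.sum())
```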
20. Quantitatively analyze the velocity map of RBCs and the volume-based ejection fraction in the beating zebrafish heart.
a. Analyze the velocity map of the RBCs by tracking the reconstructed RBCs across consecutive volumes during the cardiac cycle.
b. Analyze the volume-based ejection fraction of the myocardium during the heartbeat.
i. Segment the volume of the reconstructed myocardium using Amira (v6.0.1).
Note: Use the blow tool (in the segmentation section) to semi-automatically segment the inner wall of the ventricle at each slice. Calculate the volume of the ventricle based on the slice thickness and the segmented areas of all slices.
ii. Calculate the volume change ratio of the ventricle as (V − ESV)/EDV (see the sketch below).
Note: V is the volume of the ventricle, and ESV and EDV represent the ventricular volumes at the end of systole and diastole of the heartbeat, respectively.
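The volume change ratio in step 20b-ii is a one-line formula; here is a minimal sketch with arbitrary illustrative volumes (at end-diastole, V = EDV, so the ratio equals the conventional ejection fraction (EDV − ESV)/EDV):

```python
def volume_change_ratio(v, esv, edv):
    """Ventricular volume change ratio as defined above: (V - ESV) / EDV."""
    return (v - esv) / edv

# Illustrative segmented volumes over one cardiac cycle (arbitrary units).
volumes = [1.00, 0.85, 0.62, 0.55, 0.70, 0.95]
esv, edv = min(volumes), max(volumes)
ratios = [volume_change_ratio(v, esv, edv) for v in volumes]
print(ratios)  # at V = EDV this equals the ejection fraction
```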
Expected outcomes
The performance of the VCD-LFM can be preliminarily demonstrated by measuring the PSF of the raw light-field images and examining the reconstruction results. Figure 3A shows example images of the experimental light-field PSFs at different depths, which should be as close as possible to the theoretical PSFs. The ideal reconstruction result of the beads is shown in Figure 3B, which is verified to be correctly localized throughout the entire volume through comparison with the 3D wide-field imaging result of the same volume. To demonstrate the strength of VCD-LFM imaging of biological dynamics, we show the light-field image and the reconstruction result of neuronal signals in a moving C. elegans (Figure 7) and of the flowing red blood cells (RBCs) and cardiomyocyte nuclei of a beating zebrafish heart (Figure 8). The neuronal calcium signaling of the moving C. elegans was captured at an acquisition rate of 100 Hz and reconstructed at 13 volumes s−1, and the reconstructed neurons (shown in Figure 7A) can be clearly distinguished in three dimensions. Based on the 4D worm movie at single-cell resolution, each neuron is tracked using the TrackMate Fiji plugin,6 the four-dimensional spatiotemporal patterns of calcium activity over a 1-min observation are extracted, and the correlated worm behaviors, including curvatures and velocity, are calculated by applying automatic segmentation of the worm body contours (Figures 7B–7D). The RBCs of the zebrafish (shown in Figure 8) are captured at a volumetric imaging rate of 200 Hz, and the synchronized reconstruction results of the RBCs and myocyte nuclei are clearly visualized at single-cell resolution (Figures 8D and 8E). These results enable the tracking of fast-flowing RBCs, and the velocity maps during a systole of the reconstructed heartbeat are further shown in Figures 8B and 8C.
Figure 7.
The in toto mapping of neuronal activities in a moving C. elegans using VCD-LFM
(A) 4D visualization (space + time) of the reconstructed volumes of C.elegans. Scale bar, 40 μm.
(B) Schematics of the worm with identified motor neurons labeled.
(C) The tracking results of neurons with time-coded traces during 150 ms and 500 ms, respectively.
(D) The Ca2+ activity of 8 identified motor neurons during a 1-min observation of the moving worm.
Figure 8.
3D imaging of blood flow inside a beating zebrafish heart using dual-channel VCD-LFM
(A) 4D visualization (space + spectrum) of the heart in a transient moment. Arrows indicate the direction of blood flow. A: atrium; V: ventricle.
(B) The tracking results of 19 single RBCs throughout the cardiac cycle. A static heart has been outlined for reference.
(C) The velocity map computed from two consecutive volumes of RBCs during systole. A static bisected heart is shown for reference of cardiac geometry.
(D and E) The 5D (space + time + spectrum) bisected heartbeat along sectioning plane indicated in (A) during the diastole and systole in one cardiac cycle. Scale bar, 30 μm.
Limitations
As a deep-learning-based approach, VCD-Net still requires the pre-acquisition of a considerable amount of high-resolution label data for network training. Since the reconstruction results are strongly related to the quality of the label datasets, the network cannot generalize well when the input signals are very different from the training datasets. For example, the reconstruction quality will be compromised when a network trained on cell nuclei is applied to the inference of continuous signals of the myocardium. In this case, transfer learning based on a small amount of continuous data (∼20% of the original training data) is recommended to leverage the knowledge already learned by the previously trained network. It should also be noted that, since the signals within an entire volume are projected onto a single 2D snapshot, the 3D reconstruction quality of LFM will be limited when the signals are highly dense. This problem is especially severe in conventional light-field deconvolution (LFD) but exists in VCD as well.
Troubleshooting
Problem 1
It takes a long time to initiate model training/inference (step 2).
Potential solution
Make sure CUDA’s version is compatible with Tensorflow 1.x. For Nvidia Ampere GPUs, choose Nvidia Tensorflow (https://github.com/fo40225/tensorflow-windows-wheel). For earlier GPUs, please refer to Tensorflow official document (https://www.tensorflow.org/install/gpu#hardware_requirements).
Problem 2
Some functions in VCD-Net are not compatible with TensorFlow 2 (step 2).
Potential solution
This is because VCD-Net and its extra packages are built on TensorFlow 1, and TF 2 is fundamentally different from TF 1. Users can refer to the official migration guide (https://tensorflow.google.cn/guide/migrate).
Problem 3
The captured image looks more like Figure 2E rather than Figure 2D (step 6).
Potential solution
The MLA needs to be adjusted along the optical axis until the image looks the same as Figure 2D.
Problem 4
The pitch size of microlens in image is not the same as theoretical size (step 7).
Potential solution
This is because the magnification of the second relay is not 1:1. In this case, finely adjust the distance between the MLA and the first relay lens, and the distance between the second relay lens and the camera, until the magnification is 1:1 while keeping the image shown on the camera as sharp as that shown in the eyepiece.
Problem 5
The experimental light-field PSFs are not the same as the theoretical PSFs (step 10).
Potential solution
This is because the back focal plane of the MLA is not projected onto the camera. Adjust RL4 along the optical axis until the ideal light-field PSF is obtained.
Problem 6
No cropped images in the ‘Substacks’ folder (step 12c).
Potential solution
This is because the depth of the raw image stacks is smaller than the ‘Stack Depth’ set in the panel. Enabling ‘Complement Insufficient Stacks’ automatically complements the insufficient stacks to the ‘Stack Depth’.
Problem 7
No light-field projection in the ‘LFforward’ folder (step 12d).
Potential solution
Recheck whether the depth of the light-field PSF equals the ‘Stack Depth’.
Problem 8
An error occurs when executing ‘train.py’ while establishing the network (step 14a).
Potential solution
Check training dataset and make sure the lateral size of LF image is a multiple of ‘Nnum’ in ‘config.py’.
Problem 9
The VCD model does not converge (step 14b).
Potential solution
There are three possible solutions: (1) the input data are too complex for the ‘function’ mode of VCD-Net; use the ‘structure’ mode instead. (2) The hyper-parameter settings are improper; reduce the ‘learning rate’ or ‘batch size’. (3) There is too much noise in the training dataset; check the training data and make sure the input is proper.
Problem 10
The outputs of the network have no signal (almost all pixel values are zero) (step 14b).
Potential solution
Change the type of activation function used in the current VCD model (e.g., ReLU → LeakyReLU). Users can easily set this type in ‘config.py’ before model training (namely, change the value of ‘config.act_function’ at line 22 in ‘config.py’).
Problem 11
An error occurs when executing ‘eval.py’ (step 18).
Potential solution
Check whether the LF image’s lateral size can be divided evenly by ‘Nnum’.
Problem 12
Automatic tracking of some neurons failed due to fast movement of the neurons (step 19a).
Potential solution
The missing detections and tracking mistakes can be manually corrected in the TrackMate Fiji plugin.
Resource availability
Lead contact
Further information and requests for resources and code should be directed to, and will be fulfilled by, the corresponding author, Peng Fei (feipeng@hust.edu.cn).
Materials availability
This study did not generate any reagents.
Acknowledgments
We thank Jiahao Sun and Dongyu Li for their discussions and comments on the manuscript. This work was supported by the National Key R&D Program of China (2022YFC3401100) and the National Natural Science Foundation of China (T2225014, 61860206009).
Author contributions
L.Z. performed research, analyzed data, and contributed to writing the manuscript; C.Y. modified codes and contributed to writing the manuscript; P.F. oversaw the project and wrote the paper.
Declaration of interests
The authors declare no competing interests.
Contributor Information
Lanxin Zhu, Email: zhulanxin1@hust.edu.cn.
Peng Fei, Email: feipeng@hust.edu.cn.
Data and code availability
Data: Example data for VCD implementation have been deposited to Zenodo: https://doi.org/10.5281/zenodo.7568424.
Code: The source code of VCD has been deposited to Zenodo: https://doi.org/10.5281/zenodo.7502869.
References
- 1. Wang Z., Zhu L., Zhang H., Li G., Yi C., Li Y., Yang Y., Ding Y., Zhen M., Gao S., et al. Real-time volumetric reconstruction of biological dynamics with light-field microscopy and deep learning. Nat. Methods. 2021;18:551–556. doi: 10.1038/s41592-021-01058-x.
- 2. Mickoleit M., Schmid B., Weber M., Fahrbach F.O., Hombach S., Reischauer S., Huisken J. High-resolution reconstruction of the beating zebrafish heart. Nat. Methods. 2014;11:919–922. doi: 10.1038/nmeth.3037.
- 3. Prevedel R., Yoon Y.-G., Hoffmann M., Pak N., Wetzstein G., Kato S., Schrödel T., Raskar R., Zimmer M., Boyden E.S., Vaziri A. Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy. Nat. Methods. 2014;11:727–730. doi: 10.1038/nmeth.2964.
- 4. Schneider C.A., Rasband W.S., Eliceiri K.W. NIH Image to ImageJ: 25 years of image analysis. Nat. Methods. 2012;9:671–675. doi: 10.1038/nmeth.2089.
- 5. Zhu T., Zhu L., Li Y., Chen X., He M., Li G., Zhang H., Gao S., Fei P. High-speed large-scale 4D activities mapping of moving C. elegans by deep-learning-enabled light-field microscopy on a chip. Sens. Actuat. B Chem. 2021;348:130638. doi: 10.1016/j.snb.2021.130638.
- 6. Tinevez J.-Y., Perry N., Schindelin J., Hoopes G.M., Reynolds G.D., Laplantine E., Bednarek S.Y., Shorte S.L., Eliceiri K.W. TrackMate: an open and extensible platform for single-particle tracking. Methods. 2017;115:80–90. doi: 10.1016/j.ymeth.2016.09.016.
- 7. Weigert M., Schmidt U., Boothe T., Müller A., Dibrov A., Jain A., Wilhelm B., Schmidt D., Broaddus C., Culley S., et al. Content-aware image restoration: pushing the limits of fluorescence microscopy. Nat. Methods. 2018;15:1090–1097. doi: 10.1038/s41592-018-0216-7.
- 8. Ramachandran P., Varoquaux G. Mayavi: 3D visualization of scientific data. Comput. Sci. Eng. 2011;13:40–51.
