Author manuscript; available in PMC: 2025 Dec 19.
Published before final editing as: IEEE Trans Radiat Plasma Med Sci. 2025 Oct 13:10.1109/TRPMS.2025.3619872. doi: 10.1109/TRPMS.2025.3619872

YRT-PET: An Open-Source GPU-accelerated Image Reconstruction Engine for Positron Emission Tomography

Yassir Najmaoui 1, Yanis Chemli 2, Maxime Toussaint 3, Yoann Petibon 4, Baptiste Marty 5, Kathryn Fontaine 6, Jean-Dominique Gallezot 7, Gašper Razdevšek 8, Matic Orehar 9, Maeva Dhaynaut 10, Nicolas Guehl 11, Rok Dolenec 12, Rok Pestotnik 13, Keith Johnson 14, Jinsong Ouyang 15, Marc Normandin 16, Marc-André Tétrault 17, Roger Lecomte 18, Georges El Fakhri 19, Thibault Marin 20
PMCID: PMC12714321  NIHMSID: NIHMS2117254  PMID: 41424471

Abstract

Image reconstruction for positron emission tomography (PET) requires an accurate model of the PET scanner geometry and degrading factors to produce high-quality and clinically meaningful images. It is typically implemented by scanner manufacturers, with proprietary software designed specifically for each scanner. This limits the ability to perform direct comparisons between scanners or to develop advanced image reconstruction algorithms. Open-source image reconstruction software can offer an alternative to manufacturer implementations, allowing more control and portability. Several existing software packages offer a wide range of features and interfaces, but there is still a need for an engine that simultaneously offers reusable code, fast implementation and convenient interfaces for interoperability and extensibility.

In this work, we introduce YRT-PET (Yale Reconstruction Toolkit for Positron Emission Tomography), an open-source toolkit for PET image reconstruction that aims for flexibility, reproducibility, speed, and interoperability with existing research software. The toolkit is implemented in C++ with CUDA-enabled GPU acceleration, relies on a plugin system to facilitate use with multiple scanners, and offers Python bindings to enable the development of advanced algorithms. It includes support for list-mode and histogram data formats, multiple PET projectors, incorporation of time-of-flight information, event-by-event rigid motion correction, and point-spread function modeling. It can incorporate correction factors such as normalization, randoms and scatter, obtained from scanner-specific plugins or provided by the user. The toolkit also includes an experimental module for scatter estimation without time-of-flight.

To evaluate the capabilities of the software, two different scanners in four different contexts were tested: dynamic imaging, motion correction, deep image prior, and reconstruction for a limited-angle scanner geometry with time-of-flight. Comparisons with existing tools demonstrated good agreement in image quality and the effectiveness of the correction methods. The proposed software toolkit offers high versatility and potential for research, including the development of novel reconstruction algorithms and new PET scanner systems.

Index Terms—: positron emission tomography (PET), image reconstruction, software, graphics processing unit (GPU), C++, Python

I. Introduction

POSITRON emission tomography (PET) has become an essential tool for the development and evaluation of imaging biomarkers in neuroscience, cardiology, oncology and other fields. Image reconstruction, which requires accurate modeling of the system, is typically implemented and provided by the manufacturer, but the data format, the scanner geometry and the implementation details are usually proprietary. However, using custom image reconstruction engines can be advantageous when more control over the image reconstruction process is desired, whether to perform systematic comparisons between scanners, make use of state-of-the-art reconstruction algorithms, or develop new scanners. Nevertheless, the lack of an established standard for raw PET data (until recent efforts [1]), differences in the availability and implementation of certain features, and the need for scanner-specific correction methods often hinder the dissemination of image reconstruction engines. More specifically, the handling of time-of-flight (TOF), depth-of-interaction (DOI), normalization and randoms estimation typically varies significantly from scanner to scanner. Additionally, motion correction is often not available in manufacturer reconstruction engines, but may be a critical correction for quantitative PET imaging [2]. Motion information may be collected from a variety of sources (e.g., external motion tracking devices [3], [4], [5], simultaneous PET/MR [6], [7], [8], or the data itself [9]) and may exploit different assumptions (e.g., rigid motion typically assumed in the brain, affine or deformable in other organs such as the liver or heart), which affect the corresponding motion correction algorithms (e.g., event-by-event correction [3], [10] or image deformation in the forward model [11], [12]). Therefore, the implementation of a reconstruction algorithm oriented towards one specific PET scanner and imaging application might not be applicable in another context without major changes.
This can be problematic, for instance, in early stages of scanner development when focus is placed on the hardware development and image reconstruction is needed to drive design decisions.

Several open-source packages for PET image reconstruction have been released to address this need, including STIR [13], CASToR [14], and PyTomography [15]. They aim to provide implementations of conventional and advanced reconstruction algorithms and to simplify data conversion from PET scanners or simulation software. STIR (Software for Tomographic Image Reconstruction) [13] is a comprehensive package written in C++ with MATLAB and Python bindings, implemented using the simplified wrapper and interface generator (SWIG) [16]. It offers a large collection of algorithms and operators and natively supports a wide range of scanners. These include several versions of the ordered-subsets expectation-maximization (OS-EM) [17] algorithm, corrections for attenuation, normalization and randoms, scatter estimation using the single scatter simulation (SSS) algorithm [18], kernel-based reconstruction [19], etc. It can leverage the PARALLELPROJ library [20], which implements a GPU version of the Joseph projector [21] for the geometry of the forward model. STIR mostly operates on sinograms, which enables efficient use of symmetries in the forward model but limits the implementation of event-by-event corrections such as motion correction [3]. CASToR (Customizable and Advanced Software for Tomographic Reconstruction) [14] is another widely used package for image reconstruction, which offers alternative PET projectors such as the distance-driven model [22], [23] and conveniently interfaces with the GATE simulation software [24], enabling the conversion of GATE geometries and ROOT [25] files, which can greatly facilitate the design of novel scanners. While not part of CASToR, SSS is implemented in openSSS [26], which is compatible with CASToR. The main limitations of CASToR, as of version 3.2, are the lack of an interface to the forward and back-projection operators (in the form of command-line executables or bindings to a high-level language) and the CPU-only implementation.
Finally, PyTomography [15] is a Python-based reconstruction toolkit relying on PyTorch [27] and geared towards deep learning algorithms. It relies on PARALLELPROJ for its GPU-accelerated forward and back-projection and implements filtered back-projection and deep image prior [28], [29] algorithms. Its modular approach allows for easy integration with various deep learning algorithms. Note that other GPU projectors have been proposed [30], [31], but are not included in existing open-source reconstruction engines.

In this work, we present the Yale Reconstruction Toolkit for Positron Emission Tomography (YRT-PET), a free and open-source C++ reconstruction library for PET image reconstruction that focuses on flexibility, interoperability and speed. It includes support for both CPU and GPU parallelism using OpenMP [32] and Nvidia CUDA. Image reconstruction is performed using the OS-EM algorithm with Siddon [33], [34] or distance-driven [22], [23] projectors. YRT-PET supports event-by-event motion correction in list-mode [3], [4] and rigid image-domain transformation. Additionally, YRT-PET includes optional Python bindings, allowing access to internal objects and operators for use in algorithms implemented in Python. This allows users to seamlessly integrate YRT-PET’s projection operators, image transformations, or data manipulations in their customized algorithms with Python array manipulation libraries. Finally, the toolkit can potentially handle input data from any list-mode or histogram format as it is designed around a plugin system, which allows developers to extend the software to support data in arbitrary formats. The main features that differentiate YRT-PET from existing reconstruction engines are:

  • A plugin system to easily add support for arbitrary scanners, including interfaces to read normalization factors and randoms estimate (from singles rate or delayed coincidences) from raw scanner files. This system enables sharing reconstruction code and features without divulging manufacturer intellectual property on scanner layout and data formats (keeping plugins closed-source when necessary).

  • An end-to-end GPU implementation, where data remains on the GPU if memory permits or is split into batches, with efficient memory transfer, overlapped with execution of GPU kernels.

  • Simultaneous support for event-by-event motion correction [3], time-of-flight, depth-of-interaction and point spread function modeling.

  • An interface with Python for most objects and operators, enabling the incorporation of the projectors, operators (CPU and GPU) and solvers in advanced algorithms written in Python. This includes bindings for PyTorch to enable the development of deep learning-based algorithms (supporting full GPU processing or CPU-to-GPU transfers depending on memory requirements).

II. Methods

A. Formulation of imaging model

In this work, we formulate the image reconstruction problem as that of recovering the activity in an imaged object, represented as a vector $x = [x_1, \ldots, x_J]^T$ where $J$ is the total number of image voxels, from measurements $m = [m_1, \ldots, m_I]^T$ where $I$ is the number of measurements. In list-mode, $m_i = 1 \;\forall i$ and $I$ is the number of collected coincidences, while in histogram mode, $I$ represents the total number of detector pairs of the system. In the rest of this paper, we denote by $\mathcal{L}$ the set of all the lines of response (LORs) of the system and by $\mathcal{M}$ the set of measurements, which is equal to $\mathcal{L}$ in histogram mode and is the list of collected LORs in list-mode. Given an image estimate $x$, the measurement for a given line of response $i$ is modeled as:

$$ m_i \sim \mathrm{Poisson}\Big( \sum_j s_i a_i G_{ij} x_j + r_i + \sigma_i \Big), \tag{1} $$

where $s_i$ represents the sensitivity of the line of response $i$, $a_i$ represents the attenuation coefficient factor, and $r_i$ and $\sigma_i$ respectively represent the contributions from randoms and scatter associated with measurement $i$. The geometric system matrix element $G_{ij}$ represents the overlap between LOR $i$ and image voxel $j$. YRT-PET implements both the Siddon [33], [34] and the distance-driven [22], [23] projectors (both available on CPU and GPU). The Siddon projector supports a multi-ray option that randomly samples multiple rays on the plane orthogonal to the normal vector of the crystal surface, which allows the detector width and orientation to be taken into account. $G_{ij}$ is computed on-the-fly, and can optionally incorporate additional modeling, including the point spread function (PSF) of the system in both image and projection space [35], [36], time-of-flight information, and event-by-event rigid motion [37]. Image-space PSF is modeled as a spatially invariant convolution (separable along x, y and z), and projection-space PSF is modeled, in the distance-driven kernel, as a user-defined smoothing profile incorporated into the overlap function, which varies with the radial position of a LOR. Time-of-flight uses a Gaussian model with user-controlled width and truncation, where each coincidence includes a time of arrival difference in ps. Randoms estimation is scanner-specific and can be performed using singles rates or delayed coincidences [38], depending on the plugin implementation. YRT-PET includes a limited scatter estimation routine implementing the SSS method [18].
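The measurement model in (1) can be sketched numerically. In the snippet below, a small dense random matrix stands in for the geometric system matrix $G$ (which YRT-PET computes on the fly), and the correction factors are synthetic; none of this is the toolkit's actual API, just a minimal illustration of the model:

```python
import numpy as np

# Toy stand-ins for the quantities in Eq. (1); YRT-PET never forms G densely.
rng = np.random.default_rng(0)
I, J = 32, 16                     # number of measurements, number of voxels
G = rng.random((I, J))            # geometric system matrix (toy, dense)
s = rng.uniform(0.5, 1.0, I)      # LOR sensitivities s_i
a = rng.uniform(0.1, 1.0, I)      # attenuation factors a_i
r = rng.uniform(0.0, 0.1, I)      # randoms r_i
sigma = rng.uniform(0.0, 0.1, I)  # scatter sigma_i
x = rng.random(J)                 # activity image

ybar = s * a * (G @ x) + r + sigma  # expected counts per LOR
m = rng.poisson(ybar)               # Poisson-distributed measurement
```

In histogram mode each entry of `m` is a bin count over a detector pair; in list-mode each collected event contributes a single count on its LOR.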

B. Reconstruction algorithm

The reconstruction of the activity image $x$ from measurements $m$ is performed iteratively using the maximum-likelihood expectation-maximization (ML-EM) algorithm [39], which results in the following update of the image $x$ at voxel $j$ and iteration $n+1$:

$$ x_j^{(n+1)} = \frac{x_j^{(n)}}{q_j} \sum_{i \in \mathcal{M}} G_{ij} \frac{m_i}{\sum_k G_{ik} x_k^{(n)} + \frac{r_i + \sigma_i}{s_i a_i}}, \tag{2} $$
$$ q_j = \sum_{i \in \mathcal{L}} s_i a_i G_{ij}, \tag{3} $$

where $q = [q_1, \ldots, q_J]^T$ is the sensitivity image, computed before the first iteration. The ML-EM algorithm can be accelerated using the ordered-subsets expectation-maximization algorithm [17] (OS-EM), resulting in the following update, where the iteration number $n$ and subset number $p$ have been combined into a single update counter denoted by $n$:

$$ x_j^{(n+1)} = \frac{x_j^{(n)}}{q_j^{(p)}} \sum_{i \in \mathcal{M}_p} G_{ij} \frac{m_i}{\sum_k G_{ik} x_k^{(n)} + \frac{r_i + \sigma_i}{s_i a_i}}, \tag{4} $$

where $q_j^{(p)}$ is the sensitivity image value at voxel $j$ for subset $p$, and $\mathcal{M}_p$ is the set of measurements associated with subset $p$. $q_j^{(p)}$ is therefore calculated differently for list-mode and histogram data:

$$ \text{List-mode:}\quad q_j^{(p)} = \frac{1}{P} \sum_{i \in \mathcal{L}} s_i a_i G_{ij}, \tag{5a} $$
$$ \text{Histogram:}\quad q_j^{(p)} = \sum_{i \in \mathcal{L}_p} s_i a_i G_{ij}, \tag{5b} $$

where $P$ is the number of subsets and $\mathcal{L}_p$ is the set of LORs in the system for subset $p$. For histogram reconstructions, LORs are grouped geometrically, resulting in a different sensitivity image for each subset. For list-mode reconstructions, the subsets are subdivided chronologically, under the assumption that the LOR spatial distribution is similar across subsets; the same sensitivity image is therefore reused for all subsets.
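As a toy illustration of the OS-EM update (4) with the histogram-mode sensitivity (5b), the following NumPy sketch uses a dense random system matrix and a simple interleaved subset grouping. All quantities are synthetic stand-ins, not YRT-PET's on-the-fly projectors:

```python
import numpy as np

# Synthetic histogram-mode OS-EM following Eqs. (4) and (5b).
rng = np.random.default_rng(1)
I, J, P = 64, 16, 4               # measurements, voxels, subsets
G = rng.random((I, J))            # toy dense system matrix
s = rng.uniform(0.5, 1.0, I)      # sensitivities
a = rng.uniform(0.1, 1.0, I)      # attenuation factors
r = np.zeros(I)                   # randoms (none, for simplicity)
sigma = np.zeros(I)               # scatter (none, for simplicity)
x_true = rng.random(J) + 0.1
m = rng.poisson(s * a * (G @ x_true))   # noisy histogram data

subsets = [np.arange(p, I, P) for p in range(P)]  # toy geometric grouping

x = np.ones(J)                    # uniform initial estimate
for it in range(3):
    for idx in subsets:
        q = (s[idx] * a[idx]) @ G[idx]          # subset sensitivity, Eq. (5b)
        denom = G[idx] @ x + (r[idx] + sigma[idx]) / (s[idx] * a[idx])
        x *= (G[idx].T @ (m[idx] / denom)) / q  # multiplicative update, Eq. (4)
```

Each pass over one subset is one image update, so one full iteration performs $P$ updates.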

C. Event-by-event motion correction

Event-by-event motion correction is implemented as described in [4]. This method assumes that rigid motion information is available for all frames, defined as sets of consecutive coincidences. This information is typically obtained from external tracking devices [3], [37], [40] or from the data itself [41]. For each event, motion correction consists of displacing the endpoints of a line of response based on the corresponding transformation matrix. This method also involves segmenting the attenuation image into two components: the hardware attenuation, corresponding to objects that remain fixed with respect to the scanner (e.g., the bed), and the in vivo attenuation, representing the moving subject [4]. The resulting image reconstruction update, with event-by-event motion correction, can be expressed as:

$$ x_j^{(n+1)} = \frac{x_j^{(n)}}{\check{q}_j} \sum_{\substack{i \in \mathcal{M}_p \\ i' = f(i)}} G_{i'j} \frac{1}{a_i^{(v)}} \cdot \frac{1}{\sum_k G_{i'k} x_k^{(n)} + \frac{r_i + \sigma_i}{s_i a_i}}, \tag{6} $$

where $a_i^{(v)}$ is the in vivo attenuation coefficient factor for the LOR associated with event $i$, and $i' = f(i)$ denotes, as a shorthand notation, the displaced LOR for event $i$ based on its corresponding frame $f$. Additionally, the sensitivity image, denoted by $\check{q}$, is generated by first calculating the sensitivity image in the reference frame and then averaging it over time, applying the motion transform of each frame, as described in [42]:

$$ \check{q}_j = \frac{1}{P} \sum_f \frac{\tau_f}{\tau} \sum_k T_{jk}^{(f)} \sum_{i \in \mathcal{L}} G_{ik} a_i^{(h)} s_i, \tag{7} $$

where $a_i^{(h)}$ is the hardware attenuation coefficient factor for LOR $i$, $T_{jk}^{(f)}$ is the rigid motion transformation from voxel $k$ to $j$ in frame $f$, and $\tau$ and $\tau_f$ denote the scan and frame durations, respectively. Note that $a_i = a_i^{(h)} a_i^{(v)}$.
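The endpoint-displacement step can be illustrated as follows. The rigid transform and the LOR endpoints below are made up for the example and do not come from a real tracking device; the point is only that each event's LOR is moved by the 4 × 4 homogeneous matrix of its frame:

```python
import numpy as np

def rigid_transform(angle_deg, translation):
    """Toy 4x4 homogeneous matrix: rotation about z plus a translation."""
    t = np.deg2rad(angle_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]
    T[:3, 3] = translation
    return T

def displace_lor(p1, p2, T):
    """Apply the frame transform T to both LOR endpoints (mm)."""
    h = lambda p: (T @ np.append(p, 1.0))[:3]  # homogeneous coordinates
    return h(p1), h(p2)

# Hypothetical frame transform and LOR endpoints
T = rigid_transform(10.0, [5.0, 0.0, -2.0])
p1 = np.array([100.0, 0.0, 0.0])
p2 = np.array([-100.0, 0.0, 0.0])
q1, q2 = displace_lor(p1, p2, T)

# Rigid motion preserves the LOR length
assert np.isclose(np.linalg.norm(q1 - q2), np.linalg.norm(p1 - p2))
```

In YRT-PET the transform is looked up per event from the frame-indexed motion stream stored alongside the list-mode data.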

D. Scatter correction

Finally, the vector $\sigma = [\sigma_1, \ldots, \sigma_I]^T$ in (4) and (6) represents the scatter estimate for each LOR. YRT-PET includes an experimental, non-TOF and CPU-only implementation of the SSS algorithm [18]. It requires an initial activity image, reconstructed without scatter correction but with randoms correction, which is used, along with the attenuation map, to compute the volume integrals over the scattering object, yielding an estimate of the scatter distribution in a low-resolution histogram with radial and angular downsampling, considering only direct planes. A dense Histogram3D is then interpolated from the low-resolution histogram. Tail-fitting is then performed to estimate a scatter scaling factor such that the upsampled scatter histogram matches the prompts (from which randoms have been subtracted) near the edges of the imaged object. The resulting histogram is then used in the reconstruction as in (4) and (6). The activity image and scatter estimates can be iteratively updated by alternating image reconstruction and SSS steps. The current implementation does not support sinogram compression, TOF or fine-tuning of the SSS and tail-fitting parameters. Future versions will include these features to offer better control for individual scanners.
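The tail-fitting step can be sketched as a least-squares fit of a single scale factor over tail bins. The scatter estimate, tail mask and noise model below are synthetic illustrations, not the routine's actual inputs:

```python
import numpy as np

# Fit a scale k so that k * scatter matches (prompts - randoms) in the
# tail bins, i.e. bins whose LORs pass outside the imaged object.
rng = np.random.default_rng(2)
scatter = rng.random(500)                 # upsampled scatter estimate (toy)
prompts_minus_randoms = 1.7 * scatter + rng.normal(0.0, 0.01, 500)
tail = np.arange(400, 500)                # hypothetical tail-bin indices

# Closed-form least-squares solution for a single multiplicative factor
k = (scatter[tail] @ prompts_minus_randoms[tail]) / (scatter[tail] @ scatter[tail])
scaled = k * scatter                      # scatter estimate used in (4)/(6)
```

In the actual routine the tail mask is derived from the attenuation map, and the scaled histogram enters the reconstruction as the additive term $\sigma$.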

III. Implementation

This section first describes the data structures used by YRT-PET, then the operators used for reconstruction, and finally, the design and implementation of the optimization algorithm.

A. Data structure

In order to allow the support of a wide range of PET scanners, geometries and data formats, YRT-PET relies on generic data structures to represent the geometry, projection data and images.

1). Scanner geometry:

The Scanner object is responsible for handling information related to the scanner geometry, including the layout of detectors, their dimensions and orientations. The scanner is defined by a text-based configuration file (using the JavaScript Object Notation (JSON) format), which contains basic information about the scanner geometry, and a lookup table (LUT) file with detector positions and orientations. If the scanner encodes DOI information, the LUT stores the different DOI layers as separate detectors, and the sensitivity image generation includes all the possible pairs of detectors across all DOI layers. By convention, the detectors are ordered anticlockwise, ring by ring, and layer by layer. The Scanner object then defines an interface to gather the spatial coordinates of a LOR based on any given pair of detector indices.
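As an illustration of this ordering convention, the following sketch builds a hypothetical LUT for a small cylindrical scanner with a single DOI layer, ordering detectors anticlockwise, ring by ring. All dimensions are made up, and the file format details are omitted:

```python
import numpy as np

# Toy detector LUT: rows of (x, y, z, nx, ny, nz), ordered anticlockwise
# within each ring, ring by ring (single DOI layer).
n_rings, dets_per_ring, radius = 4, 16, 300.0  # hypothetical dimensions (mm)
ring_pitch = 5.0                               # axial spacing between rings (mm)

lut = []
for ring in range(n_rings):
    z = ring * ring_pitch
    for d in range(dets_per_ring):
        phi = 2 * np.pi * d / dets_per_ring    # anticlockwise in-plane angle
        pos = (radius * np.cos(phi), radius * np.sin(phi), z)
        orient = (-np.cos(phi), -np.sin(phi), 0.0)  # facing the scanner axis
        lut.append(pos + orient)
lut = np.asarray(lut)                          # shape: (n_rings * dets_per_ring, 6)

def lor_endpoints(d1, d2):
    """Spatial coordinates of the LOR between two detector indices."""
    return lut[d1, :3], lut[d2, :3]
```

A scanner with DOI would simply append further "layers" of detectors to the same table, as described above.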

2). Projection data formats:

In YRT-PET, the term projection data describes a set of LORs associated with numerical values. Projection data can be either of list-mode or histogram type. The class diagram shown in Fig. 1 illustrates a hierarchy of the supported data types. The native projection data structures in YRT-PET respect the Python buffer protocol, enabling their usage within Python programs via an interface implemented using pybind11. This design allows projection data objects to be bound to NumPy arrays [43] or PyTorch tensors [27] for seamless processing between Python and YRT-PET.

Fig. 1. Software architecture diagram for the projection-data objects. The PluginListMode class is an example of a plugin reading data in a custom list-mode format.

List-mode objects store the LOR associated with each event chronologically, along with the event timestamp and additional information such as time-of-flight measurement (when available). YRT-PET’s native list-mode format (named ListModeLUT in Fig. 1) defines, for each event, the timestamp and the detector pair defining the LOR (i.e. the indices of the two detectors in the scanner’s LUT). Optionally, motion information can be stored with the list-mode data for event-by-event motion correction as described in Section II-C.

The base histogram objects differ from list-mode objects in their ability to store a value associated with each LOR and in the fact that each LOR is stored only once. The actual handling of buffers is left to derived classes, but the assumption is that values for a given LOR can be retrieved with 𝒪(1) complexity. YRT-PET also defines a 3D histogram format (named Histogram3D in Fig. 1) which encodes all the possible LORs in a given scanner into a 3D array with three logical coordinates: the in-plane angular and radial coordinates and the axial ring pair. Note that time-of-flight information is not encoded in this histogram format in the current implementation. This histogram format only stores the exact LORs allowed by the scanner without compression (mashing), as opposed to a sinogram, which describes LORs in a regularly-spaced grid. This histogram representation allows for sensitivity image computation as it defines $\mathcal{L}$, the set of all LORs of the system. Another built-in histogram format, named SparseHistogram in Fig. 1, encodes a value (number of coincidences) associated with a detector pair, which can save memory when storing acquisitions of small objects with some PET systems.
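A minimal Python sketch of a SparseHistogram-like structure (class and method names are illustrative, not YRT-PET's actual API) shows how a hash map provides the assumed 𝒪(1) access per detector pair while storing each LOR only once:

```python
from collections import defaultdict

class ToySparseHistogram:
    """Hash map from unordered detector pairs to coincidence counts."""

    def __init__(self):
        self._bins = defaultdict(float)

    def accumulate(self, d1, d2, value=1.0):
        # Canonical key so each unordered detector pair is stored once
        self._bins[(min(d1, d2), max(d1, d2))] += value

    def get(self, d1, d2):
        # O(1) lookup; LORs without recorded counts return 0
        return self._bins.get((min(d1, d2), max(d1, d2)), 0.0)

h = ToySparseHistogram()
h.accumulate(12, 345)
h.accumulate(345, 12)   # same LOR, opposite detector order
```

For a small object, only a fraction of the scanner's detector pairs ever receive counts, which is where this layout saves memory over a dense Histogram3D.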

3). Plugin system for custom projection data formats:

YRT-PET includes a compile-time plugin system, which allows the extension of the software to support arbitrary scanner data formats. A plugin consists of a C++ class, inheriting from either the list-mode or the histogram abstract class, and the implementation of the required interface methods. Plugins can also add format-specific command-line options to the toolkit’s built-in command-line executables. The simplified diagram in Fig. 1 shows a hypothetical third-party plugin PluginListMode, which inherits from the ListMode class. The plugin needs to define a type identifier and a static factory function (namely create) added to a “Format Registry”. This registry consists of a hash table linking unique identifiers to corresponding factory functions, allowing for the transparent creation of a ProjectionData object. Each format is added to the registry at compile-time. At runtime, the user provides, through the command-line parameters, a filename and a type identifier to create the proper object. Fig. 2 shows a diagram describing the inputs for an OS-EM list-mode reconstruction with motion correction in YRT-PET. Randoms estimates are extracted from the list-mode data (typically by plugins) or passed in histogram format. Scatter must be provided in Histogram3D format. Attenuation correction is passed via hardware and in vivo attenuation maps. Finally, normalization factors can be extracted by plugins or passed as Histogram3D objects. Note that the example in Fig. 1 also applies to a custom histogram format. Plugins can also add Python bindings and command-line executables, which will be compiled within YRT-PET. Note that a plugin adding support for the PET ETSI Raw Data (PETSIRD) list-mode format [1] is currently in development.

Fig. 2. Data pipeline for a list-mode reconstruction with motion correction. Note that the list-mode and histogram formats can be customized through additional plugins, as shown in Fig. 1. Reconstructions from histogram data follow a similar structure, but without the steps related to motion correction.

This plugin system design has several advantages: 1) it enables the use of YRT-PET with scanners whose raw data format is proprietary, 2) it simplifies the addition of new formats, requiring minimal effort, as new plugins are automatically discovered and built by the CMake build system, and 3) it isolates data loading code from the rest of the reconstruction code, enabling the use of new data formats with all the YRT-PET features, including CPU/GPU reconstruction, Python bindings and others.
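The registry mechanism can be mimicked in Python as follows. The class and function names are illustrative stand-ins for the C++ compile-time registry, not YRT-PET's actual API; the essential idea is a hash table from type identifiers to factory functions:

```python
# Toy "Format Registry": identifier -> factory, analogous to the
# compile-time registry described above (names are hypothetical).
FORMAT_REGISTRY = {}

def register_format(type_id):
    """Decorator playing the role of compile-time registration."""
    def wrap(cls):
        FORMAT_REGISTRY[type_id] = cls
        return cls
    return wrap

class ProjectionData:
    def __init__(self, filename):
        self.filename = filename

@register_format("ListModeLUT")
class ListModeLUT(ProjectionData):
    pass

@register_format("PluginListMode")
class PluginListMode(ProjectionData):
    pass

def create_projection_data(type_id, filename):
    # Transparent creation from a user-supplied identifier and filename,
    # as done from the command-line parameters at runtime
    return FORMAT_REGISTRY[type_id](filename)

pd = create_projection_data("PluginListMode", "scan.lmdat")
```

A third-party plugin only needs to register its own identifier and factory; the rest of the reconstruction pipeline manipulates the common ProjectionData interface.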

4). Image format:

Images are read and saved in the NeuroImaging Informatics Technology Initiative (NIfTI) open file format (National Institutes of Health, MD, USA). Additionally, since YRT-PET’s image buffers also respect the Python buffer protocol, image objects can be used in Python and bound to NumPy arrays [43] or PyTorch tensors [27]. Therefore, YRT-PET operators can be seamlessly applied to NumPy arrays or PyTorch tensors without data transfer, which greatly facilitates their integration in Python programs and scripts.
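The zero-copy principle behind the buffer protocol can be demonstrated with NumPy alone: two array views created over the same raw buffer alias the same memory, so a write through one is visible through the other without any transfer. The buffer below is a toy stand-in for a YRT-PET image buffer:

```python
import numpy as np

# Raw buffer playing the role of an image buffer exposed via the
# Python buffer protocol (toy example, not the YRT-PET API).
buf = bytearray(8 * 10)                       # room for 10 float64 voxels
img = np.frombuffer(buf, dtype=np.float64)    # no copy: aliases buf
img[:] = 0.0
img[3] = 42.0

alias = np.frombuffer(buf, dtype=np.float64)  # second view, same memory
assert alias[3] == 42.0                       # write is visible: zero-copy
```

With pybind11's buffer support, the same aliasing applies between a C++ image object and a NumPy array or PyTorch tensor wrapped around it.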

B. Operators

YRT-PET operator objects are responsible for performing data processing components of the imaging model (e.g., PET projector, PSF model, image-domain transformation, etc.). A YRT-PET operator object must implement a forward and adjoint routine to be used within an optimization algorithm. These operations may be implemented as matrix-vector products, following traditional notations, but are more frequently implemented as on-the-fly operations due to memory constraints. Operators act as the building blocks for the reconstruction algorithm. This design pattern increases readability and code portability and simplifies the development of advanced solvers (see for instance [44] where Python bindings were used to develop a complex optimization framework using YRT-PET projection operators). Available operators include the image-domain PSF operator, image-domain rigid-motion transformations, and PET projectors.
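A minimal sketch of this forward/adjoint operator interface is shown below, with a dense matrix standing in for an on-the-fly projector (the class names are illustrative, not YRT-PET's API). It includes the usual dot-product check that the adjoint is consistent with the forward operation:

```python
import numpy as np

class Operator:
    """Abstract operator: a forward and an adjoint routine."""
    def forward(self, x):
        raise NotImplementedError
    def adjoint(self, y):
        raise NotImplementedError

class ToyProjector(Operator):
    """Matrix-backed stand-in for an on-the-fly PET projector."""
    def __init__(self, A):
        self.A = A
    def forward(self, x):           # image -> projection space
        return self.A @ x
    def adjoint(self, y):           # projection -> image space
        return self.A.T @ y

rng = np.random.default_rng(3)
op = ToyProjector(rng.random((20, 8)))
x, y = rng.random(8), rng.random(20)

# Dot-product (adjoint) test: <A x, y> == <x, A^T y>
assert np.isclose(op.forward(x) @ y, x @ op.adjoint(y))
```

Any solver written against this two-method interface can then be reused unchanged with a PSF operator, a motion transform, or a full projector.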

GPU support for operators is implemented by subclassing the regular CPU operators and adding memory management infrastructure. This means that YRT-PET operators transparently implement both a CPU and a GPU version (using CUDA). GPU operators can either operate directly on GPU buffers or perform data transfers between host and device, depending on the location of the input buffer. Fig. 3 shows the software architecture of the projection operators on both CPU and GPU. For projection operators, LORs are gathered through the ProjectionData interface before processing. The Scanner properties and the motion information are used to prepare the processing of each LOR. In order to properly select the LORs in a subset and provide a transparent interface for list-mode and histogram operations, LORs are enumerated by a BinIterator object, which is created for each subset and is responsible for listing the measurements in a given subset. For the GPU operators, the subsets are divided into batches, and LORs are then transferred to device memory through the intermediary of ProjectionDataDevice. The batch loading process runs in parallel with the forward and backward projections: while the CPU prepares the data and copies it into GPU memory, the GPU can run the projection operators, avoiding bottlenecks. This optimization leads to significant speed-ups compared to sequential approaches, especially when dealing with large datasets. Image-domain operators assume that images fit in GPU memory.

Fig. 3. Software architecture diagram for projection operators.

Additionally, just as it is possible to bind a NumPy array to YRT-PET data structures and apply CPU operators from Python, GPU operators can be applied directly on PyTorch [27] GPU tensors. This memory aliasing enables the straightforward implementation of neural networks that incorporate PET projection operations, as studied in [45] or [46], while operating directly within device memory, greatly accelerating network training and inference.

C. Optimization algorithm

The OS-EM implementation follows (6) (which reduces to (4) when event-by-event motion correction is not used). The forward model applied to the current image estimate includes image-domain rigid motion and PSF modeling, followed by application of the PET projector, multiplicative corrections (for LOR sensitivity and attenuation correction), and additive corrections for randoms and scatter. The building blocks provided allow the development of more advanced algorithms in C++ or Python.

Note that YRT-PET’s GPU implementation of the OS-EM algorithm subdivides the subsets into multiple batches of LORs if the projection data does not fit in GPU memory. More specifically, (4) and (6) are modified such that the sums over measurements change from $\sum_{i \in \mathcal{M}_p}$ to $\sum_{b=1}^{B} \sum_{i \in \mathcal{M}_{p,b}}$, where $b$ is the batch index, $B$ is the total number of batches, and $\mathcal{M}_{p,b}$ is the set of LORs in subset $p$ and batch $b$. The number of batches is determined by the amount of available GPU memory.
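The batched accumulation can be sketched as follows. The per-LOR values and batch size are synthetic; in practice each batch corresponds to a host-to-device transfer overlapped with kernel execution, and the batched double sum equals the single-pass sum:

```python
import numpy as np

# Accumulate a subset's per-LOR contributions batch by batch, as when
# projection data exceeds device memory (toy values and batch size).
rng = np.random.default_rng(4)
values = rng.random(1000)            # per-LOR contributions of one subset
max_batch = 256                      # hypothetical device-memory limit
B = int(np.ceil(values.size / max_batch))  # total number of batches

total = 0.0
for b in range(B):
    batch = values[b * max_batch : (b + 1) * max_batch]
    total += batch.sum()             # one kernel launch per batch in practice

# Batched accumulation matches the unbatched sum
assert np.isclose(total, values.sum())
```

Because the update is a sum over measurements, batching changes only the schedule of the computation, not its result.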

IV. Evaluation

This section illustrates the functionalities of YRT-PET in several applications using different input data formats and corrections. The goal is to demonstrate YRT-PET’s capabilities for dynamic reconstruction, motion correction, algorithm prototyping, and an arbitrary scanner geometry. Two acquisition systems were analyzed: the GE Discovery MI and GATE [24] simulations of a flat panel scanner under development [47], [48], [49]. Comparisons were performed against manufacturer software and existing reconstruction tools using the same reconstruction parameters.

A. Brain imaging using GE Discovery MI

Two evaluations were performed using the GE Discovery MI, the first one on a dynamic acquisition of a human subject and the second on a 3D Hoffman brain phantom [50] to demonstrate motion correction.

1). Dynamic acquisition:

This acquisition was performed on a cognitively normal human subject imaged with [18F]MK6240, a tracer developed to image tau aggregation. The imaging protocol is described in [51]. The human study was approved by the local institutional review board. All subjects provided informed consent.

Reconstruction was first performed using the manufacturer’s toolbox (Duetto v02.18, GE Healthcare) and compared against YRT-PET using the same image reconstruction parameters and projector (i.e., OS-EM with 17 subsets and three iterations, using the distance-driven projector without PSF modeling) on a 1.7 × 1.7 × 2.8 mm³ voxel grid. Correction for scatter and randoms was incorporated in the reconstruction, as described in (4). Since this dataset had negligible intra-frame motion, reconstructions were performed in histogram format frame by frame, using an in-house YRT-PET plugin to read the input data. The scatter estimate, randoms estimate and normalization factors were generated by the manufacturer’s toolbox. A total of 54 frames were reconstructed over a 2-hour scan time. YRT-PET reconstructions were rigidly registered to the reconstructions from the GE toolbox, using an average of the first 10 minutes and the ANTs toolkit [52], in order to account for slight differences in geometry conventions. A comparison of reconstructed images between the GE reconstruction (top) and the YRT-PET reconstruction (bottom) for several frames is shown in Fig. 4. An isotropic 3D Gaussian filter of 1.7 mm full width at half-maximum (FWHM) was applied to both displayed images. Images reconstructed using YRT-PET show good visual agreement with images reconstructed using the GE toolbox.

Fig. 4. Reconstruction of a dynamic acquisition from the GE Discovery MI scanner using the manufacturer software and YRT-PET for frames (a) from 120 s to 135 s, (b) from 6 min to 7 min, (c) from 28 min to 30 min and (d) from 100 min to 105 min. Images and difference images are shown in units of kBq/mL.

To evaluate the quantitative accuracy of the reconstructed images, kinetic modeling was performed on both sets of images using the multilinear reference tissue model with a fixed (population-based) $k_2$ value (MRTM2). The reference region was set to the cerebellum grey matter, $k_2$ was set to 0.0126 min⁻¹ and a t value of 40 minutes was used. The distribution volume ratio (DVR) maps were computed in regions of interest (ROIs) segmented using the FreeSurfer software [53] and co-registered to the PET data using the FMRIB Software Library (FSL) [54]. The time activity curves (TACs) in Fig. 5 show little difference between the manufacturer and YRT-PET reconstructed images.

Fig. 5. Time activity curves and their MRTM2 fitting for the manufacturer and the YRT-PET reconstructed images. The time t at 40 minutes is illustrated by a vertical line.

The results demonstrate a strong agreement between DVR estimates obtained from both sets of reconstructed images. Fig. 6 illustrates the correlation between the DVR values for each region when computed from YRT-PET’s images and the manufacturer’s images. Linear regression analysis of these parameters yielded a slope close to 1 (a = 0.97) and a high correlation coefficient ($R^2 = 0.99$), reinforcing that both methods produce highly consistent kinetic modeling results. Additionally, the Bland-Altman analysis [55] confirms the equivalence of the estimated kinetic parameters, indicating no significant bias between the two software tools.
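The agreement analysis can be reproduced on synthetic data as follows; the DVR values below are made up, not the study's measurements, and serve only to show how the slope, correlation coefficient, and Bland-Altman statistics are computed:

```python
import numpy as np

# Synthetic regional DVR values standing in for the two reconstructions
rng = np.random.default_rng(5)
dvr_ref = rng.uniform(0.8, 2.5, 40)             # "manufacturer" ROIs (toy)
dvr_test = dvr_ref + rng.normal(0.0, 0.02, 40)  # "YRT-PET" ROIs (toy)

# Linear regression and correlation coefficient
slope, intercept = np.polyfit(dvr_ref, dvr_test, 1)
r = np.corrcoef(dvr_ref, dvr_test)[0, 1]
r_squared = r ** 2

# Bland-Altman: mean difference (bias) and 95% limits of agreement
diff = dvr_test - dvr_ref
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)
```

A slope near 1, $R^2$ near 1, and a bias near 0 with narrow limits of agreement are the signatures of the equivalence reported above.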

Fig. 6.

Linear regression and Bland-Altman plot between the DVRs calculated using the images from the manufacturer and from YRT-PET.

These results demonstrate that the proposed reconstruction software provides images that are quantitatively similar to the manufacturer’s images, validating its suitability for dynamic PET imaging and kinetic modeling.

2) Motion correction:

To evaluate event-by-event motion correction in YRT-PET, reconstructions were performed from two 15-minute acquisitions of a Hoffman brain phantom [50], as in [37]. The first acquisition was static and the second had manually induced motion of up to ±20° and ±40 mm across all six degrees of freedom. The motion information was captured using the Polaris Vega tracking camera (NDI, Ontario, Canada), which recorded transformation matrices at 60 Hz. As for the dynamic acquisition described earlier, the additive correction factors were generated by the manufacturer’s toolbox. Note that, although the additive corrections were stored in a 3D histogram, the measurement data remained in list-mode format. The reconstruction used a voxel size of 2 × 2 × 2.8 mm3 and was performed with 34 subsets and two OS-EM iterations, following (6). Since the acquisition of the moving phantom started shortly after the static acquisition, its reference frame was aligned with the static data and no image registration was needed.
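Conceptually, event-by-event rigid motion correction moves each LOR’s endpoints into a common reference frame using the tracker pose recorded around the event time. The following is a minimal geometric sketch, assuming 4×4 homogeneous transforms and a nearest-sample pose lookup; a production implementation would interpolate between pose samples and is scanner-specific.

```python
import numpy as np

def correct_lor(p1, p2, t_event, track_times, track_mats):
    """Move one LOR's endpoints into the reference frame using the tracker
    pose at (or just after) the event time."""
    i = min(np.searchsorted(track_times, t_event), len(track_mats) - 1)
    T = track_mats[i]                        # 4x4 homogeneous transform
    q1 = T @ np.append(p1, 1.0)              # homogeneous coordinates
    q2 = T @ np.append(p2, 1.0)
    return q1[:3], q2[:3]

# With identity poses the endpoints are unchanged.
times = np.arange(0.0, 1.0, 1.0 / 60.0)      # 60 Hz tracker timestamps (s)
poses = np.stack([np.eye(4)] * len(times))
a, b = correct_lor(np.array([100.0, 0.0, 0.0]),
                   np.array([-100.0, 0.0, 5.0]), 0.5, times, poses)
```

The corrected endpoints then feed the list-mode projector directly, which is what makes the correction event-by-event rather than frame-based.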

Images in Fig. 7 and the corresponding line plots show minimal differences between the static image and the motion-corrected image in arbitrary units (A.U.). To quantify the performance of motion correction, the structural similarity index measure (SSIM) [56] between the two images was measured. The SSIM is defined as:

\mathrm{SSIM}(x_1, x_2) = \frac{(2\mu(x_1)\mu(x_2) + C_1)(2\,\mathrm{cov}(x_1, x_2) + C_2)}{(\mu(x_1)^2 + \mu(x_2)^2 + C_1)(\sigma(x_1)^2 + \sigma(x_2)^2 + C_2)},   (8)

where x1 and x2 are the two compared images, and μ, σ and cov denote the mean, standard deviation and covariance, respectively. C1 and C2 are constants included to avoid numerical instability, set to a small fraction of the dynamic range of the images. The SSIM between the motion-corrected and static reconstructions was 95.3%, while the SSIM between the uncorrected and static reconstructions was 48.6%. Additionally, the contrast-to-noise ratio (CNR) was measured to confirm the effect of motion correction. The CNR was calculated between a target region with high activity and a background region with low activity:

\mathrm{CNR}(x) = \frac{\mu(x_{\mathrm{tgt}}) - \mu(x_{\mathrm{bkg}})}{\sigma(x_{\mathrm{bkg}})},   (9)

where xtgt and xbkg are the target and background regions, segmented from an aligned MRI image of the phantom and corresponding to the gray matter and white matter voxels, respectively [50]. Using the above equation, the CNR was 5.37 for the static image, 5.35 for the motion-corrected image, and 1.04 for the uncorrected image. Although motion correction recovered a large portion of the contrast, the motion-corrected image showed more noise than the static image. This may be caused by the pre-correction of the measurements mi by the attenuation factors ai(v) from (6), which is an approximation [57] and invalidates the Poisson model assumption.
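Both image-quality metrics follow directly from ROI masks. The sketch below implements Eqs. (8) and (9) with global statistics on a synthetic two-compartment image; the constants C1 and C2 assume a unit dynamic range, and note that the reference SSIM [56] averages the same quantity over local windows rather than computing it globally.

```python
import numpy as np

def global_ssim(x1, x2, c1=1e-4, c2=9e-4):
    """Single-window SSIM of Eq. (8), using global image statistics."""
    mu1, mu2 = x1.mean(), x2.mean()
    cov = ((x1 - mu1) * (x2 - mu2)).mean()
    num = (2 * mu1 * mu2 + c1) * (2 * cov + c2)
    den = (mu1**2 + mu2**2 + c1) * (x1.var() + x2.var() + c2)
    return num / den

def cnr(img, tgt_mask, bkg_mask):
    """Contrast-to-noise ratio of Eq. (9) between two ROI masks."""
    return (img[tgt_mask].mean() - img[bkg_mask].mean()) / img[bkg_mask].std()

# Synthetic phantom-like image: "gray matter" at 4, "white matter" at 1
rng = np.random.default_rng(0)
img = np.ones((32, 32))
img[8:24, 8:24] = 4.0
tgt = img > 2.0                      # target ROI (defined before noise)
bkg = ~tgt
noisy = img + rng.normal(0.0, 0.3, img.shape)
```

An image compared with itself yields an SSIM of exactly 1, and adding noise lowers both the SSIM and the CNR, mirroring the static/uncorrected comparison above.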

Fig. 7.

Hoffman phantom reconstructions acquired with the GE Discovery MI scanner: (a) No motion correction (No MC), (b) Static reference (Static), (c) YRT-PET motion correction (MC), (d) Horizontal line profiles through images.

B. Image reconstruction using Deep Image Prior

In order to demonstrate the usage of YRT-PET’s GPU operators within neural networks, the acquisition described in subsection IV-A2 was reconstructed using a Deep Image Prior (DIP) method [28], similar to that proposed by Hashimoto et al. [45]. This algorithm computes an image x using a convolutional neural network (CNN):

x = f(\theta \mid z)   (10)

where f represents the CNN, θ its weights, and z the prior input vector. We define the forward model ȳ as:

\bar{y} = s \odot a \odot Gx + r + \sigma,   (11)

with the ⊙ operator representing element-wise multiplication. Supposing each measured bin mi follows the Poisson distribution p(mi ∣ x) and the projection-space data is binned into histograms, the log-likelihood of the measured data m is written as:

L(m \mid x) = \sum_i \log p(m_i \mid x) = \sum_i \left[ m_i \log \bar{y}_i - \bar{y}_i - \log m_i! \right]   (12)

This method estimates the image x by training the neural network, substituting the forward model into the network’s loss function. In this case, the negative log-likelihood is used as the loss function. The training is therefore performed as:

\hat{\theta} = \arg\min_\theta \sum_i \bar{y}_i - m_i \log \bar{y}_i   (13a)
\text{s.t.} \quad \bar{y} = s \odot a \odot G f(\theta \mid z) + r + \sigma   (13b)

The final image is then computed from a final inference:

\hat{x} = f(\hat{\theta} \mid z)   (14)

Since the loss function includes a forward projection G in (13b), the memory aliasing enabled by the proposed software allows the Siddon or distance-driven projector to be used within the neural network with minimal memory transfers between the CPU and GPU. Moreover, since both the forward and backward projectors are exposed through Python bindings, the PyTorch backpropagation routine can include YRT-PET’s back-projection operation transparently to the optimizer. The optimizer used is limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) [58] with a learning rate of 0.1, and the CNN architecture is the same 3-D U-Net structure as in [45]. The implementation code for this demonstration is also available3, serving as an example of deep learning algorithm prototyping for PET reconstruction.
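The coupling between projectors and PyTorch’s autograd described above can be illustrated with a custom `torch.autograd.Function` whose forward pass applies the forward projection and whose backward pass applies the matched back-projection. The sketch below is a toy instance of Eqs. (13a)–(14): a small dense matrix stands in for YRT-PET’s GPU projector, a single linear layer replaces the 3-D U-Net, and the sensitivity/attenuation terms are omitted. It is not the released demonstration code.

```python
import torch

class Project(torch.autograd.Function):
    """Matched projector pair exposed to autograd: forward applies the
    forward projection, backward applies its adjoint (back-projection)."""

    @staticmethod
    def forward(ctx, x, G):
        ctx.save_for_backward(G)
        return G @ x                       # forward projection

    @staticmethod
    def backward(ctx, grad_out):
        (G,) = ctx.saved_tensors
        return G.T @ grad_out, None        # back-projection of the gradient

torch.manual_seed(0)
J, I = 16, 32                              # number of voxels, number of LORs
G = torch.rand(I, J)                       # toy stand-in system matrix
x_true = torch.rand(J)
m = torch.poisson(50.0 * (G @ x_true))     # noisy histogrammed measurements
r = torch.full((I,), 0.1)                  # additive term (randoms + scatter)

# Tiny stand-in for f(theta | z); Softplus keeps the image positive.
net = torch.nn.Sequential(torch.nn.Linear(J, J), torch.nn.Softplus())
z = torch.rand(J)                          # DIP prior vector

opt = torch.optim.LBFGS(net.parameters(), lr=0.1, max_iter=20)

def closure():
    opt.zero_grad()
    y = Project.apply(50.0 * net(z), G) + r     # forward model, cf. (13b)
    loss = (y - m * torch.log(y)).sum()         # negative log-likelihood (13a)
    loss.backward()
    return loss

opt.step(closure)
x_hat = net(z).detach()                    # final inference, cf. (14)
```

Because `Project.backward` calls the back-projector, the optimizer never sees the projector internals, which is exactly what lets a GPU-resident projector slot into the training loop transparently.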

Figure 8 shows a visual comparison of the DIP method applied to the Hoffman brain phantom. The low-count input was generated by randomly sampling 1/40th of the list-mode events and re-binning them into a 3D histogram. The Siddon projector was used for G in both the ML-EM and the DIP reconstructions. To quantitatively compare both reconstructions, the convergence curves in Figure 9 plot the contrast recovery coefficient (CRC) against the normalized standard deviation (NSTD), which are calculated as:

\mathrm{CRC}(x, x^{\mathrm{GT}}) = \frac{\mu(x_{\mathrm{tgt}}) / \mu(x_{\mathrm{bkg}}) - 1}{\mu(x^{\mathrm{GT}}_{\mathrm{tgt}}) / \mu(x^{\mathrm{GT}}_{\mathrm{bkg}}) - 1},   (15a)
\mathrm{NSTD}(x) = \frac{\sigma(x_{\mathrm{bkg}})}{\mu(x_{\mathrm{bkg}})},   (15b)

where xGT is the ground truth image, defined here as the last iteration of the full-count ML-EM reconstruction. xbkg and xtgt are defined as previously, namely as the white matter and gray matter voxels, respectively. Figure 9 shows that the DIP algorithm reaches a lower noise level than the low-count ML-EM at the same contrast recovery level.
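Eqs. (15a) and (15b) translate directly into numpy. In the sketch below the ROI masks and the flat synthetic ground truth are purely illustrative; a reconstruction with the same target-to-background ratio as the ground truth yields a CRC of 1.

```python
import numpy as np

def crc(x, x_gt, tgt, bkg):
    """Contrast recovery coefficient, Eq. (15a), relative to a ground truth."""
    return (x[tgt].mean() / x[bkg].mean() - 1.0) / \
           (x_gt[tgt].mean() / x_gt[bkg].mean() - 1.0)

def nstd(x, bkg):
    """Normalized standard deviation, Eq. (15b), in the background region."""
    return x[bkg].std() / x[bkg].mean()

# Flat synthetic ground truth with a 4:1 target-to-background ratio
gt = np.ones((16, 16))
gt[4:12, 4:12] = 4.0
tgt, bkg = gt > 2.0, gt <= 2.0
```

Plotting CRC against NSTD across iterations, as in Figure 9, traces the contrast-versus-noise trade-off of each algorithm.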

Fig. 8.

Hoffman phantom reconstructions of a static acquisition from the GE Discovery MI scanner using (a) ML-EM with full count, (b) ML-EM with 1/40th of the count, and (c) DIP with 1/40th of the count. The magnified regions within the red dotted squares are shown in the bottom row. The image intensity of the full-count reconstruction was rescaled to match the intensity of the low-count reconstructions.

Fig. 9.

GE Discovery MI Hoffman phantom reconstruction convergence curves from the static acquisition. The corresponding reconstructed images from Figure 8 are indicated by an “X”. The full-count ML-EM curve shows every 4th iteration from iteration 4 to 500. The low-count ML-EM curve shows every 4th iteration from iteration 4 to 300. The low-count DIP curve shows every 30th epoch from epoch 20 to 3200.

C. Limited-angle reconstruction from GATE simulations

In order to validate YRT-PET for a non-cylindrical scanner geometry, GATE simulations [24] of the XCAT phantom [59] were performed for a long axial field-of-view flat panel scanner under development [47], [48], [60].

The simulated geometry used two flat panel detectors of dimensions 120 × 60 cm, composed of 3 × 3 × 20 mm crystals with a TOF resolution of 200 ps. GATE simulation results were converted to binary list-mode format (excluding randoms and scattered coincidences for simplicity) and reconstructed with ML-EM using CASToR v3.2.1 [14], PyTomography v3.4.0 [15], and YRT-PET v1.3.1, using one subset, 50 iterations, and a voxel size of 3 × 3 × 3 mm3. In order to model the axial sensitivity of the scanner in reconstruction, normalization factors were estimated using a Cauchy distribution model fitted to the overlap between LORs and detectors as a function of the axial LOR angle [61]. The normalization correction factors were stored in the CASToR histogram format and populated into the list-mode file. In YRT-PET, the same normalization factors were accumulated into a 3D histogram and used for the reconstruction. All reconstructions were performed on an AMD EPYC 7763 64-core processor using 32 threads and an NVIDIA A100-SXM4-80GB GPU.
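The normalization strategy can be sketched as fitting a one-parameter Lorentzian (Cauchy-like) profile to the axial sensitivity and inverting it per LOR. The profile shape, angle grid, noiseless samples, and grid-search fit below are illustrative stand-ins for the fit to LOR-detector overlap described in [61].

```python
import numpy as np

def cauchy(theta, gamma):
    """Lorentzian sensitivity profile in the axial LOR angle (peak at 0)."""
    return gamma**2 / (theta**2 + gamma**2)

# Hypothetical axial-sensitivity samples to which the profile is fitted
theta = np.linspace(-1.0, 1.0, 81)          # axial LOR angle (radians)
meas = cauchy(theta, 0.3)                   # noiseless for illustration

# Coarse least-squares grid search over the width parameter gamma
gammas = np.linspace(0.05, 1.0, 200)
errs = [((cauchy(theta, g) - meas) ** 2).sum() for g in gammas]
gamma_hat = gammas[int(np.argmin(errs))]

# Normalization factor for an LOR at angle theta: inverse sensitivity
norm = 1.0 / cauchy(theta, gamma_hat)
```

The resulting per-LOR factors would then be accumulated into the 3D normalization histogram used during reconstruction.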

Figure 10 shows a comparison of reconstructed images using CASToR, PyTomography and YRT-PET. The voxel values were scaled to match the simulated activity in the liver region, with the same scaling factor applied to all the reconstructed images. Images as well as the line plots suggest that YRT-PET, PyTomography, and CASToR produce comparable images and are in reasonable agreement with the ground truth activity map.

Fig. 10.

Comparison of reconstructed images from GATE simulation of XCAT phantom with flat-panel scanner: (a) ground truth (Ref.), (b) CASToR reconstruction, (c) PyTomography reconstruction, (d) YRT-PET reconstruction, and (e) horizontal line profile through images.

To further compare the three reconstruction engines, the normalized root mean squared error (NRMSE) was calculated for each reconstructed image:

\mathrm{NRMSE}(x, x^{\mathrm{GT}}) = \frac{\sqrt{\frac{1}{J} \sum_j (x_j - x^{\mathrm{GT}}_j)^2}}{\mu(x^{\mathrm{GT}})},   (16)

where x is the reconstructed image, xGT is the ground truth and μ is a function calculating the average intensity. To compute the CRC, (15a) is applied with the target region xtgt placed inside the liver and the background region xbkg in a uniform soft-tissue region. Quantitative metrics are summarized in Table I and confirm the qualitative analysis, showing comparable image quality between the three engines. Note that the SSIM metric in Table I compares each reconstruction against the ground truth.
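Eq. (16) maps directly to numpy. The sketch below (the flat test image is purely illustrative) checks that a uniform 10% overestimation yields an NRMSE of about 10%.

```python
import numpy as np

def nrmse(x, x_gt):
    """NRMSE of Eq. (16): RMSE normalized by the mean ground-truth intensity."""
    return np.sqrt(np.mean((x - x_gt) ** 2)) / x_gt.mean()

# A uniform 10% overestimation of a flat image gives an NRMSE near 0.1
gt = np.full((8, 8, 8), 2.0)
err = nrmse(1.1 * gt, gt)
```

Note that, being normalized by the mean ground-truth intensity, the metric is insensitive to the global scaling applied before comparison.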

TABLE I.

Metrics comparison of the XCAT reconstructions obtained from CASToR, PyTomography and YRT-PET (shown in Fig. 10).

Metric   CASToR   PyTomo.   YRT-PET
CRC      0.919    0.925     0.926
SSIM     88.1%    88.4%     88.3%
NRMSE    24.3%    23.9%     24.1%

Fig. 11 illustrates a breakdown of the time taken by each reconstruction engine. The timing comparison was divided into two parts: the sensitivity image generation and the ML-EM reconstruction. To avoid bias, the timings exclude the time spent reading the input files.

Fig. 11.

Bar plot comparing timings from the tested open-source reconstruction tools. For YRT-PET and CASToR, both the Siddon and the distance-driven projector were used.

YRT-PET on GPU outperforms CASToR in speed due to its use of GPU acceleration. However, the CPU implementation of YRT-PET is slower than CASToR, owing to CASToR’s ability to re-use the elements of the geometric system matrix between the forward projection and the backward projection [14] rather than computing them on the fly twice.

When compared to PyTomography, which uses the GPU-based projector from the PARALLELPROJ library [20], YRT-PET still achieves a significant speedup (around 10×). This large difference is due to the size of the data, which requires splitting the list-mode data into batches, and to PyTomography’s [15] architecture being built on top of PyTorch, which handles memory management internally. The transfers between the host and the device are managed by PyTorch and block execution, resulting in a severe bottleneck. YRT-PET parallelizes the data loading step with the projection operation, which avoids this bottleneck and leads to faster reconstruction. Note that the performance of PyTomography and YRT-PET is compared for a smaller image reconstruction problem in Supplementary Material S.2, showing a more modest, but still substantial, improvement of around 2× for distance-driven and 5× for Siddon.
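The overlap between data loading and projection described above can be sketched as a bounded producer-consumer pipeline, in which the next batch is read and transferred while the current one is being projected. In the schematic below the `time.sleep` calls stand in for disk reads, host-to-device copies, and projection kernels; this is a conceptual illustration, not YRT-PET’s actual data flow.

```python
import queue
import threading
import time

def load_batches(q, n_batches):
    """Producer: read/transfer the next batch while the consumer projects."""
    for i in range(n_batches):
        time.sleep(0.01)                  # stand-in for disk read + H2D copy
        q.put(i)
    q.put(None)                           # end-of-data sentinel

def reconstruct(n_batches):
    q = queue.Queue(maxsize=2)            # two slots: double buffering
    t = threading.Thread(target=load_batches, args=(q, n_batches))
    t.start()
    done = []
    while (batch := q.get()) is not None:
        time.sleep(0.01)                  # stand-in for GPU projection kernel
        done.append(batch)
    t.join()
    return done
```

With a bounded queue, neither side races ahead of the other, and for long scans the total run time approaches the larger of the load and compute times rather than their sum.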

V. Discussion

The presented image reconstruction software offers a unique set of features, including GPU acceleration, Python bindings, and event-by-event motion correction, and has been validated against existing image reconstruction software, both proprietary and open-source.

A speed comparison was performed against open-source reconstruction software. Although the CPU implementation of YRT-PET under-performs compared to a state-of-the-art reconstruction engine like CASToR, the GPU implementation outperforms both CPU-only and GPU-accelerated engines. The suboptimal CPU performance of YRT-PET can be addressed in future work by integrating further optimizations, such as re-using the geometric system matrix elements between forward and backward projection. The implementation difference that explains YRT-PET’s performance gain over PyTomography is, at its core, one of design choices. PyTomography has a higher-level design: it is a Python library that interfaces with PyTorch for vector operations and with PARALLELPROJ for the Joseph projector CUDA kernel. YRT-PET, on the other hand, is a C++ program that directly calls every CUDA kernel necessary for the reconstruction. This lower-level design, albeit more difficult to maintain, allows far more control and optimization opportunities. The data flow in YRT-PET, which overlaps GPU kernels and memory transfers, results in a significant reduction of run time. Newer generations of high-resolution, high-sensitivity scanners are expected to produce large amounts of data per scan, making granular memory management vital for achieving high performance.

However, in order to achieve quantitative accuracy in image reconstruction, i.e., to recover the activity level in each voxel, calibration factors must be determined for a given scanner and imaging protocol. Since scanner-dependent operations are handled by plugins, the estimation of these calibration factors must be performed by plugin developers and users. YRT-PET also provides a global scaling parameter that can be used to scale images and recover accurate activity estimates. Similarly, the estimation of randoms is scanner-specific and is typically managed within plugins.

YRT-PET offers a provisional non-TOF implementation of the SSS algorithm for scatter estimation, evaluated for the GE DMI PET/CT scanner in Supplementary Material S.1. The current implementation, however, has not been fully validated and might be unsuitable for some scanner architectures, such as flat panel or helmet-shaped scanners. Additional options will be added to allow more control over the scatter estimation process, so that users can optimize it for different use cases.

Future work will address some of the current limitations of YRT-PET, including scatter estimation and improving the GPU implementation. More specifically, further optimizations to batch loading, memory management, and projector kernels are planned to boost speed and scalability. Additional developments will also include support for regularly-spaced sinograms (optionally with compression) and sinogram-based image reconstruction (using iterative reconstruction or filtered back-projection), handling of large datasets through disk swapping, and support for image-domain non-rigid deformations.

VI. Conclusion

This paper introduces the YRT-PET software, an open-source image reconstruction engine for PET which includes support for event-by-event motion correction, TOF, DOI, PSF modeling, GPU acceleration, and a provisional scatter estimation module. The plugin system aims to facilitate the extension of the software to support more PET scanners and data formats. Evaluations against existing tools showed good quantitative agreement for dynamic reconstructions, as evidenced by the high correlation between DVRs estimated from reconstructions using the manufacturer’s reconstruction engine and YRT-PET (slope: 0.97, R2=0.99). Moreover, a substantial acceleration compared to state-of-the-art reconstruction engines was achieved thanks to its optimized GPU-accelerated implementation. The Python interface enables the use of the internal data structures and operators within complex algorithms including advanced reconstruction frameworks and integration with deep learning models.

Supplementary Material

Supplementary_S2

Fig. S.2. Histogram-space display of (a) prompts, (b) randoms estimates, (c) scatter estimate from the manufacturer toolbox and (d) scatter estimate from YRT-PET. A line plot is shown in the bottom graph. The manufacturer scatter estimate is slightly above the proposed software, which explains the slightly higher contrast observed in the reconstructed images.

Supplementary_S1

Fig. S.1. Reconstructed images from acquisition of Hoffman phantom on GE DMI PET/CT scanner: (a) without scatter correction (No SC), (b) with scatter correction using the GE toolbox estimation (SC (GE)) and (c) with scatter correction using YRT-PET’s estimation (SC (YRT-PET)). (d) shows the line plot through the three images.

Acknowledgments

This work involved human subjects in its research. Approval of all ethical and experimental procedures and protocols was granted by the institutional review board at Massachusetts General Hospital (Protocol number: 2021P003519), and performed in line with the Declaration of Helsinki and in accordance with local statutory requirements.

Manuscript created 17 April 2025. This work was supported in part by the National Institutes of Health under grants P41EB022544, R01EB035093, U01EB027003 and R01AG076153, by the European Union’s Horizon Europe research and innovation program under grant agreement No 101099896 (PetVision project), and by the Slovenian Research and Innovation Agency (research core funding Nos. P1-0389 and P1-0135).

Footnotes

Contributor Information

Yassir Najmaoui, Yale Biomedical Imaging Institute and Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA and the Interdisciplinary Institute for Technological Innovation, Université de Sherbrooke, Sherbrooke, QC, Canada.

Yanis Chemli, Yale Biomedical Imaging Institute and Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA.

Maxime Toussaint, Gordon Center for Medical Imaging, and is now with the Sherbrooke Molecular Imaging Center of CRCHUS and Department of Medical Imaging and Radiation Sciences, Université de Sherbrooke, Sherbrooke, QC, Canada.

Yoann Petibon, Gordon Center for Medical Imaging, and is now with Takeda Pharmaceuticals, Cambridge MA, USA.

Baptiste Marty, Gordon Center for Medical Imaging, and is now with Exail Sonar Division, Paris, France.

Kathryn Fontaine, Yale Biomedical Imaging Institute and Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA.

Jean-Dominique Gallezot, Yale Biomedical Imaging Institute and Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA.

Gašper Razdevšek, Jožef Stefan Institute, Ljubljana, Slovenia.

Matic Orehar, Faculty of Mathematics and Physics, University of Ljubljana, Ljubljana, Slovenia.

Maeva Dhaynaut, Yale Biomedical Imaging Institute and Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA.

Nicolas Guehl, Yale Biomedical Imaging Institute and Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA.

Rok Dolenec, Faculty of Mathematics and Physics, University of Ljubljana, and the Jožef Stefan Institute, Ljubljana, Slovenia.

Rok Pestotnik, Jožef Stefan Institute, Ljubljana, Slovenia.

Keith Johnson, Gordon Center for Medical Imaging, Mass. General Research Institute, Boston MA, USA.

Jinsong Ouyang, Yale Biomedical Imaging Institute and Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA.

Marc Normandin, Yale Biomedical Imaging Institute and Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA.

Marc-André Tétrault, Gordon Center for Medical Imaging, Massachusetts General Hospital, Boston MA, and is now with the Interdisciplinary Institute for Technological Innovation, Université de Sherbrooke, Sherbrooke, QC, Canada.

Roger Lecomte, Sherbrooke Molecular Imaging Center of CRCHUS and Department of Medical Imaging and Radiation Sciences, Université de Sherbrooke, Sherbrooke, QC, Canada.

Georges El Fakhri, Yale Biomedical Imaging Institute and Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA.

Thibault Marin, Yale Biomedical Imaging Institute and Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA.

References

  • [1].Karakatsanis N. et al. , “Usability of PETSIRD, the PET raw data open format of the emission tomography standardization initiative (ETSI): results from ETSI’s first hackathon,” Journal of Nuclear Medicine, vol. 65, pp. 241 285–241 285, 2024. [Google Scholar]
  • [2].Rahmim A. et al. , “Strategies for motion tracking and correction in PET,” PET clinics, vol. 2, no. 2, pp. 251–266, 2007. [DOI] [PubMed] [Google Scholar]
  • [3].Carson RE et al. , “Design of a motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction for the HRRT,” in IEEE Nuclear Science Symposium and Medical Imaging Conference, vol. 5, 2003, pp. 3281–3285. [Google Scholar]
  • [4].Spangler-Bickell MG et al. , “Rigid Motion Correction for Brain PET/MR Imaging using Optical Tracking,” IEEE Transactions on Radiation and Plasma Medical Sciences, vol. 3, no. 4, pp. 498–503, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [5].Zeng T. et al. , “Markerless head motion tracking and event-by-event correction in brain PET,” Physics in Medicine and Biology, vol. 68, no. 24, 2023. [Google Scholar]
  • [6].Petibon Y. et al. , “Cardiac motion compensation and resolution modeling in simultaneous PET-MR: a cardiac lesion detection study,” Physics in Medicine and Biology, vol. 58, no. 7, pp. 2085–2102, 2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [7].Gillman A. et al. , “PET motion correction in context of integrated PET/MR: Current techniques, limitations, and future projections,” Medical Physics, vol. 44, no. 12, pp. e430–e445, 2017. [DOI] [PubMed] [Google Scholar]
  • [8].Marin T. et al. , “Motion correction for PET data using subspace-based real-time MR imaging in simultaneous PET/MR,” Physics in Medicine and Biology, vol. 65, no. 23, p. 235022, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [9].Sun T. et al. , “Body motion detection and correction in cardiac PET: Phantom and human studies,” Medical Physics, vol. 46, no. 11, pp. 4898–4906, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [10].Jin X. et al. , “Evaluation of frame-based and event-by-event motion-correction methods for awake monkey brain PET imaging,” Journal of Nuclear Medicine, vol. 55, no. 2, pp. 287–293, 2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [11].Qiao F. et al. , “A motion-incorporated reconstruction method for gated PET studies,” Physics in Medicine and Biology, vol. 51, no. 15, pp. 3769–3783, 2006. [DOI] [PubMed] [Google Scholar]
  • [12].Lamare F. et al. , “List-mode-based reconstruction for respiratory motion correction in PET using non-rigid body transformations,” Physics in Medicine and Biology, vol. 52, no. 17, pp. 5187–5204, 2007. [DOI] [PubMed] [Google Scholar]
  • [13].Thielemans K. et al. , “STIR: software for tomographic image reconstruction release 2,” Physics in Medicine and Biology, vol. 57, no. 4, pp. 867–883, 2012. [DOI] [PubMed] [Google Scholar]
  • [14].Merlin T. et al. , “CASToR: a generic data organization and processing code framework for multi-modal and multi-dimensional tomographic reconstruction,” Physics in Medicine and Biology, vol. 63, no. 18, p. 185005, 2018. [DOI] [PubMed] [Google Scholar]
  • [15].Polson L. et al. , “PyTomography: A Python Library for Quantitative Medical Image Reconstruction,” arXiv preprint arXiv:2309.01977, 2024. [Google Scholar]
  • [16].Beazley DM, “SWIG: An easy to use tool for integrating scripting languages with C and C++,” in Tcl/Tk Workshop, 1996. [Google Scholar]
  • [17].Hudson HM and Larkin RS, “Accelerated image reconstruction using ordered subsets of projection data,” IEEE Transactions on Medical Imaging, vol. 13, no. 4, pp. 601–609, 1994. [DOI] [PubMed] [Google Scholar]
  • [18].Watson CC, “New, faster, image-based scatter correction for 3D PET,” IEEE Transactions on Nuclear Science, vol. 47, no. 4, pp. 1587–1594, 2000. [Google Scholar]
  • [19].Wang G and Qi J, “PET Image Reconstruction Using Kernel Method,” IEEE Transactions on Medical Imaging, vol. 34, no. 1, pp. 61–71, 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [20].Schramm G and Thielemans K, “PARALLELPROJ-an open-source framework for fast calculation of projections in tomography,” Frontiers in Nuclear Medicine (Lausanne, Switzerland: ), vol. 3, p. 1324562, 2023. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [21].Joseph PM, “An Improved Algorithm for Reprojecting Rays through Pixel Images,” IEEE Transactions on Medical Imaging, vol. 1, no. 3, pp. 192–196, 1982. [DOI] [PubMed] [Google Scholar]
  • [22].De Man B and Basu S, “Distance-driven projection and backprojection in three dimensions,” Physics in Medicine and Biology, vol. 49, no. 11, pp. 2463–2475, 2004. [DOI] [PubMed] [Google Scholar]
  • [23].Manjeshwar RM et al. , “Fully 3D PET Iterative Reconstruction Using Distance-Driven Projectors and Native Scanner Geometry,” in IEEE Nuclear Science Symposium and Medical Imaging Conference, vol. 5, 2006, pp. 2804–2807. [Google Scholar]
  • [24].Jan S. et al. , “GATE V6: a major enhancement of the GATE simulation platform enabling modelling of CT and radiotherapy,” Physics in Medicine and Biology, vol. 56, no. 4, pp. 881–901, 2011. [DOI] [PubMed] [Google Scholar]
  • [25].Brun R and Rademakers F, “ROOT – An object oriented data analysis framework,” Nuclear Instruments and Methods in Physics Research, vol. 389, no. 1-2, pp. 81–86, 1997. [Google Scholar]
  • [26].José Santo R. et al. , “openSSS: an open-source implementation of scatter estimation for 3d TOF-PET,” EJNMMI Physics, vol. 12, no. 1, p. 17, 2025. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [27].Ansel J. et al. , “PyTorch 2: Faster Machine Learning Through Dynamic Python Bytecode Transformation and Graph Compilation,” in ACM International Conference on Architectural Support for Programming Languages and Operating Systems, vol. 2. New York, NY, USA: Association for Computing Machinery, 2024, pp. 929–947. [Google Scholar]
  • [28].Ulyanov D. et al. , “Deep Image Prior,” arXiv preprint arXiv:1711.10925, 2018. [Google Scholar]
  • [29].Gong K. et al. , “PET Image Reconstruction Using Deep Image Prior,” IEEE Transactions on Medical Imaging, vol. 38, no. 7, pp. 1655–1665, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [30].Pratx G et al., “Fast, accurate and shift-varying line projections for iterative reconstruction using the GPU,” IEEE Transactions on Medical Imaging, vol. 28, no. 3, pp. 435–445, 2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [31].Herraiz JL et al. , “Fully 3d GPU PET reconstruction,” Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 648, pp. S169–S171, 2011. [Google Scholar]
  • [32].Dagum L and Menon R, “OpenMP: an industry standard API for shared-memory programming,” IEEE Computational Science and Engineering, vol. 5, no. 1, pp. 46–55, 1998. [Google Scholar]
  • [33].Siddon RL, “Fast calculation of the exact radiological path for a three-dimensional CT array,” Medical Physics, vol. 12, no. 2, pp. 252–255, 1985. [DOI] [PubMed] [Google Scholar]
  • [34].Jacobs F. et al. , “A Fast Algorithm to Calculate the Exact Radiological Path through a Pixel or Voxel Space,” Journal of Computing and Information Technology, vol. 6, no. 1, pp. 89–94, 1998. [Google Scholar]
  • [35].Rahmim A. et al. , “Resolution modeling in PET imaging: theory, practice, benefits, and pitfalls,” Medical Physics, vol. 40, no. 6, p. 064301, 2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [36].Deller TW et al. , “Implementation and Image Quality Benefit of a Hybrid-Space PET Point Spread Function,” in IEEE Nuclear Science Symposium and Medical Imaging Conference, 2021, pp. 1–5. [Google Scholar]
  • [37].Chemli Y. et al. , “Super-resolution in brain Positron Emission Tomography using a real-time motion capture system,” NeuroImage, vol. 272, p. 120056, 2023. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [38].Brasse D. et al. , “Correction methods for random coincidences in fully 3D whole-body PET: impact on data and image quality,” Journal of Nuclear Medicine, vol. 46, no. 5, pp. 859–867, 2005. [PubMed] [Google Scholar]
  • [39].Lange K and Carson RE, “EM reconstruction algorithms for emission and transmission tomography,” Journal of Computer Assisted Tomography, vol. 8, no. 2, pp. 306–316, 1984. [PubMed] [Google Scholar]
  • [40].Olesen OV et al. , “Motion tracking for medical imaging: a nonvisible structured light tracking approach,” IEEE Transactions on Medical Imaging, vol. 31, no. 1, pp. 79–87, 2012. [DOI] [PubMed] [Google Scholar]
  • [41].Tiss A. et al. , “Impact of motion correction on [18F]-MK6240 tau PET imaging,” Physics in Medicine and Biology, vol. 68, no. 10, p. 105015, 2023. [Google Scholar]
  • [42].Rahmim A. et al. , “Motion compensation in histogram-mode and list-mode EM reconstructions: beyond the event-driven approach,” IEEE Transactions on Nuclear Science, vol. 51, no. 5, pp. 2588–2596, 2004. [Google Scholar]
  • [43].Harris CR et al. , “Array programming with NumPy,” Nature, vol. 585, no. 7825, pp. 357–362, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [44] Marin T. et al., “PET mapping of receptor occupancy using joint direct parametric reconstruction,” IEEE Transactions on Biomedical Engineering, vol. 72, 2024.
  • [45] Hashimoto F. et al., “PET Image Reconstruction Incorporating Deep Image Prior and a Forward Projection Model,” IEEE Transactions on Radiation and Plasma Medical Sciences, vol. 6, no. 8, pp. 841–846, 2022.
  • [46] Hashimoto F. et al., “Fully 3D implementation of the end-to-end deep image prior-based PET image reconstruction using block iterative algorithm,” Physics in Medicine & Biology, vol. 68, no. 15, p. 155009, 2023.
  • [47] Pestotnik R. et al., “Simulation study of a 50 ps panel TOF PET imager,” Journal of Instrumentation, vol. 17, no. 12, p. C12010, 2022.
  • [48] Orehar M. et al., “Design Optimisation of a Flat-Panel, Limited-Angle TOF-PET Scanner: A Simulation Study,” Diagnostics, vol. 14, no. 17, p. 1976, 2024.
  • [49] Razdevšek G. et al., “Flexible and modular PET: Evaluating the potential of TOF-DOI panel detectors,” Medical Physics, 2025.
  • [50] Hoffman E. et al., “3-D phantom to simulate cerebral blood flow and metabolic images for PET,” IEEE Transactions on Nuclear Science, vol. 37, no. 2, pp. 616–620, 1990.
  • [51] Guehl NJ et al., “Evaluation of pharmacokinetic modeling strategies for in-vivo quantification of tau with the radiotracer [(18)F]MK6240 in human subjects,” European Journal of Nuclear Medicine and Molecular Imaging, vol. 46, no. 10, pp. 2099–2111, 2019.
  • [52] Tustison NJ et al., “The ANTsX ecosystem for quantitative biological and medical imaging,” Scientific Reports, vol. 11, no. 1, p. 9068, 2021.
  • [53] Fischl B, “FreeSurfer,” NeuroImage, vol. 62, no. 2, pp. 774–781, 2012.
  • [54] Jenkinson M. et al., “FSL,” NeuroImage, vol. 62, no. 2, pp. 782–790, 2012.
  • [55] Bland JM and Altman DG, “Statistical Methods for Assessing Agreement Between Two Methods of Clinical Measurement,” The Lancet, vol. 327, no. 8476, pp. 307–310, 1986.
  • [56] Wang Z. et al., “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
  • [57] Rahmim A. et al., “Accurate event-driven motion compensation in high-resolution PET incorporating scattered and random events,” IEEE Transactions on Medical Imaging, vol. 27, no. 8, pp. 1018–1033, 2008.
  • [58] Liu DC and Nocedal J, “On the limited memory BFGS method for large scale optimization,” Mathematical Programming, vol. 45, no. 1, pp. 503–528, 1989.
  • [59] Segars WP et al., “4D XCAT phantom for multimodality imaging research,” Medical Physics, vol. 37, no. 9, pp. 4902–4915, 2010.
  • [60] Razdevšek G. et al., “Multipanel Limited Angle PET System With 50 ps FWHM Coincidence Time Resolution: A Simulation Study,” IEEE Transactions on Radiation and Plasma Medical Sciences, vol. 6, no. 6, pp. 721–730, 2022.
  • [61] Marin T. et al., “Long-Axial Field-Of-View Limited-Angle PET System with Ultra-High Time-Of-Flight and Depth-Of-Interaction,” in IEEE Nuclear Science Symposium and Medical Imaging Conference, 2024, pp. 1–2.

Associated Data

Supplementary Materials

Supplementary_S1

Fig. S.1. Reconstructed images from an acquisition of the Hoffman phantom on the GE DMI PET/CT scanner: (a) without scatter correction (No SC), (b) with scatter correction using the GE toolbox estimate (SC (GE)), and (c) with scatter correction using YRT-PET’s estimate (SC (YRT-PET)). (d) Line plot through the three images.

Supplementary_S2

Fig. S.2. Histogram-space display of (a) prompts, (b) randoms estimate, (c) scatter estimate from the manufacturer toolbox, and (d) scatter estimate from YRT-PET. A line plot is shown in the bottom graph. The manufacturer scatter estimate is slightly above that of the proposed software, which explains the slightly higher contrast observed in the reconstructed images.