Abstract
Objective:
Quality assurance (QA) testing must be performed at regular intervals to ensure that medical devices are operating within designed specifications. Numerous QA phantoms and software packages have been developed to facilitate measurements of machine performance. However, due to the hard-coded nature of geometric phantom definition in analysis software, users are typically limited to the use of a small subset of compatible QA phantoms. In this work, we present a novel AI-based Universal Phantom (UniPhan) algorithm that is not phantom specific and can be easily adapted to any pre-existing image-based QA phantom.
Approach:
Extensible Markup Language Scalable Vector Graphics (XML-SVG) was modified to include several new tags describing the function of embedded phantom objects for use in QA analysis. Functional tags include contrast and density plugs, spatial linearity markers, resolution bars and edges, uniformity regions, and light-radiation field coincidence areas. Machine learning was used to develop an image classification model for automatic phantom type detection. After AI phantom identification, UniPhan imported the corresponding XML-SVG wireframe, registered it to the image taken during the QA process, performed analysis on the functional tags, and exported results for comparison to expected device specifications. Analysis results were compared to those generated by manual image analysis.
Main Results:
XML-SVG wireframes were generated for several commercial phantoms including ones specific to CT, CBCT, kV planar imaging, and MV imaging. Several functional objects were developed and assigned to the graphical elements of the phantoms. The AI classification model was tested for training and validation accuracy and loss, along with phantom type prediction accuracy and speed. The results reported training and validation accuracies of 99%, phantom type prediction confidence scores of around 100%, and prediction speeds of around 0.1 seconds. Compared to manual image analysis, UniPhan results were consistent across all metrics including contrast-to-noise ratio (CNR), modulation transfer function (MTF), HU accuracy, and uniformity.
Significance:
The UniPhan method can identify phantom type and use its corresponding wireframe to perform QA analysis. As these wireframes can be generated in a variety of ways, this represents an accessible, automated method of analyzing image-based QA phantoms that is flexible in both scope and implementation.
I. Introduction
Quality assurance (QA) is essential to ensuring that medical devices are operating within designed specifications. Quality health care, whether it be in the form of the accurate delivery of radiotherapy or the assurance of optimal diagnostic image quality for identification of disease and metabolic function tests, is dependent, in part, on the optimal performance of the equipment used1. The American Association of Physicists in Medicine (AAPM) and American College of Radiology (ACR) have underscored the importance of quality assurance and outlined standardized procedures and tolerances in Task Group Reports and Quality Control Manuals2,3,4,5,6.
There are numerous phantoms designed specifically for the QA of machine performance, as well as associated software packages that analyze these phantoms in order to report the metrics required for QA tests2,7. Such phantoms typically have various embedded QA objects such as contrast disks, material density plugs, resolution bars, uniformity regions, spatial linearity markers, and other objects as required for machine QA assessment. Although manual QA object analysis can be performed, the use of automated analysis algorithms is preferable as it reduces manual QA time, reduces user bias, and enables more quantitative data collection8,9. Currently, such algorithms are phantom-specific in that they only register and analyze features of a subset of available phantoms, which limits users in their choice of phantoms10. This is primarily because, although the QA analysis methods for phantom objects are similar (contrast-to-noise ratio (CNR), intensity, modulation transfer function (MTF), spatial linearity, etc.), the geometric location and size of these objects can vary from phantom to phantom, making it necessary to hard code the phantom geometry directly into the analysis software.
To address these issues, we introduce a novel universal phantom (UniPhan) imaging QA analysis algorithm, which utilizes Extensible Markup Language-based Scalable Vector Graphics (XML-SVG) to directly embed functional information into the geometrical elements describing the QA phantom11,12, and a trained artificial intelligence (AI) image classification model for automatic phantom detection. Since SVG is based on XML, a markup language which can be interpreted by both human and machine, information defining the functional objects in the file can be deciphered, created, and edited using a simple text editor or graphics software package. The XML-SVG wireframe method provides an improvement over current analysis software packages with hard-coded phantoms by removing all phantom specific geometric information from the algorithm and placing it in an SVG wireframe. The machine learning routine allows automated phantom identification for selection of its corresponding wireframe. UniPhan will then decode the wireframe, register it to the image, perform image analysis, and output the results for comparison to expected machine performance metrics.
II. Materials and Methods
Functional Objects
XML is a markup language designed to transport and store data in a simplified manner. It uses a human- and machine-readable markup format and user-defined tags, which facilitate effective classification of objects within a file without requiring specialized database tools to interpret11. Hundreds of document formats using XML syntax have been developed, the most popular being Hypertext Markup Language (HTML) for internet web pages13. Even in the field of radiation oncology, XML has been adopted by some vendors as the command language for linear accelerator (LINAC) control, as a straightforward means of allowing users to interface with the equipment beyond standard clinical procedures without the need for detailed knowledge of programming. SVG is a well-known XML-based vector image format for two-dimensional (2D) graphics which describes graphical elements in the human-readable format characteristic of XML and is widely used by many graphics software packages12. Here we have used XML-SVG to create visual wireframes for several phantoms, as shown by the example in Figure 1. Although such wireframes can define the shape and position of the various objects in a phantom, they do not natively define their functional roles. To allow this, we have modified SVG with the addition of custom tags that identify the functional objects in the wireframe (Table 1). Current objects supported by UniPhan include contrast regions, intensity regions, MTF bars and edges, ball bearing (BB) center identification, and radiation versus light field coincidence areas.
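The exact tag syntax is not reproduced here; the following is a minimal sketch, assuming a hypothetical `qa:function` attribute in a custom namespace, of how such a functional wireframe could be authored and decoded with Python's standard library:

```python
import xml.etree.ElementTree as ET

# A minimal wireframe sketch: standard SVG shapes carry a custom
# attribute (here, "function" in a hypothetical "qa" namespace) naming
# the functional tag from Table 1. Attribute names are illustrative,
# not the exact UniPhan syntax.
WIREFRAME = """\
<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:qa="https://example.org/uniphan" width="300" height="300">
  <circle cx="150" cy="150" r="140" qa:function="body"/>
  <circle cx="90"  cy="60"  r="8"   qa:function="contrast"/>
  <rect   x="200"  y="40" width="40" height="20" qa:function="mtf_bar"/>
  <circle cx="150" cy="220" r="12"  qa:function="intensity"/>
</svg>
"""

QA_NS = "{https://example.org/uniphan}"

def functional_objects(svg_text):
    """Yield (function_tag, element) pairs for every tagged shape."""
    root = ET.fromstring(svg_text)
    for elem in root.iter():
        func = elem.get(QA_NS + "function")
        if func is not None:
            yield func, elem

for func, elem in functional_objects(WIREFRAME):
    print(func, elem.tag, dict(elem.attrib))
```

Because the custom attributes live in their own namespace, the file remains a valid SVG image that ordinary graphics software will render while ignoring the QA annotations.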
Figure 1:

Example of a Leeds TOR-18FG phantom x-ray image (left), its corresponding XML-SVG wireframe (middle), and both registered (right). Functional objects in this phantom include outside body (black), contrast disks (green), intensity ROIs for uniformity (blue), MTF edge (red), and MTF bars (red).
Table 1:
List of wireframe functional objects. The corresponding image quality metric and the region(s) on the phantom with which they coincide are defined.
| Tag | QA Functions | Region Output |
|---|---|---|
| body | Image Registration | Outside edge of the phantom body |
| bb | Spatial Linearity | Locates and calculates the center position of the BB inside the region |
| intensity | Density, Uniformity | Calculates mean, median, and standard deviation of inner pixels |
| contrast | Contrast and CNR | Compares inner region pixels to outer pixels to calculate contrast4,14 |
| mtf_bar | Spatial Resolution | Analyzes region containing line pair spacings for spatial resolution15,16 |
| mtf_edge | Spatial Resolution | Locates a high contrast edge within the region and calculates MTF17,18 |
| mtf_ring | Spatial Resolution | Locates a high contrast circular object within the region and calculates MTF19,20 |
| rvl | Radiation-vs-Light | Locates a high contrast radiation edge within the region and reports its distance from the expected position |
As shown in Table 1, several common measurement techniques were employed in UniPhan. For objects with the contrast tag, the Michelson and Weber contrasts are both calculated, along with the contrast-to-noise ratio (CNR). The intensity object reports the median pixel intensity along with its standard deviation. This object is versatile in that several common QA measurements can be made with it. For example, by drawing the intensity object on phantom density plugs, an HU versus density measurement can be performed. Similarly, drawing several intensity objects on a uniformity slice and calculating their intensity differences allows for a determination of image uniformity. Three objects exist for resolution measurements, which utilize previously well-established techniques15,19,20,21,22,23. For resolution bars, discrete MTF values are calculated as the normalized contrast of a set of line pairs, while for edges and rings, MTF is approximated from measurements of the edge spread function (ESF). Various MTF calculation methods were used to carry out the analysis routines17,18,19,20,21,22,23,24,25,26,27,28,29. The BB object performs background segmentation and calculates the center of mass of the remaining pixels, allowing spatial linearity to be measured between two or more BBs with well-known separation distances. Finally, the body functional object is used for registering the wireframe to the phantom image.
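The exact equation forms used by UniPhan are not reproduced here, but the metrics named above have standard definitions. The sketch below assumes the common forms of Michelson contrast, Weber contrast, CNR (background noise in the denominator), the ESF-based MTF (differentiate the edge profile into a line spread function, then take the normalized magnitude of its Fourier transform), and a threshold-based BB center of mass:

```python
import numpy as np

def contrast_metrics(inner, outer):
    """Standard contrast definitions for an ROI (inner) versus its
    surrounding background (outer), both given as pixel arrays."""
    mi, mo = inner.mean(), outer.mean()
    michelson = (max(mi, mo) - min(mi, mo)) / (mi + mo)
    weber = (mi - mo) / mo
    cnr = abs(mi - mo) / outer.std()  # noise estimated from background
    return michelson, weber, cnr

def mtf_from_edge(esf, pixel_spacing_mm):
    """Approximate the MTF from a sampled edge spread function:
    ESF -> LSF (derivative) -> |FFT| -> normalize to zero frequency."""
    lsf = np.gradient(esf)
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                  # so that MTF(0) = 1
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_spacing_mm)  # cycles/mm
    return freqs, mtf

def bb_center(region, threshold):
    """Center of mass of above-threshold pixels, used to measure
    spatial linearity between BBs of known separation."""
    rows, cols = np.nonzero(region > threshold)
    return rows.mean(), cols.mean()
```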
Automated Phantom Identification
Image classification is employed by UniPhan as the method for automatic phantom identification. This method uses TensorFlow, a free and open-source end-to-end machine learning and artificial intelligence platform. TensorFlow's tools for building and training deep neural networks were used to develop an image classification model. The machine learning workflow was composed of data extraction and examination, an input pipeline, the building, training, and testing of the model, and model improvements if and when necessary. The trained model was then used to perform predictions (i.e., image classification tests) on images of the four phantoms described below.
The data repository used for training, testing, and validation contained over 1600 DICOM images of four different phantoms: CatPhan 504 (Phantomlab, NY), Gammex ACR 464 (Sun Nuclear, FL), TOR-18FG (Leeds Test Objects, United Kingdom), and Las Vegas (Varian Medical Systems, CA). All phantom images were acquired at the University of Chicago using a Brilliance Big Bore CT (Philips, Netherlands), a TrueBeam LINAC (Varian Medical Systems, CA), a Trilogy LINAC (Varian Medical Systems, CA), and a C-Series LINAC (Varian Medical Systems, CA). All CT images were of the Gammex ACR phantom with protocol settings of 120 kVp, 285 mAs, and 3.0 mm slice thickness. LINAC CBCT images used the CatPhan 504 phantom with settings of 125 kVp, 80 mAs, and 2.5 mm slice thickness. LINAC MV images used the Las Vegas phantom and were acquired at 6 MV and 2 MU. LINAC kV images used the TOR-18FG phantom and were acquired at 75.0 kVp and 200 mAs.
Data was prepared by sorting it by phantom type, and then by contrast, density, resolution, and uniformity slices for 3D-based phantoms (Gammex ACR and CatPhan 504). The data was then loaded off disk and used to create a TensorFlow dataset with an 80%/20% split for training and validation, respectively. The dataset was then configured for performance (e.g., on-disk caching or keeping images in memory), and the pixel values were standardized to the [0, 1] range, which is ideal for use with neural networks. A model was created using a TensorFlow Keras Sequential model consisting of three convolution blocks, each containing a max pooling layer, with a fully-connected layer containing 128 units on top (Figure 2). The model was then compiled using a built-in optimizer and loss function and trained for 50 epochs. Initial training results were used to improve the model by reducing the effects of overfitting via data augmentation; this generates additional training data by augmenting existing data using random transformations such as flips, rotations, and zooms, exposing the image classification model to more aspects of the data. The trained model was then used to classify a set of images that were not included in the training or validation sets. A sketch of this pipeline and model is given below.
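This workflow mirrors the standard TensorFlow image-classification recipe. The following is a plausible reconstruction under stated assumptions: the directory name, image size, and per-block filter counts are illustrative (only the three convolution blocks with max pooling, the 128-unit dense layer, the augmentation transforms, and the 50 epochs are stated above), and it assumes the DICOM slices have first been exported to an image directory sorted by phantom type.

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 4  # CatPhan 504, Gammex ACR 464, TOR-18FG, Las Vegas

# Load pre-sorted phantom images with an 80%/20% train/validation split.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "phantom_images/", validation_split=0.2, subset="training",
    seed=123, image_size=(180, 180))
val_ds = tf.keras.utils.image_dataset_from_directory(
    "phantom_images/", validation_split=0.2, subset="validation",
    seed=123, image_size=(180, 180))

# Random transformations reduce overfitting by augmenting existing data.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal_and_vertical"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

model = tf.keras.Sequential([
    augment,
    layers.Rescaling(1.0 / 255),           # standardize pixels to [0, 1]
    layers.Conv2D(16, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),  # the stated 128-unit layer
    layers.Dense(NUM_CLASSES),             # logits, one per phantom type
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=50)
```

Note that Keras augmentation layers are active only during training, so they add no cost to the sub-second predictions reported in the Results.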
Algorithm Workflow
The UniPhan algorithm process is shown in Figure 3. A source DICOM image file is input and its associated phantom XML wireframe is identified by the AI classification model. The DICOM metadata is read and imported using the pydicom module30. The UniPhan decoder parses the wireframe to identify the features outlined in Table 1. High contrast features which are easy to detect, such as contrast disks and the phantom body, are located using thresholding and Canny edge detection. The wireframe is registered to the image using a similarity function. In particular, the modified Powell algorithm is used to minimize the Manhattan norm between the raw image and the wireframe mask (as shown in eq. 1)31:
$$D = \sum_{i,j} \left| I(i,j) - M(i,j) \right| \tag{1}$$

where $D$ is the rectilinear distance, $I$ is the raw image, $M$ is the wireframe mask, and $(i,j)$ is the row and column coordinate of a pixel in the image array. Once registered, the image pixels associated with each functional object in the phantom are analyzed based on the specified function contained in the wireframe (Table 1).
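As a minimal sketch of this registration step, assuming a rigid transform (translation plus rotation; the actual UniPhan transform model is not detailed here), eq. (1) can be minimized with SciPy's implementation of Powell's method:

```python
import numpy as np
from scipy.ndimage import rotate, shift
from scipy.optimize import minimize

def manhattan_cost(params, image, mask):
    """Eq. (1): sum of absolute pixel differences between the raw
    image and the wireframe mask under a rigid transform."""
    dy, dx, angle = params
    moved = rotate(mask, angle, reshape=False, order=1)
    moved = shift(moved, (dy, dx), order=1)
    return np.abs(image - moved).sum()

def register(image, mask):
    """Find the rigid transform aligning the wireframe mask to the
    image using the modified Powell method (derivative-free)."""
    result = minimize(manhattan_cost, x0=np.zeros(3),
                      args=(image, mask), method="Powell")
    return result.x  # (dy, dx, angle) minimizing the Manhattan norm
```

Powell's method is a natural fit here because the cost of eq. (1) is evaluated on a discrete pixel grid, where analytic derivatives are unavailable.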
Figure 3:

UniPhan workflow diagram. Wireframe objects are used to perform image registration by aligning them to their observed counterparts in the QA image. These objects also define the associated analysis function by means of functional tags included in the wireframe definition.
III. Results
To test the concept of a universal phantom algorithm, wireframes using the proposed XML-SVG syntax were created for several well-known QA phantoms (Figure 4). For automated phantom detection, the image classification model was trained for 50 epochs with each epoch taking around 1 second. The training and validation accuracy (Figure 5) crossed 90% after 5 epochs and quickly approached 99% by the 20th epoch. A total of 102 test QA images from all four phantom types were passed to the trained image classification model to test phantom type prediction and speed. The results reported prediction confidence scores of 99% for all phantom types, and prediction speeds of around 0.1 seconds per QA image. It was found that the input to the image classification model could be any QA image slice from a specific phantom. This includes slices from CT/CBCT volumetric phantoms taken from the contrast, density, resolution, or uniformity regions. The lowest prediction accuracy was found for Catphan or Gammex contrast slices.
Figure 4:

User created wireframe library for several QA phantoms. The (a) Leeds and (b) Las Vegas planar imaging phantoms for kV and MV imaging respectively, along with associated wireframes. The (c) Gammex ACR464 and (d) Catphan 504 tomographic imaging phantoms for CT and CBCT, respectively, along with associated wireframes.
Figure 5:

Plots showing the training and validation accuracy and loss results for the image classification model. The model has achieved around 99% accuracy on the validation set.
After AI-based phantom identification, UniPhan automatically selected the corresponding XML-SVG wireframe and registered it to the DICOM image, as shown in the example output for the Leeds phantom (Fig. 6a). Since UniPhan is designed to be an alternative to existing image quality phantom analysis software, with the advantage of universal phantom compatibility, we sought to demonstrate its accuracy by comparing its results with manual hand calculations using 3D Slicer, a DICOM visualization and measurement software package32,33,34,35. Here the manual calculations were considered to be the “gold standard” since they rely only on pixel/voxel intensity values and well-established equations, with no registration or image processing steps involved. The metrics compared were CNR, uniformity, and HU accuracy. The location of the functional objects used by UniPhan was defined by the existing wireframes for the Leeds, Catphan, Las Vegas, and Gammex phantoms, whereas region-of-interest (ROI) selection was performed by the user for the manual calculations. In this study, these regions were selected to coincide with the corresponding regions in UniPhan, as described above, to ensure the same quantities were being measured.
Figure 6:

(a) Image of the analyzed Leeds phantom with contrast disks and MTF bar wireframe functional objects overlaid (red). (b) Plot of measured contrast and CNR of the 18 contrast disks.
The contrast and CNR decreased for successive contrast disks, as expected for successfully registered and identified contrast objects (Fig. 6b). For the Leeds phantom, the CNR values of the contrast disks were in agreement between manual calculation and UniPhan, and the image uniformity was consistent within tolerance as well (Table 2). For the Las Vegas phantom, the CNR of the highest contrast disk and the image uniformity were again in agreement between manual calculation and UniPhan. Finally, for the Catphan 504, the CNR was consistent between UniPhan and hand calculation within QA tolerance, uniformity values of the CTP486 region were in agreement between all methods, and HU accuracy values for both Teflon and air were consistent between the two analysis methods within QA tolerance. The edge was successfully identified within the MTF edge region and the bins surrounding it were created.
IV. Discussion
In this feasibility study we have demonstrated that the UniPhan method can be used to analyze images of QA phantoms using XML-SVG wireframes to define the location of functional objects. This analysis, which utilizes previously published and well-established equations to calculate metrics of image quality and imager performance, was found to produce results that agree with manual calculation. An advantage of storing phantom data inside the wireframe is that the SVG specification is a W3C standard that can be displayed by most graphics software packages and web browsers and is composed of elements that can be read, created, and edited in a simple text editor, making it accessible to a wide variety of end users with different skill sets. Since no phantom-specific information is stored in the UniPhan algorithm itself, UniPhan is compatible with any current phantom, as well as future phantoms, provided that a wireframe is created. Therefore, UniPhan represents a step towards global QA standardization by providing a single program capable of analyzing any phantom, removing differences in analysis methodology and in the data output format.
As a result of this flexibility, UniPhan relies on the user to ensure that any wireframes they create, and the regions they define, are appropriate to produce accurate results. When designing contrast disks and HU density ROIs, the area should be selected such that it encompasses the majority of the feature while avoiding adjacent background pixels, to provide the most accurate results. Uniformity regions should be sufficiently large to minimize error due to noise and should not overlap with other objects in the phantom. Current objects supported by UniPhan include contrast regions, intensity regions, resolution MTF bars and edges, spatial linearity BBs, uniformity regions, Hounsfield Unit (HU) density regions, and light-radiation field coincidence areas, although the algorithm can be expanded to include the analysis of other objects as needed.
The AI-based image classification model demonstrated excellent performance when trained over many different phantom types. The image classification confidence scores and prediction speeds make this method practical for carrying out QA procedures in a clinical setting. The lowest prediction accuracy was found for Catphan and Gammex contrast slices. This is likely due to the variation in visibility of low contrast plugs between different machines caused by imager protocol settings, detector efficiency, and detector noise characteristics. However, this could be easily resolved by including more high contrast density or resolution slices, which showed almost perfect prediction accuracy. In the event of an AI phantom identification failure, the user can also manually select the correct wireframe for the particular phantom type.
In terms of clinical time efficiencies, once trained, the AI phantom identification model takes 0.1 s to identify the phantom type. After identification, the XML-SVG wireframe based QA analysis process takes 1–2 s to complete and report results. These computation times are likely similar to commercial QA software offerings and significantly shorter than manual QA analysis, which requires ROIs to be drawn around each QA feature. For example, for complex phantoms such as the TOR-18FG, manual analysis of all features can take several minutes depending on the experience of the user and is more subject to human bias and errors. In terms of potential clinical implementation and robustness, UniPhan was found to correctly identify the phantom 99% of the time. This is likely due to the simplicity of phantom images, with their uniform backgrounds and well-defined geometric shapes, along with the power of modern AI image classification methods. However, in the event of phantom misidentification, a clinical product should have a backup solution that allows the user to manually select the phantom. After identification, the QA analysis process uses well-known analytical functions and is judged to be highly robust.
V. Conclusions
We present an AI-based Universal Phantom (UniPhan) algorithm that is not phantom specific and can be easily adapted to any pre-existing image-based QA phantom. AI-based image classification was found to allow high accuracy phantom identification across a wide variety of different phantom types. This method, utilizing XML-SVG wireframes with embedded functions which contain information about objects in the phantom, demonstrated accurate QA analysis. This method is flexible and can be easily expanded to incorporate new or custom-made phantoms by users without significant programming knowledge.
Figure 2:

A diagram of the TensorFlow Keras Sequential model, which consists of a normalized input image that is passed to a series of convolutional blocks each with a max pooling layer, and a fully connected layer on top that is activated by a ReLU activation function.
Table 2:
Demonstrative comparison of CNR, image uniformity, and HU accuracy calculated by UniPhan and manually.
| Phantom | Analysis Method | CNR | Uniformity (%) | HU Accuracy: Teflon | HU Accuracy: Air |
|---|---|---|---|---|---|
| Leeds TOR 18FG | UniPhan | 59.7 | 99 | NA | NA |
| Leeds TOR 18FG | Manual | 59.5 | 100 | NA | NA |
| Las Vegas | UniPhan | 27.4 | 100 | NA | NA |
| Las Vegas | Manual | 31.5 | 100 | NA | NA |
| Catphan 504 | UniPhan | 2.2 | 100 | 961.7 | −997.7 |
| Catphan 504 | Manual | 2.5 | 100 | 960 | −998 |
Acknowledgments
Work supported by Radiation Oncology Dept., University of Pennsylvania and NIH R01CA227124.
References
1. Bissonnette JP, Quality Assurance of Image-Guidance Technologies, Seminars in Radiation Oncology 17, 278–286 (2007).
2. Klein EE, Hanley J, Bayouth J, Yin FF, Simon W, Dresser S, Serago C, Aguirre F, Ma L, Arjomandy B, Liu C, Sandin C, and Holmes T, Task Group 142 report: Quality assurance of medical accelerators, Medical Physics 36, 4197–4212 (2009).
3. Mawlawi OR, Kemp BJ, Jordan DW, Campbell JM, Halama JR, Massoth RJ, Schmidtlein CR, Shepard JD, Wooten WW, and Anderson JA, PET/CT Acceptance Testing and Quality Assurance: The Report of AAPM Task Group 126 (2019).
4. Dillon C, Breeden W, Clements J, Cody D, Gress D, Kanal K, Kofler J, McNitt-Gray MF, Norweck J, Pfeiffer D, Ruckdeschel TG, Strauss KJ, and Tomlinson J, Computed Tomography Quality Control Manual, American College of Radiology (2017).
5. Bissonnette JP, Balter PA, Dong L, Langen KM, Lovelock DM, Miften M, Moseley DJ, Pouliot J, Sonke JJ, and Yoo S, Quality assurance for image-guided radiation therapy utilizing CT-based technologies: A report of the AAPM TG-179, Medical Physics 39, 1946–1963 (2012).
6. Moran JM, Molineu A, Kruse JJ, Oldham M, Jeraj R, Galvin JM, Palta JR, and Olch AJ, AAPM Task Group 113: Guidance for the physics aspects of clinical trials, volume 19 (2018).
7. De Oliveira MV, Wenzel A, Campos PS, and Spin-Neto R, Quality assurance phantoms for cone beam computed tomography: A systematic literature review, Dentomaxillofacial Radiology 46 (2017).
8. Schreibmann E, Elder E, and Fox T, Automated quality assurance for image-guided radiation therapy, Journal of Applied Clinical Medical Physics 10, 71–79 (2009).
9. Winkler P, Hofer C, and Stollberger R, A Quality Assurance Tool Based on kV- and MV-Image Analysis for a Linear Accelerator Including an Integrated IGRT System, in World Congress on Medical Physics and Biomedical Engineering, September 7–12, 2009, Munich, Germany, edited by Dössel O and Schlegel WC, pages 920–923, Springer Berlin Heidelberg (2009).
10. Steiding C, Kolditz D, and Kalender WA, A quality assurance framework for the fully automated and objective evaluation of image quality in cone-beam computed tomography, Medical Physics 41, 1–15 (2014).
11. Bray T, Paoli J, Sperberg-McQueen CM, Maler E, and Yergeau F, Extensible Markup Language (XML) 1.0, W3C Recommendation.
12. Bellamy-Royds A, Brinza B, Lilley C, Schulze D, Storey D, and Willigers E, Scalable Vector Graphics (SVG) 2, W3C.
13. Cover R, XML Applications and Initiatives.
14. Michelson AA, Studies in Optics, University of Chicago Press, Chicago, IL (1927).
15. Urikura A, Ichikawa K, Hara T, Nishimaru E, and Nakaya Y, Spatial resolution measurement for iterative reconstruction by use of image-averaging techniques in computed tomography, Radiological Physics and Technology 7, 358–366 (2014).
16. Zaila A, Adili M, and Bamajboor S, Pylinac: A toolkit for performing TG-142 QA related tasks on linear accelerator, Physica Medica 32, 292–293 (2016).
17. Boreman GD, Modulation Transfer Function in Optical and Electro-Optical Systems, SPIE Press (2010).
18. Hsu WF, Hsu YC, and Chuang KW, Measurement of the spatial frequency response (SFR) of digital still-picture cameras using a modified slanted-edge method, Input/Output and Imaging Technologies II 4080, 96–103 (2000).
19. Friedman SN, Fung GS, Siewerdsen JH, and Tsui BM, A simple approach to measure computed tomography (CT) modulation transfer function (MTF) and noise power spectrum (NPS) using the American College of Radiology (ACR) accreditation phantom, Medical Physics 40, 051907 (2013).
20. Richard S, Husarik DB, Yadava G, Murphy SN, and Samei E, Towards task-based assessment of CT performance: System and object MTF across different reconstruction algorithms, Medical Physics 39, 4115–4122 (2012).
21. Cunningham IA and Fenster A, A method for modulation transfer function determination from edge profiles with correction for finite-element differentiation, Medical Physics 14, 533–537 (1987).
22. Rossmann K, Point Spread-Function, Line Spread-Function, and Modulation Transfer Function, Radiology 93, 257–272 (1969).
23. Samei E, Flynn MJ, and Reimann DA, A method for measuring the presampled MTF of digital radiographic systems using an edge test device, Medical Physics 25, 102–113 (1998).
24. Canny J, A Computational Approach to Edge Detection, IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-8, 679–698 (1986).
25. Mitja C, Escofet J, and Tacho A, Slanted Edge MTF.
26. Chawla AS, Roehrig H, Rodriguez JJ, and Fan J, Determining the MTF of medical imaging displays using edge techniques, Journal of Digital Imaging 18, 296–310 (2005).
27. Savitzky A and Golay MJE, Smoothing and Differentiation of Data by Simplified Least Squares Procedures, Analytical Chemistry 36, 1627–1639 (1964).
28. Friedman SN and Cunningham IA, Normalization of the modulation transfer function: The open-field approach, Medical Physics 35, 4443–4449 (2008).
29. Cooley JW and Tukey JW, An Algorithm for the Machine Calculation of Complex Fourier Series, Mathematics of Computation 19, 297–301 (1965).
30. Mason D et al., pydicom/pydicom: pydicom 2.0.0 (2020).
31. Powell MJD, An efficient method for finding the minimum of a function of several variables without calculating derivatives, The Computer Journal 7, 155–162 (1964).
32. 3D Slicer image computing platform.
33. Kikinis R, Pieper SD, and Vosburgh KG, 3D Slicer: A Platform for Subject-Specific Image Analysis, Visualization, and Clinical Support, Intraoperative Imaging and Image-Guided Therapy, 277–289 (2014).
34. Kapur T et al., Increasing the impact of medical image computing using community-based open-access hackathons: The NA-MIC and 3D Slicer experience, Medical Image Analysis 33, 176–180 (2016).
35. Fedorov A, Beichel R, Kalpathy-Cramer J, Finet J, Fillion-Robin JC, Pujol S, Bauer C, Jennings D, Fennessy F, Sonka M, Buatti J, Aylward S, Miller JV, Pieper S, and Kikinis R, 3D Slicer as an image computing platform for the Quantitative Imaging Network, Magnetic Resonance Imaging 30, 1323–1341 (2012).
