Abstract
Purpose:
The aim of this work was to provide a novel description of how the radiotherapy community configures treatment planning system (TPS) radiation beam models for clinically used treatment machines. Here we describe the results of a survey of self-reported TPS beam modeling parameter values across different C-arm linear accelerators, beam energies, and multileaf collimator (MLC) configurations.
Acquisition and Validation Methods:
Beam modeling data were acquired via electronic survey implemented through the Imaging and Radiation Oncology Core (IROC) Houston Quality Assurance Center’s online facility questionnaire. The survey was open to participation from January 2018 through January 2019 for all institutions monitored by IROC. After quality control, 2818 beam models were collected from 642 institutions. This survey, designed for Eclipse, Pinnacle, and RayStation, instructed physicists to report parameter values used to model the radiation source and MLC for each treatment machine and beam energy used clinically for intensity modulated radiation therapy. Parameters collected included the effective source/spot size, MLC transmission, dosimetric leaf gap, tongue and groove effect, and other non-dosimetric parameters specific to each TPS. To facilitate survey participation, instructions were provided on how to identify requested beam modeling parameters within each TPS environment.
Data Format and Usage Notes:
Numeric values of the beam modeling parameters are compiled and tabulated according to TPS and calculation algorithm, linear accelerator model class, beam energy, and MLC configuration. Values are also presented as distributions, ranging from the 2.5th to the 97.5th percentile.
Potential Applications:
This data provides an independent guide describing how the radiotherapy community mathematically represents its clinical radiation beams. These distributions may be used by the community for comparison during the commissioning or verification of their TPS beam models. Ultimately, we hope that the current work will allow institutions to spot potentially suspicious parameter values and help ensure more accurate radiotherapy delivery.
Keywords: IROC, beam modeling, quality assurance, treatment planning system, commissioning
1. INTRODUCTION
Constructing an accurate and robust linear accelerator (Linac) beam model is fundamental to providing high-quality radiation therapy. To do so, medical physicists must manage and define several dosimetric and non-dosimetric input parameters to create a model in the treatment planning system (TPS) that optimally agrees with the physical Linac output. It is expected that this model will then be suitable for a wide variety of clinical scenarios. The process of beam model creation often consists of several iterations in order to achieve the most robust solution, and the amount of adjustment or model tuning available to the user varies among TPS vendors.
The challenge in beam model creation, however, is that rapidly advancing technologies such as intensity-modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) can test the limits of these TPS algorithms, thus requiring that extra care and attention be given during the commissioning process. Several studies have determined that clinically significant errors (>5%) can occur when certain factors are measured or employed improperly, thus underscoring the importance of beam model accuracy.1–5 As such, both the approach and user knowledge needed to achieve good dosimetric commissioning of IMRT and VMAT are of paramount importance. Accordingly, it is the duty of the qualified medical physicist to understand both the limitations of the dose calculation algorithm and measurements used to validate the model to ensure its accuracy in clinical use.6
To provide assistance in beam model creation, professional organizations such as the International Atomic Energy Agency and the American Association of Physicists in Medicine have published recommendations for both the commissioning and quality assurance of treatment planning systems for modern applications, including IMRT.6–10 These references provide guidance and several validation tests for the dose calculation algorithms used by the TPS. However, such guidance does not extend to specific methods by which commercially available TPS software and individual model parameters are to be evaluated, especially because each TPS manufacturer has different standards and specifications for clinical use.
Despite these additional resources for beam model development and testing, studies have determined that treatment errors related to the TPS calculation still exist. The Imaging and Radiation Oncology Core Houston Quality Assurance Center’s (IROC Houston) recent works underscore the continued challenges of achieving accurate dosimetric commissioning for IMRT systems. While the percentage of institutions that pass the IROC Houston head and neck phantom irradiation has improved over time, a substantial number of institutions still fail to meet the relatively loose minimum criteria required for clinical trial credentialing.11,12 Results published by Kerns et al.13 and Carson et al.12 demonstrate that phantom failures are dominated by dosimetric errors, which can originate from poor TPS dose calculations.
One way to help ensure accurate beam model commissioning is to understand how the radiotherapy community at large defines their beam models. Because of IROC Houston’s unique relationship with the radiotherapy community and the relative complexity associated with beam model creation, the goal of this work was to create a reference dataset for comparison of C-arm Linac models that may aid physicists in beam model creation and validation. Although several beam modeling parameters have been tested previously on multiple Linacs from different institutions and demonstrated their relative importance in beam modeling,3,4,14 no large scale data source describing individual parameters is yet available. Here we provide distribution characteristics of several key input beam modeling parameters for several Linacs, TPS, and beam energies, so that physicists may evaluate their institution’s beam models in the context of the distribution of similar Linacs.
2. ACQUISITION AND VALIDATION METHODS
2.A. Survey creation and implementation
In order to acquire data describing how the community defines its Linacs in their TPS environments, a survey was designed for Eclipse, Pinnacle, and RayStation users. These three TPS were chosen based upon the demographics of IROC Houston service users and their frequency of use today. The survey encompassed multiple beam modeling parameters, many of which have been found to be of interest in previous studies.3,14–16 The requested beam modeling parameters are listed for each of the three TPSs examined in Table 1. Notably, these parameters model the behavior of the multileaf collimator (MLC), radiation source/spot size, and radiation field penumbra, all of which are relevant for accurate IMRT and VMAT.
Table 1.
Treatment planning system beam modeling parameters requested via online survey.
| Eclipse | Pinnacle | RayStation |
|---|---|---|
| Effective Target Spot Size X and Y [mm] | Effective Source Size X and Y [cm] | Primary Source X Width and Y Width [cm] |
| MLC Transmission Factor | Rounded Leaf Tip Radius [cm] | MLC Transmission |
| Dosimetric Leaf Gap [cm] | Tongue and Groove Width [cm] | Tongue and Groove [cm] |
| | Additional Tongue and Groove Transmission | Leaf Tip Width [cm] |
| | MLC Transmission | MLC Position Offset [cm] |
| | Flattening Filter Gaussian Height and Width | MLC Position Gain |
| | | MLC Position Curvature [1/cm] |
Beam modeling parameter data were acquired via an online survey as part of the IROC Houston Facility Questionnaire, as well as by paper copy issued with phantom tests. The survey was available to all institutions monitored by IROC. To facilitate survey participation and minimize transcription errors, visual instructions (included in the supplemental materials) were provided on how to identify and record the requested beam modeling parameters found in each of the native TPS environments.
The survey was available in the online Facility Questionnaire from January 2018 through January 2019. During this timeframe users were allowed to edit their survey responses such that the most up-to-date information regarding the modeling process could be captured at the time of analysis.
2.B. Data Validation
In total, 2915 individual beam models from 699 institutions were recorded. The results of this survey serve as a broad representation of today’s radiotherapy practice; the survey was available to all IROC service users, totaling over 2200 institutions globally, and responses were received from nearly one third of the institutions. Given this breadth of survey respondents and that the types of responding institutions ranged from single-machine community clinics to large academic hospital centers, we expect that these data are representative of most radiotherapy institutions and that nonresponse bias does not contribute significantly to these survey results.
To ensure the most accurate representation of TPS modeling data, survey responses were examined for gross inconsistencies or typographical errors (for example, recording the MLC transmission value as “1.5”, intending to mean 1.5%, instead of the listed value in the native TPS, “0.015”). Each survey response was checked for the presence of atypical or missing decimal places (indicating wrong magnitude or reported units), illogical values (e.g. negative values for most parameters), or completely blank survey responses. If an unexpected result was obtained, the individual survey response was cross-checked with the institution’s most recent phantom test submissions, which may contain a paper copy of the survey with hand-written responses. These paper copies were assumed free of transcription errors. If corrections to any parameter value could be validated with the institution’s paper survey, the response was amended and included in the analyses; if not, the response was excluded. In a number of cases, institutions elected to submit only some of the applicable parameter values; these partial submissions, so long as the values provided were validated, were retained for analyses. In total, only 95 responses (3.3% of all survey responses) were excluded from analyses, resulting in 2818 usable responses from 642 institutions.
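A screening step of this kind can be sketched in a few lines. The thresholds and field names below are illustrative assumptions chosen for this example, not IROC's actual quality-control criteria:

```python
def flag_response(name, value):
    """Return a reason string if a survey value looks suspicious, else None.

    Assumed convention: MLC transmission is stored as a fraction (e.g. 0.015),
    so a reported value above 0.1 most likely means the respondent entered a
    percentage instead of the value listed in the native TPS.
    """
    if value is None:
        return "missing response"
    if value < 0:
        return "illogical (negative) value"
    # Hypothetical magnitude check for the worked example from the text
    if name == "mlc_transmission" and value > 0.1:
        return "magnitude error (likely entered as a percent)"
    return None

# "1.5" intended to mean 1.5% should have been recorded as "0.015"
print(flag_response("mlc_transmission", 1.5))    # flagged for follow-up
print(flag_response("mlc_transmission", 0.015))  # passes screening (None)
```

Responses flagged this way would then be cross-checked against the paper survey copy, as described above, rather than corrected automatically.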
2.C. Data Summarization
Survey results summarized in the supplemental Excel workbook are categorized according to the following: Linac class (e.g. Varian Base, Varian TrueBeam, Elekta Agility), beam energy, MLC configuration (e.g. Millennium 120 versus HD120), and calculation algorithm (for Eclipse; i.e. AAA or AcurosXB). A given Linac class consisted of potentially several machine models that were deemed to be dosimetrically equivalent, according to previous work performed by IROC Houston.17,18 In addition to these stratifications, TPS version number was also evaluated as a potential factor for stratification based on the potential for substantive changes to the dose calculation process as version number changed. Ultimately, it was not necessary to separate out the version number because the usable survey responses represented TPS versions for which no substantive changes were made to the dose calculation engine (per the manufacturer) that could, for example, affect the basic modeling or require significant reconfiguration. The TPS version numbers described by this survey data were: Eclipse 10.0+, Pinnacle 9.10+, RayStation 4.7+.
The distributions of survey responses are presented in terms of 2.5th, 25th, 50th, 75th, and 97.5th percentiles to encompass both the interquartile range and the major breadth of parameter values. All data analyses were performed in SPSS Statistics 24 (IBM Corp., Armonk, NY). Percentiles were calculated using the “HAVERAGE” method, which provides an unbiased estimate of the population percentile. Note that for some combinations of Linac class, beam energy, and MLC type, certain percentiles (typically the 2.5th and 97.5th) are undefined because the method’s rank is a function of the number of cases present; that is, extreme percentiles may not be defined for smaller subsamples.
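The undefined-percentile behavior follows directly from the HAVERAGE definition, which places percentile p at the fractional rank (n+1)p of the sorted sample; when that rank falls outside [1, n], no estimate exists. A minimal Python sketch of this definition (an illustrative re-implementation, not the SPSS code used for the analysis):

```python
def haverage_percentile(values, p):
    """Weighted-average percentile at rank (n+1)p (SPSS "HAVERAGE" style).

    Returns None when (n+1)p falls outside [1, n], mirroring the undefined
    2.5th/97.5th percentiles reported for small subsamples in this dataset.
    """
    xs = sorted(values)
    n = len(xs)
    rank = (n + 1) * p        # 1-based fractional rank of percentile p
    if rank < 1 or rank > n:
        return None           # percentile undefined for this sample size
    lo = int(rank)            # integer part of the rank
    frac = rank - lo          # fractional part used for interpolation
    if lo == n:
        return xs[-1]
    return xs[lo - 1] + frac * (xs[lo] - xs[lo - 1])

# Ten responses: the median is defined, but the 2.5th percentile is not,
# since (10 + 1) * 0.025 < 1.
print(haverage_percentile(range(1, 11), 0.5))    # 5.5
print(haverage_percentile(range(1, 11), 0.025))  # None
```

Under this definition, at least 39 responses are needed before the 2.5th and 97.5th percentiles become defined, which is why they are absent for the smaller Linac/energy/MLC subgroups.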
3. DATA FORMAT AND USAGE NOTES
3.A. Data Format
The compiled list of survey responses is archived on Zenodo (DOI: 10.5281/zenodo.3357124) as well as the IROC Houston Quality Assurance Center website (http://rpc.mdanderson.org/RPC/IROCReferenceData.htm) as an Excel workbook file in the *.xlsx format. The list is composed of spreadsheet rows, each of which corresponds to a single beam model reported by an institution. Information included in this dataset includes the machine model, beam energy, MLC model, TPS/algorithm, TPS version, and the numeric values of the parameters listed in Table 1.
Beam modeling parameter distributions are also included as a file in the *.xlsx format. The survey results are segregated by beam energy into worksheets named “6 MV”, “6 FFF”, “10 MV”, “10 FFF”, “15 MV”, and “18 MV”. Worksheet titles including “FFF” describe models utilizing flattening filter free beams. Each worksheet is divided into sections by treatment planning system (and algorithm for Eclipse). Linear accelerator classes are differentiated in columns; Varian models are also separated according to MLC model (standard versus high definition MLC) as applicable.
While valid data were collected for them, several models using uncommon machines (e.g. Siemens) and beam energies (e.g. 4 MV, 16 MV, 20 MV) are not presented in the summary distributions due to very limited survey responses; combined, such uncommon models represented only 3% of all survey responses.
3.B. Usage Notes
This data is intended for use as a comparison tool during TPS commissioning or validation studies. The information from this work describes the TPS parameter values that have been deemed clinically acceptable by centers throughout the community, and thus can only provide so much information about the most appropriate values to adopt, particularly in the context of modeling an individual Linac. Through individual measurements and testing, one may determine that the optimal parameter values deviate from the median values shown here. However, even within this limitation, this dataset can serve as a quality check for those who wish to determine whether their model reasonably follows what others expect for use of a similar Linac/TPS combination based upon the distributions presented. Ideally, this data may also highlight when gross measurement errors occur, such as those associated with determining the dosimetric leaf gap (DLG) or MLC transmission factor.
However, this data is not intended to be used as a reference by which to shortcut the TPS commissioning process. Likewise, the use of popular parameter values may not be the most appropriate for those using treatment specific beam models (e.g. stereotactic radiosurgery, VMAT-dedicated units, etc.). Physicists should be careful to recognize when parameter values presented here may not be ideal for the Linac’s intended purpose. Similarly, parameters for which there were few survey responses should also be viewed with greater skepticism. Ultimately, the best test of Linac model value is the agreement between the TPS dose calculation and dosimetric measurement.
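In practice, the comparison described above reduces to checking whether a locally commissioned value falls inside the published 2.5th–97.5th percentile band for the matching Linac/TPS/energy row. A minimal sketch follows; the band endpoints in the usage example are hypothetical placeholders for illustration, not values from the survey:

```python
def within_reported_band(value, p2_5, p97_5):
    """True if a locally commissioned parameter value lies inside the
    surveyed 2.5th-97.5th percentile band.

    The band endpoints should be read from the published workbook for the
    matching Linac class, TPS/algorithm, beam energy, and MLC model.
    """
    return p2_5 <= value <= p97_5

# Hypothetical band for illustration only (placeholder numbers, in cm):
band = (0.10, 0.20)
print(within_reported_band(0.15, *band))  # True: consistent with peers
print(within_reported_band(0.45, *band))  # False: re-check the measurement
```

A value outside the band is not necessarily wrong, per the caveats above; it simply signals that the measurement and modeling process deserves a second look before clinical use.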
4. DISCUSSION
The goal of this study was to present the community’s consensus for the TPS modeling of photon beams (median values and descriptions of the distributions) for the different Linac models and energies that are currently used clinically. To facilitate data interpretation, results for different MLC models were compiled separately, and linear accelerator models were combined into dosimetrically equivalent classes according to reference dosimetry data from IROC Houston site visits.17,18 Reference data were compiled so that physicists can compare their own input values to those shown in this work.
Because the dataset presented here is for informational purposes, this work does not include interpretation or hypotheses. This work does not, therefore, include assessment of the impact of different parameters on underlying dose calculation accuracy, nor does it attempt to identify “unsuitable” parameter values. This work simply includes raw survey responses as well as general descriptions and trends observed within the survey results.
As depicted by the distributions in Figure 1, the greatest proportion of survey responses were for Eclipse (78.2%), a Varian product (84%), or 6 MV treatment beams (40.8%). These proportions followed what was expected based on IROC Houston service user demographics.
Figure 1.
Distributions of survey responses according to linear accelerator class (left), beam energy (center) and TPS/algorithm (right).
Interestingly, there exist cases where the radiotherapy community shows substantial agreement, particularly concerning the spot size in Eclipse. Nearly all participants using Varian machines and the AAA algorithm (regardless of beam energy) opted to model a point source (0 mm x 0 mm), and AcurosXB users modeled the spot size as 1 mm x 1 mm (Figure 2). This extensive agreement may be due, in part, to the proprietary auto-modeling features found in Eclipse, which pre-fill model parameter values based on pre-measured dose profiles. This is in direct contrast to Pinnacle, where users individualize the source size and many other parameters from the start. This result suggests that computer-aided decisions may decrease the variations exhibited, although the value or risk associated with such a process is unclear from these data.
Figure 2.
Survey responses for effective target spot size used to model standard 6 MV Varian Base class machines in Eclipse (top: AAA; bottom: AcurosXB). Both algorithms show uniformity of response.
The histograms of Figure 3 demonstrate where the radiotherapy community shows the least consensus in the development, commissioning, and validation of their beam models. Here the DLG (from Eclipse), MLC transmission (for all three TPS), and effective source size and flattening filter Gaussian width (for Pinnacle) show notable variations, even among similar Linacs. Some of this variation in Pinnacle can be attributed to the many degrees of freedom that physicists have in model creation. Other variations can be explained by the way the DLG and MLC transmission are developed; physicists often physically measure these factors following different protocols and using different equipment, leading to different measured values. For example, measurement of DLG, a parameter found to be critical in the accuracy of IMRT,2,4,19 has been found to vary based on the size of ion chamber used to measure it.20 This underscores the necessity for physicists to critically analyze the measurement and validation process to ensure that, ultimately, the value used in their model is most adequate for their dose calculations.
Figure 3.
Subset of survey responses for standard 6 MV Varian Base class and Elekta Agility Linacs that depict high variability in parameter value agreement: a) Eclipse AAA DLG, b) Eclipse AAA MLC transmission, c) Pinnacle MLC transmission, d) RayStation MLC transmission, e) Pinnacle source size X-dimension, f) Pinnacle flattening filter Gaussian width.
One aspect of beam modeling that cannot be properly assessed in this work is the potential interplay between different parameters. This may be of particular interest and may be more pronounced for TPSs with more parameters (i.e. more degrees of freedom) to explore in beam model creation. Should interplay be present among the parameters presented, including the potential to offset or exacerbate errors, interpretation of these survey results may be more challenging. Thus, more work assessing such factors and their associated effects is warranted. In fact, such work may be of particular interest given that for institutions modeling a Varian Base class machine (with standard MLC) using Pinnacle, it was exceedingly uncommon for a user to report all parameter values within the interquartile range (25% – 75%); that is, nearly all Pinnacle beam models had at least one parameter value outside this range. This was not the case for common models like the Varian Base class for Eclipse or RayStation, where at least 25% of all responses reported all values within the defined interquartile range. However, such observations suggest that additional benefit may be derived from follow-up analyses determining how different parameter values and/or combinations can affect the overall beam model accuracy.
Some limitations of the current study are that not all survey results could be represented in this dataset because certain Linacs did not fit within the predefined classes (e.g. Siemens machines were excluded from presentation). Additionally, the set of parameters presented herein only represent a subgroup of all the modeling parameters available in TPS commissioning. Users should be mindful of factors that may affect the aforementioned parameters or properties that may substantially alter the overall model applicability, such as the tongue and groove effect.21 Lastly, the survey data herein can only represent the most up-to-date interpretation of the TPS models through 2018. Should further TPS upgrades occur that drastically affect the way the TPS calculates dose, the data found in this set may be of historic interest, rather than prospective. Given the stability of the dose calculation algorithm in the current TPS algorithms, this data may retain utility for an extended period of time.
5. CONCLUSION
In this study, beam modeling parameters for the Eclipse, Pinnacle, and RayStation TPS were compiled and analyzed to create a reference dataset describing how the radiotherapy community assigns parameter values to generate its clinical beam models. Statistical metrics were provided so that physicists examining the commissioning of their TPS may recognize parameters that require greater attention and consideration to ensure the most accurate and robust models possible. This dataset can be used as a second check for physicists during the TPS commissioning process to detect what may contribute to anomalies in the beam model that could warrant further attention. This work also highlights considerable variations among several critical parameters used in beam modeling, thus providing additional caution.
Supplementary Material
7. ACKNOWLEDGEMENTS
This work was supported by Public Health Service Grants CA180803 and CA214526 awarded by the National Cancer Institute, United States Department of Health and Human Services. Mallory Glenn is supported by the Rosalie B. Hite Graduate Fellowship in Cancer Research and the American Legion Auxiliary Fellowship awarded by The University of Texas MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences. Christine Peterson is partially supported by NIH/NCI CCSG grant P30CA016672.
Many thanks go to Preetha Boisen Paul for implementing the beam modeling parameter survey in electronic format and querying survey responses for analysis.
8. CONFLICT OF INTEREST STATEMENT
The authors have no conflicts of interest to disclose.
9. REFERENCES
1. Rangel A, Ploquin N, Kay I, Dunscombe P. Towards an objective evaluation of tolerances for beam modeling in a treatment planning system. Phys Med Biol. 2007;52(19):6011–6025. doi:10.1088/0031-9155/52/19/020
2. McVicker D, Yin F-F, Adamson JD. On the sensitivity of TG-119 and IROC credentialing to TPS commissioning errors. J Appl Clin Med Phys. 2016;17(1).
3. Kron T, Clivio A, Vanetti E, et al. Small field segments surrounded by large areas only shielded by a multileaf collimator: comparison of experiments and dose calculation. Med Phys. 2012;39(12):7480–7489. doi:10.1118/1.4762564
4. Fogliata A, Nicolini G, Clivio A, Vanetti E, Cozzi L. Accuracy of Acuros XB and AAA dose calculation for small fields with reference to RapidArc® stereotactic treatments. Med Phys. 2011;38(11):6228–6237. doi:10.1118/1.3654739
5. Fogliata A, Lobefalo F, Reggiori G, et al. Evaluation of the dose calculation accuracy for small fields defined by jaw or MLC for AAA and Acuros XB algorithms. Med Phys. 2016;43(10):5685–5694. doi:10.1118/1.4963219
6. Smilowitz JB, Das IJ, Feygelman V, et al. AAPM Medical Physics Practice Guideline 5.a.: Commissioning and QA of Treatment Planning Dose Calculations - Megavoltage Photon and Electron Beams. J Appl Clin Med Phys. 2015;16(5):14–34. doi:10.1120/jacmp.v16i5.5768
7. Ezzell GA, Burmeister JW, Dogan N, et al. IMRT commissioning: Multiple institution planning and dosimetry comparisons, a report from AAPM Task Group 119. Med Phys. 2009;36(11):5359–5373. doi:10.1118/1.3238104
8. IAEA. TecDoc 1540: Specification and Acceptance Testing of Radiotherapy Treatment Planning Systems. Vienna; 2007.
9. IAEA. TecDoc 1583: Commissioning of Radiotherapy Treatment Planning Systems: Testing for Typical External Beam Treatment Techniques. Vienna; 2008.
10. Das IJ, Cheng CW, Watts RJ, et al. Accelerator beam data commissioning equipment and procedures: Report of the TG-106 of the Therapy Physics Committee of the AAPM. Med Phys. 2008;35(9):4186–4215. doi:10.1118/1.2969070
11. Molineu A, Hernandez N, Nguyen T, Ibbott G, Followill D. Credentialing results from IMRT irradiations of an anthropomorphic head and neck phantom. Med Phys. 2013;40(2):022101. doi:10.1118/1.4773309
12. Carson ME, Molineu A, Taylor PA, Followill DS, Stingo FC, Kry SF. Examining credentialing criteria and poor performance indicators for IROC Houston’s anthropomorphic head and neck phantom. Med Phys. 2016;43(12):6491–6496. doi:10.1118/1.4967344
13. Kerns JR, Stingo F, Followill D, Howell R, Melancon A, Kry SF. Treatment planning system calculation errors are present in the majority of IROC-Houston phantom failures. Int J Radiat Oncol Biol Phys. 2017. doi:10.1016/j.ijrobp.2017.03.049
14. Starkschall G, Steadham RE, Popple RA, Ahmad S, Rosen II. Beam-commissioning methodology for a three-dimensional convolution/superposition photon dose algorithm. J Appl Clin Med Phys. 2000;1(1):8–27. doi:10.1120/JACMP.V1I1.2651
15. Chen S, Yi BY, Yang X, Xu H, Prado KL, D’Souza WD. Optimizing the MLC model parameters for IMRT in the RayStation treatment planning system. J Appl Clin Med Phys. 2015;16(5).
16. Bedford JL, Thomas MDR, Smyth G. Beam modeling and VMAT performance with the Agility 160-leaf multileaf collimator. J Appl Clin Med Phys. 2013;14(2).
17. Kerns JR, Followill DS, Lowenstein J, et al. Reference dosimetry data and modeling challenges for Elekta accelerators based on IROC-Houston site visit data. Med Phys. 2018;45(5):2337–2344. doi:10.1002/mp.12865
18. Kerns JR, Followill DS, Lowenstein J, et al. Technical Report: Reference photon dosimetry data for Varian accelerators based on IROC-Houston site visit data. Med Phys. 2016;43(5):2374–2386. doi:10.1118/1.4945697
19. Kumaraswamy LK, Schmitt JD, Bailey DW, Xu ZZ, Podgorsak MB. Spatial variation of dosimetric leaf gap and its impact on dose delivery. Med Phys. 2014;41(11):111711. doi:10.1118/1.4897572
20. Balasingh STP, Singh IRR, Rafic KM, Babu SES, Ravindran BP. Determination of dosimetric leaf gap using amorphous silicon electronic portal imaging device and its influence on intensity modulated radiotherapy dose delivery. J Med Phys. 2015;40(3):129–135. doi:10.4103/0971-6203.165072
21. Hernandez V, Vera-Sánchez JA, Vieillevigne L, Saez J. Commissioning of the tongue-and-groove modelling in treatment planning systems: from static fields to VMAT treatments. Phys Med Biol. 2017;62(16):6688–6707. doi:10.1088/1361-6560/aa7b1a