Published in final edited form as: Int J Radiat Oncol Biol Phys. 2021 Jun 5;111(3):693–704. doi: 10.1016/j.ijrobp.2021.05.132

Forecasting Individual Patient Response to Radiation Therapy in Head and Neck Cancer With a Dynamic Carrying Capacity Model

Mohammad U Zahid, Nuverah Mohsin, Abdallah SR Mohamed, Jimmy J Caudell, Louis B Harrison, Clifton D Fuller, Eduardo G Moros, Heiko Enderling
PMCID: PMC8463501  NIHMSID: NIHMS1730038  PMID: 34102299

Abstract

Purpose:

To model and predict individual patient responses to radiation therapy.

Methods and Materials:

We modeled tumor dynamics as logistic growth and the effect of radiation as a reduction in the tumor carrying capacity, motivated by the effect of radiation on the tumor microenvironment. The model was assessed on weekly tumor volume data collected for 2 independent cohorts of patients with head and neck cancer from the H. Lee Moffitt Cancer Center (MCC) and the MD Anderson Cancer Center (MDACC) who received 66 to 70 Gy in standard daily fractions or with accelerated fractionation. To predict response to radiation therapy for individual patients, we developed a new forecasting framework that combined the learned tumor growth rate and carrying capacity reduction fraction (δ) distribution with weekly measurements of tumor volume reduction for a given test patient to estimate δ, which was used to predict patient-specific outcomes.

Results:

The model fit data from MCC with high accuracy with patient-specific δ and a fixed tumor growth rate across all patients. The model fit data from an independent cohort from MDACC with comparable accuracy using the tumor growth rate learned from the MCC cohort, showing transferability of the growth rate. The forecasting framework predicted patient-specific outcomes with 76% sensitivity and 83% specificity for locoregional control and 68% sensitivity and 85% specificity for disease-free survival with the inclusion of 4 on-treatment tumor volume measurements.

Conclusions:

These results demonstrate that our simple mathematical model can describe a variety of tumor volume dynamics. Furthermore, combining historically observed patient responses with a few patient-specific tumor volume measurements allowed for the accurate prediction of patient outcomes, which may inform treatment adaptation and personalization.

Introduction

Radiation therapy (RT) is the single most used therapeutic agent in oncology.1,2 Although the flood of genomic data has thus far affected chemotherapy and certain targeted biological agents, it has yet to affect RT. With increasing understanding of the complexity of tumor heterogeneity, the central principle underlying precision medicine calls for cancer therapy to be tailored to individual patients. To this end, actionable biomarkers need to be identified that adequately describe individual patients' tumor growth dynamics and therapy responses. We recently postulated that the future of personalized radiation therapy will need to integrate and synergize clinical radiation oncology with the expertise of molecular biology, immunology, radiomics, and mathematical modeling.3-5

In vivo radiation sensitivity has been described in terms of a 10-gene molecular signature, and this genomic indicator has been shown to be highly heterogeneous within and between different cancer types.6-8 In mathematical modeling of RT, the prevailing dogma remains that the major effect of radiation is DNA damage–induced direct cell death, and treatment schedules are derived to maximize tumor-control probability while minimizing the probability of normal-tissue complication.2,9 The linear-quadratic (LQ) model is the gold standard to describe the in vitro radiosensitivity of cells.10-12 Indeed, most—if not all—quantitative modeling studies use the dose-dependent survival fraction derived from the LQ model to calibrate RT cell death rates.13 To account for nondirect cell-killing effects of radiation, recent developments of the LQ model include radiation bystander effects.14

Cell-intrinsic radiation sensitivity, however, has recently been argued to be less of a factor than patient-specific, microenvironmental properties of tumors that modulate the fraction of actively proliferating cells in a tumor. The proliferation saturation index (PSI) describes the ratio of the tumor volume before radiation to its preirradiation carrying capacity—the maximum tumor volume that can be supported by the host tissue.15-19 The PSI ranges between 0 and 1: when PSI = 0, the entire tumor volume is considered proliferative and tumor growth is purely exponential, and when PSI = 1, the entire tumor volume is considered nonproliferative with zero tumor growth. Tumor dormancy can be viewed as a tumor at its carrying capacity. Notably, carrying capacity and PSI are emergent properties of multiple factors. Model analysis has shown that PSI can describe clinically observed volumetric regression during RT better than any single measure of radiosensitivity.15,17 Although PSI offers a conceptual departure from traditional radiation modeling, it has so far continued to rely on an explicit radiation-induced cell death term.

In recent years, cancer biology has shifted from a cell-centric view toward an integrated view of the tumor ecosystem.18,20-22 As such, the effect of radiation on different components of the tumor microenvironment has become of increasing interest.23 An extensive body of literature has emerged relating to the immune-activating ability of RT.24-28 In fact, RT efficacy may be a combination of the cytotoxic effect of radiation on cancer cells and the direct and indirect effects on the complex tumor microenvironment within the radiation treatment field (Fig. 1A). Radiation alters the tumor vasculature29 and releases tumor-specific antigens and damage-associated molecular patterns that stimulate subsequent antitumor immunity30—all of which may change the tumor carrying capacity. In this study, we evaluated the concept of mathematically modeling the effect of radiation as a stepwise reduction in the tumor carrying capacity. This model was calibrated and tested on longitudinal tumor-volume data from patients with head and neck cancer and then was further extended to make predictions of individual patient responses to RT.

Fig. 1.

Fig. 1.

Radiation-induced reduction of carrying capacity. (A) Schematic depiction of how a radiation therapy (RT) target volume encompasses both the tumor and the tumor microenvironment, which includes the extracellular matrix, immune cells, stromal cells, and vasculature. (B, C) Tumor growth is modeled as logistic growth and the effect of RT is modeled as an instantaneous reduction in the carrying capacity, leading to 2 cases: (B) slowed tumor growth when the reduced carrying capacity remains larger than the current tumor volume or (C) tumor volume reduction when the reduced carrying capacity drops below the current tumor volume; in the latter case, the tumor volume will subsequently approach the carrying capacity from above (C). In panels B and C, the orange dashed line indicates the carrying capacity before RT (Ko), the green dash-dot curve indicates the trajectory the tumor volume would have followed without RT, the red dashed line indicates how the carrying capacity changes in response to RT, and solid blue curves indicate the tumor-volume trajectories after RT.

Materials and Methods

Mathematical model of carrying capacity reduction

The change in tumor volume, V (cm3), over time is modeled by logistic growth15:

$$\frac{dV}{dt} = \lambda V \left(1 - \frac{V}{K}\right),$$

where λ is the intrinsic volumetric growth rate (day−1) and K is the tumor carrying capacity (cm3). The intrinsic volumetric growth rate translates into a volume-doubling time of ln(2)/λ. Without therapy, the tumor volume increases in logistic fashion, approaching its carrying capacity. We defined PSI ≡ V0/K0, where V0 is the tumor volume before the first dose of RT, usually obtained from patient-positioning cone beam computed tomography (CBCT) images. The carrying capacity of the tumor before the first dose of RT, denoted as K0, is calculated as

$$K_0 = \frac{V_0\, V_{\mathrm{plan}}\left(e^{\lambda \Delta t} - 1\right)}{V_{\mathrm{plan}}\, e^{\lambda \Delta t} - V_0},$$

where Vplan is the volume abstracted at the time of initial RT planning (typically a few weeks before the start of RT) and Δt is the time interval between the measurements of Vplan and V0. During radiation, the complex microenvironment in the radiation target volume is altered (Fig. 1A). Thus, the effects of RT are modeled by an instantaneous reduction in carrying capacity, given by

$$K_{\text{post-RT-Fx}} = K_{\text{pre-RT-Fx}}\left(1 - \delta\right),$$

where Kpost-RT-Fx is the tumor carrying capacity after a radiation fraction and δ is the fraction by which the carrying capacity is reduced with each radiation fraction. δ is defined between 0 and 1: δ = 0 corresponds to no reduction in carrying capacity, and δ = 1 corresponds to a 100% reduction in carrying capacity. Both λ and δ are assumed to remain constant throughout treatment.

The simple 3-parameter model proposed here can simulate a variety of tumor growth dynamics in response to RT. When RT is applied and the carrying capacity after RT remains greater than the tumor volume (Kpost-RT-Fx > Vpre-RT-Fx), tumor growth is slowed (Fig. 1B). Conversely, when the carrying capacity after RT is less than the volume at the time of RT (Kpost-RT-Fx < Vpre-RT-Fx), tumor volume declines and approaches Kpost-RT-Fx from above (Fig. 1C). In this case, λ now becomes the rate at which the tumor volume approaches the carrying capacity from above, as indicated earlier, providing a representation of the tumor volume radiation response rate.
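For illustration, the following Python sketch simulates this model by using the closed-form logistic solution between fractions and applying the instantaneous carrying-capacity reduction at each fraction time. The function names and all numeric values are illustrative assumptions and are not taken from the study's code or fitted parameters.

```python
import numpy as np

def logistic_step(V, K, lam, dt):
    """Closed-form logistic growth over a short interval dt with constant carrying
    capacity K; works whether V is below K (growth) or above K (decay toward K)."""
    return K * V * np.exp(lam * dt) / (K + V * (np.exp(lam * dt) - 1.0))

def simulate_rt_response(V0, K0, lam, delta, fraction_days, t_end, dt=0.1):
    """Simulate tumor volume under logistic growth with an instantaneous
    carrying-capacity reduction, K -> K * (1 - delta), at each RT fraction."""
    times = np.arange(0.0, t_end + dt, dt)
    volumes = np.empty_like(times)
    volumes[0] = V0
    V, K = V0, K0
    fx = sorted(fraction_days)
    for i in range(1, len(times)):
        t_prev, t_now = times[i - 1], times[i]
        # apply any fractions scheduled in [t_prev, t_now)
        while fx and t_prev <= fx[0] < t_now:
            K *= (1.0 - delta)
            fx.pop(0)
        V = logistic_step(V, K, lam, dt)
        volumes[i] = V
    return times, volumes

# Hypothetical example: 33 weekday fractions with illustrative parameter values
weekday_fractions = [d for d in range(46) if d % 7 < 5][:33]
t, V = simulate_rt_response(V0=20.0, K0=25.0, lam=0.13, delta=0.04,
                            fraction_days=weekday_fractions, t_end=50.0)
```

Because the same closed-form logistic step also holds when the volume exceeds the reduced carrying capacity, this sketch reproduces both cases in Figure 1: slowed growth while the carrying capacity remains above the volume, and decay toward the carrying capacity from above once it drops below the volume.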

Patient data

The model parameters were tuned using data from a cohort of 17 patients with head and neck cancer treated at Moffitt Cancer Center (MCC) with a total of 66 to 70 Gy RT in 2-Gy weekday fractions. The model was then tested on data from an independent cohort from MD Anderson Cancer Center (MDACC) comprising 22 patients with head and neck cancer treated with a total of 66 to 70 Gy RT (2- or 2.12-Gy weekday fractions or accelerated fractionation). All methods were carried out in accordance with institutional policies of the 2 cancer centers. The clinical protocol covering patient data and methods used in this article was approved by the respective institutional review boards.

Tumor volume measurements from MCC were derived from CBCT with slice thicknesses of 2 mm. Tumor volume measurements from MDACC were derived from CT scans from a CT-on-Rails system combining a GE Smart Gantry CT scanner (General Electric, Boston, Massachusetts) and a Varian 2100EX linear accelerator (Varian Medical Systems, Palo Alto, California) with slice thicknesses ranging from 2.5 to 3.75 mm. Detailed imaging acquisition and reconstruction details can be found in the Supplementary Materials and Methods.

For both cohorts, scans were collected at the time of RT planning, just before the first RT dose, and weekly during the course of treatment. All CT images were placed into Mirada imaging software (Mirada Medical, Denver, Colorado), and primary tumor and/or involved lymph nodes were contoured by a single physician (J.J.C.). Patient demographics and clinical parameters are described in Table E1. Statistical comparison of the 2 cohorts showed no significant differences in tumor, lymph node, and metastases stages. Although the primary site of the MCC cohort was predominantly the oropharynx, the majority of primary sites of MDACC patients included the tonsils and the base of the tongue. Locoregional control (LRC), defined as time without recurrence or cancer in the treated fields, and disease-free survival (DFS), defined as any recurrence of disease or death event, were abstracted as outcome measures and determined by biopsy confirmation or imaging sufficient to initiate additional treatments. A 5-year follow-up endpoint was used for both outcomes.

Statistical methods

Patient characteristic prevalence levels were compared between the 2 cohorts using the Fisher exact test to test the null hypothesis that there were no nonrandom associations between the presence and absence of the characteristic in the 2 cohorts, and the P-values for each characteristic are reported in Table E1. For the remainder of the study, distributions of fitted parameters and model predictions were compared using a Mann-Whitney U-test to test the null hypothesis of distributions with equal medians. A P-value < .05 indicated significantly different distributions.
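As a minimal illustration of how these tests can be run with SciPy, assuming hypothetical cohort counts and parameter samples rather than the study data:

```python
from scipy.stats import fisher_exact, mannwhitneyu

# Hypothetical 2x2 table: characteristic present/absent in the two cohorts
table = [[5, 12],   # MCC:   present, absent
         [9, 13]]   # MDACC: present, absent
_, p_fisher = fisher_exact(table)

# Hypothetical fitted delta values from the two cohorts
delta_mcc = [0.01, 0.02, 0.03, 0.05, 0.02]
delta_mdacc = [0.02, 0.04, 0.01, 0.03, 0.06, 0.02]
p_mwu = mannwhitneyu(delta_mcc, delta_mdacc, alternative="two-sided").pvalue

different_medians = p_mwu < 0.05  # significance threshold used in the study
```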

Model application

The mathematical model was applied to the patient data in 3 distinct phases: (1) parameter tuning, using the MCC cohort data; (2) testing the tuned parameters on the MDACC cohort data; and (3) patient outcome (LRC, DFS) and risk prediction, using the combined data from both the MCC and MDACC cohorts in a leave-one-out study.

Parameter tuning

The model was initially fit to patient data from the MCC cohort by finding a pair of λ and δ values that minimized the root mean square error (RMSE) of the model for each patient. The RMSE for each patient is defined as

$$\mathrm{RMSE} = \sqrt{\frac{1}{n_{CT}} \sum_{i=1}^{n_{CT}} \left(V_i - \hat{V}_i\right)^2},$$

where Vi is the measured tumor volume at time point i, V̂i is the model estimate of the tumor volume at time point i, and nCT is the number of CT scans for the patient. After systematic parameter reduction analysis (Supplementary Methods, Table E2), it was found that λ could be set uniform (λoptim) across the entire cohort without losing any information. Parameter optimization and parameter reduction details can be found in the Supplement. Finally, the value for λoptim was selected by performing a full grid search of λ ∈ (0.055 day−1, 0.69 day−1) with a step size of 0.025 day−1 to find the value of λ that minimized the average normalized RMSE (⟨nRMSE⟩) for the entire MCC cohort. The ⟨nRMSE⟩ is defined as

$$\langle \mathrm{nRMSE} \rangle = \frac{1}{n_p} \sum_{j=1}^{n_p} \frac{\mathrm{RMSE}_j}{\frac{1}{n_{CT,j}} \sum_{i=1}^{n_{CT,j}} V_{i,j}},$$

where np is the total number of patients in the cohort. The upper bound for λ was set as 1 cell division or volume doubling per day (λ = ln 2 day−1), and the lower bound was calculated by assuming a 92% cell loss factor with 1 cell division per day.31
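A sketch of how this grid search over ⟨nRMSE⟩ might be implemented is given below; fit_delta is a hypothetical placeholder for the per-patient δ optimization described in the Supplement, and the patient data structure is assumed.

```python
import numpy as np

def rmse(V_meas, V_model):
    """Root mean square error between measured and simulated volumes for one patient."""
    V_meas, V_model = np.asarray(V_meas), np.asarray(V_model)
    return np.sqrt(np.mean((V_meas - V_model) ** 2))

def nrmse(V_meas, V_model):
    """RMSE normalized by the mean measured volume, following the <nRMSE> definition."""
    return rmse(V_meas, V_model) / np.mean(V_meas)

def mean_nrmse(lam, patients, fit_delta):
    """Average nRMSE over the cohort for a candidate uniform growth rate lam.
    `patients` holds (times, measured_volumes, schedule) per patient; `fit_delta`
    stands in for the per-patient delta optimization and returns simulated volumes."""
    return np.mean([nrmse(V, fit_delta(lam, t, V, sched)) for t, V, sched in patients])

# Candidate growth rates spanning the stated bounds with a 0.025 day^-1 step
lambda_grid = np.arange(0.055, 0.69 + 1e-9, 0.025)
# lam_optim = min(lambda_grid, key=lambda lam: mean_nrmse(lam, patients, fit_delta))
```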

Testing tuned parameters

We tested the capacity of the model to fit an independent patient data set from MDACC (patient characteristics described in Table E1) using λoptim learned from the MCC cohort. The model was fit to patient data from the MDACC cohort by finding the δ values that minimized RMSE of the model for each patient (parameter optimization details are provided in the Supplement).

Patient outcome prediction

Leave-one-out outcome prediction study design.

Owing to the low failure rate in the treatment of head and neck cancers with RT, we only observed 6 local failures and 7 distant failures across both cohorts. To deal with this low number of events, all work in forecasting patient outcomes was done by combining the data from MCC and MDACC and performing a series of 39 leave-one-out cross-validation studies, where the forecasting model was trained on 38 patients to make predictions for the 39th patient.32 This type of analysis is classified as a type 1b analysis in the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement recommendations for predictive models, which is considered appropriate for model development and internal validation in the context of limited data.33

Forecasting pipeline.

Because the model simulated tumor volume dynamics, we related changes in tumor volume with LRC and DFS. To predict tumor volume changes, and thus outcomes, we developed a forecasting pipeline that adaptively combined the training data with specific clinical measurements of the left-out test patient at appropriate weights (Fig. 2). We evaluated weekly tumor volume reduction as a function of the radiation-induced carrying capacity reduction fraction, δ. For a given test patient i, the average weekly volume reduction since the start of RT, (ΔV/Δt)i, was used to estimate a value of δ for that patient (δi). This estimate was used to update the training-derived δ distribution,

$$\mathrm{Lognormal}\left(\mu_i = \frac{w_h \mu_h + w_{n_{\mathrm{meas}}}\, n_{\mathrm{meas}} \ln(\delta_i)}{w_{n_{\mathrm{meas}}}\, n_{\mathrm{meas}} + 1},\ \ \sigma_i = \frac{\sigma_h}{n_{\mathrm{meas}} + 1}\right),$$

where μi and σi were the updated parameters for the patient-specific δ distribution, μh and σh were the parameters for the training δ distribution, wnmeas ∈ (0, 10) was the weight, specific to the number of measurements, given to the patient's clinical measurements relative to a weight of wh = 1 given to the training δ distribution, and nmeas was the number of measurements being considered in a given prediction. This formulation allowed the distribution to shift toward δi and to narrow as the number of measurements increased.
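A minimal sketch of this update rule and the subsequent sampling step is shown below, with hypothetical values standing in for the training-distribution parameters, the patient-specific estimate, and the measurement weight.

```python
import numpy as np

def update_delta_distribution(mu_h, sigma_h, delta_i, n_meas, w_n, w_h=1.0):
    """Shift and narrow the training (lognormal) delta distribution using the
    patient-specific estimate delta_i, following the weighted update above."""
    if n_meas == 0:
        return mu_h, sigma_h  # no on-treatment data: keep the training distribution
    mu_i = (w_h * mu_h + w_n * n_meas * np.log(delta_i)) / (w_n * n_meas + 1.0)
    sigma_i = sigma_h / (n_meas + 1.0)
    return mu_i, sigma_i

# Hypothetical inputs: training parameters, patient-specific estimate, and weight
mu_i, sigma_i = update_delta_distribution(mu_h=-3.5, sigma_h=0.8,
                                           delta_i=0.03, n_meas=2, w_n=1.5)

# Patient-specific predictions then sample delta from the updated distribution
rng = np.random.default_rng(0)
delta_samples = rng.lognormal(mean=mu_i, sigma=sigma_i, size=500)
```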

Fig. 2.

Fig. 2.

Flowchart representation of the forecasting pipeline that adaptively combines training data and new patient measurements. The pipeline is divided into 3 phases: premeasurement based on the training cohort, measurements for the i-th patient, and patient-specific predictions. Squares represent information learned from the training cohort, and circles represent information measured or calculated for an individual patient. The entire prediction pipeline can be repeated with the additional measurements from the patient.

Individual patient-specific predictions were made by randomly sampling δ from the updated δ distribution and simulating tumor volume dynamics given the patient’s specified treatment schedule. For patients without any on-treatment measurements (nmeas = 0), predictions were made by sampling directly from the training δ distribution.

Calibrating forecasting pipeline.

To estimate δ from response data (derived from a given leave-one-out training cohort) and early patient-specific, on-treatment data points, we created an estimator for δ using average volume reduction per week (ΔV/Δt) as an input. The correlation of ΔV/Δt with the fitted δ value for each patient in the training cohort can be described by the following quadratic relation (Fig. 5A):

$$\delta = \beta_1 \left(\frac{\Delta V}{\Delta t}\right)^2 + \beta_2 \left(\frac{\Delta V}{\Delta t}\right) + \beta_3.$$

The δ values for each training cohort in the leave-one-out analysis were fit to a lognormal distribution (Fig. 5B). Numerical values for the β coefficients and δ distribution parameters can be found in the Supplement.
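A sketch of the calibration of the quadratic δ estimator and the lognormal training distribution, using made-up training values in place of the fitted cohort data:

```python
import numpy as np
from scipy.stats import lognorm

# Hypothetical training pairs: fitted delta and average weekly volume reduction
delta_train = np.array([0.010, 0.015, 0.022, 0.030, 0.041, 0.055])
weekly_reduction = np.array([0.02, 0.04, 0.06, 0.09, 0.12, 0.16])

# Quadratic delta estimator: delta = b1*(dV/dt)^2 + b2*(dV/dt) + b3
b1, b2, b3 = np.polyfit(weekly_reduction, delta_train, deg=2)

def estimate_delta(avg_weekly_reduction):
    """Map a patient's average weekly volume reduction to an estimate of delta."""
    return b1 * avg_weekly_reduction ** 2 + b2 * avg_weekly_reduction + b3

# Lognormal fit of the training delta distribution (location fixed at zero)
shape, loc, scale = lognorm.fit(delta_train, floc=0)
mu_h, sigma_h = np.log(scale), shape  # parameters of the underlying normal distribution
```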

Fig. 5.

Fig. 5.

Prediction pipeline inputs and results for the 39 leave-one-out studies for the combined MCC and MDACC cohorts. (A) Scatter plot of δ versus the weekly percentage volume reduction, with all 39 quadratic fits from the leave-one-out analyses overlaid; each fit serves as the δ estimator derived from the corresponding training cohort. (B) Histograms of the fitted values for δ for each leave-one-out training cohort (the chosen training δ distribution) with a uniform λ = 0.13 day−1 and lognormal fits to the distributions overlaid. (C) Plot showing the ranges for the locoregional control (LRC) and disease-free survival (DFS) prediction cutoffs derived from the 39 leave-one-out training cohorts. Error bars indicate standard deviations across the 39 training cohorts. (D) Kaplan-Meier analysis for LRC and DFS for the 39 leave-one-out training cohorts separated by their respective percentage volume reduction threshold after 6 weeks of radiation therapy (RT). (E) Representative spaghetti plots of tumor-volume prediction simulations: 100 prediction simulations for patient 10 for nmeas = 0 to 4. Light green circles around the black dots indicate measurements that were considered in making predictions. (F) Results of 100 prediction simulations from the leave-one-out analyses showing the predicted normalized tumor volumes at the sixth week on RT (colored dots) compared with the measured normalized tumor volume (black asterisks). In panels E and F, black dashed lines indicate the patient-specific cutoffs for LRC prediction, and cyan dashed lines indicate the patient-specific cutoffs for DFS prediction. Blank columns indicate that predictions were not made for patients who did not have a volume measurement at week 6; red diamonds indicate simulations with estimated volumes Vweek 6 on-RT / V0 > 2 that fall outside the displayed area.

Treatment response predictions were simulated with different weights, wnmeas, relative to wh = 1 for the historical δ distribution for each number of clinical on-treatment measurements, nmeas. This was done with 500 predictions because our analysis showed this to be sufficient for stable results despite the random sampling from the δ distribution (Fig. E6). Across all training cohorts in the leave-one-out analysis, the optimal weights were determined to be wnmeas < 2 for nmeas ≤ 2 and wnmeas > 2 on average for all nmeas ≥ 3 (Fig. E7).

Defining risk strata.

To relate modeled tumor-volume dynamics to patient outcomes, we derived tumor-volume reduction cutoffs after 6 weeks of RT that perfectly separated the patients into 2 risk strata: (1) low risk of failure and (2) high risk of failure. The low-risk stratum consisted of patients with measured tumor volume changes less than the determined volume reduction cutoff at week 6 of RT. The patients in this stratum had no failures. The high-risk stratum consisted of patients with measured tumor volume changes greater than the determined cutoff. This group had a mixture of patients with controlled tumors and patients with locoregional or distant failure. Cutoff values that maximized the significance of curve separation of the LRC and DFS Kaplan-Meier survival curves were selected (Fig. 5C-5D). This was done by testing 100 possible cutoff values, spanning the entire range of possible cutoffs, and selecting the cutoffs that minimized the log-rank P values. The volume-reduction cutoffs for LRC and DFS were determined independently of each other.
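One way to implement this cutoff search is sketched below: scan candidate week-6 relative-volume cutoffs and keep the cutoff that minimizes the log-rank P value between the two strata (here using the lifelines package; the input arrays are assumptions, not the study data).

```python
import numpy as np
from lifelines.statistics import logrank_test

def best_volume_cutoff(week6_rel_volume, followup_time, event, n_grid=100):
    """Scan candidate week-6 relative-volume cutoffs and return the one that
    minimizes the log-rank P value between the low- and high-risk strata."""
    candidates = np.linspace(week6_rel_volume.min(), week6_rel_volume.max(), n_grid)
    best_cut, best_p = None, 1.0
    for c in candidates:
        low = week6_rel_volume < c   # low-risk stratum (greater volume reduction)
        high = ~low                  # high-risk stratum
        if low.sum() == 0 or high.sum() == 0:
            continue  # skip cutoffs that leave one stratum empty
        result = logrank_test(followup_time[low], followup_time[high],
                              event_observed_A=event[low], event_observed_B=event[high])
        if result.p_value < best_p:
            best_cut, best_p = c, result.p_value
    return best_cut, best_p
```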

Risk prediction versus outcome prediction.

The model predicted an LRC or DFS event if the simulated tumor volume at week 6 was greater than the volume-reduction threshold determined from the N-1 training cohort. The model predictive power was then evaluated for (1) risk prediction and (2) outcome prediction. For risk prediction, the model prediction was compared with the actual risk stratum that the patient’s week-6 volume measurement placed them in. In this case, a prediction of an event and placement in the high-risk stratum or the prediction of no event and placement in the low-risk stratum would both be considered correct predictions. For outcome prediction, the model prediction was directly compared with the observed LRC or DFS events from the 5-year follow-up. The subset of patients for which the model would have a correct risk prediction but incorrect outcome prediction would consist of any patients who had a week-6 tumor volume greater than the volume-reduction threshold (placing them in the high-risk stratum) but had locoregional or distant tumor control at the time of data abstraction.
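Both evaluation modes reduce to comparing binary model predictions against a binary reference label (risk-stratum membership for risk prediction, observed events for outcome prediction); a small helper for the sensitivity, specificity, and Youden J statistic reported in the Results might look like this sketch:

```python
import numpy as np

def sens_spec_youden(predicted_event, reference_event):
    """Sensitivity, specificity, and Youden J for binary predictions against a
    binary reference (risk-stratum membership or the observed outcome)."""
    pred = np.asarray(predicted_event, dtype=bool)
    ref = np.asarray(reference_event, dtype=bool)
    tp = np.sum(pred & ref)
    fn = np.sum(~pred & ref)
    tn = np.sum(~pred & ~ref)
    fp = np.sum(pred & ~ref)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity, sensitivity + specificity - 1.0
```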

Results

Parameter tuning

Model calibration to MCC cohort

Across the entire MCC cohort, the model fit individual volume measurements with low error, ⟨nRMSE⟩ = 0.098, showing that the proposed model can fit a variety of pretreatment and on-treatment tumor-volume dynamics with 3 patient-specific parameters (Fig. E1A-E1B). The distribution of fitted values for the individual growth rate, λ, spanned the entire bounded range, whereas the optimized values for the carrying capacity reduction fraction per RT dose fraction, δ, were less than 0.1 for each patient (Fig. E1C). Notably, 3 patients had λ values at the upper bound; this persisted for every upper bound we tested. Model calibration and parameter optimization were performed 42 times to reach 95% probability that the optimizer was finding globally optimal results (Supplementary Methods and Fig. E2). When 2 optima were found, the more frequently occurring parameter set was selected.

Model simplification by parameter reduction

Uncertainty associated with each parameter estimate leads to a decrease in the confidence and predictive power of the model; thus, parameter reduction may decrease uncertainty and increase confidence and predictive power. We explored which of the parameters could be defined as uniform across the entire training cohort with minimal cost to the model’s goodness-of-fit. Based on Akaike Information Criterion and Bayesian Information Criterion analyses, we found that λ could be set to a fixed value across the entire MCC cohort with minimal reduction to the fitting capacity of the model (Supplementary Methods and Table E2). We found λoptim = 0.13 day−1, which corresponds to a tumor-volume doubling time of 5 days, to minimize nRMSE across the entire training cohort (Fig. 3B). Setting λoptim = 0.13 day−1 uniform across all patients resulted in fits not noticeably different from the results of the original model (⟨nRMSE⟩ = 0.136 vs ⟨nRMSE⟩ = 0.098) (Fig. 3A; Fig. 3C; Fig. E3). The calculated values of PSI did not vary significantly between the 2 models (P = .78; Fig. 3D). The distributions of the fitted values of δ also did not vary significantly between the full and reduced models (P = .86), showing that setting λ uniform across the MCC cohort did not meaningfully alter the estimation of δ (Fig. 3D).

Fig. 3.

Fig. 3.

Model tuning results for the MCC cohort, with uniform λ and patient-specific δ values compared with fits from the full model. (A) Representative fitting results for 3 patients showing the rich variety of response dynamics that the model can capture. Magenta curves (dashed for pretreatment calculations and solid for on-treatment fits) show volume trajectories from the full model with patient-specific λ values; blue curves (dashed for pretreatment calculations and solid for on-treatment fits) show volume trajectories from the reduced model, using λoptim = 0.13 day−1 across all patients. (B) Finding optimal λ to minimize the average normalized root mean square error for the training cohort. (C) Correlation of simulated volumes for the reduced model to the measured tumor volumes for all 17 patients in the training cohort. (D) Parameter distributions across all patients for both the full model and the reduced model (median and interquartile divisions indicated).

Testing tuned parameters on MDACC cohort

The reduced version of the model, with λoptim learned from the MCC cohort, fit pretreatment and on-treatment tumor-volume dynamics in the MDACC cohort with high accuracy (⟨nRMSE⟩ = 0.131) (Fig. 4A-B; Fig. E4). This was despite the fact that some tumor volumes in the MDACC cohort were up to 2 times larger than those in the MCC cohort, although the distribution of starting volumes was statistically indistinguishable between the 2 cohorts (P = .55) (Fig. 4C). The distributions of PSI values of the MCC cohort and the MDACC cohort also were not statistically distinguishable, which showed similarity in terms of pretreatment growth between the 2 cohorts (P = .77) (Fig. 4D). In addition, the fitted values of δ did not vary significantly between the 2 cohorts, indicating that the on-treatment dynamics, as captured by the model, were also similar between the 2 cohorts (P = .66) (Fig. 4D). Owing to the wide range of RT response dynamics and subsequently a broad distribution of δ values, pretreatment dynamics alone were unable to accurately predict δ (Fig. E5).

Fig. 4.

Fig. 4.

Model testing results for the MDACC cohort with uniform λ learned from the MCC cohort and patient-specific δ values. (A) Representative fitting results for 3 patients showing the rich variety of response dynamics that the model can capture. Volume trajectories are from the reduced model using λoptim learned from the training cohort for all patients. (B) Correlation of simulated volumes to the measured tumor volumes for all 22 patients in the cross-validation cohort. (C) Box plots comparing tumor volumes at the start of radiation therapy of the training cohort (17 patients) and the cross-validation cohort (22 patients). (D) Box plots showing parameter distributions across all patients and comparing the results from the training and cross-validation cohorts. For each box plot, median and interquartile divisions are indicated.

Patient outcome prediction

One hundred prediction simulations for each left-out patient, performed using the λoptim and the optimized values of Wnmeas learned from the corresponding leave-one-out training cohorts, are displayed in Figure 5E to 5F.

Evaluating forecasting pipeline performance

The performance of the forecasting pipeline was evaluated in terms of both risk prediction and outcome prediction by comparing it with the measured tumor volumes at week 6 of RT. A total of 3 patients (8%) without tumor-volume measurements at week 6 were excluded from the prediction analysis.

Model outcome predictions for LRC and DFS events with nmeas = 0 yielded sensitivity and specificity values slightly above the chance line for LRC prediction and slightly below the chance line for DFS prediction. Model predictions after inclusion of 1 on-therapy tumor-volume measurement (nmeas = 1) yielded specificities greater than 0.85 for both outcomes, although with sensitivities less than 0.5 for both LRC and DFS. By including 2 on-treatment measurements (nmeas = 2), the forecasting framework predicted patient-specific outcomes with >0.63 sensitivity and >0.96 specificity for LRC and with >0.85 specificity for DFS, although the DFS sensitivity remained <0.5. Additional on-RT measurements (nmeas = 3 to 4) further increased prediction specificity for LRC, although this trend did not hold for DFS (Fig. 6A). Notably, sensitivity values did not increase to greater than 0.76 for LRC or 0.67 for DFS. To evaluate the predictive power provided by our model relative to the prognostic capacity of volume-reduction measurements alone (relative to tumor volume at the start of RT) for predicting LRC or DFS events, we calculated receiver operating characteristic (ROC) curves for absolute volume reduction for weeks 1 to 4 of RT, relative to the start of RT (Fig. 6B, Fig. E8, and Supplementary Methods). Whereas tumor-volume reduction alone has predictive power for outcome prediction, the dynamic carrying capacity model outperforms volume reduction alone with the inclusion of at least 1 measurement of treatment response for both LRC and DFS prediction, as evaluated by statistical comparisons of the Youden J statistic for both methods34 (Fig. 6C, Table E7, and Supplementary Methods).

Fig. 6.

Fig. 6.

Comparing outcome predictions using the dynamic carrying capacity model with the prediction pipeline vs volume reduction alone. (A) Receiver operating characteristic (ROC) plots summarizing the pipeline results from the 39 leave-one-out studies to predict locoregional control and disease-free survival for each left-out patient, with an increasing number of weekly measurements being considered. Each marker shows the performance of 1 simulation of 500 predictions each (10 simulations total); the gray unit line indicates the chance line in the ROC space. Standard deviations for all predictions were <0.01 (exact values are given in Table E3 in the Supplement). (B) ROC analysis of prediction results using volume reduction relative to the start of radiation therapy (RT) for weeks 1 to 4 of RT. Error bars indicate the standard deviations (exact values are given in Table E5) of the sensitivity and specificity of the 39 leave-one-out predictions derived from points maximizing the Youden J statistic derived from individual ROC analyses (Fig. E8). (C) Comparison of the Youden J statistic for the model predictions (teal) and predictions using volume reduction alone (black) at different weeks of RT. Error bars indicate standard deviation values (exact values are given in Tables E3 and E5). Standard deviations < 0.05 are not shown. All comparisons are statistically distinct (P < .05; exact values are given in Table E7), except for DFS for weeks 3 and 4 of RT.

In terms of risk prediction, the model predictions showed a similar trend for specificity. However, sensitivity values increased up to 0.8 for both LRC and DFS for nmeas = 3 to 4 (Fig. E9A). Notably, for risk prediction, volume reduction alone statistically outperformed the model predictions in most cases but without clinically actionable differences (Fig. E9B-E9C and Table E8).

Discussion

To our knowledge, this is the first presentation of a mathematical model of tumor-volume dynamics in response to RT with a dynamic carrying capacity modeled as an instantaneous function of therapy. Previously, Hahnfeldt et al presented a model with a dynamic carrying capacity, modeled as a continuous function based on the degree of vascularization, to model the effect of antiangiogenic drugs.35 Several models have used a changing tumor microenvironment with a dynamic carrying capacity to model the effects of immune predation, immune-mediated tumor stimulation, or nutrient availability in the tumor microenvironment.3638 In contrast to these models, we assumed carrying capacity reduction to be an emergent multifactorial property.

This simple model, with 3 patient-specific parameters in the full version and 2 patient-specific parameters in the reduced version, was able to simulate individual differences in both the varied tumor-response dynamics during RT and variable pretreatment growth dynamics, which included, but were not limited to, no change in tumor volume before RT, transient increases in volume after the start of RT, and various rates of volume reduction during RT. Previous attempts to model the effect of radiation with an LQ model–related survival rate have been unable to capture some of these diverse behaviors15,17; this motivated the inclusion of additional variables and parameters describing the dynamics of "doomed" cells dying from radiation,39-41 yielding an ill-posed mathematical problem with currently collected data.42 The ability to model radiation response dynamics with a single variable and fewer parameters as shown in this study opens up the possibility of reliably predicting individual patient responses to therapy, and subsequently, the potential to stratify patients for adaptation and thus personalization of radiation therapy.

Both the full version of the model with 3 patient-specific parameters and the reduced model with 2 patient-specific parameters fit the data with low error (⟨nRMSE⟩ < 0.14), showing that modeling the effect of RT as an indirect effect via a reduction in a putative carrying capacity may be sufficient to model patient-specific tumor-volume dynamics. Furthermore, the fitting results from the reduced model with a constant growth rate, λ, across the entire cohort showed that interpatient heterogeneity can be captured in the PSI and carrying capacity reduction fraction, δ, vis-a-vis traditional simulations of patient-specific growth rates.43-45

Testing of the reduced model with an independent data set showed promising results that imply that it may be possible to learn a value for a patient-uniform tumor growth rate λ from an external or historical cohort. In this case, all patient heterogeneity would be described by patient-specific PSI and δ values. Notably, even if λ is known a priori, 2 pretreatment volume measurements are needed to calculate PSI. Here, we showed that 2 tumor volume measurements from 2 sufficiently separated time points before the start of an RT course may be used to inform estimates of PSI.

The presented prediction pipeline, which uses the combination of a training parameter distribution and preliminary parameter estimates from available clinical measures, showed a remarkable capacity to predict clinical outcomes with high specificity and moderate sensitivity with the inclusion of just a few weekly clinical measurements. The high specificity in prediction of both LRC and DFS events after just 1 week of RT will be critical to the potential utility of using this framework to inform treatment adaptation, because at that point, there are still 4 to 5 weeks of RT left in the treatment course for head and neck cancers. In addition, the fact that the model forecasts of LRC and DFS outcomes outperform predictions based on simple volume reduction relative to tumor volume at the start of RT alone shows the necessity of such a model that considers multiple on-treatment volume measurements for making high-accuracy predictions early in the treatment course. This shows the benefit of using a model that uses volume measurements from multiple time points, as opposed to just considering a simple metric of percentage volume reduction derived from volume measurements at 2 time points. Notably, the model outperformed volume reduction alone in predicting outcome but not risk. Despite the potential utility of the model forecasts, the accuracy, sensitivity, and specificity values should be considered in light of the low number of failures in both patient cohorts.

The trend in the optimized values of Wnmeas with the inclusion of increasing amounts of clinical measurements (Fig. E7) indicates that the forecasting method weights the patient-specific clinical measurements less in the first couple of weeks of RT and then begins weighing them 5 to 9 times more heavily than the information from the historic cohort for the remaining weeks. Notably, despite essentially ignoring the patient-specific measurements during the first few weeks of treatment, the forecasting method achieves high specificity in both risk and outcome prediction. In addition, although the model was trained only to maximize predictive power for LRC outcomes, it performed well in accurately predicting DFS outcomes. It is conceivable that this may be owed to a correlation between these 2 outcomes.46

The capacity of this method to accurately forecast clinical outcomes for individual patients has potential implications for clinical decision making. If these predictions hold up in prospective validation, then this framework could be used to forecast and determine whether a patient will have a positive outcome from a course of RT midtreatment. This may offer the first mathematical modeling–provided trigger to adjust RT based on individual patients’ early response dynamics: to escalate radiation dose with or without concurrent therapies when necessary or to de-escalate RT without sacrificing cure.

Although these results are very promising, it remains to be seen how δ varies with different RT doses. This is in contrast to models that rely on modeling tumor-volume reduction by modeling cell death as a result of RT, in which the well-established LQ model translates between different dose fractions. A prospective trial in which longitudinal tumor-volume data are collected for patients with similar histology and different dosing protocols would provide insights into how δ could be calibrated as a function of dose (NCT03656133).

It should also be noted that neither the full version of the model nor the reduced version could capture large transient increases in tumor volume. Such large changes in volume may be caused by factors not included in the model, such as an influx of immune cells or increased fluid retention.47,48 It may be possible to separate out these volumes if a different imaging modality, such as magnetic resonance imaging, is used.49 Of interest is that the early data points collected for test patients were weighted very low compared with the leave-one-out historic training data. This further indicates that early transient radiographic response dynamics are not captured in the developed model and may, indeed, not be prognostic. The increased weight for individual patient data after week 2 during RT suggests that by that time, radiation response dynamics are adequately captured in the presented model and are highly predictive and prognostic.

In addition, we learned that when determining λoptim from one cohort and applying it as a predetermined parameter in an independent cohort, more rigorous investigation and analysis are necessary to determine what degree of similarity is needed between cohorts (and by what metrics this should be determined) to transfer learned parameter values. This would include determining the minimum characteristics necessary to describe similar cohorts. If the model turns out to be insensitive to the value of λ across multiple cohorts, then it may be possible for λ to be set uniform for broader categories and possibly even for other disease sites. It is interesting to note here that despite the MDACC cohort having a maximum tumor volume at the start of RT that was nearly 2 times larger than that of the MCC cohort, λoptim was translatable between these cohorts. It should also be noted that although these cohorts were heterogeneous in terms of the primary site of the cancer, there was no statistically discernible difference in outcome between the sites, owing to the small number of local and distant failures.

There has been a recent proposal to use a genomic signature to stratify patients according to radiosensitivity.50 It may be possible to find a comparable signature to predict δ. However, because carrying capacity, and subsequently δ, is an emergent property that is the sum of multiple factors, any biological signature to infer δ will likely need to include multiple components. This could include the degree of immune infiltration in the tumor microenvironment or what subtypes of immune cells make up the tumor-associated immune cells, which may be accessible by expression-level sequencing data or analysis of stained tissue samples obtained as part of a pretreatment biopsy or from surgical resection.51-53 Depending on the degree of confidence with which δ could be estimated from such pretreatment information, this parameter estimate could potentially be integrated into the prediction pipeline with its own relative weight.

The utility of the model and prediction methodology will need further validation on external cohorts in a prospective setting before we can ascertain that the underlying assumptions are acceptable, and it is necessary to be cautious as these types of models move toward translation.53 Nevertheless, the results presented here are promising both for mathematical modeling of cancer and for predicting individual patient responses to different RT protocols.

Supplementary Material

Supp.Materials

Acknowledgments—

The authors thank Isha Harshe for her artwork in Figure 1A and Maximilian Strobl, Gregory Kimmel, Renee Brady-Nicholls, Stefano Pasetto, Daniel Glazar, and Rebecca Bekker for helpful discussions and feedback.

This research has been supported in part by funding and salary support to H.E. from the National Institutes of Health (NIH), including grant U01CA244100 from the National Cancer Institute (NCI). Additionally, this research has been supported in part by funding and salary support to C.D.F. from the NIH, including grant R01DE028290 from the National Institute for Dental and Craniofacial Research Academic Industrial Partnership; grant 1R01CA218148 from the NCI Early Phase Clinical Trials in Imaging and Image-Guided Interventions Program; an NIH/NCI Cancer Center Support Grant (CCSG) Pilot Research Program Award (P30CA016672) from the UT MD Anderson CCSG Radiation Oncology and Cancer Imaging Program; and an NIH/NCI Head and Neck Specialized Programs of Research Excellence Developmental Research Program Award (P50 CA097007). Direct infrastructure support was provided by the multidisciplinary Stiefel Oropharyngeal Research Fund of the University of Texas MD Anderson Cancer Center Charles and Daneen Stiefel Center for Head and Neck Cancer and the Cancer Center Support Grant (P30CA016672) and the MD Anderson Program in Image-guided Cancer Therapy.

Disclosures: M.U.Z. and H.E. are inventors on a provisional patent application titled Personalized Radiation Therapy. J.J.C. has received research grant support and honoraria from and has consulted for Varian Medical Systems. L.B.H. is the principal investigator of a ViewRay research grant. C.D.F. received funding and salary support unrelated to this project from the National Institute for Dental and Craniofacial Research Establishing Outcome Measures Award, from the National Science Foundation (NSF), Division of Mathematical Sciences, Joint National Institutes of Health (NIH)/NSF Initiative on Quantitative Approaches to Biomedical Big Data, and from Elekta AB; grants from the NSF Division of Civil, Mechanical, and Manufacturing Innovation, the National Institute of Biomedical Imaging and Bioengineering Research Education Programs for Residents and Clinical Fellows, and the MD Anderson Sister Institution Network Fund; and the NIH Big Data to Knowledge Program of the National Cancer Institute Early Stage Development of Technologies in Biomedical Computing, Informatics, and Big Data Science Award. H.E. receives funding and salary support unrelated to this project from an NIH/NCI grant. No other disclosures were reported.

Footnotes

Research data, code, and materials are available upon reasonable request.

Supplementary material associated with this article can be found, in the online version, at doi: 10.1016/j.ijrobp.2021.05.132.

References

  • 1.Torres-Roca JF. A molecular assay of tumor radiosensitivity: A roadmap towards biology-based personalized radiation therapy. Per Med 2012;9:547–557. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Baskar R, Dai J, Wenlong N, Yeo R, Yeoh K-W. Biological response of cancer cells to radiation treatment. Front Mol Biosci 2014;1:24. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Harrison LB, Rishi A, Caudell JJ, et al. The future of personalised radiotherapy for head and neck cancer. Lancet Oncol 2017;18: e266–e273. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Enderling H, Alfonso JCL, Moros E, Caudell JJ, Harrison LB. Integrating mathematical modeling into the roadmap for personalized adaptive radiation therapy. Trends Cancer 2019;5:467–474. [DOI] [PubMed] [Google Scholar]
  • 5.Aherne NJ, Dhawan A, Scott JG, Enderling H. Mathematical oncology and it’s application in non melanoma skin cancer—A primer for radiation oncology professionals. Oral Oncol 2020;103:104473. [DOI] [PubMed] [Google Scholar]
  • 6.Eschrich S, Zhang H, Zhao H, et al. Systems biology modeling of the radiation sensitivity network: A biomarker discovery platform. Int J Radiat Oncol Biol Phys 2009;75:497–505. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Eschrich SA, Pramana J, Zhang H, et al. A gene expression model of intrinsic tumor radiosensitivity: Prediction of response and prognosis after chemoradiation. Int J Radiat Oncol Biol Phys 2009;75:489–496. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Eschrich SA, Fulp WJ, Pawitan Y, et al. Validation of a radiosensitivity molecular signature in breast cancer. Clin Cancer Res 2012;18: 5134–5143. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Fowler JF. How worthwhile are short schedules in radiotherapy? A series of exploratory calculations. Radiother Oncol 1990;18:165–181. [DOI] [PubMed] [Google Scholar]
  • 10.Fowler JF. The linear-quadratic formula and progress in fractionated radiotherapy. Br J Radiol 1989;62:679–694. [DOI] [PubMed] [Google Scholar]
  • 11.Brenner DJ. The linear-quadratic model is an appropriate methodology for determining isoeffective doses at large doses per fraction. Semin Radiat Oncol 2008;18:234–239. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Sachs RK, Hlatky LR, Hahnfeldt P. Simple ODE models of tumor growth and anti-angiogenic or radiation treatment. Math Comput Model 2001;33:1297–1305. [Google Scholar]
  • 13.Enderling H, Chaplain MAJ, Hahnfeldt P. Quantitative modeling of tumor dynamics and radiotherapy. Acta Biotheor 2010;58:341–353. [DOI] [PubMed] [Google Scholar]
  • 14.Poleszczuk J, Krzywon A, Forys U, Widel M. Connecting radiation-induced bystander effects and senescence to improve radiation response prediction. Radiat Res 2015;183:571–577. [DOI] [PubMed] [Google Scholar]
  • 15.Prokopiou S, Moros EG, Poleszczuk J, et al. A proliferation saturation index to predict radiation response and personalize radiotherapy fractionation. Radiat Oncol 2015;10:1–8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Poleszczuk J, Walker R, Moros EG, Latifi K, Caudell JJ, Enderling H. Predicting patient-specific radiotherapy protocols based on mathematical model choice for proliferation saturation index. Bull Math Biol 2018;80:1195–1206. [DOI] [PubMed] [Google Scholar]
  • 17.Sunassee ED, Tan D, Ji N, et al. Proliferation saturation index in an adaptive Bayesian approach to predict patient-specific radiotherapy responses. Int J Radiat Biol 2019;95:1421–1426. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Anderson ARA, Quaranta V. Integrative mathematical oncology. Nat Rev Cancer 2008;8:227–234. [DOI] [PubMed] [Google Scholar]
  • 19.Zahid MU, Mohamed ASR, Latifi K, et al. Proliferation saturation index to characterize response to radiation therapy and evaluate altered fractionation in head and neck cancer. Appl Radiat Oncol 2021;10:32–39. [Google Scholar]
  • 20.Basanta D, Anderson ARA. Exploiting ecological principles to better understand cancer progression and treatment. Interface Focus 2013;3:20130020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Gatenby RA, Vincent TL. Application of quantitative models from population biology and evolutionary game theory to tumor therapeutic strategies; Mol Cancer Ther 2003;2:919–927. Available at: http://mct.aacrjournals.org/content/2/9/919.short. Accessed October 24, 2019. [PubMed] [Google Scholar]
  • 22.Gatenby RA, Brown J, Vincent T. Lessons from applied ecology: Cancer control using an evolutionary double bind. Cancer Res 2009;69:7499–7502. [DOI] [PubMed] [Google Scholar]
  • 23.Arnold KM, Flynn NJ, Raben A, et al. The impact of radiation on the tumor microenvironment: Effect of dose and fractionation schedules. Cancer Growth Metastasis 2018;11117906441876163. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Demaria S, Ng B, Devitt ML, et al. Ionizing radiation inhibition of distant untreated tumors (abscopal effect) is immune mediated. Int J Radiat Oncol Biol Phys 2004;58:862–870. [DOI] [PubMed] [Google Scholar]
  • 25.Formenti SC, Demaria S. Systemic effects of local radiotherapy. Lancet Oncol 2009;10:718–726. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Formenti SC, Demaria S. Radiation therapy to convert the tumor into an in situ vaccine. Int J Radiat Oncol Biol Phys 2012;84:879–880. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Poleszczuk JT, Luddy KA, Prokopiou S, et al. Abscopal benefits of localized radiotherapy depend on activated T-cell trafficking and distribution between metastatic lesions. Cancer Res 2016;76:1009–1018. [DOI] [PubMed] [Google Scholar]
  • 28.López Alfonso JC, Poleszczuk J, Walker R, et al. Immunologic consequences of sequencing cancer radiotherapy and surgery. JCO Clin Cancer Informatics 2019;3:1–16. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Tozer GM, Myers R, Cunningham VJ. Radiation-induced modification of blood flow distribution in a rat fibrosarcoma. Int J Radiat Biol 1991;60:327–334. [DOI] [PubMed] [Google Scholar]
  • 30.Friedman EJ. Immune modulation by ionizing radiation and its implications for cancer immunotherapy. Curr Pharm Des 2002;8: 1765–1780. [DOI] [PubMed] [Google Scholar]
  • 31.Matsu-ura T, Dovzhenok A, Aihara E, et al. Intercellular coupling of the cell cycle and circadian clock in adult stem cell culture. Mol Cell 2016;64:900–912. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Stone M Cross-validatory choice and assessment of statistical predictions. J R Stat Soc Ser B 1974;36:111–133. [Google Scholar]
  • 33.Moons KGM, Altman DG, Reitsma JB, et al. Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD): Explanation and elaboration. Ann Intern Med 2015;162:W1–W73. [DOI] [PubMed] [Google Scholar]
  • 34.Youden WJ. Index for rating diagnostic tests. Cancer 1950;3:32–35. [DOI] [PubMed] [Google Scholar]
  • 35.Hahnfeldt P, Panigrahy D, Folkman J, Hlatky L. Tumor development under angiogenic signaling: A dynamical theory of tumor growth, treatment response, and postvascular dormancy. Cancer Res 1999; 59:4770–4775. Available at: http://cancerres.aacrjournals.org/content/59/19/4770.short. Accessed 25 October 2019. [PubMed] [Google Scholar]
  • 36.Wilkie KP, Hahnfeldt P. Modeling the dichotomy of the immune response to cancer: Cytotoxic effects and tumor-promoting inflammation. Bull Math Biol 2017;79:1426–1448. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Wilkie KP, Hahnfeldt P. Tumor—Immune dynamics regulated in the microenvironment inform the transient nature of immune-induced tumor dormancy. Cancer Res 2013;73:3534–3544. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Kareva I Biological stoichiometry in tumor micro-environments. PLoS One 2013;8:e51844. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Chvetsov AV, Dong L, Palta JR, Amdur RJ. Tumor-volume simulation during radiotherapy for head-and-neck cancer using a four-level cell population model. Int J Radiat Oncol Biol Phys 2009;75: 595–602. [DOI] [PubMed] [Google Scholar]
  • 40.Chvetsov AV, Yartsev S, Schwartz JL, Mayr N. Assessment of interpatient heterogeneity in tumor radiosensitivity for nonsmall cell lung cancer using tumor-volume variation data. Med Phys 2014;41:064101. [DOI] [PubMed] [Google Scholar]
  • 41.Lewin TD, Byrne HM, Maini PK, Caudell JJ, Moros EG, Enderling H. The importance of dead material within a tumour on the dynamics in response to radiotherapy. Phys Med Biol 2020;65:15007. [DOI] [PubMed] [Google Scholar]
  • 42.Chvetsov AV, Sandison GA, Schwartz JL, Rengan R. Ill-posed problem and regularization in reconstruction of radiobiological parameters from serial tumor imaging data. Phys Med Biol 2015;60:8491. [DOI] [PubMed] [Google Scholar]
  • 43.Laird AK. Dynamics of tumour growth: Comparison of growth rates and extrapolation of growth curve to one cell. Br J Cancer 1965; 19:278. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44.Steel GG, Lamerton LF. The growth rate of human tumours. Br J Cancer 1966;20:74. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Fournier DV, Weber E, Hoeffken W, Bauer M, Kubli F, Barth V. Growth rate of 147 mammary carcinomas. Cancer 1980;45:2198–2207. [DOI] [PubMed] [Google Scholar]
  • 46.Michiels S, Le Maître A, Buyse M, et al. Surrogate endpoints for overall survival in locally advanced head and neck cancer: Meta-analyses of individual patient data. Lancet Oncol 2009;10:341–350. [DOI] [PubMed] [Google Scholar]
  • 47.Lugade AA, Moran JP, Gerber SA, Rose RC, Frelinger JG, Lord EM. Local radiation therapy of B16 melanoma tumors increases the generation of tumor antigen-specific effector cells that traffic to the tumor. J Immunol 2005;174:7516–7523. [DOI] [PubMed] [Google Scholar]
  • 48.Bae JS, Roh J-L, Lee S, et al. Laryngeal edema after radiotherapy in patients with squamous cell carcinomas of the larynx and hypopharynx. Oral Oncol 2012;48:853–858. [DOI] [PubMed] [Google Scholar]
  • 49.Jardim-Perassi BV, Huang S, Dominguez-Viqueira W, et al. Multiparametric MRI and coregistered histology identify tumor habitats in breast cancer mouse models. Cancer Res 2019;79:3952–3964. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Mellon E, Yue B, Strom TS, et al. A genome-based model for adjusting radiotherapy dose (GARD): A retrospective, cohort-based study. Lancet Oncol 2016;18:202–211. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Thorsson V, Gibbs DL, Brown SD, et al. The immune landscape of cancer. Immunity 2018;48:812–830.e14. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Daniel Grass G, Alfonso JCL, Welsh E, et al. Harnessing tumor immune ecosystem dynamics to personalize radiotherapy. bioRxiv 2020;2020.02.11.944512. [Google Scholar]
  • 53.Brady R, Enderling H. Mathematical models of cancer: When to predict novel therapies, and when not to. Bull Math Biol 2019;81:3722–3731. [DOI] [PMC free article] [PubMed] [Google Scholar]
