Translational Oncology. 2018 Apr 16;11(3):732–742. doi: 10.1016/j.tranon.2018.03.009

Precision Medicine with Imprecise Therapy: Computational Modeling for Chemotherapy in Breast Cancer

Matthew T. McKenna, Jared A. Weis, Amy Brock, Vito Quaranta, Thomas E. Yankeelov

Abstract

Medical oncology is in need of a mathematical modeling toolkit that can leverage clinically-available measurements to optimize treatment selection and schedules for patients. Just as the therapeutic choice has been optimized to match tumor genetics, the delivery of those therapeutics should be optimized based on patient-specific pharmacokinetic/pharmacodynamic properties. Under the current approach to treatment response planning and assessment, there does not exist an efficient method to consolidate biomarker changes into a holistic understanding of treatment response. While the majority of research on chemotherapies focuses on cellular and genetic mechanisms of resistance, there are numerous patient-specific and tumor-specific measures that contribute to treatment response. New approaches that consolidate multimodal information into actionable data are needed. Mathematical modeling offers a solution to this problem. In this perspective, we first focus on the particular case of breast cancer to highlight how mathematical models have shaped the current approaches to treatment. We then compare chemotherapy to radiation therapy and identify opportunities to improve chemotherapy treatments using the model of radiation therapy. We posit that mathematical models can improve the application of anticancer therapeutics in the era of precision medicine. By highlighting a number of historical examples of the contributions of mathematical models to cancer therapy, we hope to engage investigators who may not have previously considered how mathematical modeling can provide real insights into breast cancer therapy.

Introduction

On May 25, 1961, President Kennedy proposed to Congress that the United States should commit itself to “landing a man on the moon and returning him safely to earth” by the end of the decade. Similarly, on December 23, 1971, President Nixon signed into law the National Cancer Act and stated it was time for the concentrated effort that resulted in the lunar landings to be turned towards conquering cancer. Of course, Neil Armstrong first set foot on the lunar surface on July 20, 1969, yet 46 years after Nixon’s announcement we have made only modest advances in controlling this disease. This is particularly striking given the renewed lunar-centric announcement of the Cancer Moonshot Initiative by former President Obama in his 2016 State of the Union address. A fundamental difference between the planetary and cancer moonshots is that the basic mathematics of gravity had been known for nearly three centuries at the time of Kennedy’s speech, while we still do not have a mathematical description of cancer that allows us to compute the spatiotemporal evolution of an individual patient’s tumor. In the current state of oncology, we are tasked with getting to the moon without knowing F = ma.

Precision medicine is the concept of incorporating patient-specific variability into prevention and treatment strategies [1]. The advent of precision medicine has brought significant advances to oncology. The majority of these efforts have focused on the use of genetics to classify and pharmaceutically target cancers [2]. This approach has led to a paradigm in which tumor genotypes are matched to appropriate treatments [3], [4]. For example, the addition of trastuzumab, a monoclonal antibody targeting the human epidermal growth factor receptor 2 (HER2) protein, to chemotherapeutic regimens in breast cancer patients with HER2-positive disease has resulted in improved disease-free and overall survival [5]. While the current genetic-centric approach to cancer therapy has great merit in appropriately selecting therapies and identifying new pharmaceutical targets, it can frequently overlook a host of patient-specific measures that influence response to therapy. For example, the microenvironment of the tumor alters response [6], delivery of therapy to tumors is variable as tumor perfusion is limited [7], [8], and patient-specific pharmacokinetic properties vary [9], [10]. Intratumor heterogeneity, at the genetic and epigenetic levels, complicates the use of gene-centric precision medicine approaches. In some tumors, a single dominant clone may be identified [11], [12] and that clone may be targetable by therapy; however, neutral evolution and vast clonal diversity are more common scenarios [13], [14], [15]. For example, a single hepatocellular carcinoma may include more than 100 million different coding region mutations, including multiple sets of potential ‘driver’ mutations [16]. Further, the schedule on which therapy is given may significantly alter response [17], [18], [19]. These issues may be partly responsible for the high attrition rates of proposed cancer therapeutics [20].

The goal of precision medicine is to tailor therapeutic strategies to each patient’s specific biology. More specifically, we define the goal of precision medicine to be the use of the optimal dose of the optimal therapy on the optimal schedule for each patient. Under this interpretation, there is an opportunity to expand precision oncology beyond the tumor-genotype-driven selection of therapy. To achieve this goal, new hypotheses related to optimal dosing and scheduling are needed. Whereas the hypotheses in genetic studies often compare tumor volume changes to a static genetic marker, dosing and scheduling require temporally-resolved hypotheses and concomitant treatment response measures. In particular, such hypotheses would need to specify quantitatively how the tumor microenvironment and/or patient pharmacokinetics influence response to therapy in order to adapt therapeutic approaches to measured responses. Fortunately, the tools to probe cancer from the genetic to tumor scales have rapidly matured over the past decade. While more time is needed to fully understand and contextualize the micro-, meso-, and macro-scale data coming online, several groups have demonstrated the utility of new technologies. For example, advances in imaging technologies, such as diffusion weighted magnetic resonance imaging (DW-MRI) and dynamic contrast enhanced MRI (DCE-MRI), have led to the discovery of clinically-relevant biomarkers that are predictive of response [21]. We (and others [22], [23], [24]) believe that mathematical modeling holds the potential to synthesize available biomarkers to test new hypotheses. These models will not only improve our ability to treat cancer, but they will also allow precision cancer care to enter the dosing and scheduling domains.

A goal of mathematical modeling is to abstract the key features of a physical system to succinctly describe its behavior in a series of mathematical equations. In this way, the system can be simulated in silico to further understand system behavior, generate hypotheses, and guide experimental design. When experimental data are available, model predictions can be compared to those data. The model can then be iteratively refined to account for data-prediction mismatches. Models can also identify high-yield experiments in cases where an exhaustive investigation of experimental conditions is infeasible [25]. Traditionally, cancer models are built from first-order biological and physical principles, such as evolution [26] and diffusion [27]. Part of the recent excitement about applications of mathematical models to cancer is the discovery of higher-order, emergent properties that any one model component does not possess [28]. For example, cancer models have been constructed to investigate the role of tumor cell-matrix interactions in shaping tumor geometry and in enhancing selective pressures [29]. Fundamentally, models built from these first principles are designed to discover new biological behaviors and principles, identify new hypotheses for further investigation, and predict the behavior of cancer systems to perturbations. These models are tuned with any available data and simulated to discover system properties [24]. However, the majority of these models are not structured to leverage currently-available clinical data to make patient-specific predictions [30]. Often, these complex mechanism-based models have been limited to in silico exploration, and their utility in generating patient-specific predictions remains to be investigated. Medical oncology is in need of a mathematical, mechanism-based modeling framework to leverage all available clinical information, spanning from tumor genetic to tumor imaging data, to make impactful changes on patient management [31]. In this way, models can be used to make specific and measurable predictions of the response of an individual patient to an individualized therapeutic regimen. While these models may not explicitly consider all scales of biological interactions, they may be of practical utility by consolidating clinically-available data sources into a coherent understanding of tumor growth and treatment response.

The interaction of matter is governed by weak nuclear, strong nuclear, gravitational, and electromagnetic forces just as the behavior of cells is governed by genetics and genetic expression. However, for macroscopic objects traveling at speeds much less than the speed of light, F = ma is an excellent approximation of the movement of those objects. While the understanding of fundamental physical laws is still being advanced, a complete understanding is not necessary to leverage classical mechanical models to engineer mechanical tools (such as a rocket to lift astronauts to the moon). There is an opportunity in oncology to develop an analogous “classical oncology” toolkit. We posit that a complete understanding of cancer is not necessary to create tools that leverage clinical data to improve the treatment of cancer. This toolkit will likely consist of “simple” models that approximate the behavior and treatment response of tumors. Fortunately, the tools to make analogous force measurements in cancer already exist.

This perspective will highlight the utility of modeling and discuss opportunities for modeling in breast cancer treatment. Our target audience is composed of investigators with expertise in the biological sciences and interest in how mathematical modeling can inform the selection and optimization of therapies for breast cancer. We begin by reviewing the use of mathematical models in clinical oncology, including those used in radiation oncology. We then draw parallels between dose planning in radiation therapy and chemotherapy and propose how mathematical modeling approaches can leverage current technologies to more precisely apply anti-cancer chemotherapies. We then highlight opportunities for investigation in the clinical evaluation of response in the context of patient-specific modeling. To limit the scope of this perspective, we will focus on cytotoxic chemotherapeutics (defined below) in breast cancer. It is the goal of this perspective to provide guidance and highlight opportunities for a classical oncology toolkit.

Models for Clinical Oncology

We now discuss the mathematical theory and models used to define administration schedules and dosing in both medical and radiation oncology. We primarily focus on select dynamic treatment response models that have penetrated clinical practice and those with promising clinical utility. There exists a wide range of applications for mathematical models in oncology. For example, models have provided insight into tumor evolution and the development of intratumor heterogeneity [32]. Mathematical modeling has also been used to explore tumor initiation and development, focusing on tumor vasculature [33] and the tumor microenvironment [34]. We direct the interested reader to a review on mathematical oncology for a broader overview of the applications of mathematical modeling in cancer [35]. Further, while surgical oncology has incorporated mathematical modeling, especially in image-guided surgery, such discussion falls outside the scope of this perspective. We review these concepts in the context of our definition of precision medicine: the use of the optimal dose of the optimal therapy on the optimal schedule for each patient.

Medical Oncology

Cytotoxic drugs, which are designed to inflict lethal insults on rapidly-dividing cells, were among the first pharmaceuticals used to treat breast cancer (the first clinical trial started in 1958 [36]), and they remain a critical component of current therapeutic regimens. The modern era of chemotherapy was born from the observation that mustard gas induced myelosuppressive states and was effective in treating hematologic malignancies [37]. Dosing schemes with these agents all follow a common pattern: cycles of a high dose, near the maximum tolerated dose, followed by a recovery period. The goal of this strategy is to maximize tumor cell kill while minimizing adverse effects via drug holidays between cycles. While tumors often respond to these therapies, the rate of tumor recurrence is high. For example, the 5-year progression-free survival rate for triple negative breast cancer (TNBC) patients is 61% [38]. Furthermore, cytotoxic therapies often have lasting effects on survivors, adversely affecting their quality of life. For example, doxorubicin, a standard-of-care therapy for the treatment of TNBC, is associated with cardiomyopathy [39].

When cytotoxic therapy was first applied to cancer, few mathematical principles existed to guide its use [40]. While the cytotoxic properties of these agents had clearly been demonstrated in animal models, the subsequent translation into a human population lagged behind. Skipper first observed the relationship between tumor size and treatment response when he discovered that the response of leukemia to therapy was proportional to the number of malignant cells [41]. He hypothesized that each dose of treatment kills a fixed percentage of tumor cells, which necessitates repeated dosing strategies to increase the odds of tumor eradication. Despite the relative simplicity of the model, its practical implication was profound: chemotherapies should be delivered several times, even after the disappearance of macroscopic tumors, to eradicate all tumor cells. This was a departure from the prevailing practice of the time, in which chemotherapeutic agents were given over a short course to treat solid tumors [36]. Skipper’s observation challenged this paradigm, and multi-dose treatment regimens were supported by subsequent clinical trials in the 1970s [42], [43], forming the basis of modern adjuvant and neoadjuvant chemotherapy approaches. Subsequently, investigators sought to improve response through dose escalation. These dose escalation trials were met with limited success, as several agents demonstrated a saturated response curve at high doses [44], [45].
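The practical implication of the log-kill hypothesis can be made concrete with a short simulation. The following sketch is a minimal illustration, assuming a fixed kill fraction per cycle and simple exponential regrowth during drug holidays; all parameter values are invented for illustration, not drawn from Skipper’s data.

```python
import numpy as np

def log_kill_course(n0, kill_fraction, regrowth_factor, n_cycles):
    """Simulate the log-kill hypothesis: each cycle kills a fixed fraction
    of tumor cells, and the survivors regrow between cycles."""
    counts = [n0]
    n = n0
    for _ in range(n_cycles):
        n *= (1.0 - kill_fraction)  # fractional (log) kill from one dose
        n *= regrowth_factor        # regrowth during the drug holiday
        counts.append(n)
    return np.array(counts)

# Illustrative values: a 99% kill per cycle with 2-fold regrowth between
# cycles drives the population toward eradication only with repeated dosing.
course = log_kill_course(n0=1e9, kill_fraction=0.99, regrowth_factor=2.0, n_cycles=6)
print(course)  # cell count after each cycle; < 1 cell suggests eradication
```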

Investigation into the scheduling of therapeutics was advanced when Norton and Simon hypothesized that tumors grow according to Gompertzian kinetics [46], [47]. Qualitatively, Gompertzian kinetics posit that tumors grow exponentially with an exponentially decreasing growth rate. Treatment response was assumed proportional to tumor growth rate, with smaller, faster-growing tumors responding more robustly to treatment than larger, slower-growing tumors. Similar to the log-kill model, the Norton-Simon hypothesis is relatively simple yet impactful. The model indicates that chemotherapy is best delivered to small, fast-growing tumors on a dose-dense schedule, minimizing the time between treatments. This approach limits the regrowth of tumors between treatments, meaning smaller tumors are being treated. Per the model, smaller tumors grow more quickly, rendering them increasingly responsive to treatment, thereby maximizing therapeutic effect. This dose-dense approach demonstrated improvement over conventional dosing schedules in a clinical trial [17].
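A minimal sketch of Gompertzian growth with a Norton-Simon treatment term follows; the carrying capacity, growth rate, treatment intensity, and schedule are illustrative assumptions, not values from the cited trials. The key structural feature is that the kill term scales with the instantaneous growth rate, so schedules that keep the tumor small (and hence relatively faster-growing) yield greater cumulative effect.

```python
import numpy as np

def gompertz_norton_simon(n0, theta, k, drug_effect, t_end, dt=0.01):
    """Gompertzian growth, dN/dt = k*N*ln(theta/N), with a Norton-Simon
    treatment term proportional to the unperturbed growth rate.
    drug_effect(t) returns the (unitless) treatment intensity at time t."""
    times = np.arange(0.0, t_end, dt)
    n = np.empty_like(times)
    n[0] = n0
    for i in range(1, len(times)):
        growth = k * n[i - 1] * np.log(theta / n[i - 1])
        # Norton-Simon: kill is proportional to the growth rate itself.
        dn = growth * (1.0 - drug_effect(times[i - 1]))
        n[i] = max(n[i - 1] + dn * dt, 1.0)
    return times, n

# Illustrative dose-dense schedule: treatment "on" for 1 day out of every 7.
effect = lambda t: 2.5 if (t % 7.0) < 1.0 else 0.0
t, n = gompertz_norton_simon(n0=1e8, theta=1e12, k=0.1,
                             drug_effect=effect, t_end=84.0)
```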

Multi-agent regimens were introduced to address tumor heterogeneity, in the hope of eliminating tumor cells resistant to single-agent therapy. Following the Goldie-Coldman hypothesis, which proposes that multi-agent chemotherapies should be delivered in alternating courses (e.g., ABABAB instead of AAABBB) to minimize the probability of developing resistance [48], empiric schedules of administration for these multi-agent regimens were tested in clinical trials [49]. While multi-agent regimens demonstrate improved efficacy relative to single-agent treatments, the scheduling of therapeutics remains an open question. For example, in trials investigating the reordering of treatments to avoid the development of resistance per the Goldie-Coldman hypothesis, the schedules that delivered therapy most quickly (regardless of order) were found to be superior [49]. While different schedules have been hypothesized to significantly impact response [19], [50], [51], empiric schedules remain the norm as a matter of practicality, as there exist innumerable combinations of drugs and schedules that cannot all be tested clinically.

While several more complicated models of tumor growth and treatment response have been proposed in the literature [35], the models highlighted above have been the only ones to penetrate clinical practice. We suppose these have gained traction because each provided a precise, clinically-testable hypothesis for improved cancer treatment. However, these models are limited to making general predictions for the use of chemotherapy. Further, the above hypotheses were developed to maximize the rate of tumor cell kill, which is assumed to improve long-term, disease-free survival; however, growing evidence suggests this may not be the optimal therapeutic approach [52].

The dosing of chemotherapeutics also has a mathematical basis. Doses of chemotherapeutic agents are often personalized through the use of patient body surface area (BSA) [53], [54]. BSA was first proposed as a guide for chemotherapy dosing by Pinkel, who noted that the accepted cytotoxic doses in pediatric patients, adult patients, and laboratory animals correlated with BSA across those scales [55]. Several BSA models have been developed over time, primarily differing in the coefficients used in their calculation. While a BSA-based dosing strategy is of great practical utility for calculating doses for each patient, BSA correlates poorly with the underlying physiological processes that affect drug pharmacology (e.g., liver metabolism and glomerular filtration rate) [56]. Specifically, BSA has been found to correlate poorly with patient pharmacokinetic properties for several chemotherapies [57]. For example, in a study of 110 patients receiving doxorubicin therapy, doxorubicin clearance was found to correlate only weakly with BSA [9]. Despite the weak relationship between BSA and pharmacokinetics for several therapeutics, BSA remains widely used to guide dosing in the clinic.
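For concreteness, the common BSA formulas reduce to one-line calculations. The sketch below implements the standard Mosteller and Du Bois formulas; the 60 mg/m² dose is an illustrative figure, not a dosing recommendation.

```python
import math

def bsa_mosteller(height_cm, weight_kg):
    """Mosteller formula: BSA (m^2) = sqrt(height * weight / 3600)."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

def bsa_dubois(height_cm, weight_kg):
    """Du Bois formula: BSA (m^2) = 0.007184 * H^0.725 * W^0.425."""
    return 0.007184 * height_cm**0.725 * weight_kg**0.425

# Illustrative: a 60 mg/m^2 dose scaled by a 170 cm, 70 kg patient's BSA.
bsa = bsa_mosteller(170.0, 70.0)  # ~1.82 m^2
dose_mg = 60.0 * bsa
print(f"BSA = {bsa:.2f} m^2, dose = {dose_mg:.0f} mg")
```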

Radiation Oncology

Similar to chemotherapy, radiation therapy was once delivered in a single, high dose [58]. In contrast to chemotherapy, in which a theory of treatment response was established prior to changes in therapeutic application, radiation doses were quickly fractionated in response to excessive toxicities in healthy tissue. Briefly, radiation therapy leverages ionizing radiation to damage the DNA of tumor cells [59]. The DNA damage induced by radiation can lead to immediate cell death via apoptosis, senescence, autophagy, or necrosis, or to delayed cell death via mitotic catastrophe [60].

The interaction of photons with DNA can be physically modeled as a stochastic process. The number of photon-tissue interactions can be described using Poisson statistics:

$$P(n) = \frac{e^{-D} D^n}{n!},$$

where P(n) is the probability of n interactions, and D is the radiation dose in units of Gray (defined as one joule of energy absorbed per kilogram of matter). If a single interaction is assumed to result in cell death, the probability of survival (n = 0) is simply $e^{-D}$. For viruses, bacteria, and very sensitive human cells, this is an appropriate model of survival [61]. However, this model fails to describe survival in other human cell types. For these tissues, the linear-quadratic (LQ) model was found to be the most parsimonious model that fit the observed survival curves [61], [62]. The LQ model is expressed as:

$$P_{\mathrm{survival}} = e^{-\alpha D - \beta D^2},$$

where α and β are radiosensitivity parameters, and D is dose. As β approaches zero, the LQ model approaches the Poisson model of cell survival. The LQ model can be used to characterize the radiosensitivity of different tissues with two parameters (α and β). One potential biological interpretation of the LQ model is offered by the lethal-potentially lethal damage (LPL) model [63], which posits that the linear portion of the LQ model corresponds to cells that receive non-repairable, lethal lesions from a single hit (radiation dose), while the quadratic portion represents repairable lesions that may become lethal through subsequent damage or misrepair.
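A minimal sketch of LQ survival curves follows; the α and β values are illustrative choices, not measured radiosensitivities.

```python
import numpy as np

def lq_survival(dose_gy, alpha, beta):
    """Linear-quadratic surviving fraction for a single dose (Gy)."""
    return np.exp(-alpha * dose_gy - beta * dose_gy**2)

doses = np.linspace(0.0, 10.0, 6)
# Illustrative radiosensitivities: a tumor-like alpha/beta = 10 Gy versus
# a late-responding-tissue-like alpha/beta = 3 Gy.
tumor = lq_survival(doses, alpha=0.3, beta=0.03)        # alpha/beta = 10 Gy
late_tissue = lq_survival(doses, alpha=0.3, beta=0.10)  # alpha/beta = 3 Gy
# As beta -> 0, lq_survival reduces to the Poisson survival model e^{-alpha*D}.
```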

The LQ formalism can be used to explain why fractionated radiotherapy was superior to single doses (there are additional biological rationales for the use of fractionated therapy [64], but these have yet to be formalized into a mathematical modeling framework). Fractionation approaches leverage differential radiosensitivities of tissues (i.e., tumor and healthy tissues) to maximize efficacy while minimizing off-target toxicities. In planning treatment schedules, the effect of therapy on the tumor (generally high α/β ratios) must be balanced with both the acute and long-term toxicities of surrounding, healthy tissue (lower α/β ratios). For a fixed duration of treatment, the isoeffect doses (i.e., doses that have an equivalent biological effect) of different fractionation schedules can be compared [65]:

$$\frac{D_2}{D_1} = \frac{d_1 + \alpha/\beta}{d_2 + \alpha/\beta},$$

where $D_i$ is the total dose for each fractionation scheme, $d_i$ is the dose per fraction, and α/β is a measure of tissue-specific radiosensitivity. For late-responding healthy tissues (i.e., tissues with low α/β), the total isoeffective dose increases more quickly than for acutely-responding tissues (i.e., high α/β) when doses are hyperfractionated (i.e., smaller doses delivered in more fractions). This means that fractionation schedules allow for higher isoeffective doses in tumor tissue compared to the surrounding healthy tissue. For this reason, radiotherapy is typically given at low doses over several sessions to maximize tumor dose and to minimize damage to healthy tissue. For example, in head and neck cancers with high α/β ratios (>7 Gy) [66], a hyperfractionated schedule has been shown to be superior to conventional schedules with fewer fractions [67]. While patient-specific biology underlies the α/β parameters for tumors and surrounding tissue, interpatient variability in these parameters is often not considered in clinical practice, yielding a single schedule for many patients receiving radiotherapy. Notably, some tumors demonstrate α/β ratios similar to those of the surrounding healthy tissue; breast cancers, for example, have relatively small α/β ratios (4 Gy) [66]. In this case, a schedule using higher doses and fewer treatment sessions (hypofractionation) may be superior [68].
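The isoeffect relationship lends itself to direct calculation. In the sketch below, the reference schedule (60 Gy in 2-Gy fractions) and the α/β values are illustrative assumptions chosen to mirror the tumor-versus-late-responding-tissue contrast described above.

```python
def isoeffective_total_dose(d1, total_d1, d2, alpha_beta):
    """Total dose D2 (at d2 Gy/fraction) isoeffective with a reference
    schedule of total_d1 Gy at d1 Gy/fraction, using
    D2/D1 = (d1 + alpha/beta) / (d2 + alpha/beta)."""
    return total_d1 * (d1 + alpha_beta) / (d2 + alpha_beta)

# Reference: 60 Gy in 2-Gy fractions, hyperfractionated to 1.2 Gy/fraction.
for label, ab in [("tumor-like (alpha/beta = 10 Gy)", 10.0),
                  ("late-responding (alpha/beta = 3 Gy)", 3.0)]:
    d2_total = isoeffective_total_dose(d1=2.0, total_d1=60.0, d2=1.2, alpha_beta=ab)
    print(f"{label}: isoeffective total dose = {d2_total:.1f} Gy")
# The late-responding tissue tolerates a larger isoeffective total dose under
# hyperfractionation, the rationale for delivering many small fractions.
```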

In addition to its explicit consideration of off-target toxicities, radiation therapy differs from chemotherapy in dose planning. As noted above, in radiation therapy, dose is defined as the energy absorbed per unit mass. This differs from the use of “dose” in chemotherapy as the amount of drug given to the patient (not necessarily the amount of drug delivered to tissue). Radiation dose planning involves leveraging patient-specific anatomy to maximize the dose delivered to the tumor while minimizing off-target effects [69]. As the physics governing tissue irradiation are well-characterized, physical models can be defined to estimate spatially-resolved radiation dose prior to treatment. Several algorithms have been developed to efficiently calculate the dose distribution for each patient [70]. Generally, these methods model photon interactions (e.g., the photoelectric effect and Compton scattering) to simulate the energy absorbed by tissue. Several of these methods leverage a Monte Carlo approach to generate spatially-resolved dose estimates, simulating the path of each photon through tissue probabilistically with a random number generator [71]. Briefly, the probability that a photon undergoes its first interaction within a distance l can be defined:

$$P(l) = 1 - e^{-\mu l},$$

where μ is the attenuation coefficient, which is a function of photon energy (E) and the physical properties of the material the photon encounters:

$$\mu(E) = \rho N_A \sum_i \frac{w_i}{A_i} \sigma_i(E),$$

where ρ is the mass density, $N_A$ is the Avogadro constant, $w_i$ is the elemental weight (i.e., fractional composition) of element i in the material, $A_i$ is the atomic mass of element i, and $\sigma_i$ is the total cross section for element i (a value describing element-photon interactions such as Compton scattering) [72], [73], [74]. By modeling these interactions, spatially-resolved dose maps and the corresponding uncertainty in those estimates can be calculated. Importantly, uncertainty in radiation dose translates into uncertainty in tumor control probability [75]. While this relationship depends on tumor-specific dose response curves, Boyer and Schultheiss estimated that the cure rate of early-stage patients increases by 2% for every 1% improvement in the accuracy of dose delivery (i.e., spatially-resolved dose deposition) [76].
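As a minimal illustration of the Monte Carlo approach, photon free-path lengths can be sampled by inverting the cumulative distribution P(l) given above; the attenuation coefficient below is an order-of-magnitude assumption rather than a tabulated value.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_free_paths(mu_per_cm, n_photons):
    """Sample free-path lengths from P(l) = 1 - exp(-mu*l) by inverting
    the CDF: l = -ln(1 - U) / mu, with U uniform on [0, 1)."""
    u = rng.random(n_photons)
    return -np.log(1.0 - u) / mu_per_cm

# Illustrative soft-tissue attenuation coefficient (assumed value, not a
# lookup from physical tables): mu ~ 0.2 cm^-1.
paths = sample_free_paths(mu_per_cm=0.2, n_photons=100_000)
print(paths.mean())  # mean free path approaches 1/mu = 5 cm
```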

Critically, X-ray computed tomography images, which generate spatially-resolved μ values, can be used to estimate the tissue parameters needed for Monte Carlo simulation of dose distribution [74]. This modeling framework allows for the use of patient-specific imaging data to design patient-specific dose plans. Indeed, Rockne and colleagues demonstrated how imaging data can be used to estimate radiation response parameters to design treatment schedules that maximize tumor response in glioblastoma [23], [77].

Current Opportunities in Modeling Systemic Therapies

A key step in the evolution of precision cancer therapy will be understanding interpatient variability in drug delivery and drug response and using those differences to personalize drug dosing and administration schedules [78]. Mathematical models can be used to explore these relationships. However, model behavior is reliant on the parameter values used in model evaluation, and many of the variables in proposed models are difficult to measure clinically [25]. This presents a fundamental hurdle in the translation of these approaches into clinical practice. If these models are dependent on unobservable data, their utility in making patient-specific measurements and predictions is greatly reduced.

There is a need to develop methods to measure the biological processes underlying treatment response variability. These measurements can then be used to parameterize predictive mathematical models to optimize treatment plans. Just as the linear-quadratic model can be used to characterize the radiosensitivity of tissue, models can be applied to clinically-available data to derive measurements of tumor behavior. Below, we reimagine the use of cytotoxic chemotherapies in light of this interpatient variability, applying lessons learned from radiation oncology to the technologies available clinically. We again focus specifically on pharmacokinetic and pharmacodynamic models, providing select examples for illustrative purposes.

While the differences between chemotherapy and radiotherapy are apparent, we note fundamental similarities between these modalities. First, several commonly-used chemotherapeutics, such as doxorubicin and cisplatin, are DNA-damaging agents; the response to these therapies can reasonably be compared to the response to the DNA damage induced by photon therapy. Second, both chemotherapy and radiation therapy are delivered on fractionated dosing schedules. However, while there exists a formalism for dose fractionation in radiation therapy via the linear-quadratic model, chemotherapies lack a widely-adopted quantitative approach to dose scheduling that balances tumor efficacy with off-target effects.

In our opinion, one of the more prominent discrepancies between these treatment modalities is their respective definitions of dose. There may exist practical reasons for this difference. An external radiation beam can be accurately tuned and targeted, and the physics of photon interactions are well understood. Medical oncologists, alternatively, must leverage patients’ circulatory systems to deliver therapeutics to tumors. While the pharmacokinetic properties of patients can be measured, this delivery method is inherently more imprecise. However, as we highlight below, the technology to estimate patient-specific pharmacokinetic and pharmacodynamic (PK/PD) properties may already be available clinically.

Therapeutic drug monitoring

Therapeutic drug monitoring (TDM) is the concept of adjusting therapeutic doses on a patient-specific basis to maximize drug efficacy. Paci et al. reviewed the relevance of TDM in the use of cytotoxic anticancer drugs [79]. They argue that the use of cytotoxic drugs meets the prerequisites for TDM, specifically: 1) a large variance in inter-patient PK parameters, 2) a defined relationship between PK and PD parameters, and 3) a delay between the PD end-point and the time of measurement of plasma concentration. For several cytotoxic agents dosed by BSA, pharmacokinetic measurements among patients may vary over an order of magnitude [57]. Given the high variability in PK properties and the narrow therapeutic window (i.e., the range of drug doses that can effectively treat a disease process without having toxic effects) for cytotoxic agents, this variability may be a cause of treatment failures [80], [81]. For example, significantly better outcomes were observed in children with B-lineage acute lymphoblastic leukemia when chemotherapy was dosed to reflect patient-specific clearance rates instead of BSA [82].

The concentration of drug in blood plasma can be measured via a variety of clinical chemistry techniques (e.g., immunoassays or chromatography [83]), and these measurements can be used to parameterize pharmacokinetic models that describe the absorption, distribution, metabolism, and excretion of a therapeutic agent [84]. Compartment models are often employed as pharmacokinetic models. In the context of pharmacokinetics, compartment models separate the body into physiologically-defined volumes (e.g., blood plasma, liver, kidney) that are each assumed to be homogeneous with respect to drug concentration, and these compartments communicate with each other via a set of rate constants. Such physiology-based pharmacokinetic models have been leveraged to describe the pharmacokinetics of several anti-cancer agents, including doxorubicin [85].
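A minimal sketch of a two-compartment model is shown below; the rate constants and the initial plasma amount are illustrative assumptions, not parameters fitted to doxorubicin or any other agent.

```python
import numpy as np
from scipy.integrate import solve_ivp

def two_compartment(t, y, k12, k21, k10):
    """Central (plasma) and peripheral (tissue) compartments with first-order
    transfer between them and first-order elimination from plasma."""
    central, peripheral = y
    d_central = -(k12 + k10) * central + k21 * peripheral
    d_peripheral = k12 * central - k21 * peripheral
    return [d_central, d_peripheral]

# Illustrative rate constants (h^-1) for an IV bolus of 100 (arbitrary units).
sol = solve_ivp(two_compartment, t_span=(0.0, 48.0), y0=[100.0, 0.0],
                args=(0.5, 0.2, 0.3), dense_output=True)
t = np.linspace(0.0, 48.0, 200)
plasma, tissue = sol.sol(t)  # concentration-time courses for each compartment
```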

Measurements of plasma drug concentrations offer an alternative to BSA to more precisely account for inter-patient variability in drug pharmacokinetics. For example, Bayesian methods have been employed to leverage limited blood plasma samples to estimate an individual’s pharmacokinetic properties [86]. These a posteriori estimates can be used to guide future dosing of therapeutics. Indeed, some clinical trials have leveraged simple PK/PD models to optimize therapy for patients [87], [88]. Alternatively, a priori dose adjustments can be made leveraging covarying patient properties. For example, carboplatin clearance was found to strongly correlate with kidney function, allowing for an empiric formula based on glomerular filtration rate (a measure of kidney function) to be derived for dosing [89]. Using these approaches to populate pharmacokinetic models will help reduce inter-patient variability and will play a role in the realization of personalized drug treatment schedules [84], [90].
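Assuming the empiric formula referenced above is the Calvert formula for carboplatin, in which dose equals the target AUC multiplied by (GFR + 25), dosing reduces to a one-line calculation; the target AUC and GFR values below are illustrative.

```python
def carboplatin_dose_calvert(target_auc, gfr_ml_min):
    """Calvert formula: dose (mg) = target AUC (mg/mL*min) x (GFR + 25),
    where GFR is in mL/min and 25 approximates non-renal clearance."""
    return target_auc * (gfr_ml_min + 25.0)

# Illustrative: target AUC of 6 mg/mL*min, measured GFR of 90 mL/min.
print(carboplatin_dose_calvert(target_auc=6.0, gfr_ml_min=90.0))  # 690 mg
```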

Tumor-specific drug distribution

Inducing and sustaining angiogenesis is a hallmark of cancer [91]. Tumor vasculature is often morphologically and functionally immature. Relative to healthy vasculature, tumor vasculature is tortuous and leaky, with numerous blind endings and arteriovenous shunts. This impairs the delivery of nutrients, causing local microenvironmental changes that alter the response to therapy [6], [92], [93]. Further, significant heterogeneity in perfusion exists within a tumor, impacting both tumor growth and drug delivery [94]. Differences in treatment response may therefore arise from variability in tumor perfusion.

Tumor vasculature can be assessed with dynamic contrast enhanced magnetic resonance imaging (DCE-MRI). In DCE-MRI, a series of images are collected before and after a contrast agent is injected into a peripheral vein. Each image represents a snapshot of the tumor in time. Each voxel in the image set gives rise to its own time course which can be analyzed with a pharmacokinetic model to estimate physiological parameters such as the contrast agent transfer rate (Ktrans, related to vessel perfusion and permeability), the extravascular extracellular volume fraction (ve), and the plasma volume (vp) [95].
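One widely-used pharmacokinetic model for DCE-MRI analysis is the extended Tofts model, sketched below; the arterial input function and parameter values are illustrative assumptions, and a simple rectangle-rule convolution stands in for more careful numerical integration.

```python
import numpy as np

def extended_tofts(t, cp, ktrans, ve, vp):
    """Extended Tofts model: C_t(t) = vp*C_p(t) +
    Ktrans * integral_0^t C_p(tau) * exp(-(Ktrans/ve)*(t - tau)) dtau."""
    kep = ktrans / ve
    dt = t[1] - t[0]  # uniform temporal sampling assumed
    ct = np.zeros_like(t)
    for i in range(len(t)):
        kernel = np.exp(-kep * (t[i] - t[:i + 1]))
        ct[i] = vp * cp[i] + ktrans * np.sum(cp[:i + 1] * kernel) * dt
    return ct

# Illustrative biexponential arterial input function (AIF) and parameters.
t = np.linspace(0.0, 5.0, 120)                    # minutes
cp = 5.0 * (np.exp(-0.6 * t) - np.exp(-6.0 * t))  # toy AIF, mM
ct = extended_tofts(t, cp, ktrans=0.25, ve=0.30, vp=0.05)
```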

DCE-MRI parameters have been shown to be predictive of tumor response to therapy [96]. DCE-MRI data have also been used in mechanistic models to estimate local nutrient and drug gradients within tumors. For example, in a model of treatment response in breast cancer, increased heterogeneity on DCE-MRI was identified as a predictor of poor treatment outcomes, as increased transport heterogeneity is coupled with increased tumor growth and poor drug response [97]. In theory, DCE-MRI data could be coupled with patient-specific PK measures (i.e., plasma drug concentration time courses) to create tumor-specific drug distribution maps. Tagami et al. realized the goal of estimating intratumoral drug distribution through a related MRI approach, in which drug was encapsulated together with an MRI contrast agent; changes in MR T1 relaxation time were measured and correlated with the distribution of drug within tumors [98]. Coupling measurements of tumor vasculature with mathematical models of drug diffusion through tissue [99] would allow the modeling of tumor cell response to therapy to be decoupled from the tumor vasculature, thereby removing a source of variability in patient response.

Tumor-Specific PD Modeling

The efficacy of cytotoxic agents is defined by their ability to induce tumor cell death. Even within a clinically-defined grouping of tumors (e.g., triple negative breast cancer), there exist significant differences in tumor sensitivity to treatment [100]. In current practice, the assessment of tumor pharmacodynamics is limited to unidimensional tumor changes as defined by the Response Evaluation Criteria in Solid Tumors (RECIST [101]). Briefly, RECIST focuses on changes in the sum of the longest dimensions of tumors to assess response to treatment. These changes in tumor size are temporally downstream effects of therapy, limiting the utility of this approach for adapting treatments based on patient-specific tumor measurements. The ability to assess tumor response to treatment in real time is needed to adapt therapy schedules to maximize the odds of treatment success. We now describe three technologies that have been used to monitor treatment response upstream of tumor volume changes: diffusion-weighted magnetic resonance imaging (DW-MRI), fluoro-deoxyglucose positron emission tomography (FDG-PET), and circulating tumor DNA (ctDNA) samples.

Cellular changes within the tumor precede tumor volume changes. In DW-MRI, the diffusion of water molecules through tissue is measured and described by the apparent diffusion coefficient (ADC). This modality relies on the thermally-induced random movement of water molecules (known as Brownian motion). In tissue, this movement is not entirely random as water molecules encounter a number of barriers to diffusion (e.g., cell membranes and extracellular matrix), and the observed diffusion largely depends on the number and separation of barriers that a water molecule encounters. DW-MRI methods have been developed to measure the ADC at the voxel level, and in well-controlled situations the variations in ADC have been shown to correlate inversely with tissue cellularity [102]. Changes in tumor ADC precede tumor volume changes, providing an early biomarker of treatment response [103].
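In the simplest two-point acquisition, the ADC follows directly from a monoexponential signal model; the b-value and signal intensities below are illustrative.

```python
import numpy as np

def adc_two_point(s0, sb, b_value):
    """Apparent diffusion coefficient from a two-point DW-MRI acquisition:
    S(b) = S0 * exp(-b * ADC)  =>  ADC = ln(S0 / Sb) / b."""
    return np.log(s0 / sb) / b_value

# Illustrative voxel signals at b = 0 and b = 800 s/mm^2.
adc = adc_two_point(s0=1000.0, sb=450.0, b_value=800.0)
print(f"ADC = {adc:.2e} mm^2/s")  # ~1.0e-3 mm^2/s, typical of soft tissue
```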

Changes in tumor metabolism can precede changes in tumor morphology and may be predictive of treatment response in breast cancer [104]. FDG-PET provides a measure of glucose metabolism in tumors. In FDG-PET, 18F-FDG is injected into a peripheral vein. As it circulates, the FDG is transported into cells and phosphorylated, trapping it within cells. As 18F-FDG decays, it emits positrons, which annihilate with nearby electrons. Each annihilation yields two (nearly) antiparallel 511 keV photons, which are detected and used to map the FDG distribution. FDG-PET data are summarized by the standardized uptake value (SUV), which normalizes the measured activity for patient weight and injected dose [105].
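The body-weight-normalized SUV is likewise a direct calculation; the activity values below are illustrative, and a tissue density of roughly 1 g/mL is assumed so that kBq/mL and kBq/g are interchangeable.

```python
def suv(tissue_kbq_per_ml, injected_mbq, weight_kg):
    """SUV = tissue activity concentration / (injected dose / body weight).
    Units: kBq/mL over kBq/g, assuming ~1 g/mL tissue density."""
    dose_kbq_per_g = injected_mbq * 1000.0 / (weight_kg * 1000.0)
    return tissue_kbq_per_ml / dose_kbq_per_g

# Illustrative: 10 kBq/mL lesion uptake, 370 MBq injected, 70 kg patient.
print(round(suv(10.0, 370.0, 70.0), 2))  # ~1.89
```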

Tumors continually shed DNA into the bloodstream during the course of tumor development. This circulating tumor DNA (ctDNA) may serve as a “liquid biopsy,” providing measurements of the mutational status of breast cancers, assessment of treatment response, and guidance for therapy selection [106], [107], [108]. Notably, these data have been shown to be an early predictor of relapse in breast cancer patients [109].

Taken together, these measurements of tumor pharmacodynamics can be leveraged to parameterize models to describe the tumor response to treatment. For example, since ADC changes following treatment are predictive of ultimate treatment response [110], our group has demonstrated how ADC values can be used to estimate response rates of tumors:

$$N(\bar{x}, t) = \frac{\theta\, N(\bar{x}, t_0)}{N(\bar{x}, t_0) + \left[\theta - N(\bar{x}, t_0)\right] e^{-k(\bar{x})\, t}},$$

where N(x̄, t) is the number of tumor cells at position x̄ and time t, θ is the carrying capacity, and k(x̄) is the spatially-dependent growth rate [27]. This measure of tumor response can be combined with the assessment of off-target hematologic toxicities, providing a pathway to personalize chemotherapy schedules through PK/PD optimization [111]. Similarly, Liu and colleagues have incorporated SUV measurements derived from FDG-PET imaging into a predictive tumor growth model [112]. ctDNA data can be used to track tumor genetic changes and populate evolutionary dynamics models to predict treatment response [24]. The above technologies present independent means to assess tumor response to therapy. With appropriate mathematical models incorporating the data from these modalities, real-time adjustment of therapeutic schedules in response to tumor changes may be possible.
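The equation above is the solution of a logistic growth model; a minimal sketch of its evaluation follows, with invented voxel-level values (in the cited approach, cell numbers are estimated from ADC data and the growth rate is fit across serial imaging visits).

```python
import numpy as np

def logistic_cell_number(n0, theta, k, t):
    """Logistic growth solution:
    N(t) = theta * N0 / (N0 + (theta - N0) * exp(-k * t))."""
    return theta * n0 / (n0 + (theta - n0) * np.exp(-k * t))

# Illustrative single voxel: initial cell number, carrying capacity, and
# growth rate are placeholder values, not fits to patient data.
t = np.linspace(0.0, 60.0, 61)  # days
n = logistic_cell_number(n0=1e4, theta=5e4, k=0.08, t=t)
```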

Vision for Systemic Chemotherapy

Given the goal of delivering the optimal therapy on the optimal schedule for each patient, we highlighted some potential tools for realizing that goal in the previous section. As noted above, overly complex models, which require several parameters to be estimated for each patient, are difficult to translate to a clinical population. Radiation oncology relies on a relatively simple approximation of dose response to develop treatment schedules. Similarly simple models may improve the use of chemotherapy by integrating currently available measurements of treatment response. It is our vision that a classical oncology toolkit be made available to clinicians to leverage measurable patient data, not only to select appropriate treatments but also to optimize the schedule on which those therapies are given (Figure 1).

Figure 1.


Vision for systemic chemotherapy. Following diagnosis and staging of a cancer, a patient is evaluated clinically with a panel of imaging tests and bloodwork. These data are used to quantify various tumor properties, drug pharmacokinetics, and off-target toxicities to parameterize a mathematical model of treatment response. The model is leveraged to identify optimal treatment plans. This process is repeated throughout the course of treatment to yield treatment plans that co-evolve with the patient’s tumor.

Following diagnosis and staging of tumors, the patient would be evaluated with a panel of imaging tests and bloodwork. After an initial round of therapy (and, on occasion, through the course of therapy), the testing is repeated, providing data to initialize and constrain predictive models of treatment response. A pre-defined objective function that balances tumor efficacy with off-target toxicities is parameterized with the patient-specific data and is optimized to identify a patient-specific treatment schedule. Simply put, the objective of cancer therapy is to maximize survival while minimizing morbidity. Formally, we define:

$$\max_x \ \mathrm{Survival} = f\left(\mathrm{Tumor}(x)\right) + g\left(\sum_{\mathrm{organs}} \mathrm{Toxicity}(x)\right)$$
$$\text{subject to } \ \mathrm{Toxicity}_{\mathrm{organ}}(x) \le \mathrm{Toxicity}_{\mathrm{max,\,organ}} \ \forall \ \mathrm{organs},$$

where x is the therapy schedule, f is the functional relationship between tumor behavior and survival, and g is the functional relationship between off-target toxicities and survival. Fortunately, the toxicity limits of various tissues have been defined, and clinical assays have been developed to monitor those toxicities. For example, hematologic toxicities can be measured through blood sampling, and cardiotoxicity can be assessed through electro- and echocardiography. Thus, the function g can be defined. However, Tumor(x), how a tumor responds to treatment plan x, and f, the relationship between survival and tumor behavior, remain to be defined. Once these functions are defined, a robust literature on optimization problems can be brought to bear [113]. Thus, the question becomes, “How can we use (for example) the technologies highlighted above to define and parameterize these functions?”
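As a toy illustration of how such a problem might be posed and solved once f, g, Tumor(x), and the toxicity functions are specified, the sketch below optimizes a six-cycle dose schedule with scipy; every functional form and constant is a placeholder assumption, not a clinically-derived model.

```python
import numpy as np
from scipy.optimize import minimize

def tumor_burden(doses):
    """Residual tumor cells: a saturating per-cycle kill (placeholder form)."""
    return 1e9 * np.prod(np.exp(-0.5 * doses / (1.0 + 0.1 * doses)))

def toxicity(doses):
    """Cumulative off-target toxicity, assumed additive across cycles."""
    return np.sum(doses)

def objective(doses):
    f = -np.log10(tumor_burden(doses))  # reward tumor-cell kill (stand-in f)
    g = -0.05 * toxicity(doses)         # penalize toxicity (stand-in g)
    return -(f + g)                     # minimize the negative of survival

toxicity_max = 400.0
constraints = [{"type": "ineq", "fun": lambda d: toxicity_max - toxicity(d)}]
bounds = [(0.0, 100.0)] * 6             # six cycles, arbitrary dose units
result = minimize(objective, x0=np.full(6, 50.0), bounds=bounds,
                  constraints=constraints, method="SLSQP")
print(result.x)  # candidate per-cycle doses under the toxicity cap
```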

Next Steps

Medical oncology is in need of a mathematical modeling toolkit that can leverage clinically-available measurements to optimize treatment selection and schedules in the same way radiation oncologists use clinically-available imaging data for treatment planning. Just as the therapeutic choice has been optimized to match tumor genetics, the delivery of those therapeutics can be optimized based on patient-specific PK/PD properties.

Under the current approach to breast cancer therapy, treating clinicians are tasked with integrating multi-modal biomarker changes into a holistic understanding of treatment response and are challenged to intervene based on those patient-specific measures. In this context, treatment decisions become increasingly complex with the advent of new technologies and treatments. Mathematical modeling offers a means to build a structured, theoretical understanding that summarizes this complexity, assisting clinicians in developing treatment plans. For preclinical investigators, modeling can similarly expedite experimental investigations: simulations are inexpensive relative to in vitro and in vivo experiments, and modeling provides opportunities for discovery when model predictions do not match experimental data. In this way, mathematical modeling has the potential to expedite the translation of medical discoveries into patient care.

The Cancer Moonshot Initiative [114] highlights the opportunity that exists in adopting, on a wide scale, screening and treatment plans known to work. There is a need for such implementation science in the development and deployment of cancer therapeutics. While tumor genotype most likely plays an outsized role in determining response, other measurable factors such as the tumor microenvironment and patient pharmacokinetics also influence response. The extensive characterization of tumor genetics has yielded an arsenal of therapeutics that can more precisely target cancer cells. An equally focused approach to the science of deploying these therapeutics on an optimal schedule is now needed. In the dosing and scheduling domains, we are in a position similar to that of cancer therapy prior to the advent of genotyping technologies. Advances in clinical chemistry and imaging sciences offer platforms to develop biologically-driven treatment response models while maintaining the ability to translate those models to a clinical population. These tools will provide the measurements needed to test various dose and scheduling hypotheses. Revisiting our earlier analogy, what is the F = ma for cancer? We have the means to measure tumor “mass” and “acceleration” (i.e., the multifactorial response of a tumor to therapy). Further, we can measure treatment “force” (i.e., drug pharmacokinetics). A modeling framework that relates these variables would offer the opportunity to adjust and optimize treatment regimens to maximize response. Mathematical models will form the foundation of this approach, and they will hasten the implementation and maximize the benefit of current (and future) therapeutics.

Acknowledgements

This work was supported by the National Institutes of Health (grant numbers R01CA186193, U01CA174706, and F30CA203220) and the Cancer Prevention and Research Institute of Texas (grant number RR160005).

Footnotes

1. Conflicts of Interest: The authors declare no potential conflicts of interest.

References

