The paper by Gadkar, Lu, and colleagues in this issue of the Journal of Lipid Research offers an opportunity to comment on the intersection of two different philosophies in kinetic modeling that are just beginning to join forces in the practical worlds of disease modeling and systems pharmacology. First, some background.
In many fields of biology, kinetic modeling is a new topic; students often think that the idea of applying engineering analysis to biological systems originated in 2002 when Hiroaki Kitano popularized the phrase, “systems biology,” as an umbrella for large-scale mechanistic modeling of biomedical systems (1, 2). Readers of the Journal, however, recognize that lipid and lipoprotein metabolism is an exception; for at least 50 years, productive collaborations between experimentalists and modelers have been answering central questions in the field in the pages of this journal.
In some ways, this was a chance occurrence. When Don Fredrickson recruited Bob Levy, and DeWitt Stetten recruited Mones Berman, to the National Institutes of Health in the late 1950s and early 1960s, stars aligned. Radioisotopes were just becoming available for metabolic research. IBM and Univac were building computers that filled rooms. Together, Berman and Levy and their students attacked the pivotal questions of plasma lipoprotein metabolism. Berman was an engineer. His vision was simple: linear algebra, linear differential equations, and computers comprised the ideal set of tools to formulate models of biological systems and test them against tracer kinetic data (3). He wrote a FORTRAN program, SAAM, to make that vision practical. Levy, now a legendary physician-scientist, realized early on that combining ultracentrifugation and radio-iodinated proteins offered a chance to work out the metabolic properties of plasma lipoproteins.
What, fundamentally, makes tracer kinetics so powerful? Suppose you are confronted with a disease state in which LDL-cholesterol (LDL-C) concentration is substantially above normal. Is this caused by excessive production of LDL-C or by insufficient removal? It is impossible to answer this question from measurements of LDL-C alone. Even if we can measure the entire time course of an LDL-C increase upon, say, infusing a recombinant protein of unknown function, the transient by itself cannot identify the cause of the observed effect.
If, however, we introduce LDL whose lipid molecules or apolipoproteins are tagged so that they can be quantified independently by suitable technology, distinguishing increased production from decreased removal becomes a relatively simple matter. An early example of the Berman-Levy collaboration, published in the Journal, applied these ideas to apolipoprotein metabolism (4). Today, the computational tools are even more powerful and comprehensive. Recruiting a population of normal volunteers and a population of individuals with the abnormal phenotype, and then collecting the tracer kinetic data, has become by far the most challenging aspect of answering the pivotal question: increased production or decreased clearance?
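To make the logic concrete, consider a minimal one-pool sketch (an illustration of the principle, not a model from any of the papers discussed here). Let C be the plasma LDL-C concentration, P the production rate, and k the fractional removal rate constant:

```latex
% Minimal one-pool model of plasma LDL-C (illustrative sketch)
\frac{dC}{dt} = P - kC
\qquad\Longrightarrow\qquad
C_{ss} = \frac{P}{k}
```

The steady state concentration is the same whether P doubles or k halves, so no measurement of C alone, transient or otherwise, can separate the two. A trace amount of labeled LDL, which leaves the steady state undisturbed, decays as C*(t) = C*(0)e^(-kt); fitting that decay yields k directly, and P then follows as kC_ss.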
Because the computational techniques of classical tracer kinetics are predicated on an assumption of steady state (nothing is changing with time except the abundance of the tagged or labeled molecules), experimental designs generally consist of steady state experiments in two or more groups, with statistical tests used to identify the production rates or rate constants that distinguish one group of subjects from another. Innumerable publications attest to the utility of this approach and demonstrate its extension to increasingly complex metabolic systems.
The power of tracer kinetics, however, comes with a price. All the molecular mechanisms that we biologists care about are hidden from view. We learn, for example, that above-normal LDL-C results from impaired LDL removal from plasma (as opposed to increased LDL production), but we don’t learn the mechanistic chain of events that produced the impairment. When modelers find a rate constant that is significantly different, we know that the process characterized by that rate constant is, in some fundamental way, operating differently in the two experimental groups. Investigators who understand this know exactly where to focus their mechanistic efforts.
What is it about tracer kinetics that hides detailed mechanisms? In a steady state system, transcriptional, translational, posttranslational, and allosteric regulation are invisible because nothing is changing. The complex rate law for each process in any biological system potentially includes all of these mechanisms (an example would be a Michaelis-Menten process in which the Vmax is controlled by the abundance of a particular protein whose function is in turn controlled by an activating kinase and by allosteric inhibition). In a steady state, however, all of these regulators are constant, and the full complex rate law reduces to a rate constant. Tracer kinetics extracts this single number, which reflects the net effect of all the control mechanisms operating in that steady state and gives the fraction of substrate molecules traversing the pathway per minute or per hour. Knowing where to look is vital information, but tracer kinetics tells us only where to look, not what we will find; it cannot reveal the molecular mechanism underlying an observed experimental result.
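Written out for the Michaelis-Menten example just given (with hypothetical regulators E, a, and I standing for enzyme abundance, kinase activation, and an allosteric inhibitor), the reduction looks like this:

```latex
% Full rate law with explicit regulation (hypothetical example)
v = \frac{V_{\max}(E, a)\, S}{K_m + S} \cdot \frac{1}{1 + I/K_i}
% In steady state, E, a, I, and S are all constant, so the rate law
% collapses to a single first-order rate constant:
v_{ss} = k\, S_{ss},
\qquad
k \equiv \frac{V_{\max}(E_{ss}, a_{ss})}{(K_m + S_{ss})\,(1 + I_{ss}/K_i)}
```

Every regulatory detail is folded into the single number k; a tracer experiment measures k but cannot decompose it.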
Fortunately, there is another school of biological modeling, also with a distinguished history, which fills the mechanistic gap.
Practical mechanistic modeling, in which experimental data are used as constraints, developed in parallel with tracer kinetics and was driven by the same newly available computing power. In physiology, the main school of thought was Arthur Guyton’s group at the University of Mississippi; in biochemistry, the main proponents were David Garfinkel and his colleagues at the University of Pennsylvania. These investigators and their students assembled very large, complex models of cardiovascular physiology (5) and cardiac energy metabolism (6), respectively, based as much as possible on experimental data. These models comprise very large systems of nonlinear differential equations and were used to analyze physiological nonsteady states such as the transition from rest to exercise, the mechanisms of hypertension, and metabolic transients in cardiac muscle during changes in substrate fuel.
These are early examples of what today is called mechanistic systems biology (MSB). MSB models are mechanistic in the sense that every rate law explicitly includes known or hypothesized control mechanisms, so that changes in upstream controllers propagate to the downstream controlled processes and multiple feedback mechanisms are integrated. Control and regulation are explicit, not hidden. Such models incorporate the molecular mechanisms that are the dominant theme of 21st-century biomedical research.
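A toy example makes the contrast with tracer kinetics explicit (a sketch in Python; none of the names or numbers come from any published model). Here a substrate induces synthesis of its own removal enzyme, so the rate law keeps the feedback loop visible and a change in the regulator propagates downstream:

```python
# Toy mechanistic model (illustrative only, not from any published work):
# substrate S is produced at a constant rate and removed by an enzyme E
# whose abundance is induced by S itself, closing a negative feedback loop.
from scipy.integrate import solve_ivp

P, Km = 2.0, 1.0                        # production of S; Michaelis constant

def rhs(t, y):
    S, E = y
    vmax = 3.0 * E                      # Vmax is controlled by enzyme abundance
    dS = P - vmax * S / (Km + S)        # mechanistic, nonlinear removal of S
    dE = 0.1 * S - 0.1 * E              # S induces E; E turns over (feedback)
    return [dS, dE]

sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 1.0])
S_ss, E_ss = sol.y[:, -1]
print(f"approximate steady state: S = {S_ss:.3f}, E = {E_ss:.3f}")
print(f"effective tracer rate constant k = v/S = {P / S_ss:.3f} per unit time")
```

In a steady state tracer experiment, the same system would be summarized by the single rate constant k = v/S printed at the end; the feedback loop that set the value of k is invisible until something changes with time.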
In the practical, translational world of pharmaceutical research and development, there is an instructive parallel. Pharmacokinetic models (what the body does to the drug) are almost always linear, just like tracer kinetics, whereas pharmacodynamic models (what the drug does to the body) are nonlinear but rarely mechanistic. In recent years, mechanistic disease modeling and mechanistic modeling of drug action have become more prominent, and modeling groups at several pharmaceutical firms are expanding their roles under the umbrella of systems pharmacology. The paper by Gadkar, Lu, and colleagues is one of the first such efforts in cholesterol metabolism, and the first to appear in the Journal.
Gadkar, Lu, and colleagues have built on the foundation laid by decades of tracer kinetic modeling, but they expand our modeling horizons by adding mechanistic and molecular detail. The authors formulate an explicit hypothesis and test it quantitatively by asking it to account for the responses to pharmacological perturbations, including upregulation of apoA1 synthesis, administration of reconstituted HDL, and infusion of delipidated HDL. Moreover, in previous work (7), this model is reported to have correctly predicted the effects of CETP inhibition and of ABCA1 upregulation.
Time will tell, of course, whether the particular mechanisms in this model will stand up to further experimental test, but by proposing a specific mechanistic model, by testing it against clinical data, and by acknowledging the importance of constraints from tracer kinetic data, the paper by Gadkar, Lu, and colleagues adumbrates a new era in lipid and lipoprotein metabolic physiology. They are challenging the status quo in lipoprotein kinetics by asserting that models must account for the nonsteady state dynamics of pharmacological perturbations and, at the same time, they accept the counter-challenge that any successful nonlinear mechanistic lipoprotein model must be able to reproduce the results of tracer kinetic studies. We are on the cusp. Linear, steady state, tracer kinetic modeling and nonlinear, mechanistic, nonsteady state modeling appear poised to merge into a single modeling tradition that draws unique strengths from each of its roots.
One of the great advantages of modeling is the ability to compare a theory’s predictions to results of many kinds of experiments. Emerging from the Gadkar and Lu study is a new quantitative concept of HDL remodeling and recycling of apoA1, which the authors suggest may account for the classic biphasic apoA1 kinetics reported previously (in the Journal) by Ikewaki and colleagues (8). A preliminary test, based on calculating the effective rate constants from Gadkar and Lu’s reported steady state masses and fluxes and then simulating the Ikewaki radiotracer experiment, suggests that the HDL remodeling mechanism may be too fast and the lipid-poor apoA1 pool too small. Nevertheless, showing how explicit mechanisms can be tested against tracer data demonstrates how the two branches of modeling can begin to synergize.
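The recipe for that preliminary test can be sketched generically (all pool names, masses, and fluxes below are placeholders, not Gadkar and Lu’s values): each effective rate constant is a steady state flux divided by the mass of its source pool, and a bolus tracer then follows the resulting linear system:

```python
# Generic recipe: effective rate constants are steady state fluxes divided
# by source-pool masses; a bolus tracer then obeys the resulting linear
# system. Pool names, masses, and fluxes are placeholders, not values
# from the paper under discussion.
import numpy as np
from scipy.linalg import expm

pools = ["lipid-poor apoA1", "HDL apoA1"]
M = np.array([5.0, 120.0])              # steady state masses (mg), hypothetical
F = np.array([[0.0, 0.4],               # F[i, j]: flux from pool j into pool i
              [0.4, 0.0]])              # (mg/h), hypothetical
loss = np.array([0.1, 0.2])             # irreversible losses (mg/h), hypothetical

K = F / M                               # K[i, j] = F[i, j] / M[j] (fraction/h)
K = K - np.diag(K.sum(axis=0) + loss / M)   # diagonal: total fractional outflow

q0 = np.array([1.0, 0.0])               # bolus of tracer into the first pool
print("pools:", pools)
for t in [0.5, 2.0, 8.0, 24.0]:
    q = expm(K * t) @ q0                # linear ODE solved by matrix exponential
    print(f"t = {t:5.1f} h   tracer fractions: {np.round(q, 4)}")
```

With the published masses and fluxes in place of these placeholders, the simulated specific activity curves can be laid directly over Ikewaki’s data.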
Because the Gadkar and Lu model purports to account for steady and nonsteady states involving both apoA1 and lipoprotein cholesteryl esters (CEs), it can also be tested against the classic experimental and modeling studies of lipoprotein CE kinetics published (again in the Journal) by Schwartz and colleagues (9). A comparison based only on reported cholesterol fluxes shows remarkable agreement between Gadkar et al. and Schwartz et al. Testing the Gadkar and Lu model against the Schwartz tracer time course data would be a valuable short-term modeling project.
Stepping back, the significance of the Gadkar and Lu contribution is its recognition that it should be possible for a single mechanistic model to account for both nonsteady state perturbation data and steady state tracer kinetic data. Both schools of modeling have unique capabilities. Effective projects that leverage both should be strongly encouraged.
The sheer scale of biological complexity confronts all of biomedical science, but investigators working with reconstituted molecular systems or with cells in culture have intentionally simplified the object of study. In clinical or translational pharmaceutical contexts, the full complexity of human biology is inescapable. It remains possible to model a key subsystem, as Gadkar, Lu, and colleagues have done with HDL, CE, and apoA1 metabolism, but in pharmaceutical development there are both human and financial incentives to test one’s theory against as many different experimental protocols as possible. The more tests a model passes, the more confidence we have in its predictions. The most comprehensive model is, ultimately, a boon to patients and a competitive advantage to the firm that best understands its value.
In addition to enormous complexity, there is today a widely discussed concern with the reproducibility of published basic science results, a concern that is especially acute within the pharmaceutical industry. An important inference from the Gadkar and Lu paper is that formulating a theory as a quantitative model, and then testing that theory against all manner of data, is one way to identify data sets that are inconsistent with much of what we already know. It is always possible that the inconsistent experiment has uncovered an unanticipated mechanism, but even then it is essential to formulate a new theory that accounts for old and new data simultaneously. Including previously published work, the Gadkar and Lu model has now been asked to account for the results of at least five different therapeutic interventions, and it succeeds for the results presented here.
It behooves all of us to challenge this model with additional protocols and data sets. The model is surely incomplete, and some part of it is probably simply wrong, but it represents an explicit and testable mechanistic theory that should be tested, modified, and tested again for many years to come. To make this feasible, models should be deposited in standard formats in public databases. One such standard format is the Systems Biology Markup Language (SBML) (10), and the corresponding database of models (models only, no experimental data) is at biomodels.net (11). The Journal could consider mandating such uploads for published models.
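For readers who have not encountered the format, an SBML model file can be inspected programmatically in a few lines (a sketch assuming the python-libsbml package; the file name is a placeholder):

```python
# Minimal inspection of an SBML model file with python-libsbml
# (pip install python-libsbml). "model.xml" is a placeholder file name.
import libsbml

doc = libsbml.readSBMLFromFile("model.xml")
if doc.getNumErrors(libsbml.LIBSBML_SEV_ERROR) > 0:
    doc.printErrors()                   # report parse errors and exit early
else:
    model = doc.getModel()
    print(f"{model.getNumSpecies()} species, {model.getNumReactions()} reactions")
    for species in model.getListOfSpecies():
        print(species.getId(), species.getInitialConcentration())
```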
Despite the dynamic nature of the Gadkar and Lu model, many of the data sets used to test it are steady state experiments. One of the principal lessons of the entire biological modeling enterprise is that mechanisms are best uncovered and tested by fitting time course results. These can be either steady state tracer kinetics or nonsteady state physiological or pharmacological perturbations. Indeed, it is even possible to superimpose tracer experiments on physiological perturbations such as meals.
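Mechanically, fitting a time course is straightforward; the mechanistic leverage comes from the shape of the curve. As an illustration (synthetic data, not from any of the studies discussed), here is a fit of the bi-exponential decay characteristic of biphasic apoA1 tracer curves:

```python
# Fitting a bi-exponential tracer decay (the biphasic shape seen in apoA1
# kinetics) to a synthetic time course. All numbers are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, k1, a2, k2):
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

rng = np.random.default_rng(0)
t = np.linspace(0.5, 240.0, 40)                    # sampling times (hours)
truth = biexp(t, 0.6, 0.30, 0.4, 0.01)             # fast phase + slow phase
data = truth * (1 + 0.05 * rng.standard_normal(t.size))  # 5% measurement noise

popt, pcov = curve_fit(biexp, t, data, p0=[0.5, 0.1, 0.5, 0.005])
print("fitted a1, k1, a2, k2:", np.round(popt, 4))
```

A single steady state measurement constrains one number per pool; the forty points of this time course constrain four parameters, including the two rate constants that any successful mechanistic model must reproduce.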
There is no single modeling paradigm for biology. Different schools of modeling focus on different goals, different tools, and different kinds of experimental data. Where modelers agree, however, is on one inescapable truth. Human biology is too complex to be understood and therapeutically manipulated without mathematical and computational help. Every scientist has a mechanistic model in her or his head. What we need to do, as Gadkar, Lu, and colleagues have done, is to convert our mental models to mechanistic computational models and then test those models against as many different kinds of experimental data as possible from as many different laboratories as possible. Combining tracer kinetic modeling with systems pharmacology would be an excellent step in that direction.
REFERENCES
1. Kitano H. 2002. Systems biology: a brief overview. Science. 295: 1662–1664.
2. Kitano H. 2002. Computational systems biology. Nature. 420: 206–210.
3. Berman M. 1963. The formulation and testing of models. Ann. N. Y. Acad. Sci. 108: 182–194.
4. Berman M., Hall M. III, Levy R. I., Eisenberg S., Bilheimer D. W., Phair R. D., and Goebel R. H. 1978. Metabolism of apoB and apoC lipoproteins in man: kinetic studies in normal and hyperlipoproteinemic subjects. J. Lipid Res. 19: 38–56.
5. Guyton A. C., Coleman T. G., and Granger H. J. 1972. Circulation: overall regulation. Annu. Rev. Physiol. 34: 13–46.
6. Garfinkel D., Kohn M. C., and Achs M. J. 1979. Computer simulation of metabolism in pyruvate-perfused rat heart. V. Physiological implications. Am. J. Physiol. 237: R181–R186.
7. Lu J., Hubner K., Nanjee M. N., Brinton E. A., and Mazer N. A. 2014. An in-silico model of lipoprotein metabolism and kinetics for the evaluation of targets and biomarkers in the reverse cholesterol transport pathway. PLOS Comput. Biol. 10: e1003509.
8. Ikewaki K., Rader D. J., Schaefer J. R., Fairwell T., Zech L. A., and Brewer H. B. Jr. 1993. Evaluation of apoA-I kinetics in humans using simultaneous endogenous stable isotope and exogenous radiotracer methods. J. Lipid Res. 34: 2207–2215.
9. Schwartz C. C., VandenBroek J. M., and Cooper P. S. 2004. Lipoprotein cholesteryl ester production, transfer, and output in vivo in humans. J. Lipid Res. 45: 1594–1607.
10. Hucka M., Bergmann F., Hoops S., Keating S., Sahle S., and Wilkinson D. 2010. The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 1 Core. Available from: http://dx.doi.org/10.1038/npre.2010.4959.1.
11. Le Novère N., Bornstein B., Broicher A., Courtot M., Donizelli M., Dharuri H., Li L., Sauro H., Schilstra M., Shapiro B., et al. 2006. BioModels Database: a free, centralized database of curated, published, quantitative kinetic models of biochemical and cellular systems. Nucleic Acids Res. 34: D689–D691.