Abstract
Intrinsic/inherent chemical properties are characteristic, irrespective of the number of molecules present. However, toxicity is an extensive/extrinsic biochemical property that depends on the number of molecules. Paracelsus, often considered the father of toxicology, noted that all things are poisonous. Because dose magnitude (i.e., number of molecules) determines the occurrence of poisonous effects, toxicity cannot be an intrinsic/inherent biochemical property. Thus, toxicology's task is to determine case-specific risks of adverse effects produced by the interaction of toxic doses/exposures, toxic mechanisms, and case-specific influencing factors. Experimental testing results are known to vary within and between chemicals, test organisms, and experimental conditions and repetitions; however, hazard-based approaches treat toxicity as a fixed and constant property. A logical alternative is the standard risk–case-specific risk model. In this approach, testing data are defined as standard risks, where the nature, magnitude, and probability of the toxicity effect are standardized to the organism, chemical, and test conditions. Interpolation/extrapolation of standard risks to site-specific conditions (i.e., case-specific risks) is challenging, requiring understanding of the influences of the complex interactions within and between differing species, conditions, and toxicity-modifying factors. Therefore, Paracelsus's paradigm is perhaps better abbreviated as "dose–causality–response", because a key interpretive requirement is establishing toxicity causality by separating mode/mechanism of toxic action from modifying factor influences in overall toxicity responses. Unfortunately, the current knowledge base is inadequate. Moving to a standard-risk–specific-risk paradigm would highlight the importance of improving the toxicity causality knowledge base. Thereby, a rationale would be provided for enhancing the design and interpretation of toxicity testing that is necessary for achieving advances in routine translation of standard-risk to specific-risk estimates—the raison d'être of regulatory risk decision making. Environ Toxicol Chem 2020;39:2351–2360. © 2020 The Authors. Environmental Toxicology and Chemistry published by Wiley Periodicals LLC on behalf of SETAC.
Keywords: Inherent toxicity, Risk, Hazard, Chemical properties, Modifying factors, Dose–causality–response
INTRODUCTION
It has become commonplace in the toxicological and regulatory literature to refer to the inherent toxicity/hazard, or to the intrinsic toxicity/hazard of a chemical. The terms are not well defined and are often used interchangeably and in various embellished forms. For example: “Chemical hazard potential is the inherent (intrinsic) capacity of a chemical to cause harm. A chemical's hazard potential could be based on its environmental fate properties as well as its toxicity” (Society of Environmental Toxicology and Chemistry 2018). Where terminological inexactitude is problematic, the first step in resolving any difficulties that may have been created is to establish the nature of the flaw.
The general concept is that inherent/intrinsic toxicity/hazard provides information on "Inherent Toxicity—whether a substance is harmful by its very nature to human health or other organisms" (Environment Canada 2017; see additional usage examples at Glosbe 2020). That definition comports with the definitions of the words. The Oxford English Dictionary (2020) defines inherent as "Existing in something as a permanent, essential, or characteristic attribute." The Merriam-Webster Dictionary (2020) defines inherent as "involved in the constitution or essential character of something: belonging by nature or habit: intrinsic," and defines intrinsic as "belonging to the essential nature or constitution of a thing." By such definitions, inherent or intrinsic chemical characteristics are attributable to each molecule solely as a result of its constitutional identity, and not to some auxiliary or extensive factor. In other technical definitions, toxicity is generally defined as the adverse (harmful or dangerous) response of an organism to a chemical substance, whereas hazard is considered to be a danger or a risk of an adverse response, with some dictionaries including the adjective unavoidable. Although the terms are sometimes used interchangeably, doing so can be misleading because toxicity refers to actual adverse effects, whereas hazard has more to do with the likelihood of adverse effects occurring.
The logical antecedents of the premise that hazard/toxicity are inherent/intrinsic properties of a chemical are that the hazard/toxicity should be unchanged irrespective of the amount of chemical to which an organism is exposed and irrespective of the manner in which the chemical reaches the organism (i.e., whatever the route, duration, or timing of the exposure). These logical antecedents dictate the existence of a chemical–response relationship that should be readily identifiable under all conditions of exposure. Given these clear definitions and inescapable corollaries, either the inherent/intrinsic terminology is inapplicable to hazard/toxicity or pharmacologists and toxicologists have misinterpreted or misunderstood the past 500 yr of research and observation regarding the responses of living organisms to chemicals. We support the former position because in the last 500 yr, and particularly in the last 100 yr, tremendous advances have been made. However, rather than chemical–response relationships, toxicology has been built on the foundation of dose–response relationships, meaning that hazardous/toxic responses depend not only on the identity of the chemical but also, and primarily, on the dose of the chemical to which any particular organism is exposed.
Also, intrinsic already has a well-defined use in pharmacology: intrinsic efficacy or intrinsic activity. Both refer to the interaction of a chemical with biological macromolecules, usually an effect-specific receptor where the interaction results in activation or inhibition of enzymes, active transporters in membranes, DNA response elements, and so forth, or to any molecular interaction of a chemical with a biological macromolecule that has conformational specificity (Borgert et al. 2013). In this context, potency is the result of the 2 factors of affinity and intrinsic efficacy. A full agonist has high affinity and intrinsic efficacy; a strong antagonist has high affinity and no intrinsic efficacy. It is important that this existing definition incorporating intrinsic not be confused with the usage noted above.
Regardless of the exact form in which inherent/intrinsic hazard/toxicity appears, the implication is that this terminology provides important and useful information regarding potential harm to public health or the environment posed by the presence of chemicals. Moreover, it assumes that inherent/intrinsic hazard/toxicity can be assessed. These implications and the prolific use of this type of terminology warrant careful consideration of what it means, that is, whether it provides valid information about a chemical and whether it can be assessed. As shown in the section Toxicity as a physical–chemical property, toxicity is an extensive biochemical property that cannot be considered an inherent/intrinsic property, and these adjectives should not be used because they perpetuate a misleading concept about toxicity and impede development of the mechanistic understanding of causal dose–response relationships essential for improving environmental risk assessment methodology.
REVIEW OF ISSUES
Definition of toxicity
The dose–response concept can be traced back to Paracelsus (Philippus Aureolus Theophrastus Bombastus von Hohenheim), often considered the father of toxicology. Although his view is often summarized as "The dose makes the poison", a translation from Paracelsus's Third Defense of 1564 is:
What is there that is not poison? All things are poison and nothing is without poison. Solely the dose determines that a thing is not a poison (Deichmann et al. 1986).
One of us (L.D. Burgoon) translated an old (1574) German version that includes an example:
Wenn ihr jedes Gift recht auslegen wollt, was ist, das nit Gift ist? Alle Dinge sind Gift, und nichts ist ohne Gift; allein die dosis machts, daß ein Ding kein Gift sei. Zum Exempel: eine jegliche Speise und ein jeglich Getränk, wenn es über seine dosis eingenommen wird, so ist es Gift; das beweist sein Ausgang (Zeno 2020).
If you want to understand poisons, then what is it that is not a poison? All things are poisons, and nothing exists that is not a poison; for it is only the dose that makes a thing not a poison. For example, any food and any drink, when it is taken over a [certain] dose, is a poison; his death proves it.
This ancient definition indicates everything is toxic by nature and that the lack of an effect is caused by an insufficient exposure or dose. Thus, there are no nontoxic substances, only nontoxic doses or, from a broader perspective, only nontoxic exposure situations, for the adverse effects under consideration. Using inherent or intrinsic as an adjective adds nothing to clarify the original sense that toxicity is a fundamental characteristic of all substances. Centuries of experience indicate that an effective dose may vary substantially among exposure scenarios, as well as within and between chemical substances and organisms, and may further vary with organism life stage and other factors. Long-standing experience indicates that the nature and magnitude of toxic effects are also case specific. Hence, toxicity is an emergent property of the interaction of the environmental fate properties and toxicity properties of the chemical in question for an organism, under the case-specific conditions that influence the nature and magnitude of a number of toxicity-modifying factors that may act directly, indirectly, or nondirectly (induced) to affect the ultimate expression of toxicity. In short, rather than the commonly used abbreviation of dose–response, Paracelsus's paradigm is better termed a causal dose–response relationship, more accurately abbreviated as dose–causality–response.
Toxicity as a physical–chemical property
From a chemistry perspective (Mackay et al. 2001), toxicity is a physical–chemical property; however, the measurement details are critical. Many physical–chemical properties are termed “intensive” because they depend on the nature of the substance, not its quantity. Toxicity is a special case: an “extensive” chemical property that depends on the number of molecules present when the property is being measured. Whole‐organism toxicity measurement endpoints (e.g., death, inhibition of growth, or reproduction) are a result of a multitude of reactions in living organisms. Because organisms themselves are more than just the sum of their complex and structured chemical composition, they can respond to the initial chemical reactions with a cascade of responses, both in the original reaction processes and new processes plus ongoing temporal variations. Consequently, measurements of the nature of changes or reactions in such test systems are more complex than those used for physical properties such as flammability or heat of combustion.
A toxic response depends on the nature of the chemical substance, the nature of the organism, the characteristics of the dose, and the case‐specific environmental conditions. The natures of the substance and organism are largely intensive, whereas factors determining dose are primarily extensive because they depend on amount. Thus, toxic responses as measured in standard toxicological tests are a function of both intensive and extensive quantities and cannot be solely intensive. This is simply a modern restatement of Paracelsus's adage that because everything is a poison (i.e., ultimately toxic by nature at some point), it is poisonous doses that are of concern. Consequently, because toxicity is the biological manifestation of exposure to multiple physical–chemical properties, both of the test substance(s) of concern and other chemicals in the exposure media and organisms as well as their interactions, it is more appropriate for toxicologists to view toxicity as what can be termed an extensive biochemical property.
A nominally extensive biochemical property such as toxicity shares a limitation with what are termed quasi-intensive chemical properties, such as the environmental half-life of a chemical; their values are not fixed and depend on modifying factors (Mackay et al. 2001). The division of one extensive or quasi-intensive property by another can cancel the extensive attribute, resulting in a metric with limited intensive characteristics. In toxicology this is exploited as a defined toxic event divided by the quantity of chemical (e.g., a quasi-intensive measurement of toxicity is the fraction of organisms experiencing lethality divided by the exposure concentration causing this effect). More familiarly, this is the median lethal concentration (LC50) or a similar metric. Such metrics are more accurately termed specific toxicity, by way of analogy with the relationships among other properties such as heat capacity and specific or molar heat capacity.
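The ratio logic can be written compactly. The display below is our notation, offered as a sketch of the analogy rather than a formal definition from the literature:

$$
\underbrace{\text{specific toxicity}}_{\text{quasi-intensive}}
\;=\;
\frac{\text{defined toxic event (extensive)}}{\text{quantity of chemical (extensive)}}
\qquad \text{by analogy with} \qquad
c \;=\; \frac{C}{m},
$$

where the specific heat capacity $c$ is obtained by dividing an extensive heat capacity $C$ by an extensive mass $m$; for the LC50, the defined event is the 0.5 mortality fraction and the quantity of chemical is the exposure concentration producing it.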
The term toxic potency is commonly used to refer to such quasi-intensive or specific toxicity estimates. Potency is the amount of a chemical needed to produce a given effect (e.g., the median effect concentration is the concentration/dose of chemical that causes 50% of the maximum effect). Potency is determined by the molecular targets involved in producing toxicity, the concentrations and potencies of the chemical at the various target sites, pharmacokinetic behavior, and other factors too numerous to list; yet there is often no discernible relationship between toxic potency and affinity or efficacy.
Therefore, commonly employed exposure‐based dose metrics, usually based on the total number of molecules present at the time of the toxicity measurement, determine whether the extensive/quasi‐intensive property, specific toxicity, occurs. Although it would be convenient if the number of molecules necessary to produce a given toxicity metric was constant, a variety of modifying factors, including temperature, pH, route and duration of exposure, body size, and so forth, can substantially affect specific toxicity estimates. Toxic doses are neither constant nor consistent.
What and where is a dose?
Where the dose measurement is taken is an additional complication: 1) as an exposure concentration in air, water, or as applied to external surfaces; 2) as an administered quantity (i.e., oral, dietary, or injection); or 3) as an amount received at the internal organism target site(s), often approximated as whole‐body, organ, or tissue concentration. However, the numbers of molecules measured in these 3 categories of dose metrics are different.
For example, for baseline neutral narcosis (i.e., anesthesia) by organic chemicals in small fish, quantitative structure–activity relationship evaluations signal that whereas the exposure-based molar LC50 data suggest a range of toxicity on the order of 10⁵, the whole-organism-based critical body residue indicates approximate equipotency, with a range of toxicity on the order of 10¹ (McCarty and Mackay 1993). Although internal toxicity target dose estimates are not commonly available for comparison, dose estimates based on exposure media concentrations differ substantially from those based on whole-organism concentrations. A critical question is which dose metric is appropriate for either risk or hazard estimation—commonly used exposure-based dose metrics, such as the LC50, or organism-based dose metrics?
Exposure-based dose metrics incorporate variable influences on both toxicity and environmental fate (i.e., the partitioning behavior of the chemical between the exposure media and the organism); however, there is a fundamental problem with this approach: it lacks a means of identifying or quantifying those variable influences. Such factors alter the adverse toxicity effect and/or the overall toxicological bioavailability, and their contributions to exposure-based dose metrics are highly variable and case specific. Although the inherent/intrinsic toxicity approach notes that the hazard of a substance involves contributions from toxicity and environmental fate, it does nothing to emphasize the critical importance of identifying and separating the contributions of various toxicity-modifying factors from those of various environmental-modifying factors in commonly employed dose metrics.
Dose and causality
The link between a dose and an associated response is not merely a correlation. It must be established as a discernible causal relationship (i.e., as noted above, dose–causality–response). Experimental design and associated statistics used in pharmacology and toxicology testing focus on establishing a causal inference largely based on the counterfactual, or potential outcome, model: a comparison of results from 2 experimental exposure regimes that differ only in the absence or presence of the substance being tested (i.e., control and exposure regimes). Because the latter is where an effect caused by the presence of the test substance is expected, several exposure levels are used to allow statistical interpolation of the estimated magnitude of the causal exposure/dose associated with the nature and degree of adverse effect under the experimental conditions. As can be seen in any standard testing protocol, a number of parameters are controlled, measured, and reported to confirm that experimental conditions were similar, meeting requirements for statistical analysis and causal inference. Nevertheless, there are a number of confounding factors that can limit the comparability within and among tests by introducing variability/uncertainty, thereby reducing the ultimate utility of the estimates (Höfler 2005; Suter et al. 2010).
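To make the interpolation step concrete, the following minimal Python sketch fits a two-parameter log-logistic curve to hypothetical mortality fractions (the data, function names, and starting values are ours, for illustration only) and interpolates the LC50 from exposure levels bracketing 50% mortality:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical 96-h acute test: exposure concentrations (mmol/L) and the
# observed fraction of organisms dying at each level (illustrative only).
conc = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
mortality = np.array([0.05, 0.20, 0.55, 0.85, 1.00])

def log_logistic(c, lc50, slope):
    """Two-parameter log-logistic dose-response curve (values from 0 to 1)."""
    return 1.0 / (1.0 + (lc50 / c) ** slope)

# Fit the curve; the LC50 is interpolated from the exposure levels that
# bracket 50% mortality rather than being observed directly.
(lc50_est, slope_est), _ = curve_fit(log_logistic, conc, mortality, p0=[1.0, 2.0])
print(f"Interpolated LC50 ~ {lc50_est:.2f} mmol/L (slope {slope_est:.1f})")
```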
Accordingly, causal inferences are often assumed rather than routinely validated. The exposure/dose surrogate chain of exposure–critical body residue–target noted above is an example. To use exposure-based dose metrics, the full surrogate chain relationship should be established. However, this is not the case even for baseline neutral narcosis (anesthesia), the default mode/mechanism of toxic action for organic chemicals. As noted earlier, the exposure-based molar LC50 data for small aquatic organisms span approximately 5 orders of magnitude (10⁵), whereas the whole-body residue is essentially constant, spanning approximately 1 order of magnitude (10¹). In addition, little in the way of reliable information is available for the molar concentrations at toxicity target site(s) in organisms.
Chemical activity has figured in narcosis (anesthesia) research since the influential work, circa 1900, that gave the approach its name: the Meyer–Overton theory. Overton used the osmometric investigative method, considering solubility and the summing of partial pressures at steady state (Kleinzeller 1999). Ferguson (1939) reviewed the progress since Meyer–Overton, particularly Meyer and Hemmi (1935), and summed up the status at that time; here the measured toxic concentration refers to the concentration in the external exposure media, usually water:
The measured toxic concentration, though usually regarded as an index of toxicity, is in reality a function of the intrinsic toxicity of the substance and of its distribution equilibrium. Only when the effect of phase distribution is allowed for can intrinsic toxicities be compared, and can valid deductions be drawn regarding possible relationships between chemical constitution and physiological action. Were such information available, the elucidation of the mechanism of action would be greatly facilitated (Ferguson 1939, p. 388).
Although Ferguson uses "intrinsic toxicity", he makes several points. First, toxicity is not related to the exposure media concentration; it is a metric related to the amount of chemical in the organism. Second, the exposure water concentration associated with the adverse effect varies widely and can only reliably be estimated when the "distribution equilibrium" is achieved (i.e., the exposure water and organism are at a steady state in which the free chemical activity, not necessarily the total chemical concentration, is equivalent outside and inside the organism). Third, the "mechanism of action" statement indicates that narcosis may not be the only mechanism of action possible because similar, but not identical, relationships were noted for nonreversible toxicity. If so, the adverse effect may be neither directly related to the effective exposure concentration nor constant in mechanism, and it cannot be inherent. Fourth, "If a true equilibrium exists, the chemical potential of the toxic substance must be the same in all phases partaking in the equilibrium. Hence, the chemical potential of the toxic substance at the actual point of attack is known" (Ferguson 1939, p. 389). This indicates that the chemical activity–toxicity relationship is based on an internal organism phase where the site(s) of toxic action are located. However, that is not the same as the causal amount associated with the adverse effect. In any organism phase containing the site(s) of toxic action, there are binding sites, such as proteins and/or other ligands, that will bind the test chemical, preventing it from being fully dissolved in the phase and altering its contribution to overall chemical activity. Because there are no pure hydrophobic or hydrophilic phases in organisms, only complex varying mixtures within phases, there will always be a disparity in the activity–concentration relationship in relatively pure exposure media phases versus whole organisms or various subcompartments thereof. Fifth, all of the above numbered items are based on the assumption that the toxic agent is the parent chemical itself, rather than partly or wholly a result of one or more metabolites or their derivatives.
McGowan (1952) extended Ferguson's arguments with additional data and analyses, addressing some of the earlier points. Mullins (1954) provided an extensive and detailed review of the theory and then-current understanding of phase partitioning, molar volume fractions, chemical activity, membrane characteristics, and narcotic and nonnarcotic toxicity. Unfortunately, the understanding of causal dose–response relationships gained during the first half of the 20th century, and enhanced over the following quarter century, did not fully inform the environmental toxicity testing and regulatory frameworks developed in the later part of the 20th century and beyond.
The above highlights an aspect of dose causality that is not commonly addressed, that is, the proportion of the dose metric being employed that is actually causing the adverse effect. In other words, what proportion of the total number of molecules measured by the dose metric is directly involved in initiating/eliciting the dominant adverse effect in question? In a 96-h LC50 test a number of organisms are exposed in a flow-through system with a number of containers, each with differing test chemical concentrations. Mortality is recorded and an LC50 is interpolated at 96 h from the exposure concentrations exhibiting mortality above and/or below the 50% level. Experimental examples can be seen in Brooke et al. (1984). Ideally, a steady state has been reached between the exposure concentrations and the exposed fish such that the calculated LC50 is a threshold or incipient estimate largely uninfluenced by toxicokinetics (Sprague 1969).
If the test is rerun with a single exposure level set to the estimated LC50, 50% mortality in the exposed organisms should occur. For this thought experiment, twenty 3-g fish with a total body lipid level of 5% are exposed in a 10-L flow-through container at an LC50 of 1 mmol/L of a hypothetical organic chemical with a log KOW of 4. This chemical causes toxicity by baseline neutral narcosis and its metabolic biodegradation is negligible (see Mackay et al. 2014; McCarty 2015). Thus, at the start of the test there are (6.02214076 × 10²³ molecules/mol × 0.001 mol/L × 10 L) = 6.02214076 × 10²¹ molecules continuously present in the 10-L exposure vessel.
Because toxicity is a function of the number of molecules present, the molecules affecting one organism by causing an adverse effect cannot simultaneously be acting on another organism. Thus, the effective causal dose for each organism is one-twentieth of the total, or 0.301107038 × 10²¹ molecules. At the end of the test, when only 10 organisms are living, the effective causal dose is one-tenth of the total, or 0.602214076 × 10²¹ molecules. Oddly, the partitioned exposure dose per organism is 2 times higher at the end of the test than at the beginning. Even more oddly, if the test were rerun again with just 2 test organisms, with one dying, the estimated causal doses per organism at the beginning and end of this test would be 3.01107038 × 10²¹ and 6.0221407 × 10²¹ molecules, respectively. Further examples with different numbers of test organisms would simply confirm the contention that exposure dose surrogate toxicity metrics, such as the LC50, do not have a direct causal relationship with toxicity.
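The arithmetic of this thought experiment can be laid out in a few lines. The sketch below simply reproduces the molecule counts quoted above and divides them among the organisms alive, under the text's assumption that molecules acting on one organism cannot simultaneously act on another:

```python
AVOGADRO = 6.02214076e23  # molecules/mol

# Thought experiment from the text: a 10-L flow-through vessel held at an
# LC50 of 1 mmol/L (0.001 mol/L) of a hypothetical log KOW 4 narcotic.
total_molecules = AVOGADRO * 0.001 * 10  # ~6.022e21 molecules in the vessel

# Naive partitioning of the exposure dose: the same fixed total is divided
# among however many organisms are alive, so the apparent per-organism
# "causal dose" depends on test design, not on the chemical itself.
for n_alive in (20, 10, 2, 1):
    per_organism = total_molecules / n_alive
    print(f"{n_alive:>2} organisms alive -> {per_organism:.3e} molecules each")
```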
Because the above causal link is tenuous, it is appropriate to examine the next dose surrogate metric, the critical whole-body chemical concentration or critical body residue. With the exception of surface effects such as irritation/mucus production or corrosion of skin/integument, the test chemical must enter the test organisms before it can initiate its toxic mode/mechanism of action. The primary influences controlling the accumulation of chemicals are bioavailability in the exposure medium and absorption, distribution, metabolism, and excretion. A quick estimate of critical body residue can be determined from the LC50 and the bioconcentration factor (BCF), assuming steady state has been reached (i.e., critical body residue ≈ LC50 × BCF). The BCF in small fish of approximately 5% lipid content, for a chemical with a log KOW of 4, is approximately 500. The LC50 is based on existing information for baseline neutral narcosis (see Mackay et al. 2014; McCarty 2015) and is approximately 0.01 mmol/L. Thus, a critical body residue for narcosis toxicity in small fish is 0.01 mmol/L × 500 ≈ 5 mmol/L of fish (or mmol/kg, where fish density is 1.0). Recent experimental work has provided further understanding of critical body residue (Van der Heijden et al. 2015); however, the question remains as to what proportion of the amount of chemical in the organism is directly related to toxicity causality—a little, some, a lot, or all?
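The quick estimate above can be reproduced as follows. The sketch assumes, consistent with the numbers in the text, that the BCF can be approximated as the lipid fraction multiplied by KOW; this is a common simplification, not a universal rule:

```python
# Quick steady-state critical body residue (CBR) estimate from the text:
#   CBR ~ LC50 x BCF, with BCF approximated as lipid_fraction x KOW.
log_kow = 4.0
lipid_fraction = 0.05                  # 5% total body lipid (small fish)
bcf = lipid_fraction * 10 ** log_kow   # ~500 (L/kg)
lc50 = 0.01                            # baseline narcosis LC50, mmol/L

cbr = lc50 * bcf                       # ~5 mmol/kg, taking fish density as 1.0
print(f"BCF ~ {bcf:.0f}, CBR ~ {cbr:.1f} mmol/kg")
```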
The next dose metric in the search for the causality link is the internal toxicity target site. It can be viewed as a subphase of the whole organism: an organ, one or more types of tissue, or even a specific enzyme or protein. The concept of internal toxicity target sites where test chemicals and/or their metabolites act to induce toxicity is complex. For many chemicals, detailed knowledge of modes/mechanisms of toxic action is limited. Even for a widely observed toxicological response such as anesthesia (i.e., baseline neutral narcosis), knowledge is limited. Nonetheless, because there are copious narcosis toxicity data for aquatic organisms, one might agree that an examination is warranted. The evaluation will be carried out via another simple thought experiment because there are few sub-organism data.
An idealized small fish can be characterized as approximately 3 g and 5% total lipid (Mackay et al. 2014). In a simple one‐compartment, first‐order kinetic model this indicates 2 subphases: the hydrophilic subphase of 95% water, and the hydrophobic subphase of 5% total lipid. Neither can be considered pure; both are complex mixtures of dissolved and particulate matter with multiple cell membrane components; and, for higher organisms, there are various tissues and organ systems. A simple steady‐state approach may provide useful guidance but the key problem is defining toxicity target site(s).
For the 2 competing anesthesia theories (Franks and Lieb 2004), protein pockets and membrane perturbation, it is assumed that the toxicity target site(s) are in the hydrophobic subphase. In Table 1, body compartment sizes are estimated with equivalent mass and volume, assuming a fish density of 1. The hydrophobic subphase is further partitioned into nontarget and target sub-subphases, assuming the target is 0.1, 0.01, 0.001, or 0.0001 of the total hydrophobic subphase. It can be seen that the fish water subphase (Pfw) and the fish total hydrophobic subphase (PfTH) remain the same for all cases, whereas the fish target sub-subphase (Pfht) decreases dramatically because the fish total hydrophobic subphase (PfTH) is increasingly dominated by the fish nontarget sub-subphase (Pfnt). Thus, the expected amount of test organic chemical in the Pfht site decreases by several orders of magnitude; nevertheless, there is no change to the amount of test chemical in PfTH or in the whole fish. However, if the Pfht site test chemical concentration causes an adverse effect in the case of the largest Pfht (0.015 g), it would be unlikely to cause an effect in one or more of the cases where there is a smaller Pfht, despite a constant whole fish concentration of test chemical.
Table 1. Estimated subphase masses for an idealized 3-g fish (5% total lipid) at differing target sub-subphase fractions

| | F, g (or mL) | Pfw, g | PfTH, g | Pfnt, g | Pfht, g |
|---|---|---|---|---|---|
| Pfht = 0.1 × PfTH | 3 | 2.85 | 0.15 | 0.1350 | 0.015 |
| Pfht = 0.01 × PfTH | 3 | 2.85 | 0.15 | 0.1485 | 0.0015 |
| Pfht = 0.001 × PfTH | 3 | 2.85 | 0.15 | 0.14985 | 0.00015 |
| Pfht = 0.0001 × PfTH | 3 | 2.85 | 0.15 | 0.149985 | 0.000015 |
F = whole fish; Pfw = fish hydrophilic (water) subphase; PfTH = fish total hydrophobic (lipid) subphase; Pfnt = fish nontarget hydrophobic (lipid) sub‐subphase; Pfht = fish target hydrophobic (lipid) sub‐subphase.
Next, the LC50-based critical body residue for neutral narcosis is partitioned to provide an estimate of the proportion of the total number of molecules of test chemical at the fish toxicity target site; note that not all molecules in this sub-subphase are necessarily directly causing toxicity. This is presented in Table 2, based on modeling results for a hypothetical organic chemical with a log KOW of 4 (McCarty 2015). Table 2 also includes the estimated steady-state LC50 (CwSS) and the estimated steady-state critical body residue (CfSS), as well as the estimated molar amounts in each of the various phases/subphases. The ratio of Pfht to the total whole fish molar amount is also calculated for the 4 Pfht sizes. The reported digits are not all significant; the numbers are simply presented in a manner that allows the partitioning to be explained.
Table 2. Modeled steady-state molar partitioning of a hypothetical log KOW 4 chemical in the idealized fish of Table 1

| | Pfht = 0.1 × PfTH | Pfht = 0.01 × PfTH | Pfht = 0.001 × PfTH | Pfht = 0.0001 × PfTH |
|---|---|---|---|---|
| CwSS, mmol/L | 0.0100 | 0.0100 | 0.0100 | 0.0100 |
| CfSS, mmol/L or kg | 5.0095 | 5.0095 | 5.0095 | 5.0095 |
| Pfw, mmol | 0.0000285 | 0.0000285 | 0.0000285 | 0.0000285 |
| PfTH, mmol | 0.0150000 | 0.0150000 | 0.0150000 | 0.0150000 |
| Pfnt, mmol | 0.0135000 | 0.0148500 | 0.0149850 | 0.0149985 |
| Pfht, mmol | 0.0015000 | 0.0001500 | 0.0000150 | 0.0000015 |
| Total F, mmol | 0.0150285 | 0.0150285 | 0.0150285 | 0.0150285 |
| Total F, mmol/L or kg | 5.0095 | 5.0095 | 5.0095 | 5.0095 |
| Pfht/Total F | 0.0998 (~10%) | 0.00998 (~1%) | 0.000998 (~0.1%) | 0.0000998 (~0.01%) |
CwSS = modeled steady‐state median lethal concentration (LC50); CfSS = modeled steady‐state critical body residue (CBR); F = whole fish; Pfw = fish hydrophilic (water) subphase; PfTH = fish total hydrophobic (lipid) subphase; Pfnt = fish nontarget hydrophobic (lipid) sub‐subphase; Pfht = fish target hydrophobic (lipid) sub‐subphase.
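Tables 1 and 2 follow from a few lines of equilibrium-partitioning arithmetic. The sketch below is our reconstruction of the calculation described in the text, assuming the lipid:water partition coefficient equals KOW, a fish density of 1, and that the target sub-subphase has the same chemical concentration as the rest of the lipid:

```python
# Reconstruction of the Table 1/Table 2 partitioning arithmetic.
fish_mass_g = 3.0        # idealized small fish
lipid_fraction = 0.05    # 5% total lipid
kow = 10 ** 4            # log KOW 4
cw = 0.01                # steady-state LC50 (CwSS), mmol/L

pfw_g = fish_mass_g * (1 - lipid_fraction)   # 2.85 g hydrophilic subphase
pfth_g = fish_mass_g * lipid_fraction        # 0.15 g hydrophobic subphase

# Steady-state molar amounts (density 1, so g ~ mL; lipid conc = KOW x Cw).
mmol_water = cw * pfw_g / 1000.0             # 0.0000285 mmol in Pfw
mmol_lipid = kow * cw * pfth_g / 1000.0      # 0.0150000 mmol in PfTH
total_mmol = mmol_water + mmol_lipid         # 0.0150285 mmol (~5.0 mmol/kg)

for f in (0.1, 0.01, 0.001, 0.0001):
    pfht_mmol = mmol_lipid * f               # molar amount at the target site
    print(f"Pfht = {f} x PfTH: {pfht_mmol:.7f} mmol, "
          f"Pfht/Total F = {pfht_mmol / total_mmol:.4%}")
```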
First, as Tables 1 and 2 show, only a small proportion of the test chemical molecules in the exposed organism are at the putative toxicity site for neutral narcosis. For target sub-subphase sizes of 0.015 to 0.000015 g at 5% body lipid, the proportions range from approximately 10% to 0.01%. Although it is unlikely that all of the molecules present in this sub-subphase are directly involved, these values will be used for discussion purposes.
Second, a signal-to-noise problem is evident based on the Pfht to total whole fish ratios in Table 2. The target compartment contains from 0.0998 to 0.0000998 of the total molar amount of the test chemical in the organism; that is, approximately 90 to 99.99% of the test chemical in exposed organisms is not at the target site. Consequently, it is not likely that real differences and/or variations in target site levels can be reliably estimated from whole-body residue measurements. The noisiness/variability of LC50 data is a long-standing, well-known issue. Sprague noted that the reproducibility/variability of acute aquatic toxicity testing data for the same chemicals and test species was in the range of 0.5 to 1 order of magnitude (Fogels and Sprague 1977), whereas Brooke et al. (1984) commented, "Data on any one species vary immensely in quality, and it is not uncommon to find toxicity data on the same chemical and test species varying more than a 1000-fold range." A body of good-quality measured critical body residues does not exist; however, available data suggest that noisiness/variability is present, perhaps less extensive than that reported for LC50s, but still confounded by chemicals that exhibit different modes of toxic action (McCarty et al. 2013).
This suggests that current exposure-based or whole-body-based dose estimates might be considered accurate/precise to one significant digit, perhaps 2 at the very best, whereas approximately 3 to 4 significant digits are necessary to distinguish the Pfht contribution to the total whole fish steady state. This simple analysis suggests it is not possible to reliably determine the putative causal toxicity target site amount/concentration of test chemical from available noisy/variable exposure-based or whole-body-based experimental testing data. Because baseline neutral narcosis is thought to be the least toxic mode of toxic action for organic chemicals, the signal-to-noise problem may be exacerbated when chemicals with specific modes of toxic action are considered.
Finally, the nature of the causal relationship must be elucidated. Conventionally, this is carried out by a mode/mechanism of toxic action classification scheme, where a mode is a group of substances that act by a common, although not necessarily identical, mechanistic pathway because the specific site(s) of toxic action may be at different locations and/or have somewhat different characteristics. Various mode/mechanism classification schemes can be found in the literature; however, a key assumption is that a single mode/mechanism is producing a dominant, if not exclusive, adverse effect. Although a convenient approximation, the near-universal occurrence of one or more side effects (i.e., additional, usually adverse, effects secondary to the primary effect) indicates its shortcomings. This is an important issue for the inherent/intrinsic toxicity concept because it is clear that any exposure-based dose metric will likely be confounded by the combination/interaction of primary and secondary toxic effects, with the latter composed of one or more secondary mechanisms.
The hazard‐risk paradigm problem
The current hazard-risk paradigm is ambiguous in distinguishing between hazard and risk. Quantification of harm is based on toxicity testing data that establish the doses/exposures at which some type of harm is likely to occur under a defined set of conditions. Hazard-based approaches consider the testing results as reported. Risk-based approaches attempt to correct the dose/exposure metrics for case-specific differences influenced by various toxicity-modifying factors, thereby providing estimates of circumstances both where harm is very likely and where it is very unlikely to be observed (McCarty 2012).
For aquatic organisms, an LC50 indicates that an adverse effect (50% of the test organisms die during the test) occurs under standardized laboratory conditions. However, standard metrics such as the LC50 may vary by a factor of 10 or more if determined under an alternative set of laboratory conditions. For other chemicals and/or species, LC50 estimates may differ by several orders of magnitude. There is also the issue of external versus internal dose/exposure metrics noted above. Clearly, hazard is not a fixed or constant fact; rather, it is case dependent and quantified by what should be considered a standardized risk assessment (the LC50 in the above example). Because toxicity metrics vary with test organism and test protocol/conditions, so will any determination of inherent/intrinsic toxicity. Similarly, for the same test organisms in various field conditions, substantial differences in the nature and magnitude of various toxicity-modifying factors add a layer of complexity that contributes to variability in adverse effects. It is the lack of consideration of the well-known variability of toxicity testing data that moves hazard-based practice, based on an inherent/intrinsic toxicity concept, beyond redundancy to being misleading and of little or no utility for quantitative risk assessment purposes. It is illogical to employ a fixed hazard approach based on a constant inherent/intrinsic toxicity concept when the toxicity testing data that are its technical foundation are neither fixed nor constant.
A conceptually sound alternative is the standard risk–case-specific risk model practice (McCarty 2012). Standard risk is simply conventional exposure-based standardized testing data (e.g., LC50, 10% effect concentration, no-observed-effect level, etc.) where the nature/magnitude of a number of toxicity-modifying factors is specified. A case-specific or specific risk is the translation of standard risk exposure-effect relationships to alternative exposure and/or effect conditions (McCarty et al. 2018). This approach emphasizes the requirement for translation between standard-risk data obtained in toxicity testing and the case-specific risks that are the focus of regulatory risk assessments. Although it will be difficult to evaluate the effect of all possible toxicity-modifying factors, the simple requirement to explicitly and routinely evaluate such influences will do much to drive development of appropriate methodology.
The standard risk–case‐specific risk strategy also explicitly emphasizes that a detailed understanding of the influences of various modifying factors on the exposure‐effect dose surrogate chain is necessary. This allows standard risk metrics to be quantitatively adjusted to case‐specific risk metrics and vice versa. This includes identification/separation of the modifying influences of various physical, chemical, and environmental factors on environmental fate and toxicological bioavailability. Also included are identification/separation of various toxicity‐modifying influences within and among differing modes/mechanisms of toxic action, and identification/separation of various toxicity‐modifying influences within and among types of test organisms.
A simple, single-factor example is the influence of water hardness on aquatic metal toxicity. For cadmium, the reaction of dissolved metal ions with carbonate and hydroxide in the exposure water forms relatively insoluble CdCO3 and Cd(OH)2, reducing effective water column exposure levels (i.e., dissolved levels are less than total metal concentrations) and resulting in lower or absent toxicity (McCarty et al. 1978). Other inorganic and organic ligands may also contribute to reduced bioavailability, along with concomitant pH and/or alkalinity changes. Most jurisdictions have promulgated water quality guidance for various metals that adjusts observed toxicity data for hardness using empirically derived relationships. Interpolating/extrapolating among different exposure characteristics illustrates the basic standard risk–case-specific risk concept of adjusting toxicity information for modifying factor influence, but the approach largely remains in its infancy.
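As an illustration of such empirically derived relationships, hardness corrections are commonly expressed in the log-linear form criterion = exp(m × ln(hardness) + b). The sketch below uses that general form with placeholder coefficients; the values of m and b are hypothetical, not regulatory, and real coefficients are metal- and jurisdiction-specific:

```python
import math

def hardness_adjusted_criterion(hardness_mg_per_L, m, b):
    """Generic empirical hardness correction of the log-linear form
    criterion = exp(m * ln(hardness) + b), as used in many jurisdictions'
    hardness-dependent metal water-quality guidance."""
    return math.exp(m * math.log(hardness_mg_per_L) + b)

# Placeholder coefficients for a hypothetical metal (NOT regulatory values).
m, b = 1.0, -5.0
for hardness in (25, 100, 400):  # soft, moderate, and hard water, mg/L as CaCO3
    criterion = hardness_adjusted_criterion(hardness, m, b)
    print(f"hardness {hardness:>3} mg/L -> criterion {criterion:.4f} mg/L")
```

The same structure accommodates other single-factor adjustments; the hard part, as the text emphasizes, is obtaining defensible coefficients and knowing when the underlying relationship applies.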
More detailed knowledge concerning various modifying factor influences and interactions is essential for development of regulatory guidance ranging from specific contaminated sites to regional or jurisdiction-wide rules and from single chemicals to mixtures. The standard risk–case-specific risk paradigm better explains why hazard translation is difficult under the hazard-risk paradigm, and its explicit consideration of good modeling practices and of the confounding influences of multiple toxicity-modifying factors makes it preferable. Nevertheless, the current lack of detailed knowledge is an impediment both to the development/refinement of more sophisticated methodologies and to additional regulatory applications.
DISCUSSION
The issues reviewed above can be related to confusion in the application of 2 key philosophical methods utilized in scientific inquiry: Popper's (2002) "empirical falsification" procedure for testing the validity of a theory and the "counterfactual" practice of Hume (Höfler 2005) and Lewis (1973) for determining causality.
Paracelsus did not provide a unique definition for toxicity or define poison, toxic, hazard, or even dose. He provided a theory presented as a relational statement about poison and dosage: the state of being a poison is not a property peculiar to any particular substance because all substances possess it. Poisonous effects are related to the amount of substance present in doses to which organisms are exposed and to the case-specific circumstances and conditions associated with each exposure scenario. Because both the nature and magnitude of dose are variable, as are the associated case-specific circumstances and conditions, the outcome must also be variable, ranging from no effect to maximum adverse effects. Thus, Paracelsus's paradigm is better termed a causal dose–response relationship, succinctly captured as dose–causality–response. Consequently, toxicity cannot be an inherent/intrinsic property of a substance.
Viewed from the perspective of Popper, standard toxicity testing is an exercise in testing the null hypothesis that "all chemical substances are safe", that is, that they do not cause adverse toxic effects. Looking back over the last 500 yr, it is clear that Paracelsus's theory on the relationship between poisonous effects and dose is correct; there are no safe (i.e., nonpoisonous or nontoxic) chemicals/substances, only nontoxic doses. Under the right circumstances, virtually everything can produce toxic effects.
Yet, every modern toxicity test can be viewed as an empirical falsification test of Paracelsus's theory. This is so ingrained in toxicity testing frameworks that if some adverse toxicity response is not found, modification of the employed test or substitution of another test design is carried out to ensure that the lack of safety (i.e., toxicity) is not missed. For example, in aquatic toxicity testing with very hydrophobic chemicals, where water solubility is limiting, co‐solvents are often added to increase the effective exposure concentration (i.e., increase bioavailability) and toxicity is usually found. For extremely hydrophobic chemicals, alternative exposure routes—dietary or direct injection—are used to deliver sufficient amounts into test organisms to obtain some measurable adverse response for exposure conditions that are unlikely to be encountered under field conditions.
For regulatory risk assessment and decision making, the crucial information needed from experimental testing is different from that required for theory validation. Information is needed on the wide range of exposure/dose regimes that both may and may not cause adverse effects. Toxicity testing protocols include control exposures (i.e., no added test chemical) that are the basis for a counterfactual causality evaluation. As long as no significant adverse response is observed in the control exposure, the test exposure results are indicative of a causal association for that experimental result. What is missing is the multiplicity of test results needed to evaluate causality, or the lack of causality, across a wide range of exposure/dose regimes; such results would provide information on potential confounding variables and interactions that may lead to variations in causal associations under differing experimental conditions. The ultimate objective of regulatory risk assessment and decision making is not a collection of case-specific causality associations; rather, it is to use these associations to develop a mechanistic explanation for the adverse effects and the influences of modifying factors on their magnitude and extent, that is, a mechanistic causality description. This is also why purely associational studies, such as many epidemiological investigations, are typically of limited value for mechanistic causal analyses.
However, it must be emphasized that such concerns are focused on current regulatory initiatives. Older regulatory guidance should not be construed as useless or invalid. In the initial phase of modern environmental regulation development, largely in the second half of the 20th century, data obtained with standardized toxicity testing protocols were a key component in achieving successful advances in environmental protection. However, older guidance was developed with simple protocols designed to work with limited amounts of varying exposure-based toxicity data containing substantial uncertainties. It was largely an administrative procedure involving basic data quality screening followed by sorting to identify the more toxic results. Then policy-based safety or application factors, often of the order of 10, 100, or 1000 times, were applied. The resultant objectives provided simple guidance that, when implemented, often achieved substantial improvements in environmental quality. However, the newer regulatory development processes advanced in the 21st century have become sophisticated in their objectives without resolving the limitations of the toxicity testing information development process inherited from the last century (McCarty 2013).
Thus, modern regulatory risk assessment and decision making are not about empirical falsification; rather, they are focused on establishing the presence or absence of case-specific causality. Counterfactual assessment/knowledge is required both for experimental testing interpretation and for extrapolation in case-specific risk decision making. To accomplish this there must be sufficient information and knowledge to address the 5 standard questions of investigative problem solving: who? what? when? where? and why (i.e., how)? Data from a multiplicity of counterfactual tests can be used to generate sufficient appropriate information and understanding regarding toxicity causality to develop general knowledge (i.e., the aforementioned mechanistic causality description) that allows estimation of outcomes for a wide variety of cases where the questions have differing answers. However, the lack of understanding regarding the exposure/dose surrogate chain (exposure–critical body residue–target) in currently available experimental toxicity data continues to confound both causality assessment and risk determination and evaluation.
Thus, the question comes back full circle to inherent/intrinsic toxicity/hazard and chemical–response relationships versus extrinsic toxicity/risk and dose–causality–response relationships. Centuries of experience and experimentation are consistent with our slightly updated statement of Paracelsus's paradigm: dose–causality–response. Toxic responses consistently depend on dose as well as on a wide array of additional factors, all of which, like dose, are extrinsic to the test substance. Moreover, the mechanisms by which chemicals produce toxicity are now understood to also be dependent on dose (Slikker et al. 2004a, 2004b). It is possible in principle that both paradigms could be supported by published data; however, we have been unable to locate any publications showing that the toxic response to a chemical is constant irrespective of extensive factors, or that the apparent dependence of toxic responses on extensive factors can be removed to reveal inherent/intrinsic hazardous/toxic responses at all doses and concentrations. That is to say, there is no evidence of a consistent, reliable chemical–response relationship.
Absent data to the contrary, it seems reasonable to conclude that toxicity is an extensive property of chemicals and explicitly not an inherent/intrinsic property. Accordingly, we recommend that use of the adjectives inherent/intrinsic be eliminated altogether when referring to hazard/toxicity. Not only do these adjectives convey and perpetuate a misleading concept about toxicity, their continued use thwarts the long-standing purpose and goal of toxicology: a mechanistic understanding of causal dose–response relationships.
CONCLUSIONS
Intrinsic/inherent properties of a chemical are those possessed by every molecule of a substance, without the need for the presence of a certain number of molecules or for the chemical to be encountered under any particular condition. However, toxicity is an extensive biochemical property that depends on the number of molecules present at the time, and under the case-specific conditions, of the measurement. Thus, a narrow focus on the intrinsic/inherent hazard concept, in either theory or practice, detracts from and confounds application of the well-known, widely understood toxicological principle that a toxic response depends on the nature of the chemical substance, the nature of the organism, the characteristics of the dose, and the case-specific environmental conditions.
This clarification reveals a problem with regulatory decision making that uses hazard-based approaches: it implies that hazard is fixed and constant. Both hazard- and risk-based assessments use the same experimental toxicity testing data, which are well known to be variable within and between chemicals, test organisms, and experimental repetitions. Thus, toxicity appears to be anything but fixed and constant. Rather than use the term hazard, it is more defensible to define experimental toxicity testing data as a standardized or standard risk determination. This means that the nature, magnitude, and probability of the toxicity determined in each experimental test is standardized to the test organism, test chemical, and test conditions employed in each iteration of the test.
A key objective of regulatory toxicology is to control or avoid the occurrence of toxicity in the natural environment. The task is to interpolate/extrapolate standardized experimental results to other species, situations, and conditions, and to produce appropriate site‐specific risk determinations. A detailed understanding of the influences of the differing conditions on toxicity‐modifying factors is necessary; however, a sufficient knowledge base is not currently available. This may be related to misunderstanding regarding the objective of toxicity testing. The current approach is a confused combination of aspects of the empirical falsification approach for testing theory validity and the counterfactual approach for determining toxicity causality. The result is establishment of causality associations specific to the testing design and conditions rather than more broadly applicable mechanistic causality descriptions. The latter requires a collection of targeted causality associations to facilitate determination of the dominant mode/mechanism of toxic action separately from the influence of modifying factors such that the interaction of mode/mechanism and modifying factors can be interpolated/extrapolated in case‐specific ways to explain/predict toxicity causality under conditions differing from the original testing data. To do this, it will be essential to thoroughly understand the exposure/dose surrogate chain (exposure–critical body residue–target) relationships such that various dose metrics can be reliably translated to the appropriate metric for the task at hand.
Clearly, regulatory guidance and decision making related to identification and control of chemicals in the environment need to be substantially improved. The most important change is to shift from a hazard-risk paradigm to a standard risk–case-specific risk paradigm where establishing toxicity causality is a key focus for both theory and practice. Case-specific details for each dose–causality–response relationship investigation must be carefully parsed so as to build up a knowledge base of how modifying factors influence the nature and extent of toxicity in various circumstances and conditions. Although the task appears daunting, the considerable existing toxicity testing dataset could be mined for examples to provide some limited-range toxicity-modifying-factor causality prediction relationships. This, combined with improvements in the quantity and quality of information on critical body residues and mode/mechanism-specific estimation of internal toxicity target site(s), would go a long way toward improving the routine translation of standard risk estimates to specific risk estimates. The ultimate result would be a major improvement in both the quality and quantity of useful, more thoroughly understood dose–causality–response relationships, capable of providing better information for environmental regulations and associated risk decision making, with toxicity estimates that better reflect the diversity of case-specific conditions encountered in the environment.
Disclaimer
The present study did not receive any specific grant from funding agencies in the public, commercial, or not‐for‐profit sectors.
[Correction added 20 November 2020. Copyright changed after initial publication.]
Data Availability Statement
Data, associated metadata, and calculation tools are accessible from the corresponding author (lsmccarty@rogers.com).
REFERENCES
- Borgert CJ, Baker SP, Matthews KC. 2013. Potency matters: Thresholds govern endocrine activity. Regul Toxicol Pharmacol 67:83–88.
- Brooke LT, Call DJ, Geiger DL, Northcott CE, eds. 1984. Acute Toxicities of Organic Chemicals to Fathead Minnows (Pimephales promelas), Vol 1. University of Wisconsin–Superior, Superior, WI, USA.
- Deichmann WB, Henschler D, Holmstedt B, Keil G. 1986. What is there that is not poison? A study of the Third Defense by Paracelsus. Arch Toxicol 58:207–213.
- Environment Canada. 2017. Evaluating existing substances. [cited 2020 September 1]. Available from: http://www.ec.gc.ca/ese-ees/default.asp?lang=En&n=495117B5-1
- Ferguson J. 1939. The use of chemical potentials as indices of toxicity. Proc Roy Soc B-Biol Sci 127B:387–404.
- Fogels A, Sprague JB. 1977. Comparative short-term tolerance of zebrafish, flagfish, and rainbow trout to five poisons including potential reference toxicants. Water Res 2:811–817.
- Franks NP, Lieb WR. 2004. Seeing the light—Protein theories of general anesthesia. Anesthesiology 101:235–237.
- Glosbe. 2020. Inherent toxicity in English. [cited 2020 September 1]. Available from: https://glosbe.com/en/en/inherent%20toxicity
- Höfler M. 2005. Causal inference based on counterfactuals. BMC Med Res Methodol 5:28.
- Kleinzeller A. 1999. Charles Ernest Overton's concept of a cell membrane. In Deamer DW, Kleinzeller A, Famborough DM, eds, Membrane Permeability: 100 Years since Ernest Overton. Current Topics in Membranes, Vol 48. Academic, San Diego, CA, USA, pp 1–22.
- Lewis D. 1973. Causation. J Philosophy 70:556–567.
- Mackay D, McCarty LS, Arnot JA. 2014. Relationships between exposure and dose in aquatic toxicity tests for organic chemicals. Environ Toxicol Chem 33:2038–2046.
- Mackay D, McCarty LS, MacLeod M. 2001. On the validity of classifying chemicals for persistence, bioaccumulation, toxicity, and potential for long-range transport. Environ Toxicol Chem 20:1491–1498.
- McCarty LS. 2012. Model validation in aquatic toxicity testing: Implications for regulatory practice. Regul Toxicol Pharmacol 63:353–362.
- McCarty LS. 2013. Are we in the Dark Ages of environmental toxicology? Regul Toxicol Pharmacol 67:321–324.
- McCarty LS. 2015. Data quality and relevance in ecotoxicity: The undocumented influences of model assumptions and modifying factors on aquatic toxicity dose metrics. Regul Toxicol Pharmacol 73:552–561.
- McCarty LS, Arnot JA, Mackay D. 2013. Evaluation of critical body residue data for acute narcosis in aquatic organisms. Environ Toxicol Chem 32:2301–2314.
- McCarty LS, Borgert CJ, Posthuma L. 2018. The regulatory challenge of chemicals in the environment: Toxicity testing, risk assessment, and decision-making models. Regul Toxicol Pharmacol 99:289–295.
- McCarty LS, Henry JAC, Houston AH. 1978. Toxicity of cadmium to goldfish (Carassius auratus) in hard and soft water. J Fish Res Board Can 35:35–42.
- McCarty LS, Mackay D. 1993. Enhancing ecotoxicological modeling and assessment: Critical body residues and modes of toxic action. Environ Sci Technol 27:1719–1728.
- McGowan J. 1952. The physical toxicity of chemicals. II. Factors affecting physical toxicity in aqueous solutions. J Appl Chem 2:323–328.
- Merriam-Webster Dictionary. 2020. Inherent, intrinsic. [cited 2020 September 1]. Available from: https://www.merriam-webster.com/dictionary
- Meyer KM, Hemmi H. 1935. Contributions to the theory of anesthesia. Biochem Ztschr 277:39–71 (in German; as cited in Ferguson 1939).
- Mullins LJ. 1954. Some physical mechanisms in narcosis. Chem Rev 54:289–323.
- Oxford English Dictionary. 2020. Inherent. [cited 2020 September 1]. Available from: https://en.oxforddictionaries.com
- Popper K. 2002. The Logic of Scientific Discovery. Routledge, London, UK.
- Slikker W, Andersen ME, Bogdanffy MS, Bus JS, Cohen SD, Conolly RB, David RM, Doerrer NG, Dorman DC, Gaylor DW, Hattis D, Rogers JM, Setzer RW, Swenberg JA, Wallace K. 2004a. Dose-dependent transitions in mechanisms of toxicity. Toxicol Appl Pharmacol 201:203–225.
- Slikker W, Andersen ME, Bogdanffy MS, Bus JS, Cohen SD, Conolly RB, David RM, Doerrer NG, Dorman DC, Gaylor DW, Hattis D, Rogers JM, Setzer RW, Swenberg JA, Wallace K. 2004b. Dose-dependent transitions in mechanisms of toxicity: Case studies. Toxicol Appl Pharmacol 201:226–294.
- Society of Environmental Toxicology and Chemistry. 2018. Environmental risk assessment of chemicals. Technical Issue Paper. Pensacola, FL, USA.
- Sprague JB. 1969. Measurement of pollutant toxicity to fish. I. Bioassay methods for acute toxicity. Water Res 3:793–821.
- Suter GW, Norton SB, Cormier SM. 2010. The science and philosophy of a method for assessing environmental causes. Hum Ecol Risk Assess 16:19–34.
- Van der Heijden SA, Hermens JLM, Sinnige TL, Mayer P, Gilbert D, Jonker MTO. 2015. Determining high-quality critical body residues for multiple species and chemicals by applying improved experimental design and data interpretation concepts. Environ Sci Technol 49:1879–1887.
- Zeno.org. 2020. Paracelsus, Septem Defensiones: Die dritte Defension wegen des Schreibens der neuen Rezepte [The third Defension, concerning the writing of the new prescriptions] (in German). [cited 2020 September 1]. Available from: http://www.zeno.org/Philosophie/M/Paracelsus