Abstract
We propose a new method for optimal experimental design of population pharmacometric experiments based on global search methods using interval analysis; all variables and parameters are represented as intervals rather than real numbers. The evaluation of a specific design is based on multiple simulations and parameter estimations. The method requires no prior point estimates for the parameters, since the parameters can incorporate any level of uncertainty. In this respect, it is similar to robust optimal design. Representing sampling times and covariates like doses by intervals gives a direct way of optimizing with rigorous sampling and dose intervals that can be useful in clinical practice. Furthermore, the method works on underdetermined problems for which traditional methods typically fail.
Electronic supplementary material
The online version of this article (doi:10.1208/s12248-011-9291-8) contains supplementary material, which is available to authorized users.
KEY WORDS: interval analysis, optimal experimental design, set-valued methods
INTRODUCTION
Population pharmacometric experiments are used to study processes of drug absorption, distribution, elimination, and effect/side effect, and how these vary across subjects in a population (1). Such knowledge is important in order to propose relevant dosing strategies and guidelines for drugs. Proper system modeling and design of experiments are crucial in order to maximize information extraction and thereby determine model parameters as precisely as possible. Usually, dynamic mathematical models are considered, and typically, nonlinear mixed effect models (NLMEM) are applied (2,3). Here, population variation is partly explained by covariates like age, weight, and dose, and the remaining unexplained variation by random effects.
It is often desirable to use sparse sampling in late stages of population kinetic analysis studies (e.g., phases II and III of clinical trials in drug development), mainly due to limited resources and practical reasons (4). To obtain accurate and precise parameter estimates from sparse data, formulating and solving optimal experimental design problems is of importance (4–6). An optimal design problem is composed of the following main components:
A search domain, D, of possible experimental designs (sometimes referred to as design space): Naturally, this domain is dependent on the available resources and on what factors can be varied in the experimental setup. Commonly, the number of individuals with a certain sampling pattern, the specific sampling points for individuals and groups of individuals, and the choice of dose (or other covariates) for each individual are varied. All possible permutations build up the search domain.
Prior knowledge of the mathematical model and its parameters.
An objective function, fobj, that, for instance, measures the expected imprecision of the parameter estimates for a particular experimental design.
Together, these components form an optimization problem, formally defined as
$$ d^{*} = \arg\min_{d \in D} f_{obj}(d) \tag{1} $$
where d∗ is the optimal design.
In traditional methods, the structure and statistical properties of the model are mostly assumed known, while the parameters are either assumed known (local optimal design, e.g., D-optimality (7,8)) or known to the level of statistical distributions (used by robust approaches, e.g., ED-optimality (9,10)). Furthermore, in most applications the objective is to optimize some function of the Fisher information matrix (FIM; defined as the negative of the expected value, over data, of the matrix of second-order partial derivatives of the log-likelihood with respect to the parameters), e.g., to maximize the determinant of the FIM. The FIM is either calculated directly from known parameter values (e.g., D-optimality) or computed by estimating the expectation of the FIM over the known statistical distributions of the parameters (5).
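To make the FIM-based criteria concrete, the following is a minimal sketch (not from the paper) of the D-optimality criterion for a simple linear model y = b0 + b1·t with independent normal errors, where the FIM reduces to XᵀX/σ²; all function names are illustrative.

```python
def fim_linear(times, sigma2=1.0):
    """FIM for the linear model y = b0 + b1*t + e, e ~ N(0, sigma2):
    FIM = X^T X / sigma2, with design matrix rows (1, t)."""
    n = len(times)
    s1, s2 = sum(times), sum(t * t for t in times)
    return [[n / sigma2, s1 / sigma2], [s1 / sigma2, s2 / sigma2]]

def d_criterion(times):
    """D-optimality criterion: determinant of the (2x2) FIM."""
    m = fim_linear(times)
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Spreading the two samples over [0, 24] maximizes det(FIM) for this model:
assert d_criterion([0.0, 24.0]) > d_criterion([10.0, 14.0])
```

For such a linear model the optimal design places samples at the extremes of the time range, which mirrors the behavior seen later for Model I.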
An alternative approach is based on simulation and parameter estimation of designs from the search domain. Basically, for each design from D that is tried, data are simulated from the model. From this simulated data, but without information about the data-generating parameters, the parameters are estimated. One advantage of this approach is that other objective functions than those based on the FIM and its underlying assumptions are allowed. For example, the objective function can be based on measures of the estimated parameters. One disadvantage is that parameter estimation often constitutes a hard problem.
Parameter estimation methods are often based on distributional assumptions regarding the parameters and reformulated as local optimization problems. Because of nonlinearities in the model, problems with local minima are usually encountered, and good initial guesses are therefore required. Model linearization may reduce these problems but increases risk of oversimplification.
In this paper, we describe how set-valued methods can be applied to solve optimal design problems in population pharmacometric experiments. A set-valued method works on intervals instead of points on the real axis. Uncertainty is naturally represented by the width of an interval rather than by some distribution. Specifically, our method is based on simulation and set-valued parameter estimation of designs from the search domain. There are several benefits of using a global search based on set-valued methods for optimal design in population pharmacometric experiments:
The method requires no prior information in the form of point estimates for the parameters (as in local optimal design), since the parameters are represented by intervals and can incorporate any level of uncertainty. In this respect, our approach is similar to robust optimal design (10). Notably, no numerical integration, e.g., quadrature or sampling methods, is required as in traditional robust optimal design methods.
Sampling times and covariates like doses can be represented by intervals, which give a direct way of optimizing with rigorous sampling/dose intervals that can be useful in clinical practice.
General problems with parameter estimation in nonlinear models are avoided in set-valued parameter estimation. No distributional assumptions regarding parameters, nor model linearization, are required. Moreover, problems with local minima are avoided, since the method outputs parameter intervals that are consistent with data. For nonidentifiable problems, e.g., with infinitely many solutions, set-valued parameter estimation brackets all solutions, while the traditional maximum likelihood method outputs only one solution.
Naturally, there are limitations with our approach. The random effects are always estimated for each individual, and not estimated as a variance component over the population. Hence, compared to traditional methods, parameter estimation might be harder since there are more parameters to estimate. It is also the case that set-valued parameter estimation output can only be indirectly compared to the output from traditional methods, and vice versa. There are also limitations of current software and algorithms in the area. For example, set-valued parameter estimation in differential equation models is an ongoing research project. A fundamental difference to many traditional modeling and identification methods is that a probabilistic framework is not applied.
For a basic introduction to interval methods we refer to (11), and for further reading we recommend (12).
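As a flavor of what (11) and (12) cover, here is a minimal interval-arithmetic sketch (illustrative only, not the paper's implementation): operations on intervals are defined so that the result encloses every possible point result.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __add__(self, other):
        # [a,b] + [c,d] = [a+c, b+d]
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        # the product interval is spanned by the four endpoint products
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))
    def width(self):
        return self.hi - self.lo

# [1,2] * [3,4] = [3,8]: the result encloses every product x*y
x, y = Interval(1, 2), Interval(3, 4)
assert (x * y).lo == 3 and (x * y).hi == 8
```

The width of a resulting interval is a natural measure of remaining uncertainty, which is the idea behind the objective function used later in the paper.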
To illustrate and evaluate a new method, it is reasonable to start with basic models of fundamental interest. We have chosen to consider two models that were previously considered by Foracchia et al. for the introduction of the optimal design software PopED (5).
Model I
A standard one-compartment open model with parameters suggested by Al-Banna et al. for an unspecified system (13). The pharmacokinetics for the jth time point of the ith individual is given by
$$ f_{\text{Model I}}(t_{ij}) = \frac{a_i}{V_i}\, e^{-(Cl_i/V_i)\, t_{ij}} \left(1 + \epsilon_{ij}\right) \tag{2} $$
where the parameters Vi and Cli characterize the pharmacokinetics, and ai is the single, intravenous bolus dose (here assumed fixed at 450 mg). The random unexplained variability in fModel I is modeled by proportional error with є ~ N (0, σ2), where σ2 = 0.0225. For evaluation purposes, we have occasionally used є = 0, and these cases are clearly indicated in the text.
Taking into account how parameters vary in the population, the following population model is considered:
$$ Cl_i = \theta_1 + \eta_{1,i} \tag{3} $$

$$ V_i = \theta_2 + \eta_{2,i} \tag{4} $$

$$ \eta_{q,i} \sim N\!\left(0, \omega_q^2\right) \tag{5} $$
where θ are fixed effect parameters and η are random effect parameters following a normal distribution. In this study, we have used the same parameters as in (5): θ = (3.0, 30) and the variances ω2 = (0.25, 25.0). Figure 1a depicts a summary of simulated concentration time courses: the temporal median behavior together with the 5% and 95% percentiles of the model for sampled random effects and residual variances.
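For illustration only, the percentile summary in Fig. 1a can be reproduced along these lines; the standard one-compartment bolus form and the mapping θ1 → Cl, θ2 → V are assumptions made for this sketch.

```python
import math
import random
import statistics

rng = random.Random(0)
theta = (3.0, 30.0)        # assumed mapping: theta1 -> Cl, theta2 -> V
omega2 = (0.25, 25.0)      # random effect variances from the text
dose, sigma2 = 450.0, 0.0225

def simulate_individual(t):
    """One concentration measurement at time t for a sampled individual,
    with additive random effects and proportional residual error."""
    cl = theta[0] + rng.gauss(0.0, math.sqrt(omega2[0]))
    v = theta[1] + rng.gauss(0.0, math.sqrt(omega2[1]))
    eps = rng.gauss(0.0, math.sqrt(sigma2))
    return (dose / v) * math.exp(-(cl / v) * t) * (1.0 + eps)

# median concentration at t = 0 over many sampled individuals (cf. Fig. 1a)
c0 = statistics.median(simulate_individual(0.0) for _ in range(5000))
```

Repeating this over a time grid and taking the 5% and 95% percentiles per time point yields the envelope curves shown in the figure.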
Fig. 1.
Temporal behavior of a Model I and b Model II based on 5,000 sampled realizations of the random effects and residual variance. The blue curve indicates the median and the lower and upper red dotted curves represent the 5% and 95% percentiles, respectively
Model II
A one-compartment model describing the absorption and distribution of the drug theophylline (5,14). The substance kinetics for the ith individual is given by
$$ f_{\text{Model II}}(t_{ij}) = \frac{a_i\, k_{a,i}\, k_{e,i}}{Cl_i \left(k_{a,i} - k_{e,i}\right)} \left( e^{-k_{e,i}\, t_{ij}} - e^{-k_{a,i}\, t_{ij}} \right) + \epsilon_{ij} \tag{6} $$
where the parameters ka,i, ke,i, and Cli characterize the pharmacokinetics for the individual, ai is the dose and the additive random unexplained variability is modeled by є ~ N (0, σ2), where σ2 = 0.419. Occasionally, we have also used є = 0.
The following population model is considered
$$ k_{a,i} = \theta_1\, e^{\eta_{1,i}} \tag{7} $$

$$ k_{e,i} = \theta_2\, e^{\eta_{2,i}} \tag{8} $$

$$ Cl_i = \theta_3\, e^{\eta_{3,i}} \tag{9} $$

$$ \eta_{q,i} \sim N\!\left(0, \omega_q^2\right) \tag{10} $$
where θ are fixed effect parameters and eη are multiplicative random effect parameters following a log-normal distribution. Also, for Model II, we have used the same parameters as in (5), that is, θ = (2.71, 0.0763, 0.0373) and the variances ω2 = (0.784, 0.0185, 0.0238). A summary of simulated concentration time courses is given in Fig. 1b.
Model II exhibits two complicating factors compared to Model I. First, the model is undefined when ka = ke (a singularity; division by zero). Second, the parameters ka and ke can be interchanged and are hence not uniquely identifiable (i.e., swapping ka and ke does not change the model behavior). Supplement S3 details how these issues are handled.
This paper is organized as follows. In the next section, “PARAMETER ESTIMATION,” we describe how set-valued methods can be used to estimate the parameters of NLMEMs, and we evaluate the parameter estimation method on the two test models. In the following section, “EXPERIMENTAL DESIGN,” we consider experimental design using the parameter estimation as a subroutine. The paper ends with a “DISCUSSION” and “CONCLUSIONS.”
PARAMETER ESTIMATION
The model parameters are estimated by use of interval methods, constraint propagation, and branch and bound procedures (15–20). For a basic introduction to interval methods and for illustrative examples of constraint propagation we specifically refer to Supplement S1 and (18). The input consists of initial search regions for each parameter, measurement data (when the parameter estimation routine is used in the context of experimental design, this data is simulated), and the model function. In contrast to traditional parameter estimation methods, there is no objective function based on distributional assumptions (like maximum likelihood), and no point estimate is given as output. Instead, the goal is to discard subsets of parameter values that are proved to be inconsistent with the data. The union of the remaining subsets forms a covering of the solution set and is the output of the parameter estimation. Basically, the branch and bound algorithm is used to select which subsets of the parameter ranges to consider next.
Each data point is represented by an interval. For noisy data, we cannot generally expect to find any consistent parameters. To compensate for the inconsistency caused by noise, the widths of the data intervals are increased (referred to as data expansion or inflation). Note that the proposed approach estimates the random effect parameters of each individual, not their distribution over the set of sampled individuals.
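A minimal sketch of such a discard-and-bisect scheme, for a deliberately simple one-parameter model y = p·t (illustrative only; the paper's implementation also employs constraint propagation and works on richer models):

```python
def image(p_lo, p_hi, t):
    """Interval extension of the toy model y = p * t (for t >= 0)."""
    return p_lo * t, p_hi * t

def set_valued_estimate(data, p0=(0.0, 10.0), tol=1e-3):
    """Branch and bound: bisect the parameter range and discard boxes whose
    image fails to intersect some (inflated) data interval; the remaining
    small boxes form a covering of the set of consistent parameters."""
    queue, covering = [p0], []
    while queue:
        lo, hi = queue.pop()
        if any(image(lo, hi, t)[1] < y_lo or image(lo, hi, t)[0] > y_hi
               for t, (y_lo, y_hi) in data):
            continue                      # proved inconsistent with the data
        if hi - lo < tol:
            covering.append((lo, hi))     # small consistent box: keep it
        else:
            mid = 0.5 * (lo + hi)
            queue += [(lo, mid), (mid, hi)]
    return covering

# data intervals inflated around measurements of y = 2 * t
data = [(1.0, (1.9, 2.1)), (2.0, (3.8, 4.2))]
boxes = set_valued_estimate(data)
# the union of the kept boxes brackets the true parameter p = 2
assert any(lo <= 2.0 <= hi for lo, hi in boxes)
```

No starting point or distributional assumption is needed: a box is removed only when it is proved inconsistent, so all consistent parameter values survive in the covering.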
For NLMEMs, we have implemented a general framework allowing estimation of both the fixed effects θ and the random effects η. Since we make no distributional assumptions, it is natural to consider the following two basic cases:
Independent estimation. Both θ and η are estimated simultaneously for several individuals. The output consists of one parameter range for each θ, and one range for each η and each individual. Notably, for each q, the parameters θq and ηq are often highly correlated. For the population models applied in Model I and Model II, and with no distributional assumptions on η, several solutions are possible. Therefore, one can generally expect wide parameter ranges in this case. This motivates the second case.
Constrained estimation. The θs are assumed known, and we fix the search ranges for these to thin intervals before estimating η. The output consists of one parameter range for each ηq and each individual. Hence, in practice, the problem reduces to estimation of the individual parameters (i.e., Vi and Cli for Model I, and ka,i, ke,i, and Cli for Model II).
An Example
To exemplify the method for parameter estimation, consider a simplistic model with one random effect, η, that is estimated for five individuals. The output from the interval-based parameter estimation method is one parameter range for each individual, as illustrated in Fig. 2a, and, for independent estimation, additionally one parameter range for θ. For a model with several estimated parameters, the output consists of a box, i.e., a vector of intervals. In general, the output can be composed of several boxes.
Fig. 2.
Example of parameter estimation from data of five individuals. The fixed effect parameter is assumed fixed, e.g., θ = 0, and no distributional assumption regarding the random effect parameter is made. a The random effect parameter is estimated for each of the individuals, and each estimate is given as a parameter range. The ranges are depicted in relation to the feasible search range [−3, 3]. b The distribution of the parameter is approximated by a histogram obtained by sampling the ranges in a (here assuming equal probability of each individual, and uniform distribution within each interval). The gray vertical line indicates the estimated mean of the distribution, which can be used to estimate θ given a population model
The output can subsequently be processed in various ways, often by taking distributional assumptions into account. For instance, one can sample from the five parameter ranges by assuming equal probability of each individual and uniform distribution within each interval. See Fig. 2b for a resulting histogram using 500 such samples. Note that the width of a range does not influence its probability of being sampled, since each individual is sampled with the same probability. The sampled histogram may indicate the underlying distribution of the random effect. We note, however, that we cannot generally expect the histogram to exactly resemble the true underlying distribution. This is natural, since we deal with intervals consistent with data and not probability distributions.
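The sampling procedure just described can be sketched as follows (the parameter ranges below are hypothetical stand-ins for an estimation result):

```python
import random

# hypothetical output of a set-valued estimation: one eta range per individual
ranges = [(-0.8, -0.5), (-0.2, 0.1), (0.0, 0.3), (0.4, 0.6), (0.9, 1.3)]

def sample_eta(rng, ranges):
    """Pick an individual uniformly (so range width does not affect its
    probability), then sample uniformly within that individual's interval."""
    lo, hi = rng.choice(ranges)
    return rng.uniform(lo, hi)

rng = random.Random(0)
samples = [sample_eta(rng, ranges) for _ in range(500)]
mean = sum(samples) / len(samples)   # cf. the gray vertical line in Fig. 2b
```

A histogram of `samples` corresponds to Fig. 2b; the sample mean is the quantity that can be used to correct an initially fixed θ under constrained estimation.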
Comparing Results to Traditional Methods
It is not straightforward to compare the parameter estimates obtained by our set-valued approach (in the form of parameter ranges as in Fig. 2a) to estimates from traditional methods like NONMEM (14). The main reasons for this are that the set-valued approach uses no distributional assumptions, no initial starting point, and that the output form differs from traditional methods. The set-valued approach finds a solution set that contains all solutions, while a traditional method finds an approximation to one solution. Therefore, three main processing steps are required to allow such comparisons:
Sampling the parameter boxes given certain distributional assumptions as exemplified above in the subsection “An Example”.
Estimating the variance (ω2) of the random effect by the variance of the sampled distribution (like in Fig. 2b). This step requires a population model (e.g., k = θ + η) with distributional assumptions (e.g., η ~ N(0,Ω)). Potential residual variability may be included in the ω2 estimates. For constrained estimation, the initially fixed θ can be corrected by the mean of the distribution (indicated by a gray vertical line in Fig. 2b).
For problems with the same number of data points as (or fewer than) the number of parameters, flexible models like Model I and II can fit data perfectly, and there is hence no residual error (the model goes exactly through all data points). Therefore, the random effect variance (ω2) must be separated from the random unexplained variability (since the latter is "built in" to the estimates of ω2). An approximation method for this is outlined in Supplement S2.
Naturally, these three steps imply loss of both accuracy and precision as well as information present in the parameter boxes. It is therefore hard to judge whether set-valued parameter estimation leads to a gain in parameter precision in general. Probably, the answer is problem dependent. Still, to get a rough indication of the performance of our method we present several comparisons to NONMEM. Here, NONMEM is run under ideal conditions (correct form of the distributions, and initial starting points equal to the known true values) and is therefore expected to perform better than our approach.
Examples of estimating the parameters of Model I and II, both from perfect data and data with random unexplained variability, are given in Supplement S2 and S3.
Nonidentifiability
When the number of data points is less than the number of parameters, there may be infinitely many solutions. Naturally, using a set-valued parameter estimation method that outputs all solutions, and not only one single solution, is useful for such problems. Figure 3 illustrates the case when only two sampling points are available to estimate the three random effect parameters (η1, η2, and η3) of Model II using constrained estimation. The output consists of a list of three-dimensional boxes covering the solution curve. We note that a set of solutions implies nonidentifiability. By allowing arbitrarily small boxes, the covering of the solution would approach the theoretical curve. Given sufficient data, only a single point and no curve would be obtained.
Fig. 3.
Coverings of the set of consistent parameters for Model II using only two data points. Search ranges: η 1 ∈ [−5, 5], η 2 ∈ [−3, 3], and η 3 ∈ [−3, 3]. The θs are fixed (constrained estimation). a Parameters used to generate data are k a = 1.93, k e = 0.0941, Cl = 0.0308, i.e., η = (−0.342, 0.210, −0.193) indicated by the arrow. Interchanging k a and k e, i.e., η = (−3.36, 3.23, −0.0941) results in an identical fit, but the solution lies outside the parameter search ranges. b Parameters used to generate data are k a = 0.398, k e = 0.0840, Cl = 0.0336, i.e., η = (−1.92, 0.067, −0.105) indicated by an arrow. Interchanging k a and k e, i.e., η = (−3.47, 1.65, −0.105) results in an identical fit, also indicated by an arrow
We generally note that the shape of the covering of the solution can be used to analyze potential correlation between the parameters and that estimation in these situations would be impossible using traditional individual estimation methods.
In the context of experimental design, the discussion of nonidentifiability is relevant since there are experimental design problems with fewer sampling points than parameters, especially when prior information is substantial. For example, when screening a series of chemical compounds with highly similar properties, one can think of an underdetermined experimental design. The model is fitted to the sparsely generated data of the specific compound combined with prior information, e.g., data from the whole series.
EXPERIMENTAL DESIGN
The usefulness of a population pharmacokinetic experiment is dependent on the precision at which the parameters can be estimated from the resulting data. In essence, experimental design is used to suggest the most informative way of designing the experimental setup, given prior knowledge of model structure and parameters.
To properly define experimental design problems, we use the following notation: The “sampling pattern” gives the number of sampling points (ni) for each individual (i). The “sampling schedule” represents the time points at which samples will be taken. A “group” contains a number of individuals with the same covariates, sampling pattern, and sampling schedule. In general, the search domain of possible designs can be defined by the following design factors: the number of groups in the study, the number of subjects in each group, and the number of samples and sampling schedule for each subject in a group. In addition, covariates (like dose) can be optimized for each group.
The software code used to generate the presented optimal design data is available at http://www2.math.uu.se/~warwick/CAPA.
Search Domain
The examples in this study are taken from the literature and focus on the case where the sampling schedule should be optimized, given a fixed number of groups, a fixed number of subjects in each group, as well as a fixed number of samples for each subject in a group.
The sampling schedule is defined by a discrete set of time points. In optimal design, prior information is assumed about the model structure (i.e., the equations), and it is therefore natural to take advantage of this information and use a grid that samples tighter in time regions with peaks and/or rapidly changing trajectories. Both Model I and Model II approach zero (fi,j(ti,j) → 0) for large t, and since Model II exhibits a peak relatively close to t = 0 (Fig. 1), a nonuniform search domain is reasonable.
Therefore, for the results presented in this paper, the sampling schedule is defined by a discrete set of Ngrid time points in [0, Tmax], nonuniformly distributed as
$$ t_k = T_{max} \left( \frac{k}{N_{grid} - 1} \right)^{2}, \qquad k = 0, 1, \ldots, N_{grid} - 1 \tag{11} $$
In this way, the density of potential sample points is highest close to t = 0. Note that there is no limitation on our methodology with respect to choice of grid points; any distribution including a uniformly spaced grid works. Also, note that by increasing Ngrid without bound, the search domain approaches the continuous case. However, in practice, Ngrid is chosen small enough to allow an exhaustive search of the domain, using reasonable computation effort.
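A grid of this form can be generated as follows; the quadratic spacing is an illustrative assumption for this sketch, since any grid that is denser near t = 0 fits the description:

```python
def nonuniform_grid(t_max, n_grid, power=2):
    """Candidate sampling times in [0, t_max], denser near t = 0.
    The quadratic spacing (power=2) is an illustrative choice."""
    return [t_max * (k / (n_grid - 1)) ** power for k in range(n_grid)]

grid = nonuniform_grid(24.0, 17)
assert grid[0] == 0.0 and grid[-1] == 24.0
# consecutive spacings grow with t, so point density is highest near t = 0
gaps = [b - a for a, b in zip(grid, grid[1:])]
assert all(g2 > g1 for g1, g2 in zip(gaps, gaps[1:]))
```

Setting `power=1` recovers a uniformly spaced grid, which, as noted above, the methodology also accepts.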
Objective Function
The choice of objective function influences the solution of an optimal design problem, and one should carefully choose a function representing the desired information in a certain context. Basically, we want to minimize a function measuring the level of unidentifiability of the parameter estimates. Therefore, it is natural to consider an objective function based on the widths of the estimated parameter intervals, and in the remainder of this paper we will consider the sum of widths of the intervals. To account for different scales (e.g., parameters with different magnitudes or large parameter search ranges), we normalize each width with respect to the midpoint of the interval and obtain the following measure:
$$ f_{width} = \sum_{l=1}^{N_p} \frac{\operatorname{width}([p_l])}{\left| \operatorname{mid}([p_l]) \right|} \tag{12} $$
where [pl] is the estimated interval for parameter pl, and Np is the number of parameters to estimate in the model. In order to avoid division by zero in Eq. 12, we let parameters with search ranges covering zero be normalized by the maximum of the absolute values of the endpoints of the search range (referred to as the magnitude), instead of by the midpoint.
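A direct transcription of this measure (function and variable names are ours):

```python
def normalized_width(est, search):
    """Width of the estimated interval, normalized by the midpoint of `est`,
    or by the magnitude of the *search* range when that range covers zero."""
    lo, hi = est
    s_lo, s_hi = search
    if s_lo <= 0.0 <= s_hi:
        scale = max(abs(s_lo), abs(s_hi))   # "magnitude" of the search range
    else:
        scale = abs(0.5 * (lo + hi))        # midpoint of the estimate
    return (hi - lo) / scale

def f_width(estimates, searches):
    """Sum of normalized widths over all Np estimated parameters (Eq. 12)."""
    return sum(normalized_width(e, s) for e, s in zip(estimates, searches))

# two parameters: one strictly positive, one with a search range covering zero
val = f_width([(2.0, 3.0), (-0.5, 0.5)], [(1.0, 5.0), (-2.0, 2.0)])
assert abs(val - (1.0 / 2.5 + 1.0 / 2.0)) < 1e-12
```

Smaller values indicate tighter, and hence more identifiable, parameter estimates.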
The output of the parameter estimation procedure may consist of several boxes, especially when the problem is not identifiable (Fig. 3). Naturally, fwidth is then calculated for each box i, and the total objective function is defined as
$$ f_{obj} = \sum_{i} f_{width,i} \tag{13} $$
We generally note that many variants of and extensions to this objective function are possible.
Selection of Optimal Design
The basic principle of the search method is as follows. We iteratively evaluate designs from the search domain of possible designs. Each design is evaluated in the following way:
1. Repeat several times:
   (a) Simulate data from the current design with sampled random effects.
   (b) Compute the set of consistent parameters (as described in section "PARAMETER ESTIMATION").
   (c) Evaluate the objective function, fobj.
2. Output the average of all objective values, f̄obj.
The method monitors f̄obj for each design and finally outputs the best design, i.e., the design corresponding to the smallest f̄obj. Notably, for certain problems, stability might improve by taking the median instead of the mean in step 2 above.
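The evaluation loop above can be sketched as follows; `simulate`, `estimate`, and `f_obj` are hypothetical stand-ins for the components described earlier:

```python
import random
from statistics import mean, median

def evaluate_design(design, simulate, estimate, f_obj, n_rep=60,
                    use_median=False):
    """Score one candidate design: repeatedly simulate data with sampled
    random effects (step 1a), run set-valued estimation (step 1b), evaluate
    the objective (step 1c), and return the mean or median (step 2)."""
    values = []
    for _ in range(n_rep):
        boxes = estimate(simulate(design))
        values.append(f_obj(boxes))
    return median(values) if use_median else mean(values)

# hypothetical stand-ins, just to exercise the loop
rng = random.Random(0)
simulate = lambda d: [(t, 2.0 * t + rng.gauss(0.0, 0.1)) for t in d]
estimate = lambda data: [(1.9, 2.1)]                  # fixed toy covering
f_obj = lambda boxes: sum(hi - lo for lo, hi in boxes)
score = evaluate_design([0.0, 24.0], simulate, estimate, f_obj)
```

The design search then simply keeps the design with the smallest `score`.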
The optimal design problem is hard and to achieve computational efficiency it is natural to use heuristic algorithms. The specific approach taken here is to divide the design search into two phases: one global search over the entire design space of sampling points and one local search that fine-tunes the best solution obtained in the global phase. The basic idea is to let each design evaluation in the global search be computationally fast by allowing a reduced accuracy compared to a corresponding evaluation in the local search. The best design found by the global search is used as initial starting point for the subsequent detailed local search which is characterized by high accuracy but also high computational cost. This approach has been successfully applied previously on similar optimization problems in different fields (21,22). As with all heuristics, a global optimum cannot be guaranteed, and there is a trade-off between the quality of the solution and the computational effort.
In the global search, the following approximation is applied: we decompose the problem into separate groups (each group has the same covariates), and optimize the sampling points of each group separately. Decomposition is an approximation technique that has been used in heuristic search algorithms in order to increase computational speed and allow a more global search (18,23). The group structure is part of the problem specification, and it is natural to use this information when decomposing the problem.
In the local search, we take the best solution obtained from the global search as input. Instead of decomposing the problem, we now estimate all individuals in all groups simultaneously. To search the discrete space we iteratively evaluate the objective functions on neighboring time points, and make a greedy step to the position that improves the objective function most. Termination occurs when no better point is found. We note that the proposed local search method can be replaced by any other local search method, e.g., a modified Fedorov exchange algorithm (24).
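A one-dimensional sketch of the greedy step (the actual local search moves over all sampling points of all groups simultaneously, and `score` would wrap the full design evaluation):

```python
def greedy_search(grid, start_idx, score):
    """Greedy descent over a discrete grid of sampling times: step to the
    neighboring time point that improves the objective most; terminate
    when no neighbor is better."""
    idx = start_idx
    best = score(grid[idx])
    while True:
        neighbors = [j for j in (idx - 1, idx + 1) if 0 <= j < len(grid)]
        cand = min(neighbors, key=lambda j: score(grid[j]))
        if score(grid[cand]) < best:
            idx, best = cand, score(grid[cand])
        else:
            return grid[idx]

grid = [0.0, 1.5, 6.0, 13.5, 24.0]
# toy objective with its minimum at t = 6.0
result = greedy_search(grid, 0, lambda t: (t - 6.0) ** 2)
assert result == 6.0
```

Like any greedy method, this only guarantees a local optimum, which is why it is seeded with the best design from the global phase.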
The number of repetitions of step 1 in the algorithm required for a stable solution is problem dependent. For our test models, we have typically used 60 repetitions.
At the simulation stage (step 1a), the parameters do not need to be known, but can be defined as intervals. This corresponds to the situation when prior information about parameters is limited (e.g., robust optimal design). For both Model I and II we let the uncertainty be proportional to the variance of the respective random effect parameter, ωi2, and represent the prior knowledge of θi as an interval, referred to as [Θi] and defined as

$$ [\Theta_i] = \left[ \theta_i - \alpha\,\omega_i^2,\; \theta_i + \alpha\,\omega_i^2 \right] \tag{14} $$
where α is a constant. The case when α = 0 (θi is known) corresponds to local optimal design, e.g., D-optimality. In this paper, we have used independent estimation when α > 0 and constrained estimation when α = 0. However, any combination is feasible within our system.
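Assuming the prior interval is centered on θi with half-width proportional to the random effect variance, as the text states, the construction reads:

```python
def theta_prior(theta, omega2, alpha):
    """Prior interval for a fixed effect: half-width proportional (factor
    alpha) to the variance of the corresponding random effect. The exact
    form is assumed for this sketch."""
    return (theta - alpha * omega2, theta + alpha * omega2)

# alpha = 0 collapses to a point (local optimal design, e.g., D-optimality)
assert theta_prior(3.0, 0.25, 0.0) == (3.0, 3.0)
lo, hi = theta_prior(3.0, 0.25, 0.15)
```

Larger α widens the prior intervals and thus moves the problem further toward robust optimal design.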
Optimal Design for Model I
We first considered Model I and an optimal design problem for one group with 33 individuals. There are two sampling times per individual (ni = 2) to be optimized (0 ≤ t1 ≤ t2 ≤ 24), and uncertainty in the fixed effect parameters is given by α = 0. The search domain is defined by Ngrid = 17, and from Eq. 11 we obtain the following time points:
![]() |
The initial parameter ranges were defined as η1 ∈ [−2, 2] and η2 ∈ [−20, 20], and for independent estimation, θ1 ∈ [1, 5] and θ2 ∈ [10, 50].
Running the optimal design algorithm, the optimum of the objective function was obtained for t = 0.0 and t = 24. These sampling points are known to be the best ones and are also found in (5). A typical result from the parameter estimation of the optimal design is given in Fig. 4.
Fig. 4.
Estimated parameter ranges for η1, η2, and a summary of simulated concentration time courses obtained from a typical parameter estimation in the best optimal design (t1 = 0 and t2 = 24) for Model I, given α = 0.0 and the procedure described in subsection "Comparing Results to Traditional Methods." a Each individual is represented by one box. All boxes are small, almost points, and they have been slightly enlarged for visibility. From these data, parameters are estimated as: θ1 = 3.06, θ2 = 29.3, ω1² = 0.79, ω2² = 9.0, and variance of є = 0.0092. b Model I plotted for various realizations of the random effects sampled from the distribution in a (each individual is equally likely to be sampled, and each interval is sampled uniformly). The blue curve indicates the median, and the lower and upper red curves represent the 25% and 75% percentiles, respectively. Five thousand realizations were considered. For comparison, corresponding curves when sampling from the true distributions are indicated by dotted lines
Table I presents data for the set-valued approach and for NONMEM when repeating parameter estimation several times with various sampled data from the optimal design. In comparison to NONMEM, the fixed effects are adequately estimated by the set-valued approach, while the random effect variances and the random unexplained variability are less accurate. The reasons for these differences are those discussed in the subsection “Comparing Results to Traditional Methods.”
Table I.
Parameter Estimates Obtained for Model I Using the Methodology Described in Subsection “Comparing Results to Traditional Methods”
| Parameters | True value | Set-valued estimate | Set-valued 95% C.I. | NONMEM estimate | NONMEM 95% C.I. |
|---|---|---|---|---|---|
| θ1 | 3.00 | 2.94 | [2.63, 3.38] | 3.05 | [2.84, 3.26] |
| ω1² | 0.25 | 0.23 | [0.043, 0.53] | 0.254 | [0.104, 0.462] |
| θ2 | 30.0 | 30.7 | [28.5, 32.8] | 30.3 | [28.0, 32.7] |
| ω2² | 25.0 | 28.7 | [2.18, 70.0] | 25.4 | [3.02, 54.0] |
| σ² | 0.0225 | 0.012 | [0.0027, 0.028] | 0.020 | [2.2e−06, 0.042] |
Standard bootstrap percentile confidence intervals were calculated by resampling from the parameter ranges obtained (25). The 2.5th and 97.5th percentiles of the empirical bootstrap distribution (200 replications) formed the limits of the 95% bootstrap percentile confidence interval. Results from NONMEM (first order conditional estimation with interaction) are included for comparison
C.I. confidence interval
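A generic percentile-bootstrap sketch of the procedure in the table footnote (simplified: here we resample repeated point estimates, whereas the paper resamples from the obtained parameter ranges):

```python
import random

def bootstrap_ci(estimates, n_boot=200, level=0.95, seed=0):
    """Percentile bootstrap CI: resample the estimates with replacement,
    compute the mean of each resample, and take the 2.5th and 97.5th
    percentiles of the resulting empirical distribution."""
    rng = random.Random(seed)
    stats = sorted(
        sum(rng.choices(estimates, k=len(estimates))) / len(estimates)
        for _ in range(n_boot))
    lo_i = int((1 - level) / 2 * (n_boot - 1))
    hi_i = int((1 + level) / 2 * (n_boot - 1))
    return stats[lo_i], stats[hi_i]

# hypothetical repeated estimates of theta1 from resampled designs
lo, hi = bootstrap_ci([2.9, 3.1, 3.0, 2.8, 3.2, 3.05, 2.95])
```

With 200 replications this reproduces the shape of the intervals reported in Table I: the interval brackets the central tendency of the repeated estimates.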
For this small example with two sampling points, we can visualize the dependence of the objective function, fobj, on the sampling points. Interestingly, we can also compare such plots to the corresponding plots obtained by standard optimal design calculations, here computed in PopED. Figure 5a, b shows results from PopED with traditional objective functions, D- and A-optimality, respectively. Figure 5c depicts results from PopED using an alternative objective function inspired by the objective function used by the set-valued method (CV-optimality; see figure legend for its definition). Finally, Fig. 5d presents data for the proposed set-valued approach. For all rows in Fig. 5, the left column corresponds to nonrobust optimal design, and the middle and right columns present robust optimal designs with different αs (i.e., ED-optimality, robust A-optimality (26), robust CV-optimality, and robust set-valued optimality, respectively).
Fig. 5.

Experimental design for Model I with two sampling points per individual and in total 33 individuals. The plots show the dependence of the objective function on the two sampling times; the higher the value, the better the design. a–c Results obtained by PopED. a D-optimality, objective function: det(FIM). b A-optimality, objective function: 1/tr(FIM−1). c We hypothesized that the coefficient of variation (CV; standard deviation divided by mean) of a traditional parameter estimate roughly correlates with the relative interval width of the corresponding set-valued parameter estimate, and hence defined the objective function as 1/sum(CVi), where CVi refers to the coefficient of variation of parameter i. We refer to this objective function as CV-optimality. d Results obtained by our approach. The negative log error is shown for easy comparison to a–c and best resolution. For all methods, results obtained for different levels of uncertainty in θ1 and θ2 are presented: α = 0% (left column), α = 15% (middle column), and α = 30% (right column)
We generally note that the choice of objective function influences the result. While the set-valued approach is fundamentally different from previously used optimal design approaches, it shares similarities with the CV and robust CV measures, and some support for this is also given in Fig. 5 for α = 0. However, for higher levels of uncertainty in θ1 and θ2, minor peaks can be observed for a design with the first sampling point at t = 0 and the second sampling point slightly greater than t = 0, similar to, but less pronounced than, robust A-optimality (Fig. 5b). Overall, the data confirm the importance of properly choosing the objective function for the question at hand, and indicate that results from the set-valued approach do not significantly deviate from those of traditional methods for this relatively simple and well-studied problem.
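The objective functions compared in Fig. 5 can be sketched directly from their definitions in the figure legend. The Fisher information matrix and the parameter summaries are assumed inputs here, computed elsewhere (e.g., by PopED); this is not the authors' code:

```python
import numpy as np

def d_optimality(fim):
    # D-optimality (Fig. 5a): determinant of the Fisher information matrix
    return np.linalg.det(fim)

def a_optimality(fim):
    # A-optimality (Fig. 5b): 1 / trace(FIM^{-1})
    return 1.0 / np.trace(np.linalg.inv(fim))

def cv_optimality(estimates, std_devs):
    # CV-optimality (Fig. 5c): 1 / sum_i CV_i, with CV_i = sd_i / mean_i
    cv = np.asarray(std_devs) / np.asarray(estimates)
    return 1.0 / cv.sum()
```

For all three criteria, a higher value corresponds to a better design, matching the orientation of the plots in Fig. 5.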
We next repeated some of the optimal design cases studied in (5,27):
1. One group of 33 subjects with three sampling times per individual.
2. Two groups of 25 subjects with two sampling times per individual.
3. Three groups of 15, 10, and 10 subjects, respectively, with two, three, and four sampling times per individual, respectively.
In these runs, the search domain was defined by Ngrid = 7, resulting in the following time points according to Eq. 11:

t ∈ {0, 0.67, 2.7, 6.0, 11, 17, 24} h
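Equation 11 is not reproduced in this section, but the listed grid is consistent with quadratic spacing between 0 and Tmax. Under that assumption, the search domain can be generated as follows:

```python
import numpy as np

def sampling_grid(n_grid, t_max):
    # Quadratic spacing t_k = t_max * (k / (n_grid - 1))**2, k = 0..n_grid-1.
    # This form is an assumption, consistent with the listed time points.
    k = np.arange(n_grid)
    return t_max * (k / (n_grid - 1)) ** 2
```

With Ngrid = 7 and Tmax = 24 h this reproduces 0, 0.67, 2.7, 6.0, 11, 17, and 24 h to two significant figures.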
The results for different uncertainties in the prior knowledge of the fixed effects, as well as results obtained by PopED (using D and ED optimal design and discrete sampling points) are presented in the upper part of Table II. For all these problems, we observe only small differences in the optimal designs proposed by PopED and our approach with α = 0 and α = 0.3. Hence, for this set of three test problems, the proposed method gives reasonable results for nonrobust and robust optimal design.
Table II.
Optimal Designs Obtained by Our Approach
Comparison to PopED and varying α. Point doses and point sampling times.

| Problem | Group | Our approach, α = 0.0 | PopED, α = 0.0 | Our approach, α = 0.3 | PopED, α = 0.3 |
|---|---|---|---|---|---|
| 1 | gr. 1 | 0, 0.67, 24 | 0, 0, 24 | 0, 0.67, 24 | 0, 11, 24 |
| 2 | gr. 1 | 0.67, 24 | 0, 24 | 0, 24 | 0, 24 |
|   | gr. 2 | 0, 24 | 0, 24 | 0, 24 | 0, 24 |
| 3 | gr. 1 | 0, 24 | 0, 24 | 0, 24 | 0, 24 |
|   | gr. 2 | 0, 2.7, 24 | 0, 0, 24 | 0.0, 0.67, 24 | 0, 11, 24 |
|   | gr. 3 | 0, 0.67, 11, 24 | 0, 0, 24, 24 | 0.0, 0.67, 2.7, 24 | 0, 0, 11, 24 |

Our approach with constant α = 0.0. Interval doses or interval sampling times.

| Problem | Group | Dose ± 25% | Dose ± 50% | Δt = 1 h | Δt = 3 h |
|---|---|---|---|---|---|
| 1 | gr. 1 | 0, 0.67, 2.7 | 0, 0.67, 2.7 | 0, 11, 24 | 0, 24, 24 |
| 2 | gr. 1 | 6.0, 24 | 11, 24 | 0.67, 24 | 0.67, 24 |
|   | gr. 2 | 6.0, 24 | 11, 24 | 0, 17 | 0.67, 24 |
| 3 | gr. 1 | 2.7, 24 | 0, 0.67 | 0, 24 | 0, 24 |
|   | gr. 2 | 0, 0.67, 24 | 0, 0.67, 0.67 | 0, 6.0, 24 | 0, 17, 24 |
|   | gr. 3 | 0.67, 0.67, 6.0, 24 | 0, 2.7, 11, 24 | 0.67, 0.67, 6, 24 | 0.67, 17, 17, 17 |
The upper table gives a comparison to PopED for varying uncertainty α in the prior knowledge of the parameters. Note that α = 0.0 corresponds to local optimal design (D-optimality). The lower table considers interval doses or interval sampling times. For interval doses, the original dose of 450 mg is, for instance, changed to the interval [337.5, 562.5] when the dose interval is ±25%. For interval sampling times, table data give the lower bound for each time interval. The upper bound is obtained by adding Δt
One major benefit of our set-valued method is that covariates can easily be represented as intervals in the design search. To exemplify, we first consider doses as intervals. A dose interval design search addresses the following question: given a specified uncertainty in the dose, what is the optimal design? Uncertainty in the dose can be due to measurement error in drug or body weight, nonmodeled early degradation of the drug, etc. Furthermore, when designing a dose-finding study, our set-valued optimal design method can calculate one robust design for all doses in a prespecified interval. To illustrate the use of our method for doses as intervals, we repeated problems 1–3 for α = 0 using dose intervals of ±25% and ±50%. Data presented in the lower part of Table II indicate that a dose interval alters the proposed optimal design and that the difference compared to a fixed dose increases with increased uncertainty in the dose. For these problems, the data indicate earlier sampling times when dose uncertainty increases.
In a second example, we turn to sampling times as intervals. The interpretation of such a design search is: given that sampling occurs within a Δt window, what is the optimal design? We repeated problems 1–3 for α = 0 using two different choices of sampling time intervals: Δt = 1 h and Δt = 3 h. Based on the previously defined search domain, each time point t was replaced by the interval [t, t + Δt]. Data in the lower part of Table II indicate that the 1-h window only results in minor changes to the optimal design, while the 3-h window affects the proposed design to a larger extent for problems 1 and 3.
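The interval variants of the design variables used above amount to simple lower/upper bound pairs. A minimal sketch, where representing intervals as tuples is an illustrative choice:

```python
def time_windows(grid, dt):
    # Sampling windows [t, t + dt] for each grid point, as in the lower
    # parts of Tables II and III (table data give the lower bound).
    return [(t, t + dt) for t in grid]

def dose_interval(dose, rel_uncertainty):
    # Dose interval, e.g. 450 mg with +/-25% -> [337.5, 562.5] mg
    return (dose * (1 - rel_uncertainty), dose * (1 + rel_uncertainty))
```

These interval-valued design variables can then be fed directly into the set-valued design search, with no sampling from the intervals required.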
Optimal Design for Model II
We next considered optimal design problems for Model II. The discrete search space is defined according to Eq. 11 with Ngrid = 10 and Tmax = 25 h. Hence, the following time points were used:

t ∈ {0, 0.31, 1.2, 2.8, 4.9, 7.7, 11, 15, 20, 25} h
The initial parameter ranges were defined as η1 ∈ [−5, 5], η2,3 ∈ [−3, 3], and for independent estimation, θ1 ∈ [0.079, 94], θ2 ∈ [0.044, 0.13], and θ3 ∈ [0.020, 0.069].
To further evaluate the proposed approach, some of the optimal design cases studied in (5) were considered:
1. Twelve different individuals (different covariate values for each individual) (m = 12, gi = 1, a = (4.02, 4.4, 4.53, 4.4, 5.86, 4, 4.95, 4.53, 3.1, 5.5, 4.92, 5.3)). There are three sampling times per individual (ni = 3) to be optimized (0 ≤ t1 ≤ t2 ≤ t3 ≤ 25), and data are noisy as already described.
2. Three groups of four individuals each (different covariate values for each group) (m = 3, gi = 4, a = (5.86, 4.6, 3.1)). There are three sampling times per group to be optimized, with constraints on the sampling times as for problem 1 above.
The results for different uncertainties in the prior knowledge of the fixed effects, as well as results obtained by PopED, are summarized in the upper part of Table III. In particular, we note that the result of the presented approach using α = 0 shares many characteristics with the PopED design; for instance, both place the third sampling point earlier than t = 25 in order to obtain a higher signal-to-noise ratio. For higher α, uncertainty in θ must also be taken into account, and our approach then prefers placing the last sampling point at the end of the allowed time interval.
Table III.
Optimal Designs Obtained by Our Approach
Comparison to PopED and varying α. Point doses and point sampling times.

| Problem | Group | Our approach, α = 0.0 | PopED, α = 0.0 | Our approach, α = 0.30 | PopED, α = 0.30 |
|---|---|---|---|---|---|
| 1 | gr. 1 | 0.31, 2.8, 11 | 0.31, 1.2, 15 | 0.31, 4.9, 25 | 0.31, 1.2, 15 |
|   | gr. 2 | 0.31, 2.8, 15 | 0.31, 1.2, 15 | 0.31, 2.8, 25 | 0.31, 1.2, 15 |
|   | gr. 3 | 0.31, 7.7, 20 | 0.31, 1.2, 15 | 0.31, 4.9, 25 | 0.31, 1.2, 15 |
|   | gr. 4 | 0.31, 2.8, 11 | 0.31, 1.2, 15 | 1.2, 11, 25 | 0.31, 1.2, 15 |
|   | gr. 5 | 0.31, 2.8, 15 | 1.2, 2.8, 15 | 0.31, 2.8, 25 | 0.31, 1.2, 4.9 |
|   | gr. 6 | 0.31, 7.7, 25 | 0.31, 1.2, 15 | 0.31, 2.8, 25 | 0.31, 1.2, 15 |
|   | gr. 7 | 0.31, 2.8, 25 | 0.31, 1.2, 15 | 0.31, 2.8, 20 | 1.2, 2.8, 15 |
|   | gr. 8 | 0.31, 2.8, 7.7 | 0.31, 1.2, 15 | 0.31, 2.8, 25 | 0.31, 1.2, 15 |
|   | gr. 9 | 0.31, 2.8, 11 | 0.31, 1.2, 2.8 | 0.31, 7.7, 25 | 0.31, 1.2, 15 |
|   | gr. 10 | 0.31, 2.8, 11 | 0.31, 1.2, 15 | 0.31, 4.9, 25 | 0.31, 1.2, 15 |
|   | gr. 11 | 0.31, 2.8, 15 | 0.31, 1.2, 15 | 0.31, 2.8, 25 | 0.31, 1.2, 15 |
|   | gr. 12 | 0.31, 1.2, 20 | 0.31, 1.2, 15 | 0.31, 4.9, 25 | 1.2, 2.8, 15 |
| 2 | gr. 1 | 0.31, 15, 15 | 0.31, 1.2, 15 | 1.2, 4.9, 25 | 0.31, 1.2, 15 |
|   | gr. 2 | 0.31, 15, 20 | 0.31, 1.2, 15 | 0.31, 2.8, 25 | 0.31, 1.2, 15 |
|   | gr. 3 | 1.2, 15, 15 | 0.31, 1.2, 11 | 0.31, 7.7, 25 | 0.31, 1.2, 7.7 |

Our approach with constant α = 0.0. Interval doses or interval sampling times.

| Problem | Group | Dose ± 25% | Dose ± 50% | Δt = 1 h | Δt = 3 h |
|---|---|---|---|---|---|
| 1 | gr. 1 | 0.31, 0.31, 15 | 0.31, 2.8, 25 | 0.31, 2.8, 20 | 0.31, 20, 25 |
|   | gr. 2 | 0.31, 0.31, 25 | 0.31, 0.31, 25 | 0.31, 2.8, 25 | 0.31, 20, 20 |
|   | gr. 3 | 2.8, 2.8, 11 | 0.31, 0.31, 15 | 0.31, 11, 20 | 0.31, 2.8, 20 |
|   | gr. 4 | 1.2, 2.8, 15 | 0.31, 0.31, 15 | 0.31, 7.7, 25 | 0.31, 11, 25 |
|   | gr. 5 | 0.31, 1.2, 20 | 0.31, 1.2, 20 | 0.31, 4.9, 20 | 0.31, 4.9, 25 |
|   | gr. 6 | 0.31, 2.8, 25 | 0.31, 2.8, 25 | 0.31, 4.9, 25 | 0.31, 7.7, 20 |
|   | gr. 7 | 1.2, 2.8, 25 | 0.31, 2.8, 20 | 0.31, 7.7, 20 | 2.7, 20, 25 |
|   | gr. 8 | 0.31, 1.2, 20 | 0.31, 1.2, 20 | 0.31, 4.9, 20 | 0.31, 7.7, 25 |
|   | gr. 9 | 0.31, 1.2, 20 | 0.31, 2.8, 25 | 0.31, 4.9, 25 | 0.31, 20, 25 |
|   | gr. 10 | 0.31, 2.8, 20 | 0.31, 2.8, 20 | 0.31, 7.7, 25 | 0.31, 20, 25 |
|   | gr. 11 | 0.31, 4.9, 25 | 0.31, 1.2, 11 | 0.31, 2.8, 25 | 0.31, 4.9, 20 |
|   | gr. 12 | 0.31, 2.8, 25 | 0.31, 1.2, 25 | 0.31, 2.8, 20 | 0.31, 15, 25 |
| 2 | gr. 1 | 0.31, 20, 20 | 0.31, 25, 25 | 0.31, 25, 25 | 0.31, 25, 25 |
|   | gr. 2 | 0.31, 15, 25 | 1.2, 11, 25 | 0.31, 15, 15 | 0.31, 15, 25 |
|   | gr. 3 | 0.31, 15, 25 | 0.31, 20, 25 | 1.2, 7.7, 20 | 0.31, 25, 25 |
The upper table gives a comparison to PopED for varying uncertainty α in the prior knowledge of the parameters. Note that α = 0.0 corresponds to local optimal design (D-optimality). The lower table considers interval doses or interval sampling times. For interval sampling times, table data give the lower bound for each time interval. The upper bound is obtained by adding Δt
To further evaluate the quality of the different optimal designs proposed in Table III, Table IV presents a comparison of the estimated parameters for the best designs obtained by the proposed method and by PopED. These data indicate that either of the two parameter estimation methods gives similar output for the two optimal designs, which in turn indicates that the optimal design proposed by the set-valued approach is reasonable. The quality of the parameter estimates is better for NONMEM than for the set-valued approach, as expected (see subsection "Comparing Results to Traditional Methods").
Table IV.
Parameter Estimates for the Optimal Designs of Problem 1 for α = 0 Given in Table III
Parameter estimation for the optimal design obtained by the presented approach:

| Param. | True value | Set-valued estimate | Set-valued 95% C.I. | NONMEM estimate | NONMEM 95% C.I. |
|---|---|---|---|---|---|
| θ1 | 2.71 | 3.09 | [1.48, 7.75] | 2.82 | [1.58, 4.75] |
| ω1² | 0.784 | 0.833 | [0.080, 2.05] | 0.693 | [0.157, 1.66] |
| θ2 | 0.0763 | 0.0781 | [0.0576, 0.101] | 0.0761 | [0.0619, 0.0903] |
| ω2² | 0.0185 | 0.0559 | [0.0073, 0.115] | 0.0208 | [1.8e–6, 0.0888] |
| θ3 | 0.0373 | 0.037 | [0.029, 0.043] | 0.0371 | [0.0322, 0.0428] |
| ω3² | 0.0238 | 0.0443 | [0.0016, 0.14] | 0.0206 | [2.4e–6, 0.0589] |
| σ² | 0.419 | 0.0365 | [0.0094, 0.214] | 0.383 | [0.0763, 0.803] |

Parameter estimation for the optimal design obtained by PopED:

| Param. | True value | Set-valued estimate | Set-valued 95% C.I. | NONMEM estimate | NONMEM 95% C.I. |
|---|---|---|---|---|---|
| θ1 | 2.71 | 2.94 | [1.35, 7.23] | 2.80 | [1.53, 4.69] |
| ω1² | 0.784 | 1.03 | [0.0304, 3.73] | 0.688 | [0.146, 1.71] |
| θ2 | 0.0763 | 0.0791 | [0.0495, 0.109] | 0.0759 | [0.0626, 0.0898] |
| ω2² | 0.0185 | 0.0501 | [1.8e–4, 0.158] | 0.0200 | [1.8e–6, 0.0890] |
| θ3 | 0.0373 | 0.0361 | [0.025, 0.045] | 0.0372 | [0.0323, 0.0428] |
| ω3² | 0.0238 | 0.0713 | [6.4e–3, 0.188] | 0.0211 | [2.4e–6, 0.0631] |
| σ² | 0.419 | 0.0941 | [5.0e–9, 0.732] | 0.380 | [0.0578, 0.841] |
Estimates are obtained using the methodology described in subsection "Comparing Results to Traditional Methods." The upper table uses the design obtained by our approach and the lower table uses the design obtained by PopED. Results from NONMEM (first-order conditional estimation) are included for comparison. C.I.s were calculated using the method described in Table I
C.I. confidence interval
We next repeated all runs using dose intervals; see the lower part of Table III. Similar to the corresponding results for Model I, the data indicate altered designs compared to the case with a fixed dose. For this problem, the level of uncertainty in the dose does not substantially alter the design.
For sampling times as intervals, we observe similar trends as for doses as intervals: the third sampling interval is late, and in this case also the second interval is later compared to the case with sampling times as points and not as intervals (lower part of Table III).
Finally, in order to demonstrate that underdetermined problems can also be run using our set-valued design search, we considered the following illustrative problem: one group of one individual (m = 1, g = 1, a = 5.86), with two sampling times (ni = 2) to be optimized (0 ≤ t1 ≤ t2 ≤ 25). The discrete search space is defined as above with Ngrid = 10. Hence, the problem is underdetermined, since Model II has three parameters and there are only two data points. Running this problem, we obtained the optimal sampling times t1 = 0.31 and t2 = 25. We note that the use of only two sampling points gives a solution curve with infinitely many solutions, as in Fig. 3, but that the proposed sampling times give the most precise description of this solution curve (according to our objective function).
Computational Time
Regarding calculation times, one can generally say that robust optimal design with a traditional method relies on time-consuming sampling from the parameter intervals as well as numerical integration, while neither of these is required for a set-valued method. On the other hand, the set-valued method is based on simulation and parameter estimation, which is computationally demanding. The optimal design runs for problem 1 of Model I and Model II take about 1 and 36 h, respectively, on a standard computer for the proposed method. To make a comparison to PopED, we measured the average time required to evaluate one design for problem 2 of Model II on a standard desktop computer (with no decomposition of the problem in the parameter estimation of the set-valued method). For the set-valued method, the average time was about 500 s for α = 0 and 240 s for α = 0.30. The corresponding figures for PopED were about 50 ms for α = 0 and 9 s for α = 0.30.
Since the proposed method and the methods in PopED have fundamental differences, we believe that there are situations where the proposed method will be preferred although it, in its current implementation, is computationally slower than PopED.
Summary of Results
For the presented problems, set-valued optimal design search is feasible and produces optimal designs that are reasonable based on comparisons to the standard tool PopED. For problem 1 of Model II, this is also confirmed by comparisons of parameter estimation for the calculated optimal designs. Furthermore, the presented results support the benefits of the set-valued method listed in the "INTRODUCTION."
The examples for Model I and II with α > 0 demonstrate that the set-valued method requires no prior information in the form of point estimates for the parameters, and that the calculated optimal designs, also in this case, are reasonable based on comparisons to PopED. Since no sampling from the parameter intervals is required, the same problem specification and design search can be used for both local and robust optimal design.
Results for both Model I and II demonstrate that it is feasible to represent sampling times and covariates by intervals in the design search. Similar to the point above, this variant of robust optimal design search requires no sampling from the intervals.
The presented results indicate that optimal design based on simulation and parameter estimation is feasible using set-valued methods. Some of the main obstacles with traditional parameter estimation methods are avoided (distributional assumptions, model linearization, local minima, and/or underdetermined problems). In particular, for Model II we demonstrate feasibility of an optimal design search for a nonidentifiable problem with infinitely many solutions.
DISCUSSION
We have proposed a new approach for optimal design in population pharmacometric experiments. The approach is based on rigorous global search methods using interval analysis, where the parameters are represented by intervals and, hence, can incorporate any level of uncertainty. In the same way, sampling times and covariates like doses can be given as intervals.
The method was evaluated on two test models and problems from the literature. While Model I can be considered relatively simple, Model II exhibits several complications: a singularity when two of the parameters equal each other, nonidentifiability, and log-normal random effects with long tails. The analysis and presented results (detailed in the supplement) clearly demonstrate that a global search based on set-valued methods can adequately cope with these problems. Moreover, the use of interval analysis forces problems such as singularities and nonidentifiability to be dealt with explicitly rather than hidden.
Set-valued parameter estimation can solve problems where traditional maximum likelihood methods fail, as indicated in Fig. 3 and in the optimal design results of Model II. Therefore, in the future, set-valued methods might play an important role when analyzing models with unidentifiable parameters such as mechanistic physiologically based pharmacokinetic (PB/PK) models.
Concerning the representation of sampling times and covariates by intervals, an alternative approach is to optimize without interval methods and then study the objective function at points around the point solution for each of the design variables. This may provide an indication of the sensitivity of the design variables, and such information can be used to suggest sampling/dose windows. Our presented approach can also be compared to optimizing the sampling time and/or the dose as a random variable (28).
The proposed design heuristic is efficient in the sense that, although a computationally demanding simulation-based technique is used, the problems tried in this paper are solved without supercomputing. We also note that the design search is independent of the output obtained from an interval estimation analysis and can hence be applied to any parameter estimation method.
For real systems, the random effect distributions are often unknown and can have complicated forms, e.g., multimodal distributions with several local maxima of the probability density function. Correlations between the parameters further complicate the situation. Set-valued methods can be an important complement to traditional methods to analyze such systems.
Generally, many objective functions can be connected to an optimization method and the performance of each function is typically problem dependent. Therefore, the natural way to evaluate the choice of objective function is to try several potential objective functions on a large set of realistic models and problems from the domain. While this is beyond the scope of this paper, it is interesting to discuss the choice of objective function.
In this paper, the parameters are actually estimated and, therefore, the objective function can be defined over the estimated parameter space. We have chosen an objective function based on the widths of the estimated parameter intervals. For the considered test problems, this choice has proven successful, in the sense that our results compare relatively well with those from PopED. Furthermore, as indicated in the section on underdetermined problems, parameter correlations are revealed by analyzing the set of consistent parameters, and a measure of these correlations could form part of the objective function. An alternative is to define the objective function on the time-series domain and measure the discrepancy between the data points and the model predictions.
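A width-based objective over estimated parameter boxes can be sketched in a few lines. The relative-width normalization and the reciprocal form are illustrative assumptions, chosen so that a higher value corresponds to a better design:

```python
def width_objective(param_boxes):
    # Sum of relative interval widths over the estimated parameter boxes,
    # normalized by the interval midpoint; the reciprocal makes a higher
    # value correspond to tighter (better) parameter estimates.
    total = sum((hi - lo) / (0.5 * (hi + lo)) for lo, hi in param_boxes)
    return 1.0 / total
```

Designs that shrink the consistent parameter set score higher, in the spirit of the interval-width objective used in the design search.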
One particular extension of interest in optimal design for population pharmacometric experiments is to take bias in the parameter estimates into account. Basically, one can estimate the bias by comparing the estimated parameters to the data-generating parameters. This bias can then be used as a measure to select between different designs, and hence not only for validation purposes. For example, by combining the objective function with a bias term, in analogy with the mean squared error, one obtains an objective function that takes bias into account.
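One way to read the suggested extension is as a spread-plus-bias decomposition, mirroring MSE = variance + bias². The specific width term and the weighting are illustrative assumptions, not the paper's definition:

```python
def bias_aware_loss(estimates, true_params, interval_widths, weight=1.0):
    # Spread term (sum of interval widths) plus weighted squared bias,
    # in analogy with the mean squared error; lower values are better.
    bias_sq = sum((e - t) ** 2 for e, t in zip(estimates, true_params))
    return sum(interval_widths) + weight * bias_sq
```

In a simulation-based design search, the data-generating parameters are known, so the bias term is directly computable and can be traded off against interval width via the weight.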
In summary, this study clearly indicates that set-valued methods for optimal design exhibit several desirable properties compared to traditional methods. Further research in this direction is therefore highly motivated.
CONCLUSIONS
We have proposed a new method for optimal experimental design of population pharmacometric experiments based on global search methods using interval analysis. The evaluation of a specific design is based on multiple simulations and parameter estimations. Main advantages of the method are that no prior point estimates for the parameters are required, the method works on underdetermined problems, and that sampling times and covariates like doses can be represented by intervals. The latter gives a direct way of optimizing with rigorous sampling/dose intervals that can be useful in clinical practice.
ELECTRONIC SUPPLEMENTARY MATERIAL
Below is the link to the electronic supplementary material.
(PDF 391 kb)
REFERENCES
- 1.Bonate PL. Pharmacokinetic-pharmacodynamic modeling and simulation. New York: Springer; 2006. [Google Scholar]
- 2.Davidian M, Giltinan DM. Nonlinear models for repeated measurement data: an overview and update. J Agric Biol Environ Stat. 2003;8:387–419. doi: 10.1198/1085711032697. [DOI] [Google Scholar]
- 3.Pinheiro JC, Bates DM. Model building for nonlinear mixed-effects models. [Technical report 91]. Madison [WI]: University of Wisconsin; 1995.
- 4.Ogungbenro K, Dokoumetzidis A, Aarons L. Application of optimal design methodologies in clinical pharmacology experiments. Pharm Stat. 2009;8:239–52. doi: 10.1002/pst.354. [DOI] [PubMed] [Google Scholar]
- 5.Foracchia M, Hooker A, Vicini P, Ruggeri A. POPED, a software for optimal experiment design in population kinetics. Comput Methods Programs Biomed. 2004;74:29–46. doi: 10.1016/S0169-2607(03)00073-7. [DOI] [PubMed] [Google Scholar]
- 6.Nyberg J, Karlsson MO, Hooker AC. Simultaneous optimal experimental design on dose and sample times. J Pharmacokinet Pharmacodyn. 2009;36:125–45. doi: 10.1007/s10928-009-9114-z. [DOI] [PubMed] [Google Scholar]
- 7.Box GEP, Lucas HL. Design of experiments in nonlinear situations. Biometrika. 1959;46:77–90. [Google Scholar]
- 8.D’Argenio DZ. Optimal sampling times for pharmacokinetic experiments. J Pharmacokinet Biopharm. 1981;9:739–56. doi: 10.1007/BF01070904. [DOI] [PubMed] [Google Scholar]
- 9.Fedorov VV. Theory of optimal experiments. New York: Academic; 1972. [Google Scholar]
- 10.Tod M, Rocchisani JM. Comparison of ED, EID and API criteria for the robust optimization of sampling times in pharmacokinetics. J Pharmacokin Biopharm. 1997;25:515–37. doi: 10.1023/A:1025701327672. [DOI] [PubMed] [Google Scholar]
- 11.Tucker W. Validated numerics for pedestrians. In: European Congress of Mathematics. Eur. Math. Soc., Zürich; 2005:851–860.
- 12.Moore RE. Interval analysis. Englewood Cliffs: Prentice-Hall; 1966. [Google Scholar]
- 13.Al-Banna MK, Welman AK, Whiting B. Experimental design and efficiency parameter estimation in population pharmacokinetics. J Pharmacokinet Biopharm. 1990;18:347–60. doi: 10.1007/BF01062273. [DOI] [PubMed] [Google Scholar]
- 14.Beal S, Sheiner L. NONMEM’s user’s guide, technical report. San Francisco: University of California; 1992. [Google Scholar]
- 15.Kearfott B. Rigorous global search: continuous problems. Dordrecht: Kluwer Academic Publishers; 1996. [Google Scholar]
- 16.Jaulin L, Kieffer M, Didrit O, Walter E. Applied interval analysis: with examples in parameter and state estimation, robust control and robotics. 1. London: Springer; 2001. [Google Scholar]
- 17.Tucker W, Moulton V. Parameter reconstruction for biochemical networks using interval analysis. Reliab Comput. 2006;12:389–402. doi: 10.1007/s11155-006-9009-2. [DOI] [Google Scholar]
- 18.Tucker W, Kutalik Z, Moulton V. Estimating parameters for generalized mass action models using constraint propagation. Math Bioscience. 2007;208:607–20. doi: 10.1016/j.mbs.2006.11.009. [DOI] [PubMed] [Google Scholar]
- 19.Danis A, Hooker A, Tucker W. Rigorous parameter estimation for noisy mixed-effects models. International symposium on nonlinear theory and its applications. Krakow, Poland; 2010.
- 20.Danis A, Hooker A, Tucker W. Rigorous parameter estimation for noisy mixed-effects models. Preprint: http://www2.math.uu.se/∼warwick/CAPA/publications/publications.html (2011).
- 21.Gennemark P, Wedelin D. Efficient algorithms for ordinary differential equation model identification of biological systems. IET Syst Biol. 2007;1(2):120–9. doi: 10.1049/iet-syb:20050098. [DOI] [PubMed] [Google Scholar]
- 22.Dieck Kattas G, Gennemark P, Wedelin D. Structural identification of GMA models: algorithm and model comparison. In Quaglia P. CMSB’10: Proceedings of the 8th International Conference on Computational Methods in Systems Biology 2010. ACM, New York; 2010;107–113. doi:10.1145/1839764.1839777.
- 23.Gennemark P, Wedelin D. Improved parameter estimation for completely observed ordinary differential equations with application to biological systems. In: Computational Methods in Systems Biology, 2009; LNCS 5688:205–217.
- 24.Ogungbenro K, Graham G, Gueorguieva I, Aarons L. The use of a modified Fedorov exchange algorithm to optimise sampling times for population pharmacokinetic experiments. Comput Methods Programs Biomed. 2005;80:115–25. doi: 10.1016/j.cmpb.2005.07.001. [DOI] [PubMed] [Google Scholar]
- 25.Efron B, Tibshirani RJ. An introduction to the bootstrap. New York: Chapman & Hall/CRC; 1993. [Google Scholar]
- 26.Zhou J, Wolfson B. A Bayesian A-optimal and model robust design criterion. Biometrics. 2003;59:1082–8. doi: 10.1111/j.0006-341X.2003.00124.x. [DOI] [PubMed] [Google Scholar]
- 27.Dufful SB, Retout S, Mentre F. The use of simulated annealing for finding optimal population designs. Comput Methods Programs Biomed. 2002;69:25–35. doi: 10.1016/S0169-2607(01)00178-X. [DOI] [PubMed] [Google Scholar]
- 28.Pronzato L. Information matrices with random regressors. Application to experimental design. J Stat Plan Inference. 2002;108:189–200. doi: 10.1016/S0378-3758(02)00278-1. [DOI] [Google Scholar]