Abstract
We consider the problem of optimal design of experiments for random effects models, especially population models, where a small number of correlated observations can be taken on each individual, while the observations corresponding to different individuals are assumed to be uncorrelated. We focus on c-optimal design problems and show that the classical equivalence theorem and the famous geometric characterization of Elfving (1952) from the case of uncorrelated data can be adapted to the problem of selecting optimal sets of observations for the n individual patients. The theory is demonstrated by finding optimal designs for a linear model with correlated observations and a nonlinear random effects population model, which is commonly used in pharmacokinetics.
Keywords and Phrases: c-optimal design, correlated observations, Elfving's theorem, pharmacokinetic models, random effects, locally optimal design, geometric characterization
1 Introduction
It is a common situation in pharmacokinetic trials that only a very small number of measurements can be taken on a single patient, but a larger number n of different patients are available [Sheiner et al. (1977), Schmelter (2007), Colombo et al. (2006)]. In this situation it is impossible to reliably estimate the parameters of interest for each patient. However, often these individual parameters are not of primary interest, because it is assumed that they are realizations of some global distribution. Therefore, the main aim of the experiment is the estimation of the mean and/or variance of this distribution. This results in a random effects model and is called the population approach [Retout and Mentré (2003)]. Unfortunately, the common random effect causes measurements at different time points on a single patient to be correlated; as a result, most of the commonly used tools of classical optimal design theory are not applicable in this context. Compared to the case of uncorrelated observations, the optimal design problem for dependent data is intrinsically more difficult. Most authors used asymptotic arguments to determine efficient designs [see Sacks and Ylvisaker (1968), Bickel and Herzberg (1979), Näther (1985), Müller and Pázman (2003) among others]. In general the powerful equivalence theorem [Pukelsheim (2006)] cannot be transferred to the case of dependent data [for an equivalence theorem in the case of repeated and correlated observations at the same measurement setting see Fedorov (1972)]. Note also that geometric characterizations of optimal designs are only available for the uncorrelated case [Elfving (1952), Ford et al. (1992), Haines (1995) or Studden (2005)].
The purpose of the present paper is to derive a geometric characterization of c-optimal designs, which minimize the variance of a linear combination of the parameter estimates (specified by a given vector c), for models with correlated observations. Note that many commonly used objectives in the statistical analysis (such as designing the experiment for the estimation of the area under the curve, the maximum concentration or, in dose finding studies, the minimal effective dose) yield optimality criteria which are special cases of the c-optimality criterion [see Atkinson et al. (1993)]. In the following sections we show that if the number of available observations is the same for each patient, the total information of all observations on a single patient, accounting for correlations, can be expressed as a sum of information matrices in the usual form for uncorrelated observations. More precisely, if m observations are available for each patient, there exist vector valued functions f̃ℓ, ℓ = 1, …, m, such that the total information matrix for the set of m observations on this patient can be written in the form
Σℓ=1,…,m f̃ℓ(xi) f̃ℓT(xi).    (1.1)
For this representation we introduce for the individual observations a design space of m observations for each patient in addition to the original design space. Using this representation, we can derive an equivalence theorem for c-optimal designs using the general theory in Pukelsheim (2006) and obtain a geometric characterization of c-optimal designs for the problem of allocating the n available patients to different sets of m individual observations. As a result we obtain a generalization of the famous result of Elfving to the case of dependent data.
The theoretical details are presented in Section 2. In Section 3 we demonstrate the application of these ideas in two examples, a linear model and a basic nonlinear model taken from population pharmacokinetics. Finally, some technical details are given in an Appendix. For the sake of brevity this paper concentrates on the geometric characterization of locally c-optimal designs. However, it is worthwhile to mention that in the case of uncorrelated observations Elfving-type characterizations are also available for other optimality criteria, including D-, E- and Bayesian optimality criteria [see Dette (1993a,b) and Dette (1996)]. An interesting problem of future research is the extension of the methodology developed in the present paper to obtain geometric characterizations of optimal designs for correlated observations with respect to these criteria.
2 An Elfving representation for models with correlated observations
We begin our discussion with the case of linear models, where the results are slightly more transparent. The analysis of nonlinear models can easily be reduced to this situation (see Remark 2.1), while the case of random effects models is discussed in Section 3. Assume that m observations can be taken on each of n individuals in the linear model
Yij = θT f(xij) + εij,  j = 1, …, m, i = 1, …, n,    (2.1)
where Yij denotes the j-th observation on the i-th individual and xij is the experimental condition corresponding to this observation, which is chosen from a compact interval χ ⊂ ℝ. We use xi = (xi1, …, xim) to denote all experimental conditions corresponding to the individual i. The vector θ = (θ1, …, θk)T ∈ Θ ⊂ ℝk is the vector of parameters to be estimated, f(x) = (f1(x), …, fk(x))T denotes a vector of known functions and εij denotes a random error term with expectation 0 and variance σ2(xij) (i = 1, …, n, j = 1, …, m) depending on xij. Observations on the same individual are assumed to be correlated, with corr(εij, εij*) = c(xij, xij*), while data corresponding to different individuals are assumed to be independent, i.e. corr(εij, εi*j*) = 0 whenever i ≠ i*. We express the total covariance matrix of the errors as the block diagonal matrix V = diag(V1, …, Vn) ∈ ℝnm×nm, with the matrices

Vi = ( σ(xij) σ(xij*) c(xij, xij*) )j,j*=1,…,m ∈ ℝm×m

on the diagonal. We now write Fi = (f(xi1), …, f(xim)) ∈ ℝk×m for the design matrix of individual i, i = 1, …, n, and define the matrix F = (F1, …, Fn) = (f(x11), …, f(xnm)) ∈ ℝk×nm as the design matrix corresponding to all patients. The information matrix (proportional to the inverse of the covariance matrix) of the generalized least squares estimate of the parameter θ can be expressed as
M = F V−1 FT = Σi=1,…,n Fi Vi−1 FiT.    (2.2)
The following arguments demonstrate that this expression can be rewritten in a form closer to the usual form of information matrices obtained in the case of uncorrelated observations. We begin with an alternative representation for the individual information matrices Fi Vi−1 FiT, i = 1, …, n. For this purpose we collect all experimental conditions corresponding to one individual in a vector xi = (xi1, …, xim) ∈ ℝm and consider χm as a design space. An exact design is characterized by a set of pairs {(x1, n1), …, (xp, np)}, where xi ∈ χm and ni ∈ ℕ such that n1 + … + np = n. This means that ni of the n patients are treated under the experimental condition xi = (xi1, …, xim)T (i = 1, …, p). Our first result provides the information matrix corresponding to one observation at the experimental condition xi.
Lemma 2.1
An information matrix of the form Fi Vi−1 FiT can also be expressed as
Fi Vi−1 FiT = Σℓ=1,…,m f̃ℓ(xi) f̃ℓT(xi),    (2.3)
where the functions f̃ℓ: χm → ℝk are defined as the columns of the k × m matrix F̃i = Fi Vi−1/2.
Note that the vectors f̃ℓ(xi) in the representation (2.3) are defined implicitly (at least for k ≥ 4) and can be easily calculated by a singular value decomposition of the matrices Vi. Using Lemma 2.1 the total information matrix for an exact design of m observations each on n subjects can therefore be written as
M = Σi=1,…,p ni Σℓ=1,…,m f̃ℓ(xi) f̃ℓT(xi).    (2.4)
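Numerically, the functions f̃ℓ can be obtained from an eigendecomposition (or singular value decomposition) of Vi, as noted above. The following sketch illustrates this for a hypothetical design matrix Fi and within-subject covariance matrix Vi (the numbers are illustrative and not taken from the paper):

```python
import numpy as np

# Hypothetical design matrix F_i (k x m) and within-subject covariance V_i (m x m)
F = np.array([[1.0, 1.0, 1.0],
              [0.5, 1.0, 2.0]])            # k = 2 parameters, m = 3 observations
V = np.array([[1.0, 0.6, 0.36],
              [0.6, 1.0, 0.6],
              [0.36, 0.6, 1.0]])           # positive definite correlation structure

# Eigendecomposition V = U D U^T yields the inverse square root V^{-1/2}
eigvals, U = np.linalg.eigh(V)
V_inv_sqrt = U @ np.diag(eigvals ** -0.5) @ U.T

# Columns of F_tilde = F V^{-1/2} are the vectors f_tilde_l(x_i) of Lemma 2.1
F_tilde = F @ V_inv_sqrt

# Check representation (2.3): F V^{-1} F^T equals the sum of rank-one matrices
M_direct = F @ np.linalg.inv(V) @ F.T
M_sum = sum(np.outer(F_tilde[:, l], F_tilde[:, l]) for l in range(F.shape[1]))
assert np.allclose(M_direct, M_sum)
```

The choice of the square root is immaterial: any factorization Vi−1 = AAT with A replacing Vi−1/2 yields the same sum of rank-one matrices.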
Following Kiefer (1974) we define an approximate design as a probability measure ξ on χm with finite support. Similarly to (2.4) the information matrix of an approximate design ξ using p different sets of m single subject measurements (with weights ξ(xi) = ξ(xi1, …, xim) at the points xi) can be expressed as
M(ξ) = Σi=1,…,p ξ(xi) Σℓ=1,…,m f̃ℓ(xi) f̃ℓT(xi).    (2.5)
If ξ puts masses ξi = ξ(xi) at the points x1, …, xp, this means that approximately ni ≈ nξi patients have to be treated under the experimental conditions xi = (xi1, …, xim) (i = 1, …, p). In practice the integers ni are obtained by an appropriate rounding procedure from the quantities nξi [see for example Pukelsheim and Rieder (1992)]. Note that the design space here is χm, i.e. the space of all possible m-observation sets.
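The rounding step can be sketched as follows. This is a simple apportionment in the spirit of Pukelsheim and Rieder (1992), not their exact procedure, and the function name is ours:

```python
import math

def efficient_rounding(weights, n):
    """Round approximate design weights to integer patient counts summing to n.

    A sketch of efficient apportionment in the spirit of Pukelsheim and
    Rieder (1992): start from rounded-up counts, then adjust greedily.
    """
    l = len(weights)
    counts = [math.ceil((n - l / 2) * w) for w in weights]
    # Adjust until the counts sum to n
    while sum(counts) < n:
        j = min(range(l), key=lambda i: counts[i] / weights[i])
        counts[j] += 1
    while sum(counts) > n:
        j = max(range(l), key=lambda i: (counts[i] - 1) / weights[i])
        counts[j] -= 1
    return counts
```

For instance, `efficient_rounding([0.48, 0.52], 10)` allocates 5 patients to each of the two measurement sets.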
Recall that for a given vector c ∈ ℝk an approximate design ξc is called c-optimal if and only if c ∈ Range (M(ξc)) and ξc minimizes the expression cT M−(ξ)c, where M−(ξ) denotes a generalized inverse of the matrix M(ξ) (note that this expression is approximately proportional to the variance of the generalized least squares estimate for the linear combination cTθ). We can now use the general theory developed in Pukelsheim (2006) and the representation (2.5) to derive a condition which can be used to check the optimality of a given approximate design with respect to a given optimality criterion. In the special case of c-optimal designs, i.e. designs which are optimal for the estimation of a linear combination cTθ of the parameters (c ∈ ℝk), we obtain the following result.
Theorem 2.1
A design ξc is c-optimal in a regression model with information matrix of the form (2.5) if and only if there exists a generalized inverse G of the matrix M(ξc) such that the inequality
Σℓ=1,…,m (cT G f̃ℓ(x))2 ≤ cT M−(ξc) c    (2.6)
holds for all x ∈ χm. Moreover, there is equality in (2.6) at any support point of the design ξc.
This theorem allows us to apply Theorem 3.3 of Dette and Holland-Letz (2009) and to derive a geometric characterization of c-optimal designs for models with information matrices of the form (2.5), which generalizes the classical result of Elfving (1952) to the case of dependent data. We define a generalized Elfving set by
ℛm = conv{ Σℓ=1,…,m εℓ f̃ℓ(x) | x ∈ χm, εℓ ∈ ℝ, Σℓ=1,…,m εℓ2 ≤ 1 }    (2.7)
[note that for m = 1 the set ℛm reduces to the classical Elfving set considered by Elfving (1952)]. We can now formulate our main theorem.
Theorem 2.2
A design ξc with masses ξi = ξc(xi) at the points x1, …, xp ∈ χm is locally c-optimal in a model with information matrix of the form (2.5) if and only if there exist constants γ > 0 and ε11, …, ε1p, …, εm1, …, εmp satisfying
Σℓ=1,…,m εℓi2 = 1,  i = 1, …, p,    (2.8)
such that the point γc ∈ ℝk lies on the boundary of the generalized Elfving set ℛm defined in (2.7) and has the representation
γc = Σi=1,…,p ξi Σℓ=1,…,m εℓi f̃ℓ(xi).    (2.9)
Remark 2.1
Theorem 2.1 and 2.2 can easily be generalized to the case of nonlinear fixed effects models of the form
Yij = η(xij, θ) + εij,  j = 1, …, m, i = 1, …, n,    (2.10)
where η denotes a (not necessarily linear) function defined on χ × Θ. A rather detailed review and numerous references on optimal designs for nonlinear models can be found in Atkinson and Haines (1996). In the situation considered in this paper, standard results on nonlinear regression models show that the covariance matrix of the nonlinear generalized least squares estimate can asymptotically be approximated by (2.2), where Fi = (f(xi1), …, f(xim)) ∈ ℝk×m and the vector f is given by
f(x) = ∂η(x, θ)/∂θ.    (2.11)
Here the function f depends on the unknown parameter θ. Following Chernoff (1953) we assume that a preliminary guess for θ is available. In this case the information matrix in (2.4) is well defined and all results of this section remain correct for the nonlinear model (2.10). In particular locally c-optimal designs can be characterized by the equivalence Theorem 2.1 and the geometric characterization in Theorem 2.2, using the modified design matrix Fi = (f(xi1), …, f(xim)) with functions f(x) defined in (2.11).
The concept of locally optimal designs has been criticized due to its sensitivity with respect to misspecification of the unknown parameter. Robust optimal designs could be obtained using a Bayesian or minimax approach [see e.g. Chaloner and Verdinelli (1995), Dette (1995), Müller and Pázman (1998)]. A geometric method of constructing Bayesian optimal designs for one-parameter models and a two-point prior distribution is given by Haines (1995) for the case of uncorrelated observations, but its generalization to models with more parameters, arbitrary prior distributions or correlated observations seems to be difficult. A generalization of Elfving's characterization to these more sophisticated criteria may be derived as a generalization of results of Dette (1996), who considered the case of uncorrelated observations. However, these investigations are extremely complicated and are deferred to future research.
3 Examples
We will demonstrate the application of the geometric characterization of Elfving type in two examples, a simple two parameter fixed effects polynomial model with intrinsically correlated observations and a nonlinear population model which is commonly used in pharmacokinetics.
3.1 Quadratic regression
As an example of a linear model we consider a two parameter fixed effects quadratic model without intercept, where m observations are taken for each of the n patients, that is
Yij = θ1 xij + θ2 xij2 + εij,  j = 1, …, m, i = 1, …, n.    (3.1)
We begin with the case m = 2 and assume that observations corresponding to the same patient are correlated with covariance function cov(εi1, εi2) = σ2 c(xi1, xi2) = σ2 λ|xi1−xi2|, λ ∈ [0, 1]. We obtain

Vi = σ2 ( 1  λ|xi1−xi2| ; λ|xi1−xi2|  1 ).

The eigenvalues of the matrix Vi are given by λi1 = σ2(1 − λ|xi1−xi2|) and λi2 = σ2(1 + λ|xi1−xi2|) with corresponding eigenvectors (1, −1)T and (1, 1)T, respectively. Therefore we obtain for the matrix Vi−1/2 the representation

Vi−1/2 = Ui Di−1/2 UiT,  where Ui = 2−1/2 ( 1  1 ; −1  1 ) and Di = diag(λi1, λi2).

For the model (3.1) we have f(x) = (x, x2)T, Fi = (f(xi1), f(xi2)), and it follows from Lemma 2.1 that the functions f̃ℓ(xi) (ℓ = 1, 2) are given by the columns of the matrix F̃i = Fi Vi−1/2, that is

f̃1(xi) = a f(xi1) + b f(xi2),  f̃2(xi) = b f(xi1) + a f(xi2),

where a = (λi1−1/2 + λi2−1/2)/2 and b = (λi2−1/2 − λi1−1/2)/2.
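The vectors f̃ℓ for this example can also be computed numerically. The following sketch uses the parameter values λ = 0.6 and σ2 = 0.04 from this section and verifies the representation of Lemma 2.1 for one pair of measurements:

```python
import numpy as np

sigma2, lam = 0.04, 0.6            # parameter values used in this section

def f(x):
    # regression functions of model (3.1): quadratic without intercept
    return np.array([x, x ** 2])

def f_tilde(x1, x2):
    """Columns of F_i V_i^{-1/2} for a pair of correlated observations."""
    rho = lam ** abs(x1 - x2)
    V = sigma2 * np.array([[1.0, rho], [rho, 1.0]])
    w, U = np.linalg.eigh(V)
    V_inv_sqrt = U @ np.diag(w ** -0.5) @ U.T
    F = np.column_stack([f(x1), f(x2)])
    return F @ V_inv_sqrt

# Lemma 2.1: the rank-one sum recovers F_i V_i^{-1} F_i^T
x1, x2 = 0.0, 0.8
Ft = f_tilde(x1, x2)
rho = lam ** abs(x1 - x2)
V = sigma2 * np.array([[1.0, rho], [rho, 1.0]])
F = np.column_stack([f(x1), f(x2)])
assert np.allclose(Ft @ Ft.T, F @ np.linalg.inv(V) @ F.T)
```

Note that the construction requires |xi1 − xi2| > 0, since for coinciding measurements the correlation equals one and Vi becomes singular.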
For the choice of parameters λ = 0.6, σ2 = 0.04 and the design space χ = [0, 2], the corresponding generalized Elfving set ℛ2 defined by (2.7) is depicted in Figure 1. Every pixel in the figure is induced by a measurement set x ∈ χm (m = 2), where the functions f̃ℓ and the quantities εℓ in (2.7) are evaluated on a (dense) grid; a denser grid would yield a smoother figure. Both parts of the figure represent the same Elfving set, but the coloring in the left part corresponds to potential values of the first measurement x1 of x = (x1, x2), while the coloring in the right part corresponds to the second measurement x2 (see the legend of Figure 1).
Figure 1.

The Elfving set ℛ2 defined in (2.7) for a quadratic regression model (3.1) with two observations per patient. The axes d1 and d2 represent the first and second dimension of the two dimensional elements of this set. The vector c is depicted by the red line, while the two black circles denote the points used in the Elfving representation (2.9).
Suppose we want to estimate the linear combination cTθ defined by the vector c = (−1, 1)T, which is marked as the red line in Figure 1. The optimal sets of measurements are those which can be used to represent the point of intersection of the boundary of the Elfving set with the line in the direction of the vector c. This representation may require a single point of the form

p(x) = Σℓ=1,…,m εℓ f̃ℓ(x),

or several points p(x1), …, p(xp) of this type, where p ≤ k and k denotes the number of parameters in the model (here k = 2). Each point xj = (xj1, …, xjm) ∈ χm corresponds to a set of m measurements per patient (for model (3.1) we have m = 2). The weights used in the convex combination yield the weights of the optimal design, i.e. the proportions of patients treated under the corresponding condition xj. The actual components xj1 and xj2 of the point xj can be determined from the coloring of the point p(xj) in the left and right part of Figure 1, respectively. Thus we can easily determine the support points graphically. For example, from Figure 1 we observe that two points, say x1 and x2, are required to represent the boundary point γc; they are marked by two circles. From the left part of the figure we obtain that the colour of x1 is pink, while the colour of the second point is green, and from the legend in the upper right part of the figure we obtain the values x11 = 0.0 and x21 = 1.2 for the first components of x1 and x2, respectively. Similarly, in the right part of Figure 1 we observe blue and red colours for the two points, which yields x12 = 0.8 and x22 = 2.0 for the second components of x1 and x2, respectively. In concrete applications the value of the components can, of course, be determined more precisely from the exact red/green/blue value of the corresponding pixel of the points in the representation (2.9) using appropriate software. Therefore the locally c-optimal design is given by
ξc = { x1 = (0.0, 0.8) with weight 0.48,  x2 = (1.2, 2.0) with weight 0.52 }    (3.2)
and advises the experimenter to use two different individual measurement sets. This means that 48% of the patients should be treated at experimental conditions x11 = 0, x12 = 0.8 and 52% should be treated at x21 = 1.2 and x22 = 2.0. Note that it seems counterintuitive to use measurements at the point x = 0, which carries no information in a model with uncorrelated observations. However, this heuristic argument is not necessarily true in the case of correlated observations.
Alternatively, we can use the figure to determine a hyperplane H supporting the Elfving set at the point γc. This plane is defined through a vector d = (d1, d2)T fulfilling dTz = 1 for all z ∈ H, (γc)Td = 1 and rTd ≤ 1 for all r ∈ ℛ2. The support points are then given as the solutions of the equation

Σℓ=1,2 (dT f̃ℓ(x))2 = 1,  x ∈ χ2
[see the proof of Theorem 3.3 in Dette and Holland-Letz (2009)]. This yields an alternative derivation of the design (3.2). Note that the optimality of this design can be verified by Theorem 2.1.
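The verification via Theorem 2.1 can be carried out numerically. The sketch below (reusing the setup of this example; the grid resolution and tolerance are our choices) evaluates the left-hand side of (2.6), i.e. Σℓ (cT G f̃ℓ(x))2, over a grid of measurement pairs and compares it with cT M−1(ξc) c:

```python
import numpy as np

sigma2, lam = 0.04, 0.6
f = lambda x: np.array([x, x ** 2])        # regression functions of model (3.1)

def f_tilde(x1, x2):
    # columns of F_i V_i^{-1/2} for a pair of correlated observations
    rho = lam ** abs(x1 - x2)
    V = sigma2 * np.array([[1.0, rho], [rho, 1.0]])
    w, U = np.linalg.eigh(V)
    F = np.column_stack([f(x1), f(x2)])
    return F @ U @ np.diag(w ** -0.5) @ U.T

# candidate design (3.2): two measurement pairs with weights 0.48 / 0.52
design = [((0.0, 0.8), 0.48), ((1.2, 2.0), 0.52)]
M = sum(w * f_tilde(*x) @ f_tilde(*x).T for x, w in design)
c = np.array([-1.0, 1.0])
G = np.linalg.inv(M)                       # M is nonsingular here
rhs = c @ G @ c                            # right-hand side of (2.6)

def lhs(x1, x2):
    # left-hand side of (2.6) at the measurement pair (x1, x2)
    Ft = f_tilde(x1, x2)
    return sum((c @ G @ Ft[:, l]) ** 2 for l in range(2))

# maximal ratio over a grid; coinciding pairs are excluded (singular V_i)
grid = np.linspace(0.0, 2.0, 81)
max_ratio = max(lhs(a, b) / rhs for a in grid for b in grid
                if abs(a - b) > 1e-9)
```

If the design is c-optimal, the maximal ratio should be close to one, with (near) equality attained at the two support points.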
We now suppose that m = 3 observations are available for each individual in the quadratic regression model (3.1). The design matrix is thus given by Fi = (f(xi1), f(xi2), f(xi3)) and the functions f̃ℓ(xi), ℓ = 1, 2, 3 can be determined similarly as in the case of m = 2 observations per individual.
The corresponding Elfving set is depicted in Figure 2. As m = 3 here, three subfigures are needed; each corresponds to one of the components of xi = (xi1, xi2, xi3). We can observe that only one point is used in the Elfving representation (2.9), and by a similar reasoning as in the first part of this example we obtain that for c = (−1, 1)T the c-optimal design is given by

ξc = { x1 = (0.0, 1.0, 2.0) with weight 1 }.
Figure 2.

The Elfving set ℛ3 defined in (2.7) for the quadratic regression model (3.1) with 3 observations per patient. The axes d1 and d2 represent the first and second dimension of the two dimensional elements of this set. The vector c is depicted by the red line, while the black circle shows the point used in the Elfving representation (2.9).
This means that all individuals have to be treated at experimental conditions 0, 1.0 and 2.0.
3.2 A nonlinear population model
In order to demonstrate the applicability of the methodology to population pharmacokinetic models, we consider a generic nonlinear random effects model
Yij = η(xij, bi) + εij,  j = 1, …, m, i = 1, …, n,    (3.3)
where η : χ × ℝk → ℝ is a known function and the errors εi = (εi1, …, εim)T for each patient are normally distributed with mean 0 and covariance matrix Wi ∈ ℝm×m, i = 1, …, n. The quantities b1, …, bn ∼ 𝒩(θ, Ω) denote k-dimensional independent normally distributed random vectors with mean θ and covariance matrix Ω representing the effect of the corresponding subject under investigation [see Beatty and Piegorsch (1997), Ette et al. (1995), Cayen and Black (1993)]. We also assume that the random vectors b1, …, bn and the vector (ε11, …, εnm)T are independent.
Due to the nonlinearity of the model an explicit representation of the corresponding Fisher information matrix cannot be derived. Following Retout and Mentré (2003) we propose to use a first-order Taylor expansion to derive an approximation of this matrix. Assuming differentiability of the regression function we use the expansion
Yij ≈ η(xij, θ) + fT(xij)(bi − θ) + εij,    (3.4)
where

f(x, b) = ∂η(x, b)/∂b

denotes the gradient of the regression function with respect to b. This means that, similarly to the case of fixed effects nonlinear models (see Remark 2.1), the nonlinear model (3.3) is approximated by the linear model (3.4). For the construction of the optimal design we assume that knowledge about the parameters θ and Ω is available from previous or similar experiments and consider the determination of locally optimal designs [see Chernoff (1953)]. As a consequence, the covariance matrix of the nonlinear least squares estimate in the model (3.3) is approximated by replacing the function f in model (2.1) with f(x) = f(x, b)∣b=θ. The variance of the random vector Yi = (Yi1, …, Yim)T now includes the variance caused by the random effect and can be approximated by

Var(Yi) ≈ FiT Ω Fi + Wi,

where Fi = (f(xi1), …, f(xim)) ∈ ℝk×m.
Consider for example the simple first order elimination model with two observations for each subject (bi = (bi1, bi2))
Yij = bi1 e−bi2 xij + εij,  j = 1, 2,    (3.5)
which is widely used in pharmacokinetics [see e.g. Rowland (1993)]. We assume that the errors εij are homoscedastic and uncorrelated with variance σ2 > 0, that is Wi = σ2 I2, and for the parameters θ, Ω and σ2 we use a fixed preliminary guess.
A straightforward calculation shows that

∂η(x, b)/∂b = (e−b2x, −b1xe−b2x)T.

Therefore, we have f(x) = (e−θ2x, −θ1xe−θ2x)T, and the functions f̃1, f̃2 are defined in a similar manner as illustrated in Section 3.1.
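The quantities in this example are easy to compute numerically. In the sketch below the values chosen for θ, Ω and σ2 are hypothetical placeholders, not the values used for Figure 3:

```python
import numpy as np

# Hypothetical parameter guesses (placeholders, not the paper's values)
theta1, theta2, sigma2 = 10.0, 1.0, 0.04
Omega = np.diag([0.5, 0.05])               # covariance of the random effects

def eta_grad(x):
    """Gradient of eta(x, b) = b1 * exp(-b2 * x) w.r.t. b, evaluated at b = theta."""
    return np.array([np.exp(-theta2 * x), -theta1 * x * np.exp(-theta2 * x)])

def approx_cov(xs):
    """First-order approximation of Var(Y_i): B Omega B^T + sigma^2 I."""
    B = np.array([eta_grad(x) for x in xs])    # m x k gradient matrix
    return B @ Omega @ B.T + sigma2 * np.eye(len(xs))

# c-vector for the area under the curve AUC = theta1 / theta2:
# the gradient of theta1 / theta2 with respect to (theta1, theta2)
c = np.array([1.0 / theta2, -theta1 / theta2 ** 2])

# approximate covariance for the two-point measurement set (0.0, 2.0)
V2 = approx_cov([0.0, 2.0])
```

With this covariance matrix, the functions f̃1, f̃2 are obtained exactly as in Section 3.1, by multiplying the gradient matrix with an inverse square root of V2.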
The corresponding generalized Elfving set is depicted in Figure 3. If we are interested in the optimal design for estimating the area under the curve AUC = ∫0∞ θ1e−θ2x dx = θ1/θ2,
Figure 3.

The Elfving set ℛ2 defined in (2.7) for the first order elimination model with 2 observations per patient. The axes d1 and d2 represent the first and second dimension of the two dimensional elements of this set. The vector c is depicted by the red line.
it is easy to see that this objective corresponds to a locally c-optimal design problem for the vector c = (1/θ2, −θ1/θ22)T, the gradient of θ1/θ2 with respect to θ, which is marked as the red line in Figure 3. From this figure it can be seen that only one point is needed in the Elfving representation (2.9), and by a similar reasoning as in Section 3.1 we obtain that the locally c-optimal design for the estimation of the area under the curve is given by

ξc = { x = (0.0, 2.0) with weight 1 }.
This design means that all patients should be treated under experimental conditions x1 = 0.0 and x2 = 2.0. The optimality of this design can also be verified by Theorem 2.1.
Acknowledgments
The authors would like to thank Martina Stein, who typed parts of this manuscript with considerable technical expertise. This work has been supported in part by the Collaborative Research Center “Statistical modeling of nonlinear dynamic processes” (SFB 823) of the German Research Foundation (DFG), the BMBF Project SKAVOE and the NIH grant award IR01GM072876:01A1. The third author was partially supported by EPSRC grant EP/D048893/1. The authors would also like to thank three anonymous referees and the associate editor for some very constructive comments which led to substantial improvements of an earlier version of this manuscript.
4 Appendix: Proofs of main results
4.1 Proof of Lemma 2.1
Let Vi = Ui Di UiT denote the eigenvalue decomposition of the positive definite matrix Vi, with Ui the matrix of eigenvectors and Di = diag(λi1, …, λim) the diagonal matrix of eigenvalues of Vi. Then the matrix

Vi1/2 = Ui Di1/2 UiT

is the root of Vi, i.e. Vi1/2 Vi1/2 = Vi, and we obtain

Fi Vi−1 FiT = Fi Vi−1/2 Vi−1/2 FiT.

Defining

F̃i = Fi Vi−1/2 ∈ ℝk×m

and writing f̃ℓ(xi) for the ℓ-th column of F̃i, we have

Fi Vi−1 FiT = F̃i F̃iT = Σℓ=1,…,m f̃ℓ(xi) f̃ℓT(xi),

which completes the proof of Lemma 2.1.
4.2 Proof of Theorem 2.1
Let Ξ denote the set of all approximate designs on χm and let

ℳ = { M(ξ) | ξ ∈ Ξ }

denote the set of all information matrices of the form (2.5). The set ℳ is convex, and the information matrix M(ξc) of a locally c-optimal design for which the linear combination cTθ is estimable [i.e. c ∈ Range (M(ξc))] maximizes the function (cTM−c)−1 in the set ℳ ∩ 𝒜c, where

𝒜c = { A ∈ ℳ | c ∈ Range(A) }.

Consequently we obtain from Theorem 7.19 in Pukelsheim (2006) that the design ξc is c-optimal if and only if there exists a generalized inverse, say G, of the matrix M(ξc) such that the inequality

cT G A GT c ≤ cT M−(ξc) c

holds for all A ∈ ℳ, where there is equality for any matrix A ∈ ℳ which maximizes (cTM−c)−1 in the set ℳ. Note that the family ℳ is the convex hull of the set

{ Σℓ=1,…,m f̃ℓ(x) f̃ℓT(x) | x ∈ χm },

and therefore the assertion of Theorem 2.1 follows by a standard argument of optimal design theory [see e.g. Silvey (1980)].
4.3 Proof of Theorem 2.2
Recall that the information matrix at the experimental condition x = (x1, …, xm) ∈ χm is of the form

M(x) = Σℓ=1,…,m f̃ℓ(x) f̃ℓT(x).    (4.1)
Therefore, the result is a direct consequence of Theorem 3.3 in Dette and Holland-Letz (2009), which presents a geometric characterization of Elfving type for c-optimal designs in models with an information matrix of the form (4.1).
Contributor Information
Tim Holland-Letz, Email: tim.holland-letz@rub.de, Ruhr-Universität Bochum, Medizinische Fakultät, 44780 Bochum, Germany.
Holger Dette, Email: holger.dette@rub.de, Ruhr Universität Bochum, Fakultät für Mathematik, 44780 Bochum, Germany.
Andrey Pepelyshev, Email: a.pepelyshev@sheffield.ac.uk, University of Sheffield, Department of Probability & Statistics, Sheffield, U.K.
References
- Atkinson AC, Chaloner K, Herzberg AM, Juritz J. Optimum experimental designs for properties of a compartmental model. Biometrics. 1993;49:325–337. [PubMed] [Google Scholar]
- Atkinson AC, Haines LM. Designs for nonlinear and generalized linear models. In: Ghosh S, Rao CR, editors. Handbook of Statistics 13, Design and Analysis of Experiments. North-Holland Publishing Co.; Amsterdam: 1996. pp. 437–475. [Google Scholar]
- Beatty DA, Piegorsch WW. Optimal statistical design for toxicokinetic studies. Statistical Methods in Medical Research. 1997;6:359–376. doi: 10.1177/096228029700600405. [DOI] [PubMed] [Google Scholar]
- Bickel PJ, Herzberg AM. Robustness of design against autocorrelation in time I: Asymptotic theory, optimality for location and linear regression. Ann Statist. 1979;7(1):77–95. [Google Scholar]
- Cayen M, Black H. Role of toxicokinetics in dose selection for carcinogenicity studies. In: Welling P, de la Iglesia F, editors. Drug toxicokinetics. Marcel Dekker; New York, N. Y.: 1993. [Google Scholar]
- Chaloner K, Verdinelli I. Bayesian experimental design: A review. Statistical Science. 1995;10:273–304. [Google Scholar]
- Chernoff H. Locally optimal designs for estimating parameters. Ann Math Statist. 1953;24:586–602. [Google Scholar]
- Colombo S, Buclin T, Cavassini M, Decosterd L, Telenti A, Biollaz J, Csajka C. Population pharmacokinetics of atazanavir in patients with human immunodeficiency virus infection. Antimicrobial Agents and Chemotherapy. 2006;50 doi: 10.1128/AAC.00098-06. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dette H. Elfving's Theorem for D-optimality. Annals of Statistics. 1993a;21:753–766. [Google Scholar]
- Dette H. A new interpretation of optimality for E-optimal designs in linear regression models. Metrika. 1993b;40:37–50. [Google Scholar]
- Dette H. Designing of experiments with respect to “standardized” optimality criteria. Journal of the Royal Statistical Society, Ser B. 1995;59:97–110. [Google Scholar]
- Dette H. A note on bayesian c- and D-optimal designs in nonlinear regression models. Annals of Statistics. 1996;24:1225–1234. [Google Scholar]
- Dette H, Holland-Letz T. A geometric characterization of c-optimal designs for heteroscedastic regression. Ann Statist. 2009;37(6B):4088–4103. [Google Scholar]
- Elfving G. Optimal allocation in linear regression theory. Annals of Mathematical Statistics. 1952;23:255–262. [Google Scholar]
- Ette E, Kelman A, Howie C, Whiting B. Analysis of animal pharmacokinetic data: Performance of the one point per animal design. Journal of Pharmacokinetics and Biopharmaceutics. 1995;23:551–566. doi: 10.1007/BF02353461. [DOI] [PubMed] [Google Scholar]
- Fedorov VV. Theory of Optimal Experiments. Academic Press; New York: 1972. [Google Scholar]
- Ford I, Torsney B, Wu CFJ. The use of canonical form in the construction of locally optimum designs for nonlinear problems. Journal of the Royal Statistical Society, Ser B. 1992;54:569–583. [Google Scholar]
- Haines LM. A geometric approach to optimal design for one-parameter non-linear models. Journal of the Royal Statistical Society, Series B. 1995;57(3):575–598. [Google Scholar]
- Kiefer J. General equivalence theory for optimum designs. Annals of Statistics. 1974;2:849–879. [Google Scholar]
- Müller CH, Pázman A. Applications of necessary and sufficient conditions for maximum efficient designs. Metrika. 1998;48:1–19. [Google Scholar]
- Müller W, Pázman A. Measures for designs in experiments with correlated errors. Biometrika. 2003;90(2):423–434. [Google Scholar]
- Näther W. Exact design for regression models with correlated errors. Statistics. 1985;16(4):479–484. [Google Scholar]
- Pukelsheim F. Optimal Design of Experiments, Classics in Applied Mathematics. Society for Industrial and Applied Mathematics; Philadelphia, PA: 2006. [Google Scholar]
- Pukelsheim F, Rieder S. Efficient rounding of approximate designs. Biometrika. 1992;79:763–770. [Google Scholar]
- Retout S, Mentré F. Further developments of the Fisher information matrix in nonlinear mixed-effects models with evaluation in population pharmacokinetics. Journal of Biopharmaceutical Statistics. 2003;13:209–227. doi: 10.1081/BIP-120019267. [DOI] [PubMed] [Google Scholar]
- Rowland M. Clinical Pharmacokinetics: Concepts and Applications. Williams and Wilkins; Baltimore: 1993. [Google Scholar]
- Sacks J, Ylvisaker ND. Designs for regression problems with correlated errors; many parameters. Ann Math Statist. 1968;39:49–69. [Google Scholar]
- Schmelter T. Considerations on group-wise identical designs for linear mixed models. Journal of Statistical Planning and Inference. 2007;137:4003–4010. [Google Scholar]
- Sheiner L, Rosenberg B, Marathe V. Estimation of population characteristics of pharmacokinetic parameters from routine clinical data. Journal of Pharmacokinetics and Biopharmaceutics. 1977;5:445–479. doi: 10.1007/BF01061728. [DOI] [PubMed] [Google Scholar]
- Silvey SD. Optimal Design. Chapman and Hall; London: 1980. [Google Scholar]
- Studden WJ. Elfving's theorem revisited. Journal of Statistical Planning and Inference. 2005;130:85–94. [Google Scholar]
