Author manuscript; available in PMC: 2015 Jul 14.
Published in final edited form as: J Biom Biostat. 2014;5(5):211. doi: 10.4172/2155-6180.1000211

A New Robust Method for Nonlinear Regression

MA Tabatabai 1, JJ Kengwoung-Keumo 2, WM Eby 3, S Bae 4, U Manne 5, M Fouad 4, KP Singh 4,*
PMCID: PMC4501042  NIHMSID: NIHMS695172  PMID: 26185732

Abstract

Background

When outliers are present, the least squares method of nonlinear regression performs poorly. The main purpose of this paper is to provide a robust alternative technique to the Ordinary Least Squares nonlinear regression method. This new robust nonlinear regression method can provide accurate parameter estimates when outliers and/or influential observations are present.

Method

Real and simulated data for drug concentration and tumor size-metastasis are used to assess the performance of this new estimator. Monte Carlo simulations are performed to evaluate the robustness of our new method in comparison with the Ordinary Least Squares method.

Results

In simulated data with outliers, the new estimator of the regression parameters seems to outperform Ordinary Least Squares with respect to bias, mean squared error, and mean estimated parameter values. Two algorithms are proposed. In addition, for computational ease and illustration, a Mathematica program is provided in the Appendix.

Conclusion

The accuracy of our robust technique is superior to that of Ordinary Least Squares. Its robustness and computational simplicity make this new technique a more appropriate and useful tool for the analysis of nonlinear regression.

Keywords: Robust nonlinear regression, Least squares estimator, Growth models, Tumor size, Metastasis, Monte Carlo simulation

Background

Nonlinear regression is one of the most widely used models for analyzing the effect of explanatory variables on a response variable, and it has many applications in biomedical research. When outliers or influential observations are present in the data, the ordinary least squares method can produce misleading parameter estimates, and the resulting hypothesis tests and predictions may no longer be reliable. The main purpose of robust nonlinear regression is to fit a model that gives resilient results in the presence of influential observations, leverage points, and/or outliers. Rousseeuw and Leroy [1] defined vertical outliers as data points with outlying values in the direction of the response variable, whereas leverage points are outliers in the direction of the covariates. An observation is influential if its removal would substantially alter the parameter estimates. Edgeworth [2] proposed the Least Absolute Deviation as a robust method. Huber [3] introduced the method of M-estimation. Rousseeuw [4] introduced the Least Trimmed Squares estimates. The S-estimator was introduced by Rousseeuw and Yohai [5]. Yohai and Zamar [6] introduced the τ-estimator of linear regression coefficients, which combines high efficiency with a high breakdown point. Tabatabai and Argyros [7] extended the τ-estimates to nonlinear regression models. Stromberg [8] introduced algorithms for Yohai's MM-estimator and Rousseeuw's least median of squares estimator of nonlinear regression. Tabatabai et al. [9] introduced the TELBS robust linear regression method.

In medical, biological, and pharmaceutical research and development, nonlinear regression analysis has been a major tool for investigating the effect of multiple explanatory variables on a response variable when the data follow a nonlinear pattern. When outliers and influential observations are present, nonlinear least squares performs poorly. In this paper we introduce a new robust nonlinear regression method capable of handling such cases. Minn et al. [10] showed that lung tumor size can lead to metastasis and that aggressive tumor growth is a marker for cells destined to metastasize; they validated this by analyzing a lung metastasis gene-expression signature using a nonlinear model. The breast cancer study of Arisio et al. [11] confirmed that tumor size is an important predictor of axillary lymph node metastases. Ramaswamy et al. [12] found that a gene-expression signature is significantly associated with metastasis in solid tumors carrying such gene expressions. Maffuz et al. [13] showed that pure ductal carcinoma in situ is not associated with lymphatic metastasis, independently of tumor size. Hense et al. [14] found that the occurrence of primary metastases in Ewing tumors is related to tumor size, pelvic site, and malignant peripheral neuroectodermal tumors. Umbreit et al. [15] studied a group of patients who had undergone surgical resection for a unilateral, sporadic renal tumor and concluded that tumor size is significantly associated with metastasis in patients with renal masses. Wu et al. [16] retrospectively analyzed 666 patients with nasopharyngeal carcinoma and concluded that tumor volume was correlated with cervical lymph node metastasis as well as with distant metastasis after radiation therapy. In computer vision, robust regression methods have been used extensively to estimate surface model parameters in small image regions and the imaging geometry of multiple cameras. Coras et al. [17] used nonlinear regression to show that micromolar doses of a peroxisome proliferator-activated receptor γ agonist reduce glioma cell proliferation.

Roth [18] applied nonlinear sigmoidal curves to monitor the accumulation of polymerase chain reaction products at the end of each cycle by fluorescence. In human blood samples, Kropf et al. [19] found a nonlinear binding association between transforming growth factor beta1 (TGF-β1) and α2-macroglobulin, as well as between TGF-β1 and latency-associated peptide (LAP). Yang and Richmond [20] used nonlinear least squares to estimate the effective concentration of unlabeled human interferon-inducible protein 10 that yields 50% maximal binding of iodinated protein 10 to the chemokine receptor CXCR3. Hao et al. [21] examined the significance of the Nav1.5 protein in cellular processes by applying nonlinear regressions relating the gene expression of Nav1.5 to TGF-β1 and to vimentin. TGF-β families are important factors in the regulation of tumor initiation, progression, and metastatic activity [22]. Coras et al. [17] applied nonlinear regression models to show that troglitazone tends to inhibit TGF-β1 release in glioma cell culture.

This paper introduces a new robust nonlinear regression estimator. The new method has a bounded influence function, a high breakdown point, and high asymptotic efficiency under the normal distribution, and it estimates the parameters of a nonlinear regression so that they are close to the estimates that would have been obtained in the absence of outliers. In addition, the method is computationally simple enough to be used by practitioners.

Methods and Models

We begin with the introduction of our new robust nonlinear regression model. The introduction of the model is followed by two algorithms describing its implementation. We then apply this new model to a real data set with an outlier present. In addition, we will analyze a problem involving tumor size and metastases with and without outliers. Monte Carlo simulations are also performed to evaluate the robustness of our method, in comparison with the ordinary least squares method.

Robust nonlinear regression model

Consider the general nonlinear model of the form

y_i = g(\theta; x_i) + \epsilon_i, \quad i = 1, 2, \ldots, n,

where y_1, y_2, \ldots, y_n is a sample of n observations with k predictor variables and parameter vector \theta = (\theta_1, \theta_2, \ldots, \theta_p). The errors \epsilon_i are random variables. In a designed experiment the x_{ij} are fixed, whereas in an observational setting they are random variables; the predictors can therefore be fixed, random, or mixed. The ordinary least squares estimate of the parameter vector \theta is given by

\hat{\theta} = \arg\min_{\theta \in \mathbb{R}^p} \sum_{i=1}^{n} \epsilon_i^2,

and the robust estimate of the parameter vector θ is derived by

\hat{\theta} = \arg\min_{\theta \in \mathbb{R}^p} \sum_{i=1}^{n} \rho_\omega(t_i) \, L_i, \quad (1)

where the function ρω(x) is defined as

\rho_\omega(x) = 1 - \operatorname{Sech}(\omega x),

and the positive real number ω is called the tuning constant. The function Sech(·) is the hyperbolic secant function and ti's are defined by

t_i = \frac{(1 - h_{ii}) \big( y_i - g(\theta; x_i) \big)}{\sigma}, \quad (2)

where σ is the error standard deviation and hii's are the diagonal elements of the matrix H of the form

H = \operatorname{Diag}\!\left[ G \left( G^{t} G \right)^{-1} G^{t} \right],

where the matrix G is defined as

G = \begin{pmatrix} \frac{\partial g(\theta; x_1)}{\partial \theta_1} & \frac{\partial g(\theta; x_1)}{\partial \theta_2} & \cdots & \frac{\partial g(\theta; x_1)}{\partial \theta_p} \\ \frac{\partial g(\theta; x_2)}{\partial \theta_1} & \frac{\partial g(\theta; x_2)}{\partial \theta_2} & \cdots & \frac{\partial g(\theta; x_2)}{\partial \theta_p} \\ \vdots & \vdots & & \vdots \\ \frac{\partial g(\theta; x_n)}{\partial \theta_1} & \frac{\partial g(\theta; x_n)}{\partial \theta_2} & \cdots & \frac{\partial g(\theta; x_n)}{\partial \theta_p} \end{pmatrix} = \begin{pmatrix} G_1 & G_2 & \cdots & G_p \end{pmatrix}.

For j =1,2,...,k, we define

M_j = \operatorname{Median}\{ x_{1j}, x_{2j}, \ldots, x_{nj} \},

and for i =1,2,..., n, we define

L_i = \sum_{j=1}^{k} \operatorname{Max}\{ M_j, x_{ij} \}.

If σ is unknown, one may use one of the following two estimators of σ which were proposed by Rousseeuw and Croux [23].

\hat{\sigma} = 1.1926 \, \underset{1 \le i \le n}{\operatorname{Median}} \left( \underset{1 \le j \le n}{\operatorname{Median}} \, |r_i - r_j| \right) \quad (3)

or

\hat{\sigma} = 2.2219 \, \left\{ |r_i - r_j| ;\ i < j,\ i, j = 1, \ldots, n \right\}_{(l)}, \quad (4)

where r_i = y_i - g(\hat{\theta}; x_i), l = \binom{[n/2] + 1}{2} is a binomial coefficient, and \{\cdot\}_{(l)} denotes the l-th order statistic.

The above estimators of σ have high breakdown points. Under the normality assumption for the error terms, the estimators given in (3) and (4) have higher efficiency than the median absolute deviation (MAD). In this paper all computations are performed using formula (3).
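As a computational illustration, the two scale estimators in (3) and (4) (the Sn and Qn estimators of Rousseeuw and Croux [23]) can be evaluated by brute force from a vector of residuals. The following Python sketch is ours, not part of the original Mathematica appendix, and the function names are illustrative.

```python
import numpy as np
from math import comb

def sigma_hat_sn(r):
    """Formula (3): 1.1926 * Median_i ( Median_j |r_i - r_j| )."""
    r = np.asarray(r, dtype=float)
    diffs = np.abs(r[:, None] - r[None, :])      # all pairwise |r_i - r_j|
    return 1.1926 * np.median(np.median(diffs, axis=1))

def sigma_hat_qn(r):
    """Formula (4): 2.2219 * l-th order statistic of {|r_i - r_j| : i < j}."""
    r = np.asarray(r, dtype=float)
    n = len(r)
    i, j = np.triu_indices(n, k=1)               # index pairs with i < j
    pair_diffs = np.sort(np.abs(r[i] - r[j]))
    l = comb(n // 2 + 1, 2)                      # binomial coefficient ([n/2]+1 choose 2)
    return 2.2219 * pair_diffs[l - 1]            # l-th order statistic (1-based)

# Example: residuals from a preliminary fit, with one gross outlier
r = [0.3, -0.1, 0.2, -0.4, 5.0, 0.1]
print(sigma_hat_sn(r), sigma_hat_qn(r))
```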

The function ρω: R→R is a differentiable function satisfying the following properties:

  1. \rho_\omega(0) = 0,

  2. \rho_\omega is bounded,

  3. \forall x \in \mathbb{R}, \ \rho_\omega(x) \ge 0,

  4. \forall x \in \mathbb{R}, \ \rho_\omega(-x) = \rho_\omega(x),

  5. \lim_{x \to -\infty} \rho_\omega(x) = \lim_{x \to \infty} \rho_\omega(x) = 1,

  6. \forall a, b \in \mathbb{R}, \ |a| > |b| \Rightarrow \rho_\omega(a) \ge \rho_\omega(b),

  7. \lim_{x \to \infty} \frac{d\rho_\omega(x)}{dx} = 0,

  8. \forall \kappa > 0, \ \lim_{x \to \infty} \frac{\rho_\omega(\kappa x)}{\rho_\omega(x)} = 1.

Taking the partial derivatives of (1) with respect to parameters and setting them equal to zero results in the following system of equations

\sum_{i=1}^{n} \psi_\omega(t_i) \, L_i \, \frac{\partial t_i}{\partial \theta_j} = 0, \quad j = 1, 2, \ldots, p, \quad (5)

where ψω is the derivative of ρω which is equal to

\psi_\omega(x) = \omega \operatorname{Sech}(\omega x) \operatorname{Tanh}(\omega x).
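The loss and its derivative are straightforward to code; a minimal sketch in Python (our illustration, using numpy's hyperbolic functions):

```python
import numpy as np

def rho(x, omega):
    """rho_omega(x) = 1 - sech(omega * x): bounded, even, and zero at the origin."""
    return 1.0 - 1.0 / np.cosh(omega * x)

def psi(x, omega):
    """psi_omega(x) = d rho / dx = omega * sech(omega * x) * tanh(omega * x)."""
    return omega * np.tanh(omega * x) / np.cosh(omega * x)

x = np.linspace(-5.0, 5.0, 11)
print(rho(x, 1.0))   # approaches 1 for large |x|
print(psi(x, 1.0))   # redescends toward 0 for large |x|
```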

Define the weights wi as

w_i = \frac{\psi_\omega(t_i) \, (1 - h_{ii})}{\sigma \, \big( y_i - g(\theta; x_i) \big)} \, L_i. \quad (6)

Then for j =1, 2,..., p, the equation (5) can be written as

\sum_{i=1}^{n} w_i \, \big( y_i - g(\theta; x_i) \big) \, \frac{\partial g(\theta; x_i)}{\partial \theta_j} = 0.

The matrix of weights W is a diagonal matrix whose entries on the main diagonal are w_1, w_2, \ldots, w_n, and the estimator of the parameter vector θ is given by

\hat{\theta}(X, y) = \left( G^{t} W G \right)^{-1} G^{t} W y.

If g(\theta; x_i) is a linear function of the parameters, then the above model is identical to the TELBS robust linear regression model. Asymptotically, \hat{\theta} has a normal distribution with mean \theta and variance-covariance matrix of the form

V = \frac{\sigma^{2} \, E\big( \psi_\omega^{2}(t) \big)}{\big[ E\big( \psi_\omega'(t) \big) \big]^{2}} \, E\big( (G^{t} G)^{-1} \big),

where

E\big[ \psi_\omega'(t) \big] = \int_{-\infty}^{\infty} \psi_\omega'(t) \, \frac{e^{-t^{2}/2}}{\sqrt{2\pi}} \, dt

and

E\big[ \psi_\omega^{2}(t) \big] = \int_{-\infty}^{\infty} \psi_\omega^{2}(t) \, \frac{e^{-t^{2}/2}}{\sqrt{2\pi}} \, dt.

The derivative \psi_\omega'(t) is given by

\psi_\omega'(t) = \omega^{2} \big[ \operatorname{Sech}^{3}(\omega t) - \operatorname{Sech}(\omega t) \operatorname{Tanh}^{2}(\omega t) \big].

Under the assumption of normality for the underlying distribution, the asymptotic efficiency, Aeff, is defined as

A_{\mathrm{eff}} = \frac{\big( E[\psi_\omega'(t)] \big)^{2}}{E\big[ \psi_\omega^{2}(t) \big]}. \quad (7)

The tuning constant ω can be calculated by solving equation (7) for ω.
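For example, the expectations in (7) can be evaluated by quadrature over the standard normal density and the resulting efficiency equation solved with a scalar root finder. The sketch below is our illustration (the bracketing interval and the 95% target efficiency are assumptions), not the authors' code.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def psi(t, omega):
    """psi_omega(t) = omega * sech(omega*t) * tanh(omega*t)."""
    return omega * np.tanh(omega * t) / np.cosh(omega * t)

def psi_prime(t, omega):
    """psi'_omega(t) = omega^2 * (sech^3(omega*t) - sech(omega*t) * tanh^2(omega*t))."""
    s, th = 1.0 / np.cosh(omega * t), np.tanh(omega * t)
    return omega**2 * (s**3 - s * th**2)

phi = lambda t: np.exp(-t**2 / 2.0) / np.sqrt(2.0 * np.pi)   # standard normal density

def aeff(omega):
    """Asymptotic efficiency (7): (E[psi'])^2 / E[psi^2] under N(0, 1)."""
    e_prime = quad(lambda t: psi_prime(t, omega) * phi(t), -np.inf, np.inf)[0]
    e_sq = quad(lambda t: psi(t, omega)**2 * phi(t), -np.inf, np.inf)[0]
    return e_prime**2 / e_sq

# Tuning constant giving 95% asymptotic efficiency at the normal model
omega_95 = brentq(lambda w: aeff(w) - 0.95, 0.01, 5.0)
print(omega_95)
```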

An estimate for the variance-covariance matrix is derived and given as follows

\hat{V} = \frac{n^{2} \hat{\sigma}^{2} \sum_{i=1}^{n} \psi_\omega^{2}(t_i)}{(n - p) \left( \sum_{i=1}^{n} \psi_\omega'(t_i) \right)^{2}} \, (G^{t} G)^{-1}.

The robust deviance is defined as

D = 2 \hat{\sigma}^{2} \sum_{i=1}^{n} \big( 1 - \operatorname{Sech}(\omega t_i) \big) L_i.

The deviance plays a major role in model fitting, and smaller values of the deviance are preferred. Following the Akaike information criterion [24] and Ronchetti [25], the robust equivalent of AIC, denoted AICR, is given by

AIC_{R} = \frac{D}{\hat{\sigma}^{2}} + 2p \, \frac{E\big[ \psi_\omega^{2}(t) \big]}{E\big[ \psi_\omega'(t) \big]},

and the Robust Schwarz Information Criterion BICR is given by

BIC_{R} = \frac{D}{\hat{\sigma}^{2}} + 2p \ln(n).

A robust coefficient of determination is given by

R^{2} = 1 - \left( \frac{ \underset{1 \le i \le n}{\operatorname{Median}} \, |r_i| }{ \underset{1 \le i \le n}{\operatorname{Median}} \, |y_i| } \right)^{2}.

For more details, see Rousseeuw and Leroy [1].
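Given the scaled residuals ti, the weights Li, the raw residuals ri, and the expectation ratio appearing in AICR, these fit criteria reduce to a few lines of code. The sketch below follows our reading of the formulas above; the argument names are illustrative.

```python
import numpy as np

def robust_deviance(t, L, sigma_hat, omega):
    """D = 2 * sigma_hat^2 * sum_i (1 - sech(omega * t_i)) * L_i."""
    t, L = np.asarray(t), np.asarray(L)
    return 2.0 * sigma_hat**2 * np.sum((1.0 - 1.0 / np.cosh(omega * t)) * L)

def robust_criteria(y, r, t, L, sigma_hat, omega, p, ratio_psi2_psiprime):
    """Return D, AICR, BICR, and the robust R^2 as defined above.

    ratio_psi2_psiprime = E[psi_omega^2(t)] / E[psi'_omega(t)] under N(0, 1),
    which can be computed with the quadrature sketch shown earlier.
    """
    y, r = np.asarray(y), np.asarray(r)
    n = y.size
    D = robust_deviance(t, L, sigma_hat, omega)
    aic_r = D / sigma_hat**2 + 2 * p * ratio_psi2_psiprime
    bic_r = D / sigma_hat**2 + 2 * p * np.log(n)
    r2 = 1.0 - (np.median(np.abs(r)) / np.median(np.abs(y)))**2
    return D, aic_r, bic_r, r2
```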

There are numerous variable selection techniques available in the literature. One may use a stepwise procedure involving forward selection or backward elimination. For each set S ⊆ {x_1, x_2, \ldots, x_p} of explanatory variables, the robust final prediction error of Maronna et al. [26] is denoted by RFPE(S) and is defined as

RFPE(S) = \frac{\sum_{i=1}^{n} \rho_\omega(t_i)}{n} + \frac{\#(S) \sum_{i=1}^{n} \psi_\omega^{2}(t_i)}{n \sum_{i=1}^{n} \psi_\omega'(t_i)},

where \#(S) is the number of elements in the set S. In forward selection or backward elimination, choose the variable whose inclusion or deletion results in the smallest value of RFPE. To perform hypothesis testing, let \Omega \subseteq \mathbb{R}^{p} be the parameter space and \{\theta_{j_1}, \theta_{j_2}, \ldots, \theta_{j_q}\} be a subset of \{\theta_1, \theta_2, \ldots, \theta_p\}.

Define \Omega_0 = \{ \theta \in \Omega : \theta_{j_1} = \theta_{j_2} = \cdots = \theta_{j_q} = 0 \} and the function f(\theta) as

f(\theta) = \sum_{i=1}^{n} \rho_\omega(t_i) \, L_i.

Then a robust likelihood-ratio-type test statistic for testing the null hypothesis H_0 : \theta \in \Omega_0 against the alternative H_1 : \theta \in \Omega_0^{c} is

S_n^{2} = \frac{2 \left( \underset{\theta \in \Omega_0}{\operatorname{Sup}} f(\theta) - \underset{\theta \in \Omega}{\operatorname{Sup}} f(\theta) \right)}{q}.

For more information, the reader is referred to Hampel et al. [27]. Asymptotically under the null hypothesis, \frac{E[\psi_\omega'(t)]}{E[\psi_\omega^{2}(t)]} S_n^{2} has a chi-square distribution with q degrees of freedom. The Wald-type test statistic is defined as W_n^{2} = n \, (\hat{\theta}_{j_1}, \hat{\theta}_{j_2}, \ldots, \hat{\theta}_{j_q}) \, V_q^{-1} \, (\hat{\theta}_{j_1}, \hat{\theta}_{j_2}, \ldots, \hat{\theta}_{j_q})^{t}, where \frac{1}{n} V_q is the asymptotic variance-covariance matrix of the vector (\hat{\theta}_{j_1}, \hat{\theta}_{j_2}, \ldots, \hat{\theta}_{j_q}). The null distribution of W_n^{2} is also asymptotically chi-square with q degrees of freedom.

Either of the following two robust algorithms can be used to estimate the parameter vector θ and the standard deviation σ of a nonlinear regression model:

Algorithm I

  1. Set j = 0 and \hat{\sigma}^{(0)} = 1. Calculate the initial estimate \hat{\theta}^{(0)} of the parameter vector \theta by minimizing \sum_{i=1}^{n} \rho_\omega\big( y_i - g(\theta; x_i) \big) L_i.

  2. Set j = j + 1 and calculate the following:
    1. \hat{\sigma}^{(j)}, using formula (3) or (4).
    2. The matrix \hat{G}^{(j)}, evaluated at \hat{\theta}^{(j-1)}.
    3. The matrix H^{(j)} = \operatorname{Diag}\big[ \hat{G}^{(j)} \big( (\hat{G}^{(j)})^{t} \hat{G}^{(j)} \big)^{-1} (\hat{G}^{(j)})^{t} \big].
    4. The i-th diagonal element of H^{(j)}, denoted h_i^{(j)}.
  3. Calculate \hat{\theta}^{(j)} by minimizing f(\theta) = \sum_{i=1}^{n} \rho_\omega\big( t_i^{(j)} \big) L_i, where t_i^{(j)} = \frac{(1 - h_i^{(j)}) \big( y_i - g(\theta; x_i) \big)}{\hat{\sigma}^{(j)}}.

  4. If convergence occurs, stop. Otherwise go to step 2 and continue the process.

Convergence occurs when \|\hat{\theta}^{(j)} - \hat{\theta}^{(j-1)}\| \to 0. The convergence criterion is to stop the algorithm when \|\hat{\theta}^{(j)} - \hat{\theta}^{(j-1)}\| \le \varepsilon or \frac{\|\hat{\theta}^{(j)} - \hat{\theta}^{(j-1)}\|}{\|\hat{\theta}^{(j)}\|} \le \varepsilon. One may choose the value \varepsilon = 0.00001.
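A compact Python sketch of Algorithm I for a single-predictor model is given below. The forward-difference Jacobian, the use of the Sn scale (3), the derivative-free optimizer, and the placeholder tuning constant omega are our illustrative choices; this is not the authors' Mathematica implementation.

```python
import numpy as np
from scipy.optimize import minimize

def rho(u, omega):
    """Hyperbolic secant loss: rho_omega(u) = 1 - sech(omega * u)."""
    return 1.0 - 1.0 / np.cosh(omega * u)

def sigma_sn(r):
    """Robust scale estimate, formula (3)."""
    d = np.abs(r[:, None] - r[None, :])
    return 1.1926 * np.median(np.median(d, axis=1))

def num_jacobian(g, theta, x, eps=1e-6):
    """Forward-difference Jacobian with G[i, j] = d g(theta; x_i) / d theta_j."""
    base = g(theta, x)
    G = np.empty((x.size, theta.size))
    for j in range(theta.size):
        tp = theta.copy()
        tp[j] += eps
        G[:, j] = (g(tp, x) - base) / eps
    return G

def robust_fit(g, x, y, theta0, omega=1.0, max_iter=100, tol=1e-5):
    """Algorithm I for a single-predictor model y = g(theta; x) + error."""
    L = np.maximum(np.median(x), x)                            # L_i = Max{M_1, x_i} for k = 1
    theta = np.asarray(theta0, dtype=float)

    # Step 1: initial estimate with sigma^(0) = 1 and h_ii = 0
    obj0 = lambda th: np.sum(rho(y - g(th, x), omega) * L)
    theta = minimize(obj0, theta, method="Nelder-Mead").x

    for _ in range(max_iter):
        r = y - g(theta, x)
        sigma = sigma_sn(r)                                    # step 2a
        G = num_jacobian(g, theta, x)                          # step 2b
        h = np.diag(G @ np.linalg.pinv(G.T @ G) @ G.T)         # steps 2c-2d
        obj = lambda th: np.sum(rho((1 - h) * (y - g(th, x)) / sigma, omega) * L)
        theta_new = minimize(obj, theta, method="Nelder-Mead").x   # step 3
        if np.linalg.norm(theta_new - theta) <= tol:           # step 4
            return theta_new
        theta = theta_new
    return theta

# Example with the Michaelis-Menten model used later in the paper
g = lambda th, x: th[0] * x / (th[1] + x)
rng = np.random.default_rng(0)
x = np.arange(1.0, 21.0)
y = g([5.0, 1.0], x) + rng.normal(size=x.size)
y[3] += 50.0                                                   # plant a gross outlier in Y
print(robust_fit(g, x, y, theta0=[1.0, 1.0]))
```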

Algorithm II

  1. Set j = 0 and \hat{\sigma}^{(0)} = 1. Calculate the initial estimate \hat{\theta}^{(0)} of the parameter vector \theta by minimizing \sum_{i=1}^{n} \rho_\omega\big( y_i - g(\theta; x_i) \big) L_i.

  2. Set j = j + 1. For 1 \le i \le n, calculate \hat{\sigma}^{(j)}, t_i^{(j)} = \frac{(1 - h_i^{(j)}) \big( y_i - g(\hat{\theta}^{(j-1)}; x_i) \big)}{\hat{\sigma}^{(j)}}, and the weight w_i^{(j)} = \frac{\psi_\omega\big( t_i^{(j)} \big) (1 - h_i^{(j)})}{\hat{\sigma}^{(j)} \big( y_i - g(\hat{\theta}^{(j-1)}; x_i) \big)} L_i, and use them to form the weight matrix W^{(j)}.

  3. Use the information from step 2 to calculate \hat{\theta}^{(j)} = \big( (\hat{G}^{(j)})^{t} W^{(j)} \hat{G}^{(j)} \big)^{-1} (\hat{G}^{(j)})^{t} W^{(j)} y.

  4. If convergence occurs, stop. Otherwise go to step 2 and continue the process.

Convergence occurs when \|\hat{\theta}^{(j)} - \hat{\theta}^{(j-1)}\| \to 0. The convergence criterion is to stop the algorithm when \|\hat{\theta}^{(j)} - \hat{\theta}^{(j-1)}\| \le \varepsilon or \frac{\|\hat{\theta}^{(j)} - \hat{\theta}^{(j-1)}\|}{\|\hat{\theta}^{(j)}\|} \le \varepsilon. One may choose the value \varepsilon = 0.00001.

Drug concentration data

Kenakin [28] used a set of responses to the concentration of an agonist in a functional assay and fit the following model to the data. In this data set, observation 5 is an outlier in the response direction:

\text{Response} = \text{Basal} + \frac{\text{Max} - \text{Basal}}{1 + 10^{\, n \left( \log(EC_{50}) - \log(A) \right)}} \quad (8)

Table 1 shows the observed responses and the least squares predictions, together with our results for the robust fit using (1) and for the fitted hyperbolastic model of type III (H3). For this example, the new robust technique is an effective regression tool for estimating model parameters in the presence of an outlier. Figure 1 shows the fitted curve using the hyperbolastic model of type III (H3), and Figure 2 shows the least squares fitted curve for the concentration data based on formula (8).

Table 1.

Parameter Estimates for the Concentration Data.

OBS  Concentration  Response  Least Squares  Calculated using (1)  Hyperbolastic H3
1    0.01           2         2              0.8                   2
2    0.03           8         8.7            5.2                   8
3    0.1            28        27.8           28                    27.9
4    0.3            59        59.0           62.6                  61.4
5    1              95        84.3           77.9                  78.1
6    3              78        84.3           80.4                  78.4
7    10             80        84.3           80.8                  78.4
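For reference, the least squares column of Table 1 can be reproduced in spirit by fitting model (8) directly to the concentration-response pairs with a standard nonlinear least squares routine. The sketch below uses scipy's curve_fit; the starting values are our own guesses and the printed estimates are not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Concentration-response data from Table 1 (observation 5 is the outlier)
A = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
response = np.array([2.0, 8.0, 28.0, 59.0, 95.0, 78.0, 80.0])

def model(A, basal, maximum, n, log_ec50):
    """Equation (8): four-parameter logistic curve in log10 concentration."""
    return basal + (maximum - basal) / (1.0 + 10.0 ** (n * (log_ec50 - np.log10(A))))

p0 = [2.0, 80.0, 1.0, -0.5]                     # rough starting values
params, _ = curve_fit(model, A, response, p0=p0, maxfev=10000)
print(dict(zip(["Basal", "Max", "n", "log10(EC50)"], params)))
print(model(A, *params))                        # fitted responses, cf. the Least Squares column
```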

Figure 1. Fitted curve using the hyperbolastic model of type III (H3).

Figure 2. The least squares fitted curve for the concentration data.

Tumor metastasis

The data in Table 2 consist of 12 observations. The response variable is the fraction of breast cancer patients with metastases and the predictor variable is tumor size. Table 2 is taken from Michaelson et al. [29]; the data were originally collected by Tabar et al. [30-32] and Tubiana et al. [33,34]. To assess the robustness of our new method for a special class of nonlinear growth models, we use this tumor metastasis data set, which is free of outliers. We first fit a model to the data using both the robust method and least squares when no outlier is present. We then plant outliers in the X direction, the Y direction, and both the X and Y directions. In the X direction we change the x value of observation 12 from 90 to 2. In the Y direction we change the y value of observation 6 from 0.55 to 3, and in both the X and Y directions we change the x value of observation 12 from 90 to 2 and the y value of observation 7 from 0.56 to 3.

Table 2.

Tumor Size Versus Fraction Metastasized Data.

Tumor size x 12 17 17 25 30 39 40 50 60 70 80 90
Fraction Metastasized y 0.13 0.20 0.27 0.45 0.42 0.55 0.56 0.66 0.78 0.83 0.81 0.92

For illustrative purposes, we have fitted the hyperbolastic model of type II, the Gompertz model, and the logistic model. These models have been used in the past to monitor cancer progression and regression. Each model has three parameters θ1, θ2, and θ3, with θ1 and θ2 positive, and the εi are random errors. The response yi is the fraction metastasized and xi is the tumor size for individual i. The left-hand graphs in Figures 3-5 are fitted curves using our proposed robust nonlinear regression technique, and the right-hand graphs of Figures 3-5 were drawn using the nonlinear least squares technique after planting outliers in the X direction, the Y direction, and both the X and Y directions. When there is no outlier in the data, all models perform well whether the robust method or least squares is used. However, when we plant outliers in the X, Y, and/or XY directions, the least squares fits become unacceptable, whereas the robust method performs well for all models.

Figure 3. Hyperbolastic model of type II.

Figure 5. The logistic model.

The hyperbolastic model of type II, or simply H2, has the form y_i = \frac{\theta_1}{1 + \theta_2 \operatorname{arcsinh}\!\big[ \exp(-\theta_3 x_i) \big]} + \epsilon_i.

The Gompertz model is of the form y_i = \theta_1 \exp\!\big[ -\theta_2 \exp(-\theta_3 x_i) \big] + \epsilon_i.

The logistic model is of the form y_i = \frac{\theta_1}{1 + \theta_2 \exp(-\theta_3 x_i)} + \epsilon_i.
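Written as code, the three candidate curves are one-liners that can be passed to an ordinary or a robust fitting routine. The sign convention exp(-θ3 xi) follows the standard forms of these models and is our assumption where the typeset signs were ambiguous.

```python
import numpy as np

def hyperbolastic_h2(x, t1, t2, t3):
    """Hyperbolastic model of type II."""
    return t1 / (1.0 + t2 * np.arcsinh(np.exp(-t3 * x)))

def gompertz(x, t1, t2, t3):
    """Gompertz model."""
    return t1 * np.exp(-t2 * np.exp(-t3 * x))

def logistic(x, t1, t2, t3):
    """Logistic model."""
    return t1 / (1.0 + t2 * np.exp(-t3 * x))

# Tumor size versus fraction metastasized (Table 2)
x = np.array([12, 17, 17, 25, 30, 39, 40, 50, 60, 70, 80, 90], dtype=float)
y = np.array([0.13, 0.20, 0.27, 0.45, 0.42, 0.55, 0.56, 0.66, 0.78, 0.83, 0.81, 0.92])
# Each function can be handed to, e.g., scipy.optimize.curve_fit or a robust routine.
```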

Simulation

We perform simulation experiments to evaluate the robustness of our new nonlinear regression method compared to the least squares method. We have simulated the biochemistry model known as Michaelis-Menten kinetics. In biochemistry this model expresses the reaction velocity V as a function of concentration of substrate C as

V = \frac{\alpha C}{\beta + C},

where the parameter α denotes the maximum reaction velocity and β is the substrate concentration at which the initial velocity V0 is 50% of the maximum reaction velocity. The larger the parameter β, the lower the efficiency of the interaction between the substrate and the enzyme. This model has also been used in many biological systems, such as gene regulatory systems. To investigate the robustness of our new method relative to the method of least squares, we considered the nonlinear Michaelis-Menten equation of the form

y_i = \frac{\theta_1 x_i}{\theta_2 + x_i} + \epsilon_i, \quad i = 1, 2, \ldots, n,

where yi is the response variable and xi is fixed. In our simulations we set xi = i and let εi follow the standard normal distribution with mean 0 and standard deviation 1. We performed 1000 repetitions using two sample sizes, n = 20 and n = 50. The outliers were randomly placed in the direction of X, of Y, and of both X and Y, with contamination levels of 0%, 10%, 20%, 30%, and 40%. In this simulation the parameter values are θ1 = 5 and θ2 = 1, and the software Mathematica is used throughout. To evaluate the robustness of the estimators, we randomly choose 10%, 20%, 30%, or 40% of the simulated observations and contaminate the selected data by magnifying their size by a factor of 100 in the direction of the explanatory variable X, the response variable Y, or both. Finally, we estimate the bias and mean squared error using the following equations:

\text{bias} = \frac{\sum_{l=1}^{m} \hat{\theta}_l}{m} - \theta,

where m is the number of iterations in the simulation. The mean squared error is estimated by

MSE = \frac{\sum_{l=1}^{m} \big( \hat{\theta}_l - \theta \big)^{2}}{m}.
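A minimal Monte Carlo sketch of the contamination scheme and of the bias/MSE summaries is given below for the least squares fit of the Michaelis-Menten model; the robust fit from the algorithm sketch above can be substituted for curve_fit to reproduce the comparison reported in Tables 3-5. The seed, the skipping of non-convergent replications, and the default arguments are our illustrative choices.

```python
import numpy as np
from scipy.optimize import curve_fit

def mm(x, t1, t2):
    """Michaelis-Menten mean function."""
    return t1 * x / (t2 + x)

def simulate(n=20, m=1000, contam=0.10, direction="x", theta=(5.0, 1.0), seed=1):
    """Bias and MSE of least squares estimates when outliers are magnified by 100."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(m):
        x = np.arange(1.0, n + 1.0)
        y = mm(x, *theta) + rng.normal(size=n)
        idx = rng.choice(n, size=int(contam * n), replace=False)
        if direction in ("x", "xy"):
            x[idx] *= 100.0                     # contaminate the explanatory variable
        if direction in ("y", "xy"):
            y[idx] *= 100.0                     # contaminate the response
        try:
            est, _ = curve_fit(mm, x, y, p0=[1.0, 1.0], maxfev=5000)
            estimates.append(est)
        except RuntimeError:
            continue                            # skip replications that fail to converge
    est = np.array(estimates)
    bias = est.mean(axis=0) - np.array(theta)
    mse = ((est - np.array(theta)) ** 2).mean(axis=0)
    return bias, mse

print(simulate(direction="y", contam=0.20))
```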

Tables 3-5 summarize our simulation results for both small and large sample sizes. The asymptotic efficiency for our simulation studies was set at the 95% level. Examining the simulation tables, we find that in the absence of contamination both least squares and the proposed robust method perform well with respect to bias, mean squared error, and mean estimated parameter values. However, when contamination enters the simulated data in the direction of the explanatory variable X, the response variable Y, or both, the new method outperforms the least squares method for both small and large samples. We also observe that the robust estimates of θ1 and θ2 remain close to the true parameter values. The simulation results clearly indicate the robustness of our new nonlinear regression technique relative to the least squares method when outliers or influential observations are present.

Table 3.

Bias, Mean Square Errors (MSE) and Mean Estimated Parameter (MEP) with Percentage Contamination in the X Direction.

n=20 0% 10% 20% 30% 40%
θ 1 θ 2 θ 1 θ 2 θ 1 θ 2 θ 1 θ 2 θ 1 θ 2
Least-Squares Bias 0.0375 0.1083 25343 47179 8583 6953 2407 5185 1604 4015
MSE 0.0479 0.4842 5.5E+9 1.5E+10 4.6E+9 2.1E+9 3.2E+8 2.2E+9 1.0E+8 1.4E+9
MEP 5.0375 1.1083 25348 45180 9324 7329 3456 7733 1934 5234
Robust Method Bias 0.0901 0.1525 0.0056 0.2242 0.034 0.2421 0.0199 0.2265 0.0176 0.3269
MSE 0.0534 0.5646 0.2111 0.9423 0.2502 1.2292 0.2578 0.8812 0.2767 1.6324
MEP 5.0701 1.1327 5.0056 1.2201 5.0134 1.2440 5.0199 1.2263 4.9824 1.3269
n=50 0% 10% 20% 30% 40%
θ 1 θ 2 θ 1 θ 2 θ 1 θ 2 θ 1 θ 2 θ 1 θ 2
Least-Squares Bias 0.0131 0.1430 5214 4108 7326 5423 2113 4927 1476 3104
MSE 0.0396 0.2566 6.1E+9 1.7E+9 3.2E+9 1.8E+9 2.5E+8 1.8E+9 7.9E+7 1.2E+9
MEP 5.0131 1.0414 30123 58972 8589 6954 2412 6186 1609 4016
Robust Method Bias 0.0701 0.1625 0.0328 0.0914 0.02925 0.1285 0.0494 0.1843 0.0050 0.2876
MSE 0.2167 0.5221 0.2333 0.6257 0.2718 0.7224 0.3150 0.8543 0.2937 0.9745
MEP 5.0701 1.1131 5.0328 1.0914 5.0292 1.1285 5.0494 1.1843 4.9734 1.1567

Table 5.

Bias, Mean Square Errors (MSE) and Mean Estimated Parameter (MEP) with Percentage Contamination in the X-Y-Direction.

n=20 0% 10% 20% 30% 40%
θ 1 θ 2 θ 1 θ 2 θ 1 θ 2 θ 1 θ 2 θ 1 θ 2
Least-Squares Bias 0.0375 0.1083 25342.0 45178 8582 6952 2406 5184 1603 4015
MSE 0.0479 0.4842 5.5E+9 1.5E+10 4.6E+9 2.1E+9 3.2E+8 2.2E+9 1.0E+8 1.4E+9
MEP 5.0375 1.1083 25347 45179 8587 6953 2411 5185 1608 4016
Robust Method Bias 0.0901 0.1525 0.0762 0.0895 0.0081 0.0599 0.1191 0.1685 0.0176 0.0417
MSE 0.0534 0.5646 0.1676 0.3112 0.1484 0.6847 0.2901 0.7813 0.2539 0.7448
MEP 5.0701 1.1327 5.0762 1.0895 4.9919 1.0599 5.1191 1.1685 5.0176 0.0417
n=50 0% 10% 20% 30% 40%
θ 1 θ 2 θ 1 θ 2 θ 1 θ 2 θ 1 θ 2 θ 1 θ 2
Least-Squares Bias 0.0131 0.1430 660 892 583 582 566 478 559 404
MSE 0.0396 0.2566 455959 1.0E+6 345138 437874 323227 252951 314270 176481
MEP 5.0131 1.0414 665 893 588 583 571 479 564 405
Robust Method Bias 0.0701 0.1625 0.0173 0.0736 0.0892 0.1767 0.0399 0.0059 0.01445 0.0256
MSE 0.2167 0.5221 0.0526 0.3141 0.0746 0.4530 0.0474 0.2435 0.0857 0.4877
MEP 5.0701 1.1131 4.9827 1.0736 5.0892 1.1767 4.9601 1.0060 4.9856 1.0256

Conclusion

In this paper we introduced a new robust estimator of nonlinear regression parameters, together with robust tests of hypotheses about the model parameters. Two algorithms were developed to perform the robust nonlinear estimation of the model parameters. The computer simulations demonstrate the robustness of the new estimator, which provides a powerful alternative to the least squares method. The robust method presented in this paper has an influence function that is bounded in both the response and explanatory variable directions, and it has a high asymptotic breakdown point and high efficiency. A Mathematica program is also provided to ease the computations; it performs the calculations needed for the robust nonlinear regression analysis of the drug concentration example given in this paper.

Supplementary Material

Apx

Figure 4. The Gompertz model.

Table 4.

Bias, Mean Square Errors (MSE) and Mean Estimated Parameter (MEP) with Percentage Contamination in the Y Direction.

n=20 0% 10% 20% 30% 40%
θ 1 θ 2 θ 1 θ 2 θ 1 θ 2 θ 1 θ 2 θ 1 θ 2
Least-Squares Bias 0.0375 0.1083 68395 12511 115870 12323 115212 9858 136703 8915
MSE 0.0479 0.4842 1.50E+10 1.50E+8 1.2E+11 1.2E+9 1.4E+11 9.9E+8 2.4E+11 1.0E+9
MEP 5.0375 1.1083 68400 12512 115875 12324 115217 9859 136708 8916
Robust Method Bias 0.0901 0.1525 0.0358 0.0523 0.0362 0.0250 0.0899 0.1605 0.2034 0.3292
MSE 0.0534 0.5646 0.2296 0.5710 0.2555 0.6212 0.3534 0.8233 0.3928 1.095
MEP 5.0701 1.1327 4.9641 1.0523 4.9638 0.9751 5.0899 1.1601 5.2034 1.3292
n=50 0% 10% 20% 30% 40%
θ 1 θ 2 θ 1 θ 2 θ 1 θ 2 θ 1 θ 2 θ 1 θ 2
Least-Squares Bias 0.0131 0.1430 23650 11692 18843 5605 25855 5462 33790 5170
MSE 0.0396 0.2566 4.6E+9 1.1E+9 7.6E+9 6.3E+9 1.2E+10 5.1E+08 3.1E+10 7.1E+08
MEP 5.0131 1.0414 23655 11693 18848 5606 25860 5463 33795 5171
Robust Method Bias 0.0701 0.1625 0.0365 0.0646 0.0262 0.0143 0.0504 0.0640 0.0022 0.0510
MSE 0.2167 0.5221 0.0555 0.2067 0.0660 0.3303 0.0673 0.3861 0.0710 0.6139
MEP 5.0701 1.1131 5.0365 1.0646 4.974 0.9857 5.0505 1.0650 5.0022 1.0510

Acknowledgement

Research reported in this paper was partially supported by the Center grant of the National Cancer Institute of the National Institutes of Health to the University of Alabama at Birmingham Comprehensive Cancer Center (P30 CA013148), the Cervical SPORE grant (P50CA098252), the Morehouse/Tuskegee University/UAB Comprehensive Cancer Center Partnership grant (2U54-CA118948), and the Mid-South Transdisciplinary Collaborative Center for Health Disparities Research (U54MD008176). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Footnotes

Citation: Tabatabai MA, Kengwoung-Keumo JJ, Eby WM, Bae S, Manne U, et al. (2014) A New Robust Method for Nonlinear Regression. J Biomet Biostat 5: 199. doi:10.4172/2155-6180.1000199

References

  • 1. Rousseeuw PJ, Leroy AM. Robust Regression and Outlier Detection. Wiley; New York: 1987.
  • 2. Edgeworth FY. On observations relating to several quantities. Hermathena. 1887;6:279-285.
  • 3. Huber PJ. Robust regression: asymptotics, conjectures, and Monte Carlo. Ann Stat. 1973;1:799-821.
  • 4. Rousseeuw PJ. Least median of squares regression. J Am Stat Assoc. 1984;79:871-880.
  • 5. Rousseeuw PJ, Yohai VJ. Robust regression by means of S-estimators. In: Robust and Nonlinear Time Series Analysis. Vol. 26. Springer-Verlag; 1984. pp. 256-274.
  • 6. Yohai VJ, Zamar RH. High breakdown-point estimates of regression by means of the minimization of an efficient scale. J Am Stat Assoc. 1988;83:406-413.
  • 7. Tabatabai MA, Argyros IK. Robust estimation and testing for general nonlinear regression models. Appl Math Comput. 1993;58:85-101.
  • 8. Stromberg AJ. Computation of high breakdown nonlinear regression parameters. Annals of Statistics. 1993;15:642-656.
  • 9. Tabatabai MA, Eby WM, Li H, Bae S, Singh KP. TELBS robust linear regression method. Open Access Medical Statistics. 2012;2:65-84.
  • 10. Minn AJ, Gupta GP, Padua D, Bos P, Nguyen DX, et al. Lung metastasis genes couple breast tumor size and metastatic spread. PNAS. 2007;104:6740-6745. doi: 10.1073/pnas.0701138104.
  • 11. Arisio R, Sapino A, Cassoni P, Accinelli G, Cuccorese MC, et al. What modifies the relation between tumor size and lymph node metastases in T1 breast carcinomas? J Clin Pathol. 2000;53:846-850. doi: 10.1136/jcp.53.11.846.
  • 12. Ramaswamy S, Ross KN, Lander ES, Golub TR. A molecular signature of metastasis in primary solid tumors. Nature Genetics. 2003;33:49-54. doi: 10.1038/ng1060.
  • 13. Maffuz A, Barroso-Bravo S, Nájera I, Zarco G, Alvarado-Cabrero I, et al. Tumor size as predictor of microinvasion, invasion, and axillary metastasis in ductal carcinoma in situ. J Exp Clin Cancer Res. 2006;25(2):223-227.
  • 14. Hense HW, Ahrens S, Paulussen M, Lehnert M, Jürgens H. Factors associated with tumor volume and primary metastases in Ewing tumors: Results from the (EI)CESS studies. Annals of Oncology. 1999;10:1073-1077. doi: 10.1023/a:1008357018737.
  • 15. Umbreit EC, Shimko MS, Childs MA, Lohse CM, Cheville JC, et al. Metastatic potential of a renal mass according to original tumor size at presentation. BJU International. 2011;109:190-194. doi: 10.1111/j.1464-410X.2011.10184.x.
  • 16. Wu Z, Mo-Fa Gu, Zeng, Shao-Min H, Yong S. Correlation between nasopharyngeal carcinoma tumor volume and the 2002 International Union Against Cancer tumor classification system. Radiation Oncology. 2013;8:87. doi: 10.1186/1748-717X-8-87.
  • 17. Coras R, Hölsken A, Seufert S, Hauke J, Eyüpoglu IY, et al. The peroxisome proliferator-activated receptor-γ agonist troglitazone inhibits transforming growth factor-β-mediated glioma cell migration and brain invasion. Mol Cancer Ther. 2007;6:1745-1754. doi: 10.1158/1535-7163.MCT-06-0763.
  • 18. Roth CM. Quantifying gene expression. Curr Issues Mol Biol. 2002;4:93-100.
  • 19. Kropf J, Schurek JO, Wollner A, Gressner A. Immunological measurement of transforming growth factor-beta 1 (TGF-β1) in blood; assay development and comparison. Clinical Chemistry. 1997;43(10):1965-1974.
  • 20. Yang J, Richmond A. The angiostatic activity of interferon-inducible protein-10/CXCL10 in human melanoma depends on binding to CXCR3 but not to glycosaminoglycan. Mol Ther. 2006;9(6):846-855. doi: 10.1016/j.ymthe.2004.01.010.
  • 21. Hao X, Silva EA, Månsson-Broberg A, Grinnemo KH, Siddiqui AJ, et al. Angiogenic effects of sequential release of VEGF-A165 and PDGF-BB with alginate hydrogels after myocardial infarction. Cardiovasc Res. 2007;75:178-185. doi: 10.1016/j.cardiores.2007.03.028.
  • 22. Bierie B, Stover DG, Abel TW, Chytil A, Gorska AE, et al. Transforming growth factor-β regulates mammary carcinoma cell survival and interaction with the adjacent microenvironment. Cancer Res. 2008;68:1809-1819. doi: 10.1158/0008-5472.CAN-07-5597.
  • 23. Rousseeuw PJ, Croux C. Alternatives to the median absolute deviation. J Am Stat Assoc. 1993;88:1273-1283.
  • 24. Akaike H. A new look at the statistical model identification. IEEE Transactions on Automatic Control. 1974;19(6):716-723.
  • 25. Ronchetti E. Robust model selection in regression. Stat and Prob Letters. 1985;3(1):21-23.
  • 26. Maronna R, Martin D, Yohai VJ. Robust Statistics: Theory and Methods. Wiley; New York: 2006.
  • 27. Hampel FR, Ronchetti EM, Rousseeuw PJ, Stahel WA. Robust Statistics: The Approach Based on Influence Functions. Wiley; New York: 1986.
  • 28. Kenakin TP. A Pharmacology Primer: Theory, Applications, and Methods. Third Edition. Academic Press; 2009. pp. 286-287.
  • 29. Michaelson JS, Halpern E, Kopans D. Breast cancer: Computer simulation method for estimating optimal intervals for screening. Radiology. 1999;212:551-560. doi: 10.1148/radiology.212.2.r99au49551.
  • 30. Tabar L, Fagerberg G, Duffy SW, Day NE, Gad A, et al. Update of the Swedish two-county program of mammographic screening for breast cancer. Radiol Clin North Am. 1992;30:187-210.
  • 31. Tabar L, Fagerberg G, Chen HS, Duffy SW, Smart CR, et al. Efficacy of breast cancer screening by age: New results from the Swedish two-county trial. Cancer. 1995;75:2507-2517. doi: 10.1002/1097-0142(19950515)75:10<2507::aid-cncr2820751017>3.0.co;2-h.
  • 32. Tabar L. Breast cancer screening with mammography in women aged 40-49 years. Int J Cancer. 1996;68:693-699. doi: 10.1002/(SICI)1097-0215(19961211)68:6<693::AID-IJC1>3.0.CO;2-Z.
  • 33. Tubiana M, Koscielny S. Natural history of human breast cancer: recent data and clinical implications. Breast Cancer Res Treat. 1991;18:125-140. doi: 10.1007/BF01990028.
  • 34. Tubiana M, Koscielny S. The natural history of human breast cancer: implications for a screening strategy. Int J Radiat Oncol Biol Phys. 1990;19:1117-112. doi: 10.1016/0360-3016(90)90213-4.
