
Fitting nonlinear functions in R

March 16, 2017

Following up on a post by Markus Gesmann, I wanted to look at logistic growth curves with a known inflection point. This is an example of functional data analysis with widespread applications, such as population dynamics and pharmacokinetics. Mages’ blog looked at the dugongs data from a textbook (Ratkowsky, 1983), which was subsequently analysed by Carlin & Gelfand (1991) and included in Vol. II of the BUGS manual as well as the Stan user guide. Markus compared point estimates from the R function nlm() with Bayesian inference using Stan. The methods were in close agreement with each other, as well as with the Gibbs sampler of Carlin & Gelfand. This made me curious to explore beyond this simple example, building towards the generalised logistic function that is a solution to the ordinary differential equation (ODE) of Richards (1959).

Note that this approach is not the same as the Runge-Kutta (RK45) and backward-differentiation formula (BDF) methods for solving ODEs in Stan. In the case of the dugongs data (and the other examples that I consider), the times are not strictly ordered (i.e. there are multiple observations at a single timepoint). This results in the following error message from the integrate_ode function:

 [1] "Informational Message: The current Metropolis proposal is about to be rejected because of the following issue:" 
 [2] "Exception thrown at line 32: integrate_ode_rk45: times is not a valid ordered vector. The element at 3 is 1.5, but should be greater than the previous element, 1.5"
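The condition that Stan complains about is easy to check in R. Here is a small sketch (using the dugongs ages from the data below) confirming that the time vector is not strictly increasing:

```r
# Stan's ODE integrators require a strictly increasing time vector;
# the dugongs ages contain ties, which triggers the error above.
x <- c(1, 1.5, 1.5, 1.5, 2.5, 4, 5, 5, 7, 8, 8.5, 9, 9.5, 9.5, 10,
       12, 12, 13, 13, 14.5, 15.5, 15.5, 16.5, 17, 22.5, 29, 31.5)
any(diff(x) <= 0)    # TRUE: at least one repeated timepoint
sum(diff(x) == 0)    # number of tied consecutive ages
```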

The model from Carlin & Gelfand (1991) is as follows:

E[Y_t] = \alpha - \beta\gamma^{x_t}

which is the solution to the following ODE:

\frac{dY}{dx} = -\beta\gamma^{x}\log(\gamma) = (Y - \alpha)\log(\gamma)

We don’t need to solve the equation numerically, since the solution is already available in closed form. Our interest is primarily in parameter estimation, particularly for ill-posed problems when the parameters are not well-identified.
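As a quick sanity check, we can verify numerically that the closed form satisfies the ODE. The parameter values below are illustrative (close to the estimates obtained later):

```r
# Check that E[Y] = alpha - beta*gamma^x satisfies
# dY/dx = (Y - alpha)*log(gamma), using illustrative parameter values.
alpha <- 2.65; beta <- 0.96; gamma <- 0.87
EY  <- function(x) alpha - beta * gamma^x
rhs <- function(x) (EY(x) - alpha) * log(gamma)
# central finite difference approximation to dE[Y]/dx
h <- 1e-6
lhs <- function(x) (EY(x + h) - EY(x - h)) / (2 * h)
max(abs(lhs(1:30) - rhs(1:30)))   # ~ 0, so the closed form solves the ODE
```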

Base R includes the nls() function, which stands for nonlinear least squares. By default, it uses the Gauss-Newton algorithm to search for parameter values that fit the observed data. In this case, repeated observations at the same timepoint are not an issue:

dat <- list(
"N" = 27,
"x" = c(1, 1.5, 1.5, 1.5, 2.5, 4, 5, 5, 7, 8, 8.5, 9, 9.5, 9.5, 10,
        12, 12, 13, 13, 14.5, 15.5, 15.5, 16.5, 17, 22.5, 29, 31.5),
"Y" = c(1.8, 1.85, 1.87, 1.77, 2.02, 2.27, 2.15, 2.26, 2.47, 2.19,
        2.26, 2.4, 2.39, 2.41, 2.5, 2.32, 2.32, 2.43, 2.47, 2.56, 2.65,
        2.47, 2.64, 2.56, 2.7, 2.72, 2.57))
plot(dat$x, dat$Y, xlab="Age", ylab="Length")
nlm <- nls(Y ~ alpha - beta * lambda^x, data=dat,
           start=list(alpha=1, beta=1, lambda=0.9))
summary(nlm)
nlm_fn <- predict(nlm, newdata=list(x=dat$x))
lines(dat$x, nlm_fn, col=6, lty=2)

Gauss-Newton solution for the dugongs growth curve

Formula: Y ~ alpha - beta * lambda^x

Parameters:
      Estimate Std. Error t value Pr(>|t|) 
alpha  2.65807    0.06151   43.21 < 2e-16 ***
beta   0.96352    0.06968   13.83 6.3e-13 ***
lambda 0.87146    0.02460   35.42 < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.09525 on 24 degrees of freedom

Number of iterations to convergence: 6 
Achieved convergence tolerance: 3.574e-06

Comparing with the posterior modes from Carlin & Gelfand (1991), who reparameterised the model via log and logit transforms, the results are almost identical (inv.logit() is available in e.g. the boot package; plogis() is the base R equivalent):

> exp(0.975)
[1] 2.651167
> exp(-0.014)
[1] 0.9860975
> inv.logit(1.902)
[1] 0.8701177

The high posterior density (HPD) regions are asymmetric, due to the reparameterisation of the model.
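To see where the asymmetry comes from, take a symmetric interval on the transformed scale and map it back through the inverse transform. The posterior standard deviation below is a made-up value purely for illustration:

```r
# A symmetric interval around the posterior mode of logit(gamma) becomes
# asymmetric on the original scale, because the inverse logit is nonlinear.
mode_g <- 1.902   # posterior mode from Carlin & Gelfand (1991)
sd_g <- 0.2       # hypothetical sd, for illustration only
interval <- plogis(mode_g + c(-2, 0, 2) * sd_g)  # plogis() = inverse logit
diff(interval)    # the two half-widths are unequal
```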

Now let’s look at a slightly more complex problem:
\frac{dY}{dt} = rY \left( 1 - \frac{Y}{L} \right)
where r is a rate parameter and L is the limit (horizontal asymptote). The solution to this ODE for an initial value Y_0 is:
f(t) = \frac{Y_0 L e^{rt}}{L + Y_0 (e^{rt} - 1)}
Using these equations, we can simulate data from a heteroskedastic, truncated normal distribution where the variance is equal to the gradient of the mean:
Y_t \sim \mathcal{N}\left( f(t), \frac{\mathrm{d}f(t)}{\mathrm{d}t} \right)\mathbb{I}_{(0 \le Y_t \le L)}
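Before simulating, it is worth checking numerically that f(t) solves the logistic ODE with the stated initial value; the parameter values are the ones used in the simulation below:

```r
# Verify that f(t) satisfies dY/dt = r*Y*(1 - Y/L) and f(0) = y0.
r <- 2; y0 <- 4; L <- 12
f <- function(t) y0 * L * exp(r * t) / (L + y0 * (exp(r * t) - 1))
h <- 1e-6
dfdt_num <- function(t) (f(t + h) - f(t - h)) / (2 * h)  # central difference
tt <- seq(-4, 4, by = 0.5)
max(abs(dfdt_num(tt) - r * f(tt) * (1 - f(tt) / L)))     # ~ 0
f(0)                                                     # 4, the initial value
```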

library(truncnorm)
t <- seq(-4,4,by=0.05)
ft <- function(t, r, y0, tC, L){
         y0*L*exp(r*(t-tC))/(L + y0*(exp(r*(t-tC)) - 1))
      }
plot(t,ft(t,2,4,0,12),type='l',ylim=c(0,12),col=4, ylab='Y')
abline(v=0,lty=3)
abline(h=12,lty=3,col=2)
abline(h=0,lty=3)
# gradient of the standard logistic curve with inflection point at tC
# (this equals the gradient of ft() above when y0 = L/2)
dfdt <- function(t, r, tC, L){
         r*L*exp(r*(t+tC))/((exp(r*tC) + exp(r*t))^2)
      }
simt <- rnorm(25, sd=1.5)
simy <- matrix(nrow=length(simt), ncol=20)
for (i in 1:length(simt)) {
  simy[i,] <- rtruncnorm(ncol(simy), a=0, b=12,
                         mean=ft(simt[i],2,4,0,12),
                         sd=sqrt(dfdt(simt[i],2,0,12)))
  points(rep(simt[i], ncol(simy)), simy[i,], pch='*')
}

Heteroskedastic data generated from a simple logistic curve

The Gauss-Newton method underestimates the true parameter value (r = 2) by more than five standard errors:

dat <- list(Y=as.vector(simy), t=rep(simt,ncol(simy)),
            y0=4, maxY=12, tcrit=0)
nlm <- nls(Y ~ y0*maxY*exp(r*(t-tcrit))/(maxY + y0*(exp(r*(t-tcrit)) - 1)),
           data=dat, start=list(r=1))
summary(nlm)
nlm_fn <- predict(nlm, newdata=list(t=t))
lines(t, nlm_fn, col=6, lty=2, lwd=2)
Formula: Y ~ y0 * maxY * exp(r * (t - tcrit))/(maxY + y0 * (exp(r * (t - 
 tcrit)) - 1))

Parameters:
  Estimate Std. Error t value Pr(>|t|) 
r  1.76053    0.04219   41.73   <2e-16 ***
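Using the printed estimate and standard error, we can quantify the bias relative to the true rate r = 2 used in the simulation:

```r
# Distance of the nls() estimate from the true value, in standard errors.
est <- 1.76053; se <- 0.04219   # values from the summary above
(2 - est) / se                  # about 5.7 standard errors below the truth
```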


Although we were unable to recover the true parameter value using nls(), the approximation (magenta curve) is nevertheless quite close to the true function (in blue). Given 500 observations at 25 distinct timepoints, this suggests that the data are not very informative about the rate parameter. The problem is exacerbated when we introduce additional parameters with the aim of obtaining a more flexible function. Next time, we will look at analysing these data using Stan.

Further Reading

Bates & Watts (1988) “Nonlinear Regression Analysis and Its Applications.” Wiley-Interscience

Carlin & Gelfand (1991) “An iterative Monte Carlo method for nonconjugate Bayesian analysis.” Stat. Comp. 1(2), 119-128.

Carpenter, Gelman, Hoffman, Lee, Goodrich, Betancourt, Brubaker, Guo, Li & Riddell (2017) “Stan: A Probabilistic Programming Language.” J. Stat. Soft. 76(1)

Ratkowsky (1983) “Nonlinear Regression Modeling: A Unified Practical Approach.” Marcel Dekker

Richards (1959) “A Flexible Growth Function for Empirical Use.” J. Experimental Botany 10 (2): 290–300.

Thomas, Best, Lunn & Spiegelhalter (2012) “The BUGS Book: A Practical Introduction to Bayesian Analysis.” Chapman & Hall/CRC Press
