
Generic likelihood methods in R — Peter Dalgaard, Department of Biostatistics (PowerPoint presentation transcript)



  1. Generic likelihood methods in R
  Peter Dalgaard, Department of Biostatistics, University of Copenhagen
  Wednesday coffee, February 2008
  1 / 15

  2. Overview
  ◮ Pre-history/motivation
  ◮ Current capabilities in R
  ◮ Ideas for extensions
  2 / 15

  5. Origin
  ◮ Ideas about generic likelihood software have been around for a long time, e.g. in the “Lexical scoping” paper (Gentleman and Ihaka, JCGS 2000)
  ◮ The concrete occasion was a question on R-help by Vincent Philion in July 2003
  ◮ Implementation of mle() in c. 2003-2004 + fixups
  3 / 15

  8. Philion’s problem (lightly edited)
  Hello and thank you for your interest in this problem. “Real life data” would look like this:

      x      y
      0     28
      0.03  21
      0.1   11
      0.3   15
      ...
      100    0

  where x is dose and y is response. The relation is linear for log(resp) = b log(dose) + intcpt. The response for dose 0 is a “control” = Ymax. So, what I want is the dose for 50 percent response. For instance, in example 1: Ymax = 28 (this is also an observation with Poisson error), so I want the dose for response = 14, which is approx. 0.3.
  4 / 15

  9. Model
  What Philion effectively suggested was
  ◮ Y_i ∼ Pois(λ(x_i))
  ◮ λ(0) = y_max
  ◮ log λ(x) = α + β log x for x > 0
  ◮ This is standard log-linear and can be fitted with glm()
  ◮ . . . but it is a strange model!
  5 / 15

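  The log-linear fit for the positive doses can be sketched directly with glm(). This is not from the slides; the data frame below uses only the dose/response pairs quoted in Philion’s mail, and the dose-0 control must be dropped because log(0) is undefined:

  ```r
  # Sketch: the standard log-linear Poisson fit for x > 0,
  # i.e. log lambda(x) = alpha + beta * log(x).
  d <- data.frame(dose = c(0.03, 0.1, 0.3, 100),
                  resp = c(21, 11, 15, 0))
  fit <- glm(resp ~ log(dose), family = poisson, data = d)
  coef(fit)   # intercept (alpha) and slope (beta)
  ```

  The dropped control observation is exactly what makes this model strange: λ(x) → ∞ (or 0) as x → 0, so the dose-0 data cannot enter the fit.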
  14. Alternative model
  ◮ Avoid strange behaviour as x → 0
  ◮ E.g.: λ(x) = y_max / (1 + x/k)
  ◮ alias 1/λ(x) = 1/y_max + 1/(k y_max) × x
  ◮ I.e., this is an inverse-linear Poisson model
  ◮ glm(y ~ x, poisson("inverse")) + reparameterization
  6 / 15

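  A minimal sketch of the inverse-link fit and the reparameterization, again assuming a toy data frame built from the pairs in Philion’s mail (this time the dose-0 control can stay in):

  ```r
  # Inverse-linear Poisson model: 1/lambda = a + b*dose,
  # with a = 1/ymax and b = 1/(k*ymax).
  d <- data.frame(dose = c(0, 0.03, 0.1, 0.3),
                  resp = c(28, 21, 11, 15))
  fit <- glm(resp ~ dose, family = poisson("inverse"), data = d)
  a <- coef(fit)[[1]]; b <- coef(fit)[[2]]
  c(ymax = 1/a, k = a/b)   # back-transform to the slide's parameters
  ```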
  19. Problem
  ◮ λ(x) = y_max / (1 + x/k)
  ◮ For large x this is proportional to x^(-1)
  ◮ . . . but not an arbitrary inverse power law (x^(-β)) as in the original question
  ◮ Possible fix: λ(x) = y_max / (1 + x/k)^β
  ◮ . . . but this is no longer a generalized linear model
  7 / 15

  24. This is silly!
  ◮ Why do we let the existence of standard model families interfere with a sensible choice of model?
  ◮ When all you have is a hammer. . .
  ◮ But we have many other tools!
  ◮ E.g. quite powerful optimizers like optim
  ◮ Why not just write out the likelihood and maximize it?
  8 / 15

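  The “just write out the likelihood” idea can be sketched with optim() directly, no glm() family needed. Toy data from Philion’s mail again; writing the power as exp(la) keeps β positive during the search:

  ```r
  # Direct minimization of -log L for lambda(x) = ymax/(1 + x/k)^beta.
  dose <- c(0, 0.03, 0.1, 0.3)
  resp <- c(28, 21, 11, 15)
  nll <- function(p) {
    ymax <- p[[1]]; k <- p[[2]]; beta <- exp(p[[3]])
    if (ymax <= 0 || k <= 0) return(1e10)  # keep the search in-bounds
    lambda <- ymax / (1 + dose/k)^beta
    -sum(dpois(resp, lambda, log = TRUE))
  }
  opt <- optim(c(ymax = 28, k = 0.3, la = 0), nll)  # Nelder-Mead default
  opt$par
  ```

  The mle() function on the next slides wraps exactly this kind of call, adding standard errors and profiling on top.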
  29. The mle function
  ◮ Basically just a wrapper for optim (method "BFGS" by default)
  ◮ Supply −log L and the routine does the rest (hopefully)
  ◮ Nice methods for summary display, likelihood profiling (√LRT, possibly signed), and approximate confidence intervals
  9 / 15

  32. How to use it
  > library(stats4)
  > mll <- function(ymax, k, la)
  +     with(e1,
  +          -sum(dpois(response,
  +                     ymax / (1 + dose/k)^exp(la),
  +                     log=TRUE)))
  > fit <- mle(mll, start=list(ymax=28, k=.3, la=0))
  10 / 15

  33. Output
  > summary(fit)
  Maximum likelihood estimation

  Call:
  mle(minuslogl = mll, start = list(ymax = 28, k = 0.3, la = 0))

  Coefficients:
         Estimate Std. Error
  ymax 24.4735208  4.7498215
  k     0.1884905  0.2735927
  la   -0.2285072  0.4696129

  -2 log L: 33.80755
  11 / 15

  34. Profiling
  > par(mfrow=c(3,1))
  > plot(profile(fit, del=.05))
  [Figure: three profile plots of z against ymax, k, and la]
  12 / 15

  35. Confidence intervals
  > confint(profile(fit, del=.05, maxsteps=500))
             2.5 %    97.5 %
  ymax 17.35158789 36.428935
  k     0.01226197  3.562976
  la   -0.94484692  1.131715
  13 / 15

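  To close the loop on Philion’s original question (the dose giving 50 percent response), the fitted model can be inverted analytically. A small sketch, plugging in the estimates from the summary slide (with β = exp(la)):

  ```r
  # lambda(x) = ymax / (1 + x/k)^beta equals ymax/2 when
  # (1 + x/k)^beta = 2, i.e. x = k * (2^(1/beta) - 1).
  ed50 <- function(k, beta) k * (2^(1/beta) - 1)
  ed50(k = 0.1884905, beta = exp(-0.2285072))  # roughly 0.26
  ```

  This agrees with the “approx. 0.3” Philion eyeballed from his data.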