An Overdispersion Model with Covariates


How many X's do you know?
Chun-Yip Yau and Li Song
December 10, 2007

Overview (outline of the presentation):
1. Some background of the problem.
2. The model set-up.


Model specification: original model (Poisson model with overdispersion)
- y_ik ∼ Poisson(λ_ik)
- λ_ik = a_i b_k g_ik = e^(α_i + β_k + γ_ik)
- g_ik = e^(γ_ik) ∼ Gamma(1/(ω_k − 1), 1/(ω_k − 1))
- Integrating out the γ's: y_ik ∼ Neg-binomial(e^(α_i + β_k), ω_k)
- Prior assumptions:
  1. α_i ∼ N(μ_α, σ²_α)
  2. β_k ∼ N(μ_β, σ²_β)
  3. p(1/ω_k) ∝ 1
- Renormalization during fitting.
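
To make the distributional assumptions concrete, here is a minimal simulation sketch of the model above in Python. The sizes and parameter values are purely illustrative (they are not from the talk); the gamma noise g_ik is drawn with shape and rate 1/(ω_k − 1), so it has mean 1 and the marginal of y_ik is negative binomial with mean e^(α_i + β_k) and overdispersion ω_k.

```python
import numpy as np

rng = np.random.default_rng(0)

n, K = 500, 12                     # respondents, subpopulations (illustrative sizes)
alpha = rng.normal(5.0, 1.0, n)    # log "gregariousness" of each respondent i
beta = rng.normal(-4.0, 1.0, K)    # log prevalence of each subpopulation k
omega = rng.uniform(1.5, 3.0, K)   # overdispersion omega_k > 1

# Multiplicative gamma noise with mean 1 and variance omega_k - 1:
# g_ik ~ Gamma(shape = 1/(omega_k - 1), rate = 1/(omega_k - 1)).
shape = 1.0 / (omega - 1.0)
g = rng.gamma(shape, 1.0 / shape, size=(n, K))    # numpy takes (shape, scale)

lam = np.exp(alpha[:, None] + beta[None, :]) * g  # lambda_ik = a_i b_k g_ik
y = rng.poisson(lam)                              # y_ik ~ Poisson(lambda_ik)

# For a fixed respondent i, y[i, k] has mean exp(alpha[i] + beta[k])
# and variance omega[k] times that mean.
print(y.shape, y[:3, :5])
```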

Model modification: putting covariates in the original model
- Put covariates in the individual-level parameters. We want to keep the model as flexible as possible while explaining some of the overdispersion.
- α_i = C_α + Σ_{j=1..p} φ_j x_ij + α̃_i
- γ_ik = C_ψ + Σ_{l=1..q_k} ψ_kl x_il + γ̃_ik
- e^(γ̃_ik) ∼ Gamma(1/(ω_k − 1), 1/(ω_k − 1))
- Integrating out the γ̃'s:
  y_ik ∼ Neg-binomial(e^(C_α + Σ_{j=1..p} φ_j x_ij + α̃_i + β_k + C_ψ + Σ_{l=1..q_k} ψ_kl x_il), ω_k)
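
Continuing the sketch above, this is how the covariate terms enter the log of the negative-binomial mean in the modified model. The covariate matrix x and the values of φ, ψ, C_α, and C_ψ are illustrative placeholders (not from the talk), and for simplicity the same p covariates are used in both the φ part and the ψ part, although the model allows a different set of q_k covariates per subpopulation.

```python
import numpy as np

rng = np.random.default_rng(1)

n, K, p = 500, 12, 3
x = rng.normal(size=(n, p))            # respondent-level covariates x_ij
phi = np.array([0.3, -0.2, 0.1])       # phi_j, shared across subpopulations
psi = rng.normal(0.0, 0.1, (K, p))     # psi_kl, one row per subpopulation k
alpha_tilde = rng.normal(0.0, 1.0, n)  # residual individual effects alpha~_i
beta = rng.normal(-4.0, 1.0, K)
C_alpha, C_psi = 0.0, 0.0              # normalizing constants (see the normalization step)

# log E(y_ik) = C_alpha + sum_j phi_j x_ij + alpha~_i + beta_k + C_psi + sum_l psi_kl x_il
log_mean = (C_alpha + x @ phi + alpha_tilde)[:, None] + beta[None, :] + C_psi + x @ psi.T
mean = np.exp(log_mean)                # n-by-K matrix of negative-binomial means
print(mean.shape)
```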

Model modification: prior assumptions
- y_ik ∼ Neg-binomial(e^(C_α + Σ_{j=1..p} φ_j x_ij + α̃_i + β_k + C_ψ + Σ_{l=1..q_k} ψ_kl x_il), ω_k)
- Prior assumptions:
  1. α̃_i ∼ N(μ_α, σ²_α)
  2. β_k ∼ N(μ_β, σ²_β)
  3. p(1/ω_k) ∝ 1
  4. p(ψ_kl) ∝ 1
  5. φ_j ∼ N(μ_φ, σ²_φ)

Model modification: normalization step
- Re-normalizing:
  1. Renormalize between the α̃_i's and the β_k's (see T. Zheng et al., JASA 2006).
  2. Renormalize the φ_j's using the constant C_α so that E[e^(C_α + Σ_{j=1..p} φ_j x_ij)] = 1.
  3. Renormalize the ψ_kl's using the constant C_ψ so that E[e^(C_ψ + Σ_{l=1..q_k} ψ_kl x_il)] = 1.
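
One natural way to impose the two expectation constraints is to recompute the constants from the sample after each update, taking the expectation over the empirical covariate distribution of the respondents (the slides do not spell out which distribution the expectation is over, so treat this as an assumption). A minimal sketch:

```python
import numpy as np

def renormalize_constant(x, coef):
    """Return C such that the sample mean of exp(C + x @ coef) equals 1."""
    # mean_i exp(C + x_i . coef) = 1  <=>  C = -log( mean_i exp(x_i . coef) )
    return -np.log(np.mean(np.exp(x @ coef)))

# Usage sketch, with x, phi, psi as in the earlier snippet:
# C_alpha = renormalize_constant(x, phi)
# The slides write a single C_psi; whether it is one constant overall or one
# per subpopulation k is not clear, so the per-k version below is an assumption.
# C_psi_k = np.array([renormalize_constant(x, psi[k]) for k in range(len(psi))])
```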

Fitting algorithm: Gibbs-Metropolis
- Fitting procedure:
  1. For each i, update α_i using Metropolis with jumping distribution α_i* ∼ N(α_i^(t−1), jump²_{α_i}).
  2. For each k, update β_k using Metropolis with jumping distribution β_k* ∼ N(β_k^(t−1), jump²_{β_k}).
  3. For each j, update φ_j using Metropolis with jumping distribution φ_j* ∼ N(φ_j^(t−1), jump²_{φ_j}).
  4. For each l, k, update ψ_kl using Metropolis with jumping distribution ψ_kl* ∼ N(ψ_kl^(t−1), jump²_{ψ_kl}).
  5. Use Gibbs steps to update μ_α, σ²_α, μ_β, σ²_β, μ_φ, and σ²_φ.
  6. For each k, update ω_k using Metropolis with jumping distribution ω_k* ∼ N(ω_k^(t−1), jump²_{ω_k}).
  7. Renormalize at the end of each iteration.
  (A sketch of one such Metropolis step follows this list.)
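
Below is a minimal sketch of step 1 for the original (no-covariate) model: one random-walk Metropolis update of a single α_i given the current β, ω, and hyperparameters, using the negative-binomial likelihood with mean e^(α_i + β_k) and overdispersion ω_k. The (mean, overdispersion) parameterization of the negative binomial and the jump scale are spelled out here because the slides only name the jumping distribution; treat those details as assumptions.

```python
import numpy as np
from scipy.special import gammaln

def nb_loglik(y, mean, omega):
    """Negative-binomial log pmf with E(y) = mean and Var(y) = omega * mean (omega > 1)."""
    size = mean / (omega - 1.0)   # "number of successes" parameter
    p = 1.0 / omega               # success probability
    return (gammaln(y + size) - gammaln(size) - gammaln(y + 1.0)
            + size * np.log(p) + y * np.log(1.0 - p))

def update_alpha_i(alpha_i, y_i, beta, omega, mu_alpha, sigma_alpha, jump, rng):
    """One random-walk Metropolis step for alpha_i in the original model."""
    proposal = rng.normal(alpha_i, jump)        # alpha_i* ~ N(alpha_i^(t-1), jump^2)

    def log_post(a):
        loglik = nb_loglik(y_i, np.exp(a + beta), omega).sum()
        logprior = -0.5 * ((a - mu_alpha) / sigma_alpha) ** 2   # alpha_i ~ N(mu_alpha, sigma_alpha^2)
        return loglik + logprior

    if np.log(rng.uniform()) < log_post(proposal) - log_post(alpha_i):
        return proposal   # accept
    return alpha_i        # reject: keep the current value
```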

Model selection: full model
- Put all the covariates in without any pre-selection.
- Advantages:
  1. Easy to start with.
  2. More appropriate for backward selection.
- Disadvantages:
  1. The likelihood may go wild during the fitting.
  2. It takes a long time to converge.
  3. Re-normalization is more complicated.
  4. The fitting results exhibit more variability.

Model selection: pre-selected model
- Choose the covariates to put into the model using empirical methods.
- Advantages:
  1. Speeds up the program.
  2. Efficient in most cases.
- Disadvantages:
  1. Important covariates may be excluded in the pre-selection.

Model selection: pre-selection procedure
- Fit a model without any covariates, then compute individual degrees and residuals:
  1. ỹ_i = Σ_{k=1..K} y_ik
  2. r_ik = √y_ik − √(â_i b̂_k)
- Choose the φ's:
  1. Regress the ỹ_i's (or the log(ỹ_i)'s) on all the covariates.
  2. Use a stepwise AIC procedure to choose the important covariates to enter the φ part of the model.
- Choose the ψ's:
  1. For each k, regress the r_ik's on all the covariates.
  2. Use a stepwise AIC procedure to choose the important covariates to enter the ψ part of the model.
  (A sketch of this procedure follows this list.)
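
A minimal Python sketch of this pre-selection, assuming y is the n-by-K count matrix, x the n-by-p covariate matrix, and a_hat, b_hat the fitted parameters of the no-covariate model. The greedy forward-by-AIC helper is an illustrative stand-in for whatever stepwise routine was actually used.

```python
import numpy as np

def ols_aic(yv, X):
    """AIC (up to a constant) of an OLS fit of yv on X, intercept included."""
    Xd = np.column_stack([np.ones(len(yv)), X])
    coef, *_ = np.linalg.lstsq(Xd, yv, rcond=None)
    rss = np.sum((yv - Xd @ coef) ** 2)
    n, k = len(yv), Xd.shape[1]
    return n * np.log(rss / n) + 2 * k

def forward_aic(yv, x):
    """Greedy forward selection by AIC; returns the indices of the chosen covariates."""
    chosen, remaining = [], list(range(x.shape[1]))
    best = ols_aic(yv, x[:, chosen])
    while remaining:
        aic, j = min((ols_aic(yv, x[:, chosen + [j]]), j) for j in remaining)
        if aic >= best:
            break
        best, chosen = aic, chosen + [j]
        remaining.remove(j)
    return chosen

# Degrees and residuals from the no-covariate fit:
# y_tilde = y.sum(axis=1)                           # y~_i
# r = np.sqrt(y) - np.sqrt(np.outer(a_hat, b_hat))  # r_ik
# phi_covs = forward_aic(np.log(y_tilde + 1), x)    # covariates for the phi part (+1 guards zeros)
# psi_covs = [forward_aic(r[:, k], x) for k in range(y.shape[1])]  # covariates for each psi_k
```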

Model selection: model comparison using DIC
- The usual AIC and BIC criteria are not easily applied here, because the "number of parameters" is not clearly defined.
- The DIC criterion (deviance information criterion; A. Gelman et al., 2007):
  1. An analog of AIC: DIC = mean(deviance) + 2 p_D.
  2. deviance = −2 × log-likelihood; p_D = var(deviance)/2.
- DIC is suggestive rather than definitive.
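
Computing DIC from MCMC output is straightforward once the deviance is saved at every retained iteration. A minimal sketch using the definitions on this slide, where deviance_draws is assumed to hold −2 × log-likelihood for each posterior draw:

```python
import numpy as np

def dic(deviance_draws):
    """DIC from saved per-iteration deviances (deviance = -2 * log-likelihood)."""
    d = np.asarray(deviance_draws, dtype=float)
    p_d = d.var() / 2.0              # effective number of parameters: var(deviance) / 2
    return d.mean() + 2.0 * p_d      # DIC = mean(deviance) + 2 * p_D

# Relative DIC (as in the results table) is presumably the difference from the best model:
# rel_dic = dic(dev_model) - dic(dev_best_model)
```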

Results: overdispersion of the pre-selected model
[Figure: estimated overdispersion ω_k for each subpopulation. Name groups: Michael, Christina, Christopher, Jacqueline, James, Jennifer, Anthony, Kimberly, Robert, Stephanie, David, Nicole. Other groups: Amer. Indians, New Birth, Widow(er), Dialysis, Postal Worker, Pilot, Jaycees, Diabetic, Business, Gun Dealer, HIV positive, AIDS, Adoption, Homeless, Twin, Rape, Suicide, Prison, Auto Accident, Homicide.]

Results: reduction of overdispersion
[Figure: overdispersion with covariates plotted against overdispersion without covariates, comparing the full model with all covariates, the reduced model with pre-selected covariates, and the reduced model with post-selected covariates.]

Results: DIC of different models

  Model                                       Relative DIC   Ranking
  All-covariates model                        2072.8         5
  Post-selected model from all covariates      716.05        3
  Pre-selected model                             0           1
  Post-selected model from pre-selected        422.46        2
  Original model                               958.67        4

Further developments and conclusions
- Future work:
  1. Interpretation of the covariate estimates (to appear in the report).
  2. Alternative methods: generalized linear mixed models; the likelihood is hard to deal with.
  3. Apply the model to censored data.
  4. Other ways to re-normalize (e.g., set Σ_{j=1..p} φ_j = 0 and Σ_{i=1..n} α̃_i = 0).
- Conclusions:
  1. The pre-selection we use is a good method for choosing the right covariates.
  2. DIC can be a good criterion for model comparison; it also indicates that the pre-selected model is the best among the simple models considered here.
  3. Putting covariates into the model in the right way can reduce the overdispersion.
