
Best practices in the analysis of RNA-Seq and ChIP-Seq data, 4th-5th May 2017, University of Cambridge, Cambridge, UK. Statistical Models for sequencing data: from Experimental Design to Generalized Linear Models. Oscar M. Rueda, Breast Cancer


  1. Design matrix for ANOVA models

Sample    Treatment
Sample 1  Treatment A
Sample 2  Treatment B
Sample 3  Control
Sample 4  Treatment A
Sample 5  Treatment B
Sample 6  Control

Design matrix without intercept:

    ( S1 )   ( 1 0 0 )
    ( S2 )   ( 0 1 0 ) ( T_A )
    ( S3 ) = ( 0 0 1 ) ( T_B )
    ( S4 )   ( 1 0 0 ) ( C   )
    ( S5 )   ( 0 1 0 )
    ( S6 )   ( 0 0 1 )

Design matrix with intercept:

    ( S1 )   ( 1 1 0 )
    ( S2 )   ( 1 0 1 ) ( β0 )
    ( S3 ) = ( 1 0 0 ) ( a  )
    ( S4 )   ( 1 1 0 ) ( b  )
    ( S5 )   ( 1 0 1 )
    ( S6 )   ( 1 0 0 )

    Control = Baseline,  T_A = Baseline + a,  T_B = Baseline + b

  2. Baseline levels

The model with intercept always takes one level as the baseline. Here the baseline is Treatment A, and the coefficients are comparisons against it. By default, R uses the first level of the factor as the baseline.

    ( S1 )   ( 1 0 0 )
    ( S2 )   ( 1 1 0 ) ( β0 )
    ( S3 ) = ( 1 0 1 ) ( b  )
    ( S4 )   ( 1 0 0 ) ( c  )
    ( S5 )   ( 1 1 0 )
    ( S6 )   ( 1 0 1 )

  3. R code

R code:
    > Treatment <- rep(c("TreatmentA", "TreatmentB", "Control"), 2)
    > design.matrix <- model.matrix(~ Treatment)        # model with intercept
    > design.matrix <- model.matrix(~ -1 + Treatment)   # model without intercept
    > design.matrix <- model.matrix(~ 0 + Treatment)    # model without intercept
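
As an illustration (not from the original slides), a minimal sketch of what these calls return; note that factor() orders levels alphabetically unless the levels are set explicitly:

    Treatment <- factor(rep(c("TreatmentA", "TreatmentB", "Control"), 2))
    # With an intercept, the alphabetically first level ("Control") is the baseline:
    model.matrix(~ Treatment)
    # Without an intercept, every level gets its own indicator column:
    model.matrix(~ 0 + Treatment)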

  4. Exercise

Build contrast matrices for all pairwise comparisons for this design:

    ( S1 )   ( 1 0 0 )
    ( S2 )   ( 0 1 0 ) ( T_A )
    ( S3 ) = ( 0 0 1 ) ( T_B )
    ( S4 )   ( 1 0 0 ) ( C   )
    ( S5 )   ( 0 1 0 )
    ( S6 )   ( 0 0 1 )

  5. Exercise

Build contrast matrices for all pairwise comparisons for these designs. For the model without intercept (parameters T_A, T_B, C), the contrast matrix is:

    ( 1  0 -1 )   (T_A - C)
    ( 0  1 -1 )   (T_B - C)
    ( 1 -1  0 )   (T_A - T_B)
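
A minimal sketch of how such contrasts can be applied; the design and contrast rows match the matrices above, and the response is simulated purely for illustration:

    Treatment <- factor(rep(c("TreatmentA", "TreatmentB", "Control"), 2),
                        levels = c("TreatmentA", "TreatmentB", "Control"))
    X <- model.matrix(~ 0 + Treatment)          # columns: T_A, T_B, C
    contr <- rbind("TA-C"  = c(1, 0, -1),
                   "TB-C"  = c(0, 1, -1),
                   "TA-TB" = c(1, -1, 0))
    y <- rnorm(6)                               # toy expression values
    beta.hat <- solve(t(X) %*% X, t(X) %*% y)   # least-squares estimates
    contr %*% beta.hat                          # estimated pairwise differences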

  6. Exercise

Build contrast matrices for all pairwise comparisons for these designs. The same samples, now with an intercept (baseline = Control):

    ( S1 )   ( 1 1 0 )
    ( S2 )   ( 1 0 1 ) ( β0 )
    ( S3 ) = ( 1 0 0 ) ( a  )
    ( S4 )   ( 1 1 0 ) ( b  )
    ( S5 )   ( 1 0 1 )
    ( S6 )   ( 1 0 0 )

  7. Exercise

Build contrast matrices for all pairwise comparisons for these designs. For the model with intercept (parameters β0, a, b), the contrast matrix is:

    ( 0 1  0 )   (T_A - C = a)
    ( 0 0  1 )   (T_B - C = b)
    ( 0 1 -1 )   (T_A - T_B = a - b)

  8. Models with 2 factors

Sample    Treatment     ER status
Sample 1  Treatment A   +
Sample 2  No Treatment  +
Sample 3  Treatment A   +
Sample 4  No Treatment  +
Sample 5  Treatment A   -
Sample 6  No Treatment  -
Sample 7  Treatment A   -
Sample 8  No Treatment  -

Number of samples: 8
Number of factors: 2
Treatment: number of levels: 2
ER: number of levels: 2

  9. Understanding interactions

       No Treat  Treat A
ER -   S6, S8    S5, S7
ER +   S2, S4    S1, S3

(Diagram: the treatment effect within ER+ samples, the treatment effect within ER- samples, and the ER effect. An interaction means the effect of Treatment A differs between ER+ and ER- samples.)

Adapted from Natalie Thorne, Nuno L. Barbosa Morais

  10. Models with 2 factors and no interaction

Model with no interaction: only main effects.

Number of coefficients (parameters): intercept + (♯ levels Treat - 1) + (♯ levels ER - 1) = 3

If we remove the intercept, the additional parameter comes from the missing level in one of the variables, but in models with more than one factor it is a good idea to keep the intercept.

  11. Models with 2 factors (no interaction)

R code:
    > design.matrix <- model.matrix(~ Treatment + ER)   # model with intercept

In R, the baseline for each variable is the first level.

    ( S1 )   ( 1 1 1 )
    ( S2 )   ( 1 0 1 )
    ( S3 )   ( 1 1 1 ) ( β0  )
    ( S4 ) = ( 1 0 1 ) ( a   )
    ( S5 )   ( 1 1 0 ) ( er+ )
    ( S6 )   ( 1 0 0 )
    ( S7 )   ( 1 1 0 )
    ( S8 )   ( 1 0 0 )

       No Treat  Treat A
ER -   S6, S8    S5, S7
ER +   S2, S4    S1, S3
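
A minimal sketch of building this additive design in R; the factor level ordering below is an assumption chosen so that the columns reproduce the matrix above:

    Treatment <- factor(rep(c("TreatmentA", "NoTreatment"), 4),
                        levels = c("NoTreatment", "TreatmentA"))
    ER <- factor(rep(c("+", "-"), each = 4), levels = c("-", "+"))
    model.matrix(~ Treatment + ER)
    # Three columns: intercept, the Treatment A effect (a) and the ER+ effect
    # (er+), matching intercept + (2 - 1) + (2 - 1) = 3 parameters.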

  13. Models with 2 factors and interaction

Model with interaction: main effects + interaction.

Number of coefficients (parameters): intercept + (♯ levels Treat - 1) + (♯ levels ER - 1) + ((♯ levels Treat - 1) × (♯ levels ER - 1)) = 4

  14. Models with 2 factors (interaction)

R code:
    > design.matrix <- model.matrix(~ Treatment * ER)   # model with intercept

    ( Y1 )   ( 1 1 1 1 )
    ( Y2 )   ( 1 0 1 0 )
    ( Y3 )   ( 1 1 1 1 ) ( β0    )
    ( Y4 ) = ( 1 0 1 0 ) ( a     )
    ( Y5 )   ( 1 1 0 0 ) ( er+   )
    ( Y6 )   ( 1 0 0 0 ) ( a.er+ )
    ( Y7 )   ( 1 1 0 0 )
    ( Y8 )   ( 1 0 0 0 )

The interaction coefficient a.er+ is the "extra effect" of Treatment A on ER+ samples.

       No Treat  Treat A
ER -   S6, S8    S5, S7
ER +   S2, S4    S1, S3
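
The same sketch extended to the factorial model; the only change from the additive sketch is * instead of +:

    Treatment <- factor(rep(c("TreatmentA", "NoTreatment"), 4),
                        levels = c("NoTreatment", "TreatmentA"))
    ER <- factor(rep(c("+", "-"), each = 4), levels = c("-", "+"))
    model.matrix(~ Treatment * ER)
    # A fourth column appears: the product of the Treatment and ER indicator
    # columns, i.e. the "extra effect" of Treatment A in ER+ samples (a.er+).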

  16. 2 by 3 factorial experiment

• Identify DE genes that have different time profiles between different mutants. α = time effect, β = strain effect, αβ = interaction effect.

(Four panels of expression against time (0, 12, 24) for strains A and B: α > 0, β = 0, αβ = 0; α = 0, β > 0, αβ = 0; α > 0, β > 0, αβ = 0; and αβ > 0, where the two strains' time profiles diverge.)

Slide by Natalie Thorne, Nuno L. Barbosa Morais

  17. Paired Designs

Unpaired design:

Sample    Type
Sample 1  Tumour
Sample 2  Matched Normal
Sample 3  Tumour
Sample 4  Matched Normal
Sample 5  Tumour
Sample 6  Matched Normal
Sample 7  Tumour
Sample 8  Matched Normal

Number of samples: 8; number of factors: 1; Type: 2 levels.

Paired design:

Sample    Type
Sample 1  Tumour
Sample 1  Matched Normal
Sample 2  Tumour
Sample 2  Matched Normal
Sample 3  Tumour
Sample 3  Matched Normal
Sample 4  Tumour
Sample 4  Matched Normal

Number of samples: 4; number of factors: 2; Sample: 4 levels; Type: 2 levels.

  18. Design matrix for Paired experiments

We can gain precision in our estimates with a paired design, because individual variability is removed when we compare the effect of the treatment within the same sample.

R code:
    > design.matrix <- model.matrix(~ -1 + Type)            # unpaired; model without intercept
    > design.matrix <- model.matrix(~ -1 + Sample + Type)   # paired; model without intercept

    ( Y1 )   ( 1 0 0 0 1 )
    ( Y2 )   ( 1 0 0 0 0 )
    ( Y3 )   ( 0 1 0 0 1 ) ( S1 )
    ( Y4 ) = ( 0 1 0 0 0 ) ( S2 )
    ( Y5 )   ( 0 0 1 0 1 ) ( S3 )
    ( Y6 )   ( 0 0 1 0 0 ) ( S4 )
    ( Y7 )   ( 0 0 0 1 1 ) ( t  )
    ( Y8 )   ( 0 0 0 1 0 )

The sample effects S1, ..., S4 only reflect biological differences not related to the tumour/normal effect, which is captured by t.
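
A minimal sketch of the paired design in R (the sample labels are hypothetical):

    Sample <- factor(rep(paste0("S", 1:4), each = 2))
    Type <- factor(rep(c("Tumour", "Normal"), 4), levels = c("Normal", "Tumour"))
    design.matrix <- model.matrix(~ -1 + Sample + Type)
    design.matrix
    # Columns S1..S4 absorb each patient's baseline; the last column is the
    # within-patient tumour-vs-normal effect (t).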

  19. Analysis of covariance (models with categorical and continuous variables)

Sample    ER  Dose
Sample 1  +   37
Sample 2  -   52
Sample 3  +   65
Sample 4  -   89
Sample 5  +   24
Sample 6  -   19
Sample 7  +   54
Sample 8  -   67

Number of samples: 8
Number of factors: 2
ER: number of levels: 2
Dose: continuous

  20. Analysis of covariance (models with categorical and continuous variables)

R code:
    > design.matrix <- model.matrix(~ ER + dose)

If we consider the effect of dose linear, we use 1 coefficient (degree of freedom). We can also model it as non-linear (using splines, for example).

    ( Y1 )   ( 1 1 37 )
    ( Y2 )   ( 1 0 52 )
    ( Y3 )   ( 1 1 65 ) ( β0  )
    ( Y4 ) = ( 1 0 89 ) ( er+ )
    ( Y5 )   ( 1 1 24 ) ( d   )
    ( Y6 )   ( 1 0 19 )
    ( Y7 )   ( 1 1 54 )
    ( Y8 )   ( 1 0 67 )

  21. Analysis of covariance (models with categorical and continuous variables)

Interaction: is the effect of dose equal in ER+ and ER-?

R code:
    > design.matrix <- model.matrix(~ ER * dose)

If the interaction is significant, the effect of the dose is different depending on the level of ER.

    ( Y1 )   ( 1 1 37 37 )
    ( Y2 )   ( 1 0 52  0 )
    ( Y3 )   ( 1 1 65 65 ) ( β0    )
    ( Y4 ) = ( 1 0 89  0 ) ( er+   )
    ( Y5 )   ( 1 1 24 24 ) ( d     )
    ( Y6 )   ( 1 0 19  0 ) ( er+.d )
    ( Y7 )   ( 1 1 54 54 )
    ( Y8 )   ( 1 0 67  0 )
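
A sketch of both ANCOVA designs, using the dose values from the table on slide 19:

    ER <- factor(c("+", "-", "+", "-", "+", "-", "+", "-"), levels = c("-", "+"))
    dose <- c(37, 52, 65, 89, 24, 19, 54, 67)
    model.matrix(~ ER + dose)   # common dose slope, ER+ offset
    model.matrix(~ ER * dose)   # adds er+.d: a separate dose slope for ER+ samples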

  22. Time Course experiments

Sample    Time
Sample 1  0h
Sample 1  1h
Sample 1  4h
Sample 1  16h
Sample 2  0h
Sample 2  1h
Sample 2  4h
Sample 2  16h

Main question: how does expression change over time?

If we model time as categorical, we don't make assumptions about its effect, but we use many degrees of freedom. If we model time as continuous, we use fewer degrees of freedom but we have to make assumptions about the type of effect. An intermediate solution: splines.

Number of samples: 2
Number of factors: 2
Sample: number of levels: 2
Time: continuous or categorical?

  23. Time Course experiments: no assumptions

R code:
    > design.matrix <- model.matrix(~ -1 + Sample + factor(Time))

    ( Y1 )   ( 1 0 0 0 0 )
    ( Y2 )   ( 1 0 1 0 0 )
    ( Y3 )   ( 1 0 0 1 0 ) ( S1  )
    ( Y4 ) = ( 1 0 0 0 1 ) ( S2  )
    ( Y5 )   ( 0 1 0 0 0 ) ( T1  )
    ( Y6 )   ( 0 1 1 0 0 ) ( T4  )
    ( Y7 )   ( 0 1 0 1 0 ) ( T16 )
    ( Y8 )   ( 0 1 0 0 1 )

We can use contrasts to test differences at time points.

  24. Time Course experiments

R code:
    > design.matrix <- model.matrix(~ -1 + Sample + Time)

We are assuming a linear effect of time.

    ( Y1 )   ( 1 0  0 )
    ( Y2 )   ( 1 0  1 )
    ( Y3 )   ( 1 0  4 ) ( S1 )
    ( Y4 ) = ( 1 0 16 ) ( S2 )
    ( Y5 )   ( 0 1  0 ) ( t  )
    ( Y6 )   ( 0 1  1 )
    ( Y7 )   ( 0 1  4 )
    ( Y8 )   ( 0 1 16 )

(Illustration: fitted linear trends against time for genes with a large positive, a small, and a large negative time coefficient.)

Intermediate models are possible: splines.
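
A sketch contrasting the three options for modelling time; the spline call uses the standard splines package, and df = 2 is an arbitrary choice for illustration:

    library(splines)
    Sample <- factor(rep(c("S1", "S2"), each = 4))
    Time <- rep(c(0, 1, 4, 16), 2)
    model.matrix(~ -1 + Sample + factor(Time))      # categorical: 3 df for time
    model.matrix(~ -1 + Sample + Time)              # linear: 1 df, strong assumption
    model.matrix(~ -1 + Sample + ns(Time, df = 2))  # natural spline: a compromise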

  25. Multi-factorial models

• We can fit models with many variables.
• Sample size must be adequate for the number of factors.
• The same rules for building the design matrix apply:
  • There will be one column in the design matrix for the intercept.
  • Continuous variables with a linear effect will need one column in the design matrix.
  • A categorical variable will need ♯ levels - 1 columns.
  • Interactions will need (♯ levels - 1) × (♯ levels - 1) columns.
• It is possible to include interactions of more than 2 variables, but the number of samples needed to accurately estimate those interactions is large.

  26. Generalized linear models

  27. Statistical models

We want to model the expected result of an outcome (dependent variable) under given values of other variables (independent variables).

    E(Y) = f(X)

where E(Y) is the expected value of the variable Y, X is a set of k independent variables (also called factors), and f is an arbitrary function (any shape).

    Y = f(X) + ε

where ε is the variability around the expected mean of Y.

  28. Fixed vs Random effects

• If we consider an independent variable X_i as fixed, that is, the set of observed values has been fixed by the design, then it is called a fixed factor.
• If we consider an independent variable X_j as random, that is, the set of observed values comes from a realization of a random process, it is called a random factor.
• Models that include random effects are called MIXED MODELS.
• In this course we will only deal with fixed factors.

  29. Linear models

The observed value of Y is a linear combination of the effects of the independent variables. With an arbitrary number of independent variables:

    E(Y) = β0 + β1 X1 + β2 X2 + ... + βk Xk

Polynomials are valid:

    E(Y) = β0 + β1 X1 + β2 X1^2 + ... + βp X1^p

We can use functions of the variables if the effects are linear in the coefficients:

    E(Y) = β0 + β1 log(X1) + β2 f(X2) + ... + βk Xk

Smooth functions are not exactly the same as the so-called additive models. If we include categorical variables, the model is called a General Linear Model.

  30. Model Estimation

We use least squares estimation.

(Two scatter plots: Y against a continuous X with a fitted regression line, and Y against a Male/Female factor.)

Given n observations (y1, ..., yn, x1, ..., xn), minimize the differences between the observed and the predicted values.

  31. Model Estimation

    Y = Xβ + ε

β: parameter of interest (effect of X on Y)
β̂: estimator of the parameter of interest
se(β̂): standard error of the estimator of the parameter of interest

    β̂ = (X^T X)^(-1) X^T Y
    se(β̂_i) = σ √c_i, where c_i is the i-th diagonal element of (X^T X)^(-1)

    ŷ = X β̂     fitted values (predicted by the model)
    e = y - ŷ    residuals (observed errors)
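
A sketch of these formulas on simulated data, checked against lm():

    set.seed(1)
    x <- runif(20, 0, 10)
    X <- cbind(1, x)                              # design matrix with intercept
    y <- 2 + 0.5 * x + rnorm(20)
    beta.hat <- solve(t(X) %*% X) %*% t(X) %*% y  # (X^T X)^(-1) X^T y
    y.hat <- X %*% beta.hat                       # fitted values
    e <- y - y.hat                                # residuals
    coef(lm(y ~ x))                               # same estimates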

  32. Model Assumptions

In order to conduct statistical inferences on the parameters of the model, some assumptions must be made:
• The observations 1, ..., n are independent
• Normality of the errors: ε_i ~ N(0, σ²)
• Homoscedasticity: the variance is constant
• Linearity

  33. Generalized linear models

• Extension of the linear model to other distributions and non-linearity in the structure (to some degree), through a link function g:

    g(E(Y)) = Xβ

• Y must follow a probability distribution from the exponential family (Bernoulli, Binomial, Poisson, Gamma, Normal, ...)
• Parameter estimation must be performed using an iterative method (IWLS).

  34. Example: Logistic Regression

• We want to study the relationship between the presence of an amplification in the ERBB2 gene and the size of the tumour in a specific type of breast cancer.
• Our dependent variable Y takes two possible values: "AMP", "NORMAL" ("YES", "NO").
• X (size) takes continuous values.

  35. Example: Logistic Regression

(Scatter plot: ERBB2 amplification status, YES/NO, against tumour size from 0 to 20.)

It is very difficult to see the relationship. Let's model the "probability of success": in this case, the probability of amplification.

  36. Example: Logistic Regression

(Plot: fitted probability of amplification against size from a linear model.)

Some predictions are out of the possible range for a probability.

  37. Example: Logistic Regression

We can transform the probabilities to a scale that goes from -Inf to +Inf using log odds:

    log odds = log( p / (1 - p) )

(Plot: log odds of amplification against size.)

  38. Example: Logistic Regression

How does this relate to the generalized linear model?
• Y follows a Bernoulli distribution; it can take two values (YES or NO).
• The expectation of Y, p, is the probability of YES (E(Y) = p).
• We assume that there is a linear relationship between size and a function of the expected value of Y, the log odds (the link function):

    log odds(prob. amplif) = β0 + β1 Size,    i.e. g(E(Y)) = Xβ
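
A sketch with simulated data (the true coefficients below are arbitrary):

    set.seed(1)
    size <- runif(100, 0, 20)
    p.amp <- plogis(-4 + 0.4 * size)                 # true log odds: -4 + 0.4*size
    amplified <- rbinom(100, 1, p.amp)
    fit <- glm(amplified ~ size, family = binomial)  # logit link by default
    coef(summary(fit))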

  39. Binomial Distribution

• It is the distribution of the number of events in a series of n independent Bernoulli experiments, each with a probability of success p.
• Y can take integer values from 0 to n
• E(Y) = np
• Var(Y) = np(1 - p)

(Bar plot: Binomial distribution, n = 10, p = 0.3.)

  40. Poisson Distribution

• Let Y ~ B(n, p). If n is large and p is small, then Y can be approximated by a Poisson distribution (law of rare events).
• Y ~ P(λ)
• E(Y) = λ
• Var(Y) = λ

(Bar plot: Poisson distribution, λ = 2.)

  41. Negative Binomial Distribution

• Let Y ~ NB(r, p).
• Represents the number of successes in a sequence of Bernoulli experiments until r failures occur.
• It is also the distribution of a continuous mixture of Poisson distributions where λ follows a Gamma distribution.
• It can be seen as an overdispersed Poisson distribution:

    p = μ / σ²           (overdispersion parameter)
    r = μ² / (σ² - μ)    (location parameter)

(Bar plot: Negative Binomial distribution, r = 10, p = 0.3.)
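
A quick simulation of the overdispersion point; R's rnbinom() has a mu/size parameterisation with Var = μ + μ²/size:

    mu <- 10
    var(rpois(1e5, lambda = mu))          # ≈ 10: Poisson variance equals the mean
    var(rnbinom(1e5, mu = mu, size = 5))  # ≈ 10 + 10^2/5 = 30: overdispersed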

  42. Hypothesis testing

• Everything starts with a biological question to test:
  - What genes are differentially expressed under one treatment?
  - What genes are more commonly amplified in a class of tumours?
  - What promoters are methylated more frequently in cancer?
• We must express this biological question in terms of a parameter in a model.
• We then conduct an experiment, obtain data and estimate the parameter.
• How do we take into account uncertainty in order to answer our question based on our estimate?

  43. Sampling and testing

A box contains 10% red balls and 90% blue balls. We take a random sample of 10 balls from the box (discrete observations) and observe ♯red = 3. How many reds could I expect to get just by chance alone? When do I think that I am not sampling from this box anymore?

Slide by Natalie Thorne, Nuno L. Barbosa Morais

  44. Sample

Random sample of 10 balls from the box (discrete observations): ♯red = 3. The ingredients of a test: the null hypothesis (about the population that is being sampled: 10% red balls, 90% blue balls), the test statistic, and the rejection criteria (based on your observed sample, do you have evidence to reject the hypothesis that you sampled from the null population?).

Slide by Natalie Thorne, Nuno L. Barbosa Morais
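
In this example the test statistic under the null is Binomial(10, 0.1), so the calculation can be done directly:

    pbinom(2, size = 10, prob = 0.1, lower.tail = FALSE)  # P(3 or more reds) ≈ 0.07
    binom.test(3, 10, p = 0.1, alternative = "greater")   # the same as a formal test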

  45. Hypothesis testing

• Null hypothesis: our population follows a (known) distribution defined by a set of parameters: H0: X ~ f(θ1, ..., θk)
• Take a random sample (X1, ..., Xn) = (x1, ..., xn) and observe the test statistic T(X1, ..., Xn) = t(x1, ..., xn)
• The distribution of T under H0 is known (g(·))
• p-value: the probability under H0 of observing a result as extreme as t(x1, ..., xn)

  46. Observed z-score

(Plot: null distribution with the observed z-score marked near the centre.) High probability of getting a more extreme score just by chance: the p-value is high.

Slide by Natalie Thorne, Nuno L. Barbosa Morais

  47. Observed z-score

(Plot: null distribution with the observed z-score far in the tail.) Low probability of getting a more extreme score just by chance: the p-value is low. Reject the null hypothesis.

Slide by Natalie Thorne, Nuno L. Barbosa Morais

  48. Type I and Type II errors

• Type I error: the probability of rejecting the null hypothesis when it is true. Usually, it is the significance level of the test. It is denoted α.
• Type II error: the probability of not rejecting the null hypothesis when it is false. It is denoted β.
• Decreasing one type of error increases the other, so in practice we fix the Type I error and choose the test that minimizes the Type II error.

  49. The power of a test

• The power of a test is the probability of rejecting the null hypothesis at a given significance level when a specific alternative is true.
• For a given significance level and a given alternative hypothesis in a given test, the power is a function of the sample size.
• With enough sample size, we can detect any alternative hypothesis (if the estimator is consistent, its standard error converges to zero as the sample size increases).
• What is the difference between statistical significance and biological significance?

(Plot: power of a t-test against sample size; true difference 0.1, sd 1, significance level 0.05.)
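
The plotted curve can be reproduced with power.t.test(), using the values labelled on the slide:

    power.t.test(n = 1000, delta = 0.1, sd = 1, sig.level = 0.05)    # power at n = 1000
    power.t.test(power = 0.8, delta = 0.1, sd = 1, sig.level = 0.05) # n needed for 80% power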

  50. The Likelihood Ratio Test (LRT)

• We are working with models, therefore we would like to do hypothesis tests on coefficients or contrasts of those models.
• We fit two models: M1 without the coefficient to test and M2 with the coefficient.
• We compute the likelihoods of the two models (L1 and L2) and obtain LRT = -2 log(L1/L2), which has a known distribution under the null hypothesis that the two models are equivalent. This is also known as model selection.
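
A sketch of an LRT between two nested GLMs on simulated counts:

    set.seed(1)
    x <- rnorm(50)
    y <- rpois(50, lambda = exp(1 + 0.5 * x))
    m1 <- glm(y ~ 1, family = poisson)  # model without the coefficient
    m2 <- glm(y ~ x, family = poisson)  # model with the coefficient
    anova(m1, m2, test = "Chisq")       # LRT: chi-squared with 1 df under H0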

  51. Multiple testing problem

• In high-throughput experiments we are fitting one model for each gene/exon/sequence of interest, and therefore performing thousands of tests.
• The overall Type I error is not equal to the significance level of each individual test.
• Multiple testing corrections try to fix this problem (Bonferroni, FDR, ...).

  52. Distribution of p-values

If the null hypothesis is true, the p-values from the repeated experiments come from a Uniform(0,1) distribution.

(Histogram: p-values under the null, approximately flat over [0, 1].)

Slide by Alex Lewin, Ernest Turro, Paul O'Reilly
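
This is easy to check by simulation:

    set.seed(1)
    p <- replicate(10000, t.test(rnorm(10), rnorm(10))$p.value)  # H0 is true
    hist(p)  # approximately flat on [0, 1]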

  53. Controlling the number of errors

N = number of hypotheses tested; R = number of rejected hypotheses; n0 = number of true null hypotheses.

                        Null hypothesis true   Alternative true       Total
Not significant         ♯ true negatives       ♯ false negatives      N - ♯ rejections
(don't reject)                                 (Type II error)
Significant (reject)    ♯ false positives      ♯ true positives       ♯ total rejections
                        (Type I error)
Total                   n0                     N - n0                 N

  54. Bonferroni Correction

If the tests are independent:

    P(at least one false positive | all null hypotheses are true)
      = P(at least one p-value < α | all null hypotheses are true)
      = 1 - (1 - α)^N

Bonferroni correction: reject each hypothesis at the α/N level. It is a very conservative method.

  55. False Discovery Rate (FDR)

N = number of hypotheses tested; R = number of rejected hypotheses; n0 = number of true null hypotheses.

                        Null hypothesis true     Alternative true       Total
Not significant         ♯ true negatives         ♯ false negatives      N - ♯ rejections
(don't reject)                                   (Type II error)
Significant (reject)    V = ♯ false positives    ♯ true positives       R = ♯ total rejections
                        (Type I error)
Total                   n0                       N - n0                 N

Family-wise error rate: FWER = P(V ≥ 1)
False discovery rate: FDR = E(V/R | R > 0) P(R > 0)

FDR aims to control the proportion of false positives among the rejected null hypotheses.

  56. Benjamini & Hochberg (BH) step-up method to control FDR

Benjamini & Hochberg proposed the idea of controlling the FDR and used a step-wise method for controlling it. Order the m p-values.

Step 1: compare the largest p-value to the specified significance level α: if p_ord(m) > α, don't reject the corresponding null hypothesis.
Step 2: compare the second largest p-value to a modified threshold: if p_ord(m-1) > α(m-1)/m, don't reject the corresponding null hypothesis.
Step 3: if p_ord(m-2) > α(m-2)/m, don't reject the corresponding null hypothesis.
...
Stop when a p-value is lower than the modified threshold: all other null hypotheses (with smaller p-values) are rejected.

Slide by Alex Lewin, Ernest Turro, Paul O'Reilly
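
In R both corrections are one call to p.adjust(); the mixture below is simulated so that most tests are null:

    set.seed(1)
    p <- c(runif(900), rbeta(100, 1, 50))           # 900 null + 100 non-null tests
    sum(p.adjust(p, method = "bonferroni") < 0.05)  # FWER control: very conservative
    sum(p.adjust(p, method = "BH") < 0.05)          # FDR control: more rejections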

  57. Adjusted p-values for BH FDR

The final threshold on p-values, below which all null hypotheses are rejected, is αj*/m, where j* is the index of the largest such p-value.

BH: comparing m p_i / j* to α is equivalent to comparing p_i to αj*/m.

We can define 'adjusted p-values' as m p_i / j*. These adjusted p-values tell you the level of FDR which is being controlled (as opposed to the FWER in the Bonferroni and Holm cases).

Slide by Alex Lewin, Ernest Turro, Paul O'Reilly

  58. Multiple power problem

• We have another problem, related to the power of each test. Each unit tested has a different test statistic that depends on the variance of its distribution, and this variance is usually different for each gene/transcript, ...
• This means that the probability of detecting a given difference is different for each gene: if there is low variability in a gene, we will reject the null hypothesis under a smaller difference.
• Methods that shrink the variance (like the empirical Bayes approach in limma for microarrays) deal with this problem.

  59. Models for counting data

  60. Microarray expression data

Data are colour intensities.

(Density plots of raw and log-scale intensities.)

    log y_ij ~ N(μ_j, σ_j²)

Adapted from slides by Benilton Carvalho

  61. Sequencing data

Gene   Sample 1  Sample 2
ERBB2  0         45
MYC    14        23
ESR1   56        2

• Transcript (or sequence, or methylation) i in sample j is generated at a rate λ_ij.
• A fragment attaches to the flow cell with a probability p_ij (small).
• The number of observed tags y_ij follows a Poisson distribution with a rate that is proportional to λ_ij p_ij. The variance in a Poisson distribution is equal to the mean.

(Density plot of the number of reads.)

Adapted from slides by Benilton Carvalho, Tom Hardcastle

  62. Extra variability

(Plot: squared coefficient of variation against expression; counts show more variability than the Poisson model predicts.)

Adapted from slides by Benilton Carvalho

  63. Negative binomial model for sequencing data

Adapted from slides by Benilton Carvalho

  64. Estimating Overdispersion with edgeR

• edgeR (Robinson, McCarthy, Chen and Smyth)
• Total CV² = Technical CV² + Biological CV². The technical CV² decreases with sequencing depth; the biological CV² (BCV) reflects variability in gene abundance between replicates.
• edgeR borrows information from all genes to estimate the BCV:
  - a common dispersion for all tags
  - empirical Bayes to shrink each tagwise dispersion towards the common dispersion.
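
A sketch of the edgeR workflow on toy counts (the count matrix is simulated; in practice it comes from the aligned reads):

    library(edgeR)
    counts <- matrix(rnbinom(4000, mu = 10, size = 5), ncol = 4)  # 1000 genes x 4 samples
    group <- factor(c(1, 1, 2, 2))
    y <- DGEList(counts = counts, group = group)
    y <- calcNormFactors(y)
    y <- estimateDisp(y, model.matrix(~ group))  # common, trended and tagwise dispersions
    sqrt(y$common.dispersion)                    # the common BCV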

  65. Estimating Overdispersion with DESeq

• DESeq (Anders, Huber)
• Var = sμ + αs²μ², where μ is the expected normalized count value, s is the size factor for the sample and α is the dispersion of the gene.
• estimateDispersions():
  1. computes a dispersion value for each gene,
  2. fits a curve through the estimates,
  3. assigns each gene an estimate between (1) and (2).
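
The analogous steps in DESeq2 (the successor to DESeq), again on toy counts:

    library(DESeq2)
    counts <- matrix(rnbinom(4000, mu = 10, size = 5), ncol = 4)
    coldata <- data.frame(condition = factor(c("A", "A", "B", "B")))
    dds <- DESeqDataSetFromMatrix(counts, coldata, design = ~ condition)
    dds <- estimateSizeFactors(dds)
    dds <- estimateDispersions(dds)  # gene-wise, fitted curve, final shrunken values
    head(dispersions(dds))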

  66. Reproducibility

Slide by Wolfgang Huber

  67. A small number of genes get most of the reads

Slide by Wolfgang Huber

  68. Effective library sizes

• Also called normalization (although the counts themselves are not changed!)
• We must estimate the effective library size of each sample, so that our counts are comparable between genes and samples.
• Should gene lengths be taken into account?
• These library sizes are included in the model as an offset (a parameter with a fixed value).

  69. Estimating library size with edgeR

• edgeR (Robinson, McCarthy, Chen and Smyth)
• Adjusts for sequencing depth and RNA composition (total RNA output).
• Chooses a set of genes with the same RNA composition between samples (using the log fold changes of normalised counts) after trimming.
• Uses the total reads of that set as the estimate.

  70. Estimating library size with DESeq

• DESeq (Anders, Huber)
• Adjusts for sequencing depth and RNA composition (total RNA output).
• Computes the ratio between the log counts for each gene in each sample and the log mean for that gene across all samples.
• The median over all genes is the estimated library size.
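
The median-of-ratios idea can be sketched by hand on toy counts (DESeq's estimateSizeFactors() does the equivalent):

    counts <- matrix(rpois(20, lambda = 50), ncol = 2)  # toy: 10 genes x 2 samples
    log.geomean <- rowMeans(log(counts))                # log geometric mean per gene
    size.factors <- apply(counts, 2, function(cnt)
      exp(median((log(cnt) - log.geomean)[is.finite(log.geomean)])))
    size.factors  # divide each sample's counts by its factor to normalise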
