Hypothesis Testing, p-values, Type I and Type II Errors - PowerPoint PPT Presentation

  1. Hypothesis testing, p-values, Type I and Type II Errors. "Statistics are no substitute for judgment." Henry Clay (US Senator)

  2. Formal hypothesis testing. Population A vs. population B: is the observed difference in mean height due to random chance? We draw a sample from each population and compare the sample means.
  H_0: μ_A = μ_B
  H_1: μ_A ≠ μ_B
  If the actual p-value < α, reject the null hypothesis (H_0) and accept the alternative hypothesis (H_1).
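A minimal sketch of the A-vs-B comparison above, assuming a two-sample t-test in Python with scipy; the library choice and the height values are illustrative, not taken from the slides:

```python
# Two-sample t-test sketch: are the mean heights of A and B plausibly equal?
# Heights are hypothetical illustration data, not values from the slides.
from scipy import stats

heights_A = [172.1, 168.4, 175.0, 170.2, 169.8, 173.5]
heights_B = [166.0, 164.2, 169.1, 165.5, 167.3, 163.9]

alpha = 0.05                                              # chosen significance level
t_stat, p_value = stats.ttest_ind(heights_A, heights_B)   # H_0: mu_A = mu_B

if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject H_0, accept H_1 (means differ)")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject H_0")
```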

  3. How to convert between scales. A test result can be expressed on three equivalent scales: original units (y), t-values (standard error units, e.g. -3, -2, -1, 0, 1, 2, 3), and p-values (percentiles/probabilities, e.g. 0.001, 0.50, 0.999).
  Original units to t-value: t = (value − ȳ) / SE_ȳ.
  t-value back to original units: value = (t-value × SE_ȳ) + ȳ.
  Probability to t-value: qt(p, df); t-value to p-value: pt(t-value, df).
  Compare the test p-value to the α-level: below α the result is significant, otherwise it is not significant.
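The qt()/pt() calls on this slide look like R's t-distribution helpers; the sketch below shows the same three-way conversion using scipy's equivalents (t.ppf and t.sf) as an assumption. The mean, standard error, and degrees of freedom are made-up illustration values:

```python
# Converting between the three scales: original units <-> t-value <-> p-value.
# xbar, SE, and df are hypothetical numbers chosen only for illustration.
from scipy import stats

xbar, SE, df = 170.0, 1.5, 10        # sample mean, standard error, degrees of freedom

value = 173.0
t_val = (value - xbar) / SE          # original units -> t-value
back = t_val * SE + xbar             # t-value -> original units

p_two_sided = 2 * stats.t.sf(abs(t_val), df)   # t-value -> p-value (like pt() in R)
t_crit = stats.t.ppf(1 - 0.05 / 2, df)         # alpha -> critical t-value (like qt() in R)

print(f"t = {t_val:.2f}, back-converted value = {back:.1f}")
print(f"two-sided p = {p_two_sided:.4f}, critical t at alpha 0.05 = {t_crit:.2f}")
```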

  4. "Is this difference in mean height between A and B due to random chance?" In other words: "Is random chance a plausible explanation?"
  P-value: the probability that the observed value, or a larger one, is due to random chance.
  Theory: we can never really prove that the two samples are truly different or the same; we can only ask whether what we observe (or a greater difference) could be due to random chance.
  How to interpret p-values: p-value = 0.05: "Yes, 1 out of 20 times." p-value = 0.01: "Yes, 1 out of 100 times."
  The lower the probability that a difference is due to random chance, the more likely it is that the result reflects a real effect (what we are testing for).
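One way to make "the probability that the observed difference or larger is due to random chance" concrete is a label-shuffling (permutation) simulation; this sketch reuses the same hypothetical height data as above:

```python
# Permutation sketch: shuffle the A/B labels many times and count how often a
# difference at least as large as the observed one arises by chance alone.
# The height values are hypothetical illustration data.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([172.1, 168.4, 175.0, 170.2, 169.8, 173.5])
B = np.array([166.0, 164.2, 169.1, 165.5, 167.3, 163.9])

observed = A.mean() - B.mean()
pooled = np.concatenate([A, B])

n_perm, extreme = 10_000, 0
for _ in range(n_perm):
    rng.shuffle(pooled)                                    # break any real A/B difference
    diff = pooled[:len(A)].mean() - pooled[len(A):].mean()
    if abs(diff) >= abs(observed):
        extreme += 1

print(f"Permutation p-value: {extreme / n_perm:.4f}")      # chance of this difference or larger
```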

  5. The four possible outcomes of a test:
  • Fail to reject the null hypothesis when the null hypothesis is true: Correct Decision.
  • Fail to reject the null hypothesis when the alternative hypothesis is true: Incorrect Decision (Type II Error, False Negative).
  • Reject the null hypothesis when the null hypothesis is true: Incorrect Decision (Type I Error, False Positive).
  • Reject the null hypothesis when the alternative hypothesis is true: Correct Decision.
  Type I Error: rejecting the null hypothesis (H_0) when it is actually true. Type II Error: failing to reject the null hypothesis (H_0) when it is not true.
  Remember: rejecting or failing to reject based on a p-value (and therefore the chance you will make an error) depends on the arbitrary α-level you choose. Lowering the α-level will decrease the probability of making a Type I Error, but it increases the probability of making a Type II Error. The α-level you choose is completely up to you (typically it is set at 0.05); however, it should be chosen with consideration of the consequences of making a Type I or a Type II Error. Based on your study, would you rather err on the side of false positives or false negatives?
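A small simulation, under assumed data, of what the α-level means in practice: when H_0 is true by construction, a test at α = 0.05 still rejects it (a Type I Error / false positive) in roughly 5% of repeated experiments:

```python
# Type I Error sketch: both groups come from the same population, so H_0 is true
# by construction; a test at alpha = 0.05 still rejects in about 5% of runs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n_runs, false_positives = 0.05, 5_000, 0

for _ in range(n_runs):
    a = rng.normal(170, 5, size=20)    # same mean and spread for both groups
    b = rng.normal(170, 5, size=20)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1           # rejecting a true H_0 = Type I Error

print(f"Observed Type I Error rate: {false_positives / n_runs:.3f}")   # roughly alpha
```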

  6. Example: Will current forests adequately protect genetic resources under climate change?
  H_0: the climate range of the Birch Mountain Wildlands (BMW) protected area under the current climate = its climate range under climate change.
  H_a: the climate range of the BMW protected area under the current climate ≠ its climate range under climate change.
  If we reject H_0: the climate ranges are different, therefore genetic resources are not adequately protected and new protected areas need to be created.
  Consequences if I make:
  • Type I Error: the climates are actually the same and genetic resources are indeed adequately protected in the BMW protected area; we created new parks when we didn't need to.
  • Type II Error: the climates are different and genetic resources are vulnerable; we didn't create new protected areas and we should have.
  From an ecological standpoint it is better to make a Type I Error, but from an economic standpoint it is better to make a Type II Error. Which standpoint should I take?

  7. Statistical Power. Power is your ability to reject the null hypothesis when it is false (i.e. your ability to detect an effect when there is one). There are several ways to increase power:
  1. Increase your sample size (sample more of the population). Because you are testing whether what you observed, or something more extreme, is due to random chance, more data gives you a better picture of what is truly happening in the population, so increasing sample size decreases the probability of making a Type II Error.
  2. Increase your alpha value (e.g. from 0.01 to 0.05), but watch for Type I Errors!
  3. Use a one-tailed test (when you know the direction of the expected effect).
  4. Use a paired test (control and treatment are the same sample).
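A rough power sketch under assumed numbers (effect size, spread, and sample sizes are hypothetical): when the groups genuinely differ, larger samples reject H_0 more often, i.e. fewer Type II Errors:

```python
# Power sketch: the groups truly differ by true_diff, so every failure to reject
# H_0 is a Type II Error; power = share of runs that do reject H_0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha, true_diff, sd, n_runs = 0.05, 3.0, 5.0, 2_000

for n in (10, 30, 100):                                   # increasing sample size
    rejections = 0
    for _ in range(n_runs):
        a = rng.normal(170, sd, size=n)
        b = rng.normal(170 + true_diff, sd, size=n)       # groups genuinely differ
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            rejections += 1
    print(f"n = {n:3d}: estimated power = {rejections / n_runs:.2f}")
```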
