Undesirable Optimality Results in Multiple Testing? Charles Lewis, Dorothy T. Thayer (PowerPoint presentation)


SLIDE 1

Undesirable Optimality Results in Multiple Testing?

Charles Lewis Dorothy T. Thayer

SLIDE 2

Intuitions about multiple testing:

  • Multiple tests should be more conservative than individual tests.
  • Controlling the per comparison error rate is not enough. Need control of a familywise error rate or, better, the FDR.

SLIDE 3

Multiple testing for multilevel models

  • Applying Bayesian ideas in a sampling theory context. Examples: Shaffer (1999), Gelman & Tuerlinckx (2000), Lewis & Thayer (2004), and Sarkar & Zhou (2008).

SLIDE 4

One-way random effects ANOVA setup (treat σ² and τ² as known): θ_i ~ N(μ, τ²), independently, for i = 1, …, m; Y_i | θ_i ~ N(θ_i, σ²), independently.
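As a concrete sketch of this two-level setup (assuming, for illustration, a prior mean of 0 for the θ_i; the variances τ² and σ² are treated as known, and the function name is ours), the model can be simulated as follows:

```python
import math
import random

def simulate(m, tau2, sigma2, rng):
    """One draw from the two-level model:
    theta_i ~ N(0, tau^2) and Y_i | theta_i ~ N(theta_i, sigma^2)."""
    theta = [rng.gauss(0.0, math.sqrt(tau2)) for _ in range(m)]
    y = [t + rng.gauss(0.0, math.sqrt(sigma2)) for t in theta]
    return theta, y

# One simulated data set with m = 8 means.
rng = random.Random(0)
theta, y = simulate(m=8, tau2=4.0, sigma2=1.0, rng=rng)
```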

SLIDE 5

Consider all s = m(m − 1)/2 pairwise comparisons δ_ij = θ_i − θ_j, with observed differences D_ij = Y_i − Y_j, for 1 ≤ i < j ≤ m.

SLIDE 6

Decision theory framework (based on early work of Lehmann). For each comparison δ_ij, take an action a_ij ∈ {−1, 0, +1}. a_ij = +1: declare δ_ij to be positive; a_ij = −1: declare δ_ij to be negative; a_ij = 0: unable to determine the sign of δ_ij.

SLIDE 7

Two components for loss functions: L1(δ_ij, a_ij) = 1 if the signs of a_ij and δ_ij disagree and 0 otherwise; used to indicate wrong sign declarations. L0(δ_ij, a_ij) = 1 if a_ij = 0 and 0 otherwise; used to indicate signs not determined.
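In code, the two loss components follow directly from these definitions (actions encoded as +1, −1, 0; the function names are ours):

```python
def sign(x):
    """Sign of x: +1, -1, or 0."""
    return (x > 0) - (x < 0)

def loss_wrong_sign(delta, a):
    """L1 component: 1 if a declares a sign that disagrees with sign(delta)."""
    return 1 if a != 0 and sign(delta) * a < 0 else 0

def loss_undetermined(delta, a):
    """L0 component: 1 if no sign is declared (a = 0)."""
    return 1 if a == 0 else 0
```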

SLIDE 8

Per comparison loss function for declaring the sign of δ_ij: L(δ_ij, a_ij) = L1(δ_ij, a_ij) + α·L0(δ_ij, a_ij), with 0 < α < 1. Bayesian decision theory identifies the optimal decision rule d_B, such that the posterior expected loss E[L(δ_ij, a_ij) | Y] is minimized.

SLIDE 9

Finding the posterior expected loss (some helpful notation). If D_ij ≥ 0, define s_ij = +1 and p_ij = P(δ_ij < 0 | Y); if D_ij < 0, define s_ij = −1 and p_ij = P(δ_ij > 0 | Y). It then follows that p_ij = Φ(−√ρ · |D_ij| / (σ√2)), where ρ = τ² / (τ² + σ²).

SLIDE 10

If a_ij = s_ij, the posterior expected loss is p_ij; if a_ij = 0, it is α. Therefore, the Bayes rule declares the sign of δ_ij, namely a_ij = s_ij, iff p_ij ≤ α; otherwise it takes a_ij = 0.
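Under the one-way random effects setup with known variances, the posterior of δ_ij given the data is normal with mean ρ·D_ij and variance 2ρσ², where ρ = τ²/(τ² + σ²), which gives p_ij in closed form. A minimal sketch of the resulting per comparison Bayes rule (our function names):

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def wrong_sign_prob(d, sigma2, tau2):
    """p_ij: posterior probability that sign(delta_ij) disagrees with sign(D_ij),
    using delta_ij | Y ~ N(rho * D_ij, 2 * rho * sigma^2)."""
    rho = tau2 / (tau2 + sigma2)
    return Phi(-math.sqrt(rho) * abs(d) / math.sqrt(2.0 * sigma2))

def bayes_action(d, sigma2, tau2, alpha):
    """Declare sign(D_ij) iff p_ij <= alpha; otherwise make no determination."""
    if wrong_sign_prob(d, sigma2, tau2) <= alpha:
        return 1 if d >= 0 else -1
    return 0
```

For example, with σ² = 1 and τ² = 4, a difference D_ij = 3 is declared positive at α = 0.05, while D_ij = 0.1 is left undetermined.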

SLIDE 11

Since the posterior expected loss for d_B is always less than or equal to α, it follows that the Bayes risk for d_B is also less than or equal to α: E[L(δ_ij, d_B)] ≤ α.

SLIDE 12

Consequently, E[L1(δ_ij, d_B)] ≤ α. This expectation is the (random effects) probability of incorrectly declaring the sign of δ_ij using the decision rule d_B: the per comparison wrong sign rate for d_B.

SLIDE 13

Explicit expression for the per comparison wrong sign rate of d_B.

SLIDE 14

For the usual (fixed effects) per comparison test, τ² is taken to be infinite, so ρ = 1. Define a fixed effects decision rule d_F by a_ij = s_ij iff |D_ij| / (σ√2) ≥ z_α; otherwise we have a_ij = 0.

SLIDE 15

Since z_α is based on the sampling distribution of D_ij given δ_ij, we may write P(d_F gives a wrong sign declaration | δ_ij) ≤ α for every δ_ij, and so E[L1(δ_ij, d_F)] ≤ α.

SLIDE 16

Conclusion: the Bayesian random effects rule d_B and the fixed effects rule d_F both control the random effects per comparison wrong sign rate at α, but the Bayesian rule is more conservative than the fixed effects rule.
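The conservativeness can be seen by comparing the |D_ij| cutoffs the two rules imply (a sketch under the random effects setup with known variances; since ρ < 1, the Bayes cutoff is strictly larger):

```python
import math
from statistics import NormalDist

def threshold_fixed(sigma2, alpha):
    """d_F declares a sign iff |D_ij| >= sigma * sqrt(2) * z_alpha."""
    z_alpha = NormalDist().inv_cdf(1.0 - alpha)
    return math.sqrt(2.0 * sigma2) * z_alpha

def threshold_bayes(sigma2, tau2, alpha):
    """d_B declares a sign iff |D_ij| >= sigma * sqrt(2) * z_alpha / sqrt(rho)."""
    rho = tau2 / (tau2 + sigma2)
    return threshold_fixed(sigma2, alpha) / math.sqrt(rho)
```

For example, with σ² = 1, τ² = 4, and α = 0.05, d_F declares a sign when |D_ij| exceeds about 2.33, while d_B requires about 2.60; as τ² grows, ρ → 1 and the two cutoffs coincide.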

SLIDE 17

Extend the definition of the per comparison loss function to the set of s comparisons: L(δ, a) = (1/s) Σ_{i<j} [L1(δ_ij, a_ij) + α·L0(δ_ij, a_ij)].

SLIDE 18

Interpretation of L(δ, a): this new loss function equals the proportion of comparisons whose signs are incorrectly declared using a, plus α times the proportion of comparisons whose signs are not determined using a.

SLIDE 19

Family of optimal action vectors. Order the p_ij so that p_(1) ≤ p_(2) ≤ … ≤ p_(s). Define a^(k) for k = 1, …, s as the action vector that declares signs for the k comparisons with the smallest p_ij (setting a_ij = s_ij) and takes a_ij = 0 for the rest. Take a^(0) = (0, …, 0).

SLIDE 20

The Bayesian decision rule for the loss function L(δ, a) is d_B = a^(K), where K is the largest value of k such that p_(k) ≤ α, or K = 0 if p_(1) > α.
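A sketch of this rule on the sorted posterior wrong sign probabilities (for this additive loss it simply declares every comparison with p_(k) ≤ α; the function name is ours):

```python
def bayes_K(p_sorted, alpha):
    """K for d_B: the largest k with p_(k) <= alpha, or 0 if p_(1) > alpha.
    p_sorted must be in ascending order."""
    K = 0
    for k, p in enumerate(p_sorted, start=1):
        if p <= alpha:
            K = k
    return K
```

For example, bayes_K([0.01, 0.04, 0.20], 0.05) gives K = 2: signs are declared for the two comparisons with the smallest p_ij.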

SLIDE 21

Posterior expected loss for a^(k): (1/s)[p_(1) + … + p_(k) + α(s − k)] if k ≥ 1, and α if k = 0.

SLIDE 22

Since p_(k) ≤ α for all k ≤ K, the posterior expected loss for the Bayesian decision function d_B must be less than or equal to α, and the Bayes risk for d_B must also be less than or equal to α: E[L(δ, d_B)] ≤ α.

SLIDE 23

Consequently, E[(1/s) Σ_{i<j} L1(δ_ij, d_B)] ≤ α. This expectation is the (random effects) per comparison wrong sign rate for the set of comparisons using the Bayes rule d_B.

SLIDE 24

Rewriting the bound on the posterior expected loss for d_B given K = k ≥ 1, we have (1/k)[p_(1) + … + p_(k)] ≤ α.

SLIDE 25

Consequently, we may interpret (1/k)[p_(1) + … + p_(k)] as the posterior expected proportion of wrong sign declarations among the k declared signs, so this posterior expected proportion is at most α.

SLIDE 26

Since this inequality gives an upper bound on the posterior expectation, a corresponding upper bound holds for the unconditional expectation: the expected proportion of declared signs that are wrong is at most α.

SLIDE 27

This quantity (evaluated for any decision rule d) is referred to by Sarkar and Zhou (2008) as the Bayesian directional false discovery rate, or BDFDR, for d. The result that d_B controls the BDFDR was given by Lewis and Thayer (2004). Having a per comparison rule control a version of the FDR is counterintuitive!
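The BDFDR of the per comparison Bayes rule can be checked by simulation under the two-level model (a sketch assuming a prior mean of 0 and known variances; V counts wrong sign declarations and R counts all sign declarations in one set of comparisons):

```python
import math
import random

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bdfdr_estimate(m, tau2, sigma2, alpha, n_reps, seed=0):
    """Monte Carlo estimate of the BDFDR, E[V / max(R, 1)], for the
    per comparison Bayes rule: declare sign(D_ij) iff p_ij <= alpha."""
    rng = random.Random(seed)
    rho = tau2 / (tau2 + sigma2)
    total_fdp = 0.0
    for _ in range(n_reps):
        # Draw one data set from the two-level model.
        theta = [rng.gauss(0.0, math.sqrt(tau2)) for _ in range(m)]
        y = [t + rng.gauss(0.0, math.sqrt(sigma2)) for t in theta]
        R = V = 0
        for i in range(m):
            for j in range(i + 1, m):
                d = y[i] - y[j]
                p = Phi(-math.sqrt(rho) * abs(d) / math.sqrt(2.0 * sigma2))
                if p <= alpha:
                    R += 1
                    if (theta[i] - theta[j]) * d < 0:
                        V += 1
        total_fdp += V / max(R, 1)
    return total_fdp / n_reps

est = bdfdr_estimate(m=10, tau2=4.0, sigma2=1.0, alpha=0.05, n_reps=400)
```

With these settings the estimate typically comes out below α = 0.05, consistent with the control result.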

SLIDE 28

Sarkar and Zhou (2008) propose another decision rule (here labeled d_SZ) that also controls the BDFDR and maximizes the posterior per comparison power rate.

SLIDE 29

Specifically, d_SZ = a^(K*), where K* is the largest value of k such that (1/k)[p_(1) + … + p_(k)] ≤ α, or K* = 0 if no such k exists. Thus d_SZ controls the BDFDR at α.
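A sketch of this step rule on the sorted p_(k) (the running mean of the p_(i) replaces the individual comparison of p_(k) with α; the function name is ours):

```python
def sarkar_zhou_K(p_sorted, alpha):
    """K* for d_SZ: the largest k with (p_(1) + ... + p_(k)) / k <= alpha,
    or 0 if no such k exists. p_sorted must be in ascending order."""
    K, running_sum = 0, 0.0
    for k, p in enumerate(p_sorted, start=1):
        running_sum += p
        if running_sum / k <= alpha:
            K = k
    return K
```

Because the criterion averages the p_(i), K* is always at least as large as the K of the Bayes rule: sarkar_zhou_K([0.01, 0.03, 0.10], 0.05) returns 3, where the Bayes rule would stop at 2.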

SLIDE 30

Sarkar and Zhou (2008) also proved that, among (non-randomized) rules that control the BDFDR, d_SZ maximizes the posterior per comparison power rate: the posterior expected proportion of comparisons whose signs are correctly declared.

SLIDE 31

Too much power? Not only does d_SZ have more power than the Bayes rule d_B, it may also have more power than the fixed effects rule d_F. In other words, d_SZ will sometimes declare a sign for δ_ij even when |D_ij| / (σ√2) < z_α. This is counterintuitive!
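A small numerical sketch of this effect (our construction, using the random effects setup with σ² = 1, τ² = 4, α = 0.05): one comparison has |D_ij| = 2.0, below the fixed effects cutoff of about 2.33, yet the Sarkar and Zhou rule still declares its sign because the other comparisons have very small p_ij:

```python
import math
from statistics import NormalDist

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def wrong_sign_prob(d, sigma2, tau2):
    """p_ij for an observed difference d."""
    rho = tau2 / (tau2 + sigma2)
    return Phi(-math.sqrt(rho) * abs(d) / math.sqrt(2.0 * sigma2))

sigma2, tau2, alpha = 1.0, 4.0, 0.05

# Nine well-separated comparisons plus one borderline comparison.
D = [6.0] * 9 + [2.0]
p = sorted(wrong_sign_prob(d, sigma2, tau2) for d in D)

# Fixed effects rule: declare only if |D_ij| >= sigma*sqrt(2)*z_alpha (~2.33).
z_alpha = NormalDist().inv_cdf(1.0 - alpha)
fixed_declares_borderline = 2.0 >= math.sqrt(2.0 * sigma2) * z_alpha

# Sarkar-Zhou rule: largest k with mean of the k smallest p's <= alpha.
K_star, running_sum = 0, 0.0
for k, pk in enumerate(p, start=1):
    running_sum += pk
    if running_sum / k <= alpha:
        K_star = k

sz_declares_borderline = K_star == len(D)  # borderline p_ij is the largest
```

So d_SZ determines a sign for a comparison that the conventional per comparison test would leave undetermined.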

SLIDE 32

To summarize, in a multilevel model like random effects ANOVA, Bayesian ideas have sampling interpretations. In particular, we may define a Bayesian (or random effects) version of the FDR: The average (over both levels) proportion of declared signs for a set of comparisons that are incorrectly declared.

SLIDE 33

  • 1. A Bayesian per comparison decision rule turns out to provide control of this FDR, even though it was only designed to minimize an expected per comparison loss function.
  • 2. And a rule designed to control this FDR may have more power than a conventional per comparison rule.

SLIDE 34

References

Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B, 57, 289-300.
Gelman, A., & Tuerlinckx, F. (2000). Type S error rates for classical and Bayesian single and multiple comparison procedures. Computational Statistics, 15, 373-390.
Jones, L. V., & Tukey, J. W. (2000). A sensible formulation of the significance test. Psychological Methods, 5, 411-414.
Lehmann, E. L. (1950). Some principles of the theory of testing hypotheses. The Annals of Mathematical Statistics, 21, 1-26.

SLIDE 35

Lehmann, E. L. (1957a). A theory of some multiple decision problems. I. The Annals of Mathematical Statistics, 28, 1-25.
Lehmann, E. L. (1957b). A theory of some multiple decision problems. II. The Annals of Mathematical Statistics, 28, 547-572.
Lewis, C., & Thayer, D. T. (2004). A loss function related to the FDR for random effects multiple comparisons. Journal of Statistical Planning and Inference, 125, 49-58.
Sarkar, S. K., & Zhou, T. (2008). Controlling directional Bayesian false discovery rate in random effects model. Journal of Statistical Planning and Inference, 138, 682-693.

SLIDE 36

Shaffer, J. P. (1999). A semi-Bayesian study of Duncan's Bayesian multiple comparison procedure. Journal of Statistical Planning and Inference, 82, 197-213.
Shaffer, J. P. (2002). Multiplicity, directional (Type III) errors, and the null hypothesis. Psychological Methods, 7, 356-369.
Williams, V. S. L., Jones, L. V., & Tukey, J. W. (1999). Controlling error in multiple comparisons, with examples from state-to-state differences in educational achievement. Journal of Educational and Behavioral Statistics, 24, 42-69.