

  1. Operational Trials: Objective and Design Issues
     Wendy Bergerud, Research Branch, BC Min. of Forests, March 2003

  2. Adaptive Management Cycle (diagram): Assess, Design, Implement, Monitor, Evaluate, Adjust

  3. Research trials vs Operational trials
     - Operational trials are not the research trials’ “poor cousin”.
     - Operational trials must be designed and implemented with great care and forethought if results are to be accurate and useful in making better management decisions.

  4. Research trials vs Operational trials
     - Research trials are designed to find treatment differences while controlling or accounting for as many other sources of variation as possible.
     - Operational trials are designed to test if treatments “work” when applied under operational conditions.

  5. “Statistical analysis and interpretation are the least critical aspects of experimentation, in that if purely statistical or interpretative errors are made, the data can be reanalyzed. On the other hand, the only complete remedy for design or execution errors is repetition of the experiment.” (Hurlbert, 1984, p. 189)

  6. Trial Life Cycle: steps, with the documentation produced at each step
     1) Identify need for study (Problem Analysis)
     2) Design (Working Plan)
     3) Establish (Establishment Report)
     4) Collect data and maintain site(s) (Project Diary)
     5) Analyse data (Progress and/or Final Report(s))
     6) Communicate results (Progress and/or Final Report(s))
     7) Wrap-up and Extension

  7. The Role of Statistics and Statisticians
     - The statistician’s expertise is particularly relevant during:
       - design of the operational trial (step 2); and
       - analysis and interpretation of the collected data (steps 5 and 6).
     - Research scientists can often provide this expertise at a basic level.

  8. Questions we ask about design:
     - What questions are we trying to answer with this trial? What is the objective?
     - How can we design this trial to answer these questions, and yet do so within the given logistical and resource constraints?
     - What internal and external threats could undermine our confidence in the final results? How can we mitigate these threats?

  9. Questions we ask about analysis and interpretation:
     - How do we analyse the data to answer the questions?
     - What assumptions are necessary to do this analysis and make these conclusions?
     - And how well do we know these conclusions? (e.g. confidence limits around a mean)
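
As a concrete illustration of the last bullet, here is a minimal sketch of confidence limits around a mean, using hypothetical seedling heights (values and variable names are invented for illustration):

```python
import numpy as np
from scipy import stats

# hypothetical plot-level seedling heights (cm); values are illustrative only
heights = np.array([41.2, 38.5, 44.0, 39.8, 42.7, 40.1, 43.3, 37.9])

mean = heights.mean()
sem = stats.sem(heights)  # standard error of the mean
low, high = stats.t.interval(0.95, df=len(heights) - 1, loc=mean, scale=sem)
print(f"mean = {mean:.1f} cm, 95% CI = ({low:.1f}, {high:.1f})")
```

The width of such an interval is one direct way to report how well we know a conclusion.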

  10. “Statistical designs always involve compromises between the desirable and the possible.” - Leslie Kish
     - Some assumptions and simplifications must always be made.
     - We must understand their consequences:
       - Will others agree with them?
       - How do they weaken our ability to generalize the results?
       - How do they weaken any cause and effect statements that we might like to make?

  11. Trial Objective
     - Should be specific and detailed.
     - Should describe the population under study.
     - If looking for differences, should describe the minimum difference of practical importance.
     - Consider a range of possible outcomes and what that means for the design, data analysis, and interpretation. (Q1)

  12. Population Definition
     - To what population do we want our trial results to be relevant? That is, what material do we want to study?
     - What population CAN we study? How does this limit our study results? (Q2)

  13. Populations (diagram only)

  14. What are the treatments and what controls will we have?
     - Controls are very important for determining if treatments have had an effect.
     - Consider if controls for spatial and/or temporal variation are required. (Q3)

  15. Effect Size
     - How big must the difference between the treatments, or between treatment and control, be in order for us to change our management practice?
     - If our trial can’t detect a difference this small, then our study can’t provide the required evidence to change management practice [sample size requirements and power]. (Q4)
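
To make the link between the minimum important difference, sample size, and power concrete, here is a minimal sketch using statsmodels' two-sample t-test power calculator; the effect size, alpha, and power values below are illustrative assumptions, not recommendations:

```python
from statsmodels.stats.power import TTestIndPower

# suppose (hypothetically) the smallest height-growth difference worth acting on
# is 5 cm, and past surveys suggest a standard deviation of about 10 cm
effect_size = 5 / 10  # standardized effect size (Cohen's d)

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,   # accepted risk of a false positive
    power=0.80,   # desired chance of detecting a difference this large
)
print(f"study units needed per treatment: {n_per_group:.0f}")
```

If the affordable number of study units falls well below this, the trial cannot detect the difference that matters and the design or objective should be revisited.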

  16. Consider possible outcomes
     - What are some of the possible outcomes, and how do they affect the design and the possible data analysis and interpretation? (Q5)

  17. Nuisance variables
     - What are some of the ‘nuisance’ variables that will affect our results? For each one, we need to determine whether to:
       - randomize it out; or
       - include it in the design as a factor in the study; or
       - include it as an independent continuous variable in the statistical model. (Q6)
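
The three options show up directly in how the statistical model is written. Here is a minimal sketch with statsmodels, using a hypothetical data set (column names and values are invented): block is kept as a design factor, slope is kept as a continuous covariate, and anything randomized out simply does not appear in the model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical plot-level data with a response and two nuisance variables
df = pd.DataFrame({
    "height":    [41, 45, 38, 44, 40, 47, 36, 42],
    "treatment": ["A", "B", "A", "B", "A", "B", "A", "B"],
    "block":     [1, 1, 2, 2, 3, 3, 4, 4],    # nuisance variable kept as a factor
    "slope":     [5, 8, 12, 10, 3, 6, 15, 9], # nuisance variable kept as a covariate
})

# nuisance variables that were randomized out (e.g. planting-crew order)
# are deliberately absent from the formula
model = smf.ols("height ~ C(treatment) + C(block) + slope", data=df).fit()
print(model.params)
```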

  18. Study Unit Definition and Selection
     - How are we going to select members from the population to study? These members are our study units.
     - Can we select them in a random and/or unbiased manner? (Q7)
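
A minimal sketch of unbiased selection (cutblock names and sample size are hypothetical): draw the study units from the population list with a seeded random sample, so the draw is both random and reproducible.

```python
import random

# hypothetical list of candidate cutblocks in the 'gray area' population
population = [f"cutblock_{i:03d}" for i in range(1, 41)]

random.seed(2003)  # record the seed so the selection can be reproduced
study_units = random.sample(population, k=10)
print(study_units)
```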

  19. Replication
     - How many study units will we use? Do our resources limit the amount of replication possible?
     - Know how to recognize pseudo-replication. (Q8)

  20. Treatment Assignment
     - How will the treatment and ‘nuisance’ factors be assigned to the study units?
     - Can we assign them in a random and unbiased manner, or are they characteristics of the study units that can only be observed (observational factors)? (Q9)
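
Where treatments can be assigned rather than merely observed, the assignment can be randomized within each block. A minimal sketch, assuming the two treatments from the example and the five-site layout shown later (Design 1):

```python
import random

random.seed(42)  # keep the randomization reproducible and documented
treatments = ["site-prep", "duff-planting"]
sites = ["Site 1", "Site 2", "Site 3", "Site 4", "Site 5"]

# randomized block assignment: each site (block) gets both treatments,
# with the order of the two plots shuffled independently within each site
assignment = {}
for site in sites:
    order = treatments.copy()
    random.shuffle(order)
    assignment[site] = order

for site, plots in assignment.items():
    print(site, "->", plots)
```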

  21. Response Variables
     - What variables measure the response we are interested in, and how will we measure them?
     - Can the variables we are interested in be directly measured, or must we use a proxy or surrogate measure? (Q10)

  22. Subsampling of study units
     - Must we subsample the study units to obtain a study unit response, or can we directly get a response for the whole unit?
     - If subsampling is required, can we use simple random sampling within each study unit? Or do we need a more complicated sampling scheme? (Q11)
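
If seedlings (or other subsamples) are measured within each study unit, one simple guard against pseudo-replication is to take a simple random subsample within each unit and then collapse it to a single unit-level response before analysis. A minimal pandas sketch with invented column names and values:

```python
import pandas as pd

# hypothetical seedling-level measurements: several seedlings per study unit
seedlings = pd.DataFrame({
    "unit":      ["U1"] * 4 + ["U2"] * 4 + ["U3"] * 4,
    "treatment": ["site-prep"] * 4 + ["duff-planting"] * 4 + ["site-prep"] * 4,
    "height":    [40, 43, 39, 45, 35, 37, 34, 38, 44, 42, 46, 41],
})

# simple random subsample within each unit, then one mean response per unit,
# so the later analysis is done on study units rather than on seedlings
subsample = seedlings.groupby("unit", group_keys=False).sample(n=3, random_state=1)
unit_response = subsample.groupby(["unit", "treatment"], as_index=False)["height"].mean()
print(unit_response)
```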

  23. Proposed Analysis
     - What form of analysis do we expect to run on the data?
     - What design requirements does this analysis have? (Q12)

  24. Example: Objective
     - The objective is to compare site preparation with duff-planting.
     - Seems like a simple and straightforward objective, but is it?

  25. Example: Setting
     - A forest company must manage a large number of cutblocks that need to be planted each year (the population).
     - Duff-planting is cheaper than site-prep.
     - The company would prefer to use site-prep only when necessary.

  26. Example: Setting
     - Various attributes of cutblocks in the population are known (Factor C).
     - Based on these attributes, we know that some cutblocks will do just fine with duff-planting and that others will need site-prep.
     - The question applies to those cutblocks in the ‘gray area’.

  27. Possible Outcomes: for C = 1
     (chart: growth or survival plotted against Factor C levels 1-5 for site-prep and duff-planting; at C = 1 there is no real difference in response)

  28. Possible Outcomes: for C = 5
     (chart: growth or survival plotted against Factor C levels 1-5 for site-prep and duff-planting; at C = 5 there is a clear gain from using site-prep)

  29. Possible Outcomes: for other C
     (chart: growth or survival plotted against Factor C levels 1-5; the question is what happens in the gray area between C = 1 and C = 5)

  30. Example: Objective
     - How large must the gain be for management to switch practice from duff-planting to site-prep? You must determine threshold values.
     - The question we are really interested in is: can we use some cutblock attributes to predict when the difference in response between duff-planting and site-prep is large enough that site-prep is preferred?

  31. Example: Decision Risks
     - The study can help increase the percentage of ‘right’ decisions, but 100% ‘right’ decisions are unattainable.
     - Industry might prefer to err by duff-planting unless site-prep is really necessary.
     - The Ministry might prefer to err by requiring site-prep unless the loss due to duff-planting is demonstrated to be negligible.

  32. Abbreviations
     - RB - Randomized Block
     - CR - Completely Randomized
     - FRD - Factor Relationship Diagram
     - df - Degrees of freedom

  33. RB Design - 5 “Different” Sites (Design 1)
     (diagram: one site/block at each level of Factor C, i.e. Site 1 at C=1 through Site 5 at C=5)

  34. RB Design - 5 “Different” Sites (Design 1)
     Factor C:     1     2     3     4     5
     Site/Block:   1     2     3     4     5
     Treatment:    A B   A B   A B   A B   A B
     Plot:         1 2   3 4   5 6   7 8   9 10
     - Treatment is replicated.
     - Factor C is not replicated and is confounded with site/blocks.
     - A good “screening” design if nothing is known about the levels of Factor C.

  35. Analysis of Variance Table (Design 1)
     Matches the FRD:
       Source            df   Error term
       Factor C           0   B(C)
       Site/Block B(C)    4   Plots(CBT)
       Treatment T        1   T x B(C)
       T x C              0   T x B(C)
       T x B(C)           4   Plots(CBT)
       Plots(CBT)         0   --
     Final Report:
       Source            df   Test?
       Block B            4   --
       Treatment T        1   Yes
       T x B              4   --
     - Many sources cannot be estimated, but the most interesting test is available.
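
A minimal sketch of how the treatment test in this table could be computed, using invented responses for the ten plots of Design 1. With one plot per treatment-by-block cell, the residual of an additive model has 4 df and plays the role of T x B(C), so it serves as the error term for the treatment F-test:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# hypothetical responses for the 10 plots (5 blocks x 2 treatments)
plots = pd.DataFrame({
    "block": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "trt":   ["A", "B"] * 5,
    "y":     [12.1, 13.4, 10.8, 12.0, 11.5, 12.9, 9.7, 11.1, 13.0, 14.2],
})

# additive model: the Residual line (4 df) corresponds to T x B(C) above
model = smf.ols("y ~ C(block) + C(trt)", data=plots).fit()
print(sm.stats.anova_lm(model, typ=2))
```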

  36. RB Design - 5 “Similar” Sites (Design 2)
     (diagram: all five sites/blocks at the same level of Factor C, C=3)

  37. RB Design - 5 “Similar” Sites (Design 2)
     Factor C:     3 (all sites)
     Site/Block:   1     2     3     4     5
     Treatment:    A B   A B   A B   A B   A B
     Plot:         1 2   3 4   5 6   7 8   9 10
     - Treatments are replicated within one level of Factor C.
     - No information on different levels of Factor C - inference is limited.
