Session A: Supersaturated Design (Wednesday, March 4, 8:30AM-10:00AM)

Searching for Powerful Supersaturated Designs
David Edwards, Virginia Commonwealth University

An important property of any experimental design is its ability to detect active factors. For supersaturated designs, in which factors outnumber experimental runs, power is even more critical. In this talk, we consider several popular supersaturated design construction criteria, propose several of our own, and present the results of an extensive simulation study evaluating these construction criteria in terms of power. We use two analysis methods, forward selection and the Dantzig selector, and find that although the latter clearly outperforms the former, most supersaturated design construction methods are indistinguishable in terms of power. We demonstrate further, however, that when the sign of each main effect can be correctly specified in advance, supersaturated designs obtained by minimizing the variance of the squared pairwise column inner products (the off-diagonal entries of the information matrix), subject to a constraint on the average of these elements, have significantly higher power to detect active factors than designs constructed under standard criteria.

Benefits and Fast Construction of Efficient Two-Level Foldover Designs
Anna Errore, University of Minnesota

Recent work in two-level screening experiments has demonstrated the advantages of using small foldover designs, even when such designs are not orthogonal for the estimation of main effects. In this paper, we provide further support for this argument and develop a fast algorithm for constructing efficient two-level foldover (EFD) designs. We show that these designs have equal or greater efficiency for estimating the main effects model than competing designs in the literature, and that our algorithmic approach allows the fast construction of designs with many more factors and/or runs. Our compromise algorithm lets the practitioner choose among many designs, trading off efficiency of the main effect estimates against correlation of the two-factor interactions. Using this compromise approach, practitioners can decide just how much efficiency they are willing to sacrifice to avoid confounding of two-factor interactions and to lower an omnibus measure of correlation among the two-factor interactions.

E(s²)- and UE(s²)-Optimal Supersaturated Designs
C.S. Cheng, Academia Sinica

The popular E(s²)-criterion for choosing two-level supersaturated designs minimizes the sum of squares of the off-diagonal entries of the information matrix over the designs in which the two levels of each factor appear the same number of times. Jones and Majumdar (2014) proposed the UE(s²)-criterion, which is essentially the same as the E(s²)-criterion except that the requirement of factor-level balance is dropped. We compare UE(s²)-optimal designs and the traditional E(s²)-optimal designs based on their average efficiencies over lower-dimensional projections. Since the requirement of level balance is bypassed, there are usually many UE(s²)-optimal designs with diverse performances when other properties are considered. Jones and Majumdar (2014) mentioned the maximization of the number of level-balanced factors as a possible secondary criterion. We show by example that this does not work well from a projective point of view and propose a more appropriate secondary criterion. We also identify several families of designs that are both E(s²)- and UE(s²)-optimal. This is joint work with Pi-Wen Tsai.

Session B: Covering Arrays (Wednesday, March 4, 10:30AM-12:00PM)

Covering Arrays: Applications, Algorithms, and Challenges
Joseph Morgan, SAS Institute Inc.

A homogeneous covering array CA(N; t, k, v) is an N × k array on v symbols such that any t-column projection contains all vᵗ level combinations at least once. In this talk we will describe key generalizations of this basic homogeneous covering array model, show the link to orthogonal arrays, and explain why these constructs are increasingly viewed by the software engineering community as an important tool for software validation. In the process we will provide an overview of algorithms and construction methods and discuss some of the challenges that remain.

Covering Arrays Avoiding Forbidden Edges
Elizabeth Maltais, University of Ottawa

Covering arrays avoiding forbidden edges (CAFEs) are combinatorial designs with applications to the design of test suites in the following scenario: all required interactions between pairs of components are covered by at least one test, while a specified list of forbidden interactions is avoided by all tests. When no interactions are forbidden, CAFEs are simply covering arrays of strength two. In this talk, we survey some important introductory results on CAFEs, including their relationship to edge clique covers, and computational complexity results. We also discuss further generalizations of CAFEs which allow for optional interactions as well as forbidden and required interactions.

Column Replacement and Covering Arrays
Charles J. Colbourn, Arizona State University

The construction of covering arrays with many factors is a challenging problem, particularly for larger strengths. Computational methods have proved very effective for strengths two and three, producing covering arrays with tens or even hundreds of factors. For larger strengths and for more factors, certain algebraic constructions furnish examples. Nevertheless, the primary methods for producing such covering arrays are recursive constructions, which make "large" covering arrays from "small" ingredient arrays. Two main classes of recursive constructions have been developed: the "cut-and-paste" constructions and the "column replacement" constructions. Column replacement methods use pattern matrices to select columns from ingredient matrices. Combinatorial requirements on the pattern matrices lead to arrays known as hash families, while those on the ingredients lead to variants of covering arrays. In this talk we outline how these methods can be used to produce covering arrays for large factor spaces that often have the fewest known rows. More importantly, we focus on the generality with which these column replacement methods can be applied.
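To make the two criteria in the last abstract concrete, here is a minimal Python sketch (ours, not from the talks) that scores a two-level design matrix with ±1 entries under each criterion. Reading UE(s²) as the same average taken over [1 X]'[1 X], so that level imbalance enters through the intercept column, is our assumption based on Jones and Majumdar (2014); constants may differ from theirs.

import numpy as np

def e_s2(X):
    """Average of squared off-diagonal entries of the information matrix
    X'X; the classical E(s^2)-criterion restricts attention to designs
    whose columns are level-balanced (each column sums to zero)."""
    k = X.shape[1]
    S = X.T @ X
    off = S[~np.eye(k, dtype=bool)]   # the s_ij, each pair counted twice
    return float((off ** 2).sum() / (k * (k - 1)))

def ue_s2(X):
    """Same average, but over [1 X]'[1 X]: prepending an all-ones
    intercept column lets column sums (level imbalance) enter the
    criterion, and the balance requirement is dropped (our reading
    of Jones and Majumdar, 2014)."""
    n, k = X.shape
    Z = np.hstack([np.ones((n, 1)), X])
    S = Z.T @ Z
    off = S[~np.eye(k + 1, dtype=bool)]
    return float((off ** 2).sum() / ((k + 1) * k))

# Usage: for a candidate n-run, k-factor supersaturated design X,
# smaller e_s2(X) or ue_s2(X) is better under the respective criterion.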
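As a small worked check of the covering array definition in Morgan's abstract, the following Python sketch (ours, not from the talks) verifies the defining property of a CA(N; t, k, v). The example array is the classical strength-two orthogonal array on three two-level factors, which is also a covering array.

from itertools import combinations, product
import numpy as np

def is_covering_array(A, t, v):
    """True if every t-column projection of the N x k array A contains
    all v**t level combinations at least once."""
    N, k = A.shape
    needed = set(product(range(v), repeat=t))   # all v^t t-tuples
    for cols in combinations(range(k), t):      # every t-column projection
        seen = {tuple(row) for row in A[:, list(cols)]}
        if not needed <= seen:
            return False
    return True

# Example: columns x, y, x XOR y give an OA of strength 2, hence a CA(4; 2, 3, 2).
OA = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]])
assert is_covering_array(OA, t=2, v=2)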
Session C: Blocking (Wednesday, March 4, 1:00PM-2:30PM)

Optimal Regular Graph Designs
Sera Cakiroglu, Cancer Research UK

A typical problem in optimal design theory is finding an experimental design that is optimal with respect to some criterion within a class of designs. The most popular criteria include the A- and D-criteria. In 1977, John and Mitchell conjectured that if an incomplete block design is D-optimal (or A-optimal), then it is a regular graph design (RGD), provided any RGDs exist. The conjecture is wrong in general but holds if the number of blocks is large enough. Using a graph-theoretical representation of the A- and D-optimality criteria, we capitalized on the power of symbolic computing with Mathematica and performed an exact computer search for the best regular graph designs in large systems with up to 20 points. I will present computational and theoretical results, including examples that support some open conjectures and an example showing that A- and D-optimality are not equivalent even among regular graph designs.

Optimal Semi-Latin Squares
Leonard Soicher, Queen Mary University of London

An (n × n)/k semi-Latin square is a block design for nk treatments in blocks of size k, with the blocks arranged to form an n × n array, such that each treatment occurs exactly once in each row and exactly once in each column of the array. Semi-Latin squares have applications in areas including the design of agricultural experiments, consumer testing, and, via their duals, human-machine interaction. I will survey some recent, and some not so recent, results and constructions for optimal semi-Latin squares, including results on A-, D-, E-, and Schur-optimality. I will also mention some open problems.

Block Designs with Very Low Replication
Rosemary Bailey, University of St Andrews

In the early stages of testing new varieties, it is common that there are only small quantities of seed of many new varieties. In the UK (and some other countries with centuries of agriculture on the same land), variation within a field can be well represented by a division into blocks. Even when that is not the case, subsequent phases (such as testing for milling quality, or evaluation in a laboratory) have natural blocks, such as days or runs of a machine. I will discuss how to arrange the varieties in a block design when the average replication is less than two.
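For readers unfamiliar with semi-Latin squares, this short Python sketch (ours, not from the talks) checks the defining property given in Soicher's abstract. The (2 × 2)/2 example is the superposition of two disjoint 2 × 2 Latin squares and is purely illustrative.

def is_semi_latin_square(S, n, k):
    """True if S, an n x n array of blocks (each a set of k treatments
    from {0, ..., nk-1}), has every treatment exactly once per row and
    exactly once per column."""
    treatments = set(range(n * k))
    for i in range(n):   # row i: its n blocks together hit each treatment once
        row = [x for j in range(n) for x in S[i][j]]
        if len(row) != n * k or set(row) != treatments:
            return False
    for j in range(n):   # column j: same condition
        col = [x for i in range(n) for x in S[i][j]]
        if len(col) != n * k or set(col) != treatments:
            return False
    return all(len(S[i][j]) == k for i in range(n) for j in range(n))

# Example: superposing the Latin squares [[0,1],[1,0]] and [[2,3],[3,2]]
# gives a (2 x 2)/2 semi-Latin square on treatments {0, 1, 2, 3}.
S = [[{0, 2}, {1, 3}],
     [{1, 3}, {0, 2}]]
assert is_semi_latin_square(S, n=2, k=2)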