Anatomy of a Scheduling Competition


  1. Anatomy of a Scheduling Competition
Marco Benedetti, Federico Pecora and Nicola Policella
University of Orléans, marco.benedetti@univ-orleans.fr
Inst. for Cognitive Science and Technology, federico.pecora@istc.cnr.it
European Space Agency, nicola.policella@esa.int
ICAPS Workshop on “Scheduling a Scheduling Competition”, Providence (RI), September 22nd, 2007

  2. Preamble
• Broadly speaking, scheduling deals with allocating activities (or tasks, jobs) over time
• Activities can be modeled as having start times, durations, end times, . . .
• Allocation is subject to:
  – temporal constraints (e.g., generalized precedence constraints, . . . )
  – non-temporal constraints (e.g., limited-capacity resources, load, . . . )
• Activities can be known beforehand, or they may be provided on-line
• Problems may or may not have admissible solutions (oversubscribed scheduling); a minimal model of these notions is sketched below
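To make the vocabulary above concrete, here is a minimal Python sketch (our own illustration, not from the slides; the names `Activity`, `precedes`, and `within_capacity` are hypothetical) of activities with start times, durations, and end times, one generalized precedence constraint, and one limited-capacity resource check:

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    start: int       # chosen start time
    duration: int    # fixed duration
    demand: int = 1  # capacity consumed while executing

    @property
    def end(self) -> int:
        return self.start + self.duration

def precedes(a: Activity, b: Activity, min_gap: int = 0) -> bool:
    """Generalized precedence: b may start only min_gap after a ends."""
    return b.start >= a.end + min_gap

def within_capacity(activities, capacity: int) -> bool:
    """Non-temporal constraint: total demand must never exceed capacity.
    Load is piecewise-constant, so checking at activity start/end events suffices."""
    events = sorted({t for a in activities for t in (a.start, a.end)})
    for t in events:
        load = sum(a.demand for a in activities if a.start <= t < a.end)
        if load > capacity:
            return False
    return True

if __name__ == "__main__":
    a = Activity("drill", start=0, duration=3)
    b = Activity("polish", start=4, duration=2)
    print(precedes(a, b, min_gap=1))            # True: 4 >= 3 + 1
    print(within_capacity([a, b], capacity=1))  # True: the activities never overlap
```

An oversubscribed instance is then simply one where no assignment of start times satisfies both kinds of constraint at once.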

  3. Preamble
• The development of high-performance automated scheduling systems has both theoretical and applied appeal
  – it drives research in Artificial Intelligence, Operations Research, Constraint Programming, and Management Science
  – it fosters the development of decision support systems (e.g., production planning, supply chain management)
• Comparative evaluations of scheduling systems/algorithms regularly appear in the literature (e.g., [Beck and Fox, 2000], [Godard et al., 2005], [Barbulescu et al., 2006], . . . )
  – paper-specific evaluations are usually ad hoc and limited to the scope of the paper
  – this makes it difficult to see the big picture, and easy to duplicate results

  4. Preamble
• Aims of the “Scheduling a Scheduling Competition” workshop
  – to collectively discuss the creation of a common forum for comparatively evaluating different approaches to scheduling
  – in other words: why do we need a scheduling competition? what is it? how do we organize it?
• Scope of this talk
  – puts forth a series of specific questions regarding the scheduling competition
  – outlines some tentative answers against the backdrop of current competitions in Computer Science (CS)
• Expected outcome of the workshop
  – a blueprint for a scheduling competition which is well-formed and operative, and which is inclusive with respect to the broad scope of the scheduling community

  5. Current CS Competitions
• We have looked at 16 CS competitions to understand the common issues, advantages, pitfalls, and trends
• The analysis has led us to single out seven criteria as meaningful aspects underpinning the establishment of a competition
  – A. Motivation: the motivation underlying the organization of the competition. May be purely academic (promoting the comparison of specific algorithms, methods or approaches to better understand the theoretical aspects of the computational problem); may include industry-oriented aspects (e.g., fitness for real-world problems, usability, impact on existing processes, etc.)
  – B. Knowledge Representation: the formal representation (or lack thereof) in which competition benchmarks (in) and competitor results (out) are expressed

  6.–7. Current CS Competitions (criteria, cont’d)
  – C. Tracks: the organization of the competition into tracks, defined as competition sub-divisions which are determined by differences in how the problem is approached and/or how the problem is defined
  – D. Benchmarks: the nature of the benchmark source. Indicates whether problems are contributed by the participants (contrib), taken from a community-maintained repository (library), or disclosed “on-line” during the competition through a competition server
  – E. Measure: the evaluation criteria employed to determine system ranking, expressed as f(σ, τ, φ, ω), where (a toy scoring example follows after this list):
    ∗ σ: the degree to which a system solves the given benchmark(s) → number of solved problems in the SAT competition
    ∗ τ: the amount of time taken by the system to complete the benchmark(s) → CPU time to find a plan in the IPC
    ∗ φ: a measure related to the quality of the solutions found → number of satisfied soft constraints in the CSP competition
    ∗ ω: other measures related to the use of the participating system → the system’s ease of use, portability, etc. in ICKEPS
  – F. Disclosure: whether or not systems need to be completely disclosed in order to participate. Three “degrees” of disclosure: source (complete source code is required for submission, and therefore made public), binary (source code submission is not required, binaries are made public), remote (systems are run on the participants’ computational resources and/or accessed remotely during the competition)
  – G. Participation: the type and number of participants. Indicates whether the current state of the art is conceived purely for academic evaluation (AC) and/or if the technology has industrial potential (IND)
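The slides leave f abstract. As a purely illustrative instance (our own; the `Result` fields and all weights are hypothetical), one could rank lexicographically on problems solved (σ) and break ties with a weighted combination of time (τ), quality (φ), and usability (ω):

```python
from dataclasses import dataclass

@dataclass
class Result:
    system: str
    solved: int         # sigma: benchmarks solved
    cpu_seconds: float  # tau: total CPU time on solved benchmarks
    quality: float      # phi: e.g., fraction of satisfied soft constraints, in [0, 1]
    usability: float    # omega: e.g., juried ease-of-use score, in [0, 1]

def f(r: Result) -> tuple:
    """Higher tuples rank better: solve count first, weighted score second."""
    score = 0.5 * r.quality + 0.3 * r.usability - 0.2 * (r.cpu_seconds / 3600)
    return (r.solved, score)

results = [
    Result("solverA", solved=40, cpu_seconds=1800, quality=0.9, usability=0.6),
    Result("solverB", solved=40, cpu_seconds=600, quality=0.7, usability=0.8),
    Result("solverC", solved=35, cpu_seconds=300, quality=1.0, usability=1.0),
]
for r in sorted(results, key=f, reverse=True):
    print(r.system, f(r))  # solverB edges out solverA; solverC trails on sigma
```

Different competitions instantiate different subsets of (σ, τ, φ, ω), as the tables below show.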

  8. Current CS Competitions

CASC (first held 1996, 11 editions)
  KR: in, out · Bench.: library · Tracks: 5 divisions, 13 categories · Measure: f(σ, τ, φ) · Part.: ac (20) · Disc.: source
  Motivation: to stimulate Automated Theorem Proving (ATP) research and system development, and to expose ATP systems within and beyond the ATP community (held in conjunction with CADE).

CLIMA (first held 2005, 3 editions)
  KR: N/A · Bench.: N/A · Tracks: N/A · Measure: f(σ, ω) · Part.: ac (6) · Disc.: remote
  Motivation: to stimulate research in the area of multi-agent systems by identifying key problems and collecting suitable benchmarks that can serve as milestones for testing new approaches and techniques from computational logics.

CSP (first held 2005, 2 editions)
  KR: in, out · Bench.: library, contrib · Tracks: 5 categories · Measure: f(σ, τ, φ) · Part.: ac (21) · Disc.: binary
  Motivation: to improve understanding of the sources of Constraint Satisfaction Problem (CSP) solver efficiency, and the options that should be considered in crafting solvers.

  9. Current CS Competitions (cont’d)

GGP (first held 2005, 3 editions)
  KR: in, out = {WIN, LOSE} · Bench.: server · Tracks: N/A · Measure: f(σ) · Part.: ac (12) · Disc.: remote
  Motivation: to assess the state of the art in General Game Playing (GGP) systems, i.e., automated systems which can accept a formal description of an arbitrary game and, without further human interaction, play the game effectively. A $10,000 prize is awarded to the winning team.

ICGA (first held 1977, 30 editions)
  KR: N/A · Bench.: N/A · Tracks: 32 games · Measure: f(σ) · Part.: ac/ind (60) · Disc.: remote
  Motivation: the International Computer Games Association (ICGA) was founded by computer chess programmers in 1977 to organise championship events for computer programs. The ICGA Tournament aims to facilitate contacts between Computer Science and Commercial Organisations, as well as the International Chess Federation.

  10. Current CS Competitions (cont’d)

ICKEPS (first held 2005, 2 editions)
  KR: N/A · Bench.: server · Tracks: N/A · Measure: f(σ, φ, ω) · Part.: ac (7) · Disc.: remote
  Motivation: to promote the knowledge-based and domain modeling aspects of Planning and Scheduling (P&S), to accelerate knowledge engineering research in AI P&S, and to encourage the development and sharing of prototype tools or software platforms that promise more rapid, accessible, and effective ways to construct reliable and efficient P&S systems.

IPC (first held 1998, 5 editions)
  KR: in, out · Bench.: library · Tracks: 2 parts, 3 tracks · Measure: f(σ, τ, φ) · Part.: ac (12) · Disc.: binary
  Motivation: to provide a forum for empirical comparison of planning systems, to highlight challenges to the community in the form of problems at the edge of current capabilities, to propose new directions for research, and to provide a core of common benchmark problems and a representation formalism to aid the comparison and evaluation of planning systems.

  11. Current CS Competitions (cont’d)

ITC (first held 2003, 1 edition)
  KR: in, out · Bench.: library · Tracks: N/A · Measure: f(σ, φ) · Part.: ac (11) · Disc.: binary
  Motivation: the International Timetabling Competition was designed to promote research into automated methods for timetabling. It was not designed as a comparison of methods, and discourages drawing strict scientific conclusions from the results. A prize of €300 plus free registration to PATAT 2004 was awarded to the winner.

PB-Eval (first held 2005, 3 editions)
  KR: in, out = {YES, NO, ?} · Bench.: library · Tracks: N/A · Measure: f(σ, τ, φ) · Part.: ac (10) · Disc.: binary
  Motivation: the goal of the Pseudo-Boolean (PB) Evaluation is to assess the state of the art in the field of PB solvers.

QBF (2007, 5 editions)
  KR: in, out = {YES, NO, ?} · Bench.: library, contrib · Tracks: 3 tracks · Measure: f(σ, τ) · Part.: ac (12) · Disc.: binary
  Motivation: assessing the state of the art in the field of QBF solvers and QBF-based applications.
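Taken together, criteria A–G amount to a record schema for describing a competition. The following sketch (our own encoding; every field name is hypothetical and not from the slides) captures one row of the survey above, using the IPC entry:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CompetitionProfile:
    name: str
    first_held: int
    editions: int
    motivation: str                # A. Motivation
    kr: Optional[List[str]]        # B. Knowledge Representation (in/out); None = N/A
    tracks: Optional[str]          # C. Tracks; None = N/A
    benchmarks: List[str]          # D. Benchmarks: contrib / library / server
    measure: List[str]             # E. Measure: which of sigma/tau/phi/omega
    disclosure: str                # F. Disclosure: source / binary / remote
    participation: List[str]       # G. Participation: "ac" and/or "ind"
    participants: int = 0

ipc = CompetitionProfile(
    name="IPC", first_held=1998, editions=5,
    motivation="Empirical comparison of planning systems on common benchmarks",
    kr=["in", "out"], tracks="2 parts, 3 tracks", benchmarks=["library"],
    measure=["sigma", "tau", "phi"], disclosure="binary",
    participation=["ac"], participants=12,
)
print(ipc.name, ipc.measure, ipc.disclosure)
```

A scheduling competition could be positioned by filling in the same seven fields before settling its rules.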
