SAT-Race 2010 - PowerPoint PPT Presentation

  1. July 14, 2010 FLoC/SAT’10 – Edinburgh, Scotland, UK

  2. What is SAT-Race?
      Competition for sequential/parallel SAT solvers
      Only industrial/application-category benchmarks (no handcrafted or random instances)
      Short run-times (15-minute timeout per instance)
      Mixture of satisfiable and unsatisfiable instances (thus not suitable for local-search solvers)
      "Black-box" solvers permitted
      3 tracks (the CNF input format is illustrated below):
        Main Track: Sequential CNF
        Special Track 1: Parallel CNF
        Special Track 2: Sequential AIG
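The CNF tracks use the standard DIMACS format, in which a formula is a conjunction of clauses over integer-numbered variables. As a minimal illustration of what deciding such an instance means, here is a Python sketch that checks a tiny hand-made three-variable formula by brute-force enumeration; the formula is invented for illustration, and real SAT-Race instances (up to millions of variables) require CDCL solvers rather than enumeration.

```python
from itertools import product

# DIMACS-style clause lists: integer k is variable k, -k its negation.
# This encodes (x1 or not x2) and (x2 or x3) and (not x1 or not x3).
clauses = [[1, -2], [2, 3], [-1, -3]]
num_vars = 3

def brute_force_sat(clauses, num_vars):
    """Return a satisfying assignment as a dict, or None if unsatisfiable."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {v + 1: bits[v] for v in range(num_vars)}
        # A clause is satisfied if at least one of its literals is true.
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

model = brute_force_sat(clauses, num_vars)
if model:
    print("SAT:", model)  # here: {1: False, 2: False, 3: True}
else:
    print("UNSAT")
```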

  3. Organizers
      Chair
        Carsten Sinz (Karlsruhe Institute of Technology, Germany)
      Advisory Panel
        Aarti Gupta (NEC Labs America, USA)
        Youssef Hamadi (Microsoft Research, UK)
        Himanshu Jain (Synopsys, USA)
        Daniel Le Berre (Université d'Artois, France)
        Panagiotis Manolios (Northeastern University, USA)
        Yakov Novikov (OneSpin Solutions, Germany)
      Technical Management
        Florian Merz (Karlsruhe Institute of Technology, Germany)

  4. Entrants
      Received 32 solvers by 23 submitters from 9 nations
        SAT-Race 2008: 43 solvers by 36 submitters from 16 nations
        SAT-Race 2006: 29 solvers by 23 submitters from 13 nations
      Solvers by country: Germany 4, Australia 1, Iran 1, Austria 4, China 1, Spain 1, France 7, Sweden 4, France/UK 4, USA 5
      2 industrial solvers, 27 academic, 3 mixed
      21 solvers in Main Track, 8 in Parallel Track, 3 in AIG Track

  5. Qualification
      Purpose: ascertain solver correctness and efficiency
      One qualification round
        100 benchmark instances (from SAT-Race 2008)
      Successful qualification required to participate in the finals
      Qualification round took place in May

  6. Results of the Qualification Round
      Main Track
        19 solvers qualified (out of 21) by solving at least 70 out of 100 instances (no solver produced errors)
        2 solvers produced wrong results during the finals
      Parallel Track
        6 solvers qualified (out of 8) by solving at least 70 out of 100 instances (1 solver had produced wrong results and was withdrawn)
        1 solver produced wrong results during the finals
      AIG Track
        All 3 solvers qualified by solving more than 50 out of 100 instances
      Overall result: 28 (out of 32) solvers participated in the finals (the qualification rule is sketched below)
        17 in Main Track (plus 3 parallel solvers running in sequential mode), 5 in Parallel Track, 3 in AIG Track
        One solver withdrawn, 3 solvers with wrong results during the finals
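The qualification rule above reduces to a simple filter: a solver advances if it meets the per-track threshold of solved instances and produced no wrong results. A minimal sketch with entirely invented solver names and numbers:

```python
# Invented records: (solver, instances solved out of 100, wrong results).
qualification_results = [
    ("solver-a", 83, 0),
    ("solver-b", 64, 0),
    ("solver-c", 91, 2),
]

def qualifies(solved, wrong, threshold=70):
    """Qualification rule from the slide: meet the per-track threshold
    (70/100 for the CNF tracks, 50/100 for AIG) with no wrong results."""
    return solved >= threshold and wrong == 0

finalists = [name for name, solved, wrong in qualification_results
             if qualifies(solved, wrong)]
print(finalists)  # ['solver-a']
```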

  7. Solvers Participating in Finals: Main Track
      Barcelogic (TU Catalonia, Spain)
      borg-sat (U Texas, USA)
      CircleSAT (Donghua U, China)
      CryptoMiniSat (INRIA, France)
      glucose (CRIL, France)
      glucosER (CRIL-CNRS, France)
      lingeling (JKU Linz, Austria)
      LySAT (INRIA-Microsoft JC, France)
      MiniSat (Sörensson R&D, Sweden)
      orpailleur (CRIL-CNRS, France)
      PicoSAT (JKU Linz, Austria)
      PrecoSAT (JKU Linz, Austria)
      riss (TU Dresden, Germany)
      rcl (CRIL-CNRS, France)
      SApperloT (U Tübingen, Germany)
      SAT-Power (U Isfahan, Iran)
      SATHYS (CRIL-CNRS, France)
     New solvers were marked in red on the original slide.

  8. Solvers Participating in Finals: Special Tracks
      Parallel Track:
        antom (U Freiburg, Germany)
        ManySAT 1.1 (INRIA-Microsoft JC, France)
        ManySAT 1.5 (INRIA-Microsoft JC, France)
        plingeling (JKU Linz, Austria)
        SArTagnan (U Tübingen, Germany)
      AIG Track:
        kw_aig (Oepir, Sweden)
        MiniSat++ (Sörensson R&D, Sweden)
        NFLSAT (CMU, USA)

  9. Benchmark Instances: CNF
      Corpus of 490 instances
        Hardware verification / software verification / cryptography / mixed
        Mainly from former SAT Competitions/Races
        Additional software verification instances from NEC
      Selected 100 instances randomly (a sketch of this stratified selection follows below):
        30 hardware verification (IBM, Velev, Manolios)
        30 software verification (Babic, Bitverif, Fuhs, NEC, Post)
        15 cryptography (desgen, md5gen, Mironov-Zhang)
        25 mixed (Anbulagan, Bioinformatics, Diagnosis, …)
      Up to 10,950,109 variables and 32,697,150 clauses
      Smallest instance: 1,694 variables, 5,726 clauses
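The selection procedure amounts to stratified random sampling: a fixed quota is drawn at random from each benchmark category. A sketch under the assumption of hypothetical per-category file lists (the per-category corpus sizes below are invented and chosen only to sum to the stated 490):

```python
import random

# Hypothetical corpus: category -> list of benchmark file names.
# Per-category sizes are invented; only the total (490) is from the slide.
corpus = {
    "hardware_verification": [f"hw_{i:03d}.cnf" for i in range(160)],
    "software_verification": [f"sw_{i:03d}.cnf" for i in range(180)],
    "cryptography":          [f"crypto_{i:03d}.cnf" for i in range(70)],
    "mixed":                 [f"mixed_{i:03d}.cnf" for i in range(80)],
}

# Per-category quotas from the slide: 30 + 30 + 15 + 25 = 100 instances.
quotas = {
    "hardware_verification": 30,
    "software_verification": 30,
    "cryptography": 15,
    "mixed": 25,
}

random.seed(2010)  # fixed seed so the draw is reproducible
selection = []
for category, quota in quotas.items():
    selection.extend(random.sample(corpus[category], quota))

assert len(selection) == 100
```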

  10. Sizes of CNF Benchmark Instances
     [Log-log scatter plot, one point per instance: #clauses (1,000 to 1e+08) vs. #variables (100 to 1e+08), grouped into Hardware Verification, Software Verification, Cryptanalysis, and Mixed.]

  11. Benchmark Instances: AIG
      Corpus of 538 instances
        9 groups of benchmark sets (Anbulagan / Babic / c32sat / Mironov-Zhang / IBM / Intel / Manolios / Palacios / Mixed)
      Selected 100 instances randomly

  12. Parallel Track: Special Rules
      Solver can use all 8 cores of a machine (2x Intel Xeon quad-core)
      Measured wall-clock time instead of CPU time
      Run-times of multi-threaded solvers can vary considerably between runs (especially on satisfiable instances)
        3 runs for each solver on each instance
        Instance considered solved if solved in the first run (SAT-Race 2008: at least 1 out of 3 runs; a sketch of this procedure follows below)
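A sketch of how such a measurement could be implemented, assuming a hypothetical solver binary and instance file. The 900-second timeout and the first-run rule are from the slide; the exit-code convention (10 for SAT, 20 for UNSAT) is the usual one in SAT competitions:

```python
import subprocess
import time

TIMEOUT = 900  # 15-minute timeout per instance

def run_once(solver_cmd, instance):
    """Run the solver once and return (status, wall-clock seconds).

    Wall-clock time is measured rather than CPU time, since a
    multi-threaded solver accumulates CPU time on all 8 cores at once.
    """
    start = time.monotonic()
    try:
        proc = subprocess.run(solver_cmd + [instance],
                              capture_output=True, timeout=TIMEOUT)
    except subprocess.TimeoutExpired:
        return "timeout", float(TIMEOUT)
    elapsed = time.monotonic() - start
    # Usual SAT-competition exit codes: 10 = SAT, 20 = UNSAT.
    status = {10: "SAT", 20: "UNSAT"}.get(proc.returncode, "unknown")
    return status, elapsed

# Three runs per instance; under the 2010 rule the instance counts as
# solved only if the *first* run finishes within the timeout.
runs = [run_once(["./parallel_solver"], "bench.cnf") for _ in range(3)]
solved = runs[0][0] in ("SAT", "UNSAT")
```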

  13. Scoring
      Main criterion: number of solved instances
      Tie-breaker: average run-time on solved instances (a sketch of this ranking follows below)
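This scoring rule maps directly onto a lexicographic sort: more solved instances first, then lower average run-time on solved instances as the tie-breaker. A minimal sketch with invented numbers:

```python
# Invented results: solver -> (instances solved, average run-time in
# seconds over the solved instances).
results = {
    "solver-a": (74, 212.4),
    "solver-b": (74, 198.7),
    "solver-c": (71, 150.2),
}

# Sort by solved instances (descending), then average run-time (ascending).
ranking = sorted(results.items(), key=lambda kv: (-kv[1][0], kv[1][1]))

for place, (solver, (solved, avg)) in enumerate(ranking, start=1):
    print(f"{place}. {solver}: {solved} solved, avg {avg:.1f} s")
# 1. solver-b: 74 solved, avg 198.7 s
# 2. solver-a: 74 solved, avg 212.4 s
# 3. solver-c: 71 solved, avg 150.2 s
```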

  14. Computing Environment
      Linux cluster at Karlsruhe Institute of Technology (KIT)
      20 compute nodes
        2 Intel Xeon E5430 processors (quad-core, 2.66 GHz) per node
        32 GB of main memory per node
      Both 32-bit and 64-bit binaries supported
      Sequential/AIG Tracks: only one core per solver
      Parallel Track: 8 cores per solver

  15. Results

  16. Special Track 2 (AIG Sequential)
      1st place: 58 solved instances
      2nd place: 54 solved instances
      3rd place: 53 solved instances

  17. Runtime Comparison: AIG Track
     [Cactus plot: runtime in seconds (0 to 900) vs. number of solved instances (10 to 90) for MiniSat++, kw_aig, and NFLSAT.]

  18. Special Track 1 (CNF Parallel)
      1st place: 78 solved instances
      2nd place: 75 solved instances
      3rd place: 72 solved instances
      Next best solver: 70 solved

  19. Runtime Comparison: Parallel Track
     [Cactus plot: runtime in seconds (0 to 900) vs. number of solved instances (10 to 90) for plingeling, ManySAT-1.5, ManySAT-1.1, SArTagnan, and antom.]

  20. Main Track (CNF Sequential)
      1st place: 74 solved instances
      2nd place: 73 solved instances
      3rd place: 71 solved instances
      Next best solver: 69 solved

  21. Runtime Comparison: Main Track
     [Cactus plot: runtime in seconds (0 to 900) vs. number of solved instances (10 to 80) for all 20 Main Track entrants: CryptoMiniSat, lingeling, SAT-Power, PrecoSAT, MiniSat, Barcelogic, LySAT, rcl, borg-sat, CircleSAT, ManySAT-1.1, SApperloT, antom, PicoSAT, glucose, SATHYS, ManySAT-1.5, glucosER, riss, orpailleur.]

  22. Runtime Comparison: CNF Sequential + Parallel
     [Cactus plot: runtime in seconds (0 to 900) vs. number of solved instances (10 to 90) for all CNF solvers, with parallel runs marked _par and sequential runs _seq: plingeling, ManySAT-1.5_par, CryptoMiniSat, lingeling, ManySAT-1.1_par, SAT-Power, SArTagnan, antom_par, PrecoSAT, MiniSat, Barcelogic, LySAT, rcl, borg-sat, CircleSAT, ManySAT-1.1_seq, SApperloT, antom_seq, PicoSAT, glucose, SATHYS, ManySAT-1.5_seq, glucosER, riss, orpailleur.]

  23. Student Prize
      Special prize for a solver submitted by a PhD student or a team of PhD students
      Two prizes:
        Main Track: SAT-Power by Abdorrahim Bahrami (3rd place in Main Track)
        Parallel Track: SArTagnan by Stephan Kottler (4th place in Parallel Track)

  24. Conclusion
      Any progress compared to the SAT Competition 2009?
        The SAT-Race 2010 winner solves 5 more instances than the SAT Competition 2009 winner (SAT+UNSAT application category) on our benchmark set
        3 solvers (plus 4 parallel solvers) outperform the SAT Competition 2009 winner
      Parallel solvers gain importance; improved robustness (only small differences across the 3 runs)
      Many new solvers and participants

  25. SAT-Race 2010 on the Web: http://baldur.iti.kit.edu/sat-race-2010
