  1. ATLAS Worldwide Distributed Computing
     Zhongliang Ren
     03 May 2006, ISGC 2006, Academia Sinica, Taipei, Taiwan

  2. Agenda of My Talk
     - Why the LHC and ATLAS?
       - The Standard Model
       - The basic questions that remain to be answered
     - What are the LHC and ATLAS?
       - The Large Hadron Collider (p-p or Pb-Pb collisions)
       - The ATLAS experiment at the LHC
       - The data rate of ATLAS: a computing grid is needed!
     - The ATLAS data rate and computing model
     - The ATLAS production systems on three grids
     - The worldwide computing operations
       - Tier-0 operation and data distribution to T1s
       - MC data production at T2s
       - Data reprocessing at T1s
     - Summary & outlook

  3. Why LHC and ATLAS?
     - Our current knowledge is the Standard Model:
       - The electroweak (EW) unification theory
       - Quantum chromodynamics (QCD)
     - The Standard Model has passed precision tests with great success
       - Results from LEP (the Large Electron-Positron collider), etc.
     - However, it introduces a fifth force, the Higgs field,
       which has so far never been observed in any experiment!
     - Many basic questions remain to be answered!

  4. Basic Questions
     - What is the origin of the mass of particles?
     - Can the electroweak and the strong forces be unified?
     - What is "dark matter" made of?
     - Why are there three generations of particles?
     - Where did the antimatter go?
     - ...

  5. What Is the LHC?
     - The LHC is the Large Hadron Collider:
       - Being built at CERN, Geneva, across the French-Swiss border
       - A 27 km circumference underground tunnel at ~100-150 m depth
       - 1232 superconducting dipole magnets (max. 8.3 T, 15 m long)
       - Head-on proton-proton (14 TeV) or Pb-Pb collisions (~30 GeV/fm^3)
       - Design luminosity of 10^34 cm^-2 s^-1 (~10^9 Hz interaction rate!)
       - Luminosity in 2007: 10^33 cm^-2 s^-1
     - New particles with masses up to ~5 TeV/c^2 can be produced and studied
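
The ~10^9 Hz interaction rate quoted above follows directly from the luminosity; a minimal sketch of the arithmetic, assuming an inelastic p-p cross-section of roughly 100 mb (my round number, not from the slide):

```python
# Interaction rate = luminosity x cross-section.
# The cross-section is an assumed round number (~100 mb inelastic p-p).
LUMINOSITY_CM2_S = 1e34          # design luminosity, cm^-2 s^-1
SIGMA_INELASTIC_CM2 = 100e-27    # ~100 mb, with 1 mb = 1e-27 cm^2

rate_hz = LUMINOSITY_CM2_S * SIGMA_INELASTIC_CM2
print(f"interaction rate ~ {rate_hz:.0e} Hz")   # ~1e9 Hz
```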

  6. Magnet Installation
     - Interconnection of the dipoles and connection to the cryoline are now the
       real challenges in the installation process
     - Dipoles are transported in the tunnel with an optically guided vehicle

  7. Schedule of LHC Installation
     - All key objectives for the end of 2005 have been reached.
     - The magnet installation rate is now 20/week, with more than 200 dipoles
       installed. This, together with the interconnect work, will remain the
       bottleneck until the end of installation.
     - Main objectives:
       - Complete installation in Feb. 2007
       - First beam collisions in summer 2007

  8. What Is the ATLAS Experiment?

  9. Basic HEP Detector Components
     [Figure: detector layers from the innermost to the outermost: tracking
      detector, electromagnetic calorimeter, hadronic calorimeter, muon
      chambers; indicated particles: photons, e±, π±/p/neutrons, muons]

  10. ATLAS: A Toroidal LHC ApparatuS
      - General-purpose particle detector (coverage up to |η| = 5,
        L = 10^34 cm^-2 s^-1) for 7 TeV + 7 TeV proton beams
      - Tracking (|η| < 2.5, B = 2 T): Si pixels and strips, TRT (e/π separation)
      - Calorimetry (|η| < 4.9):
        - EM calorimeter (|η| < 3.2): LAr
        - Hadronic calorimeter (|η| < 4.9): scintillator tiles (central), LAr (forward)
      - Muon spectrometer (|η| < 2.7): air-core toroids with muon chambers
      - Dimensions: diameter 25 m, barrel toroid length 26 m, overall length 46 m,
        overall weight 7000 tonnes

  11. Scale of ATLAS
      - ATLAS superimposed on the 5 floors of building 40
      - ATLAS is assembled 92 m underground at CERN

  12. Inner Detector (ID)
      The Inner Detector has three sub-systems:
      - Pixels (0.8 x 10^8 channels)
      - SemiConductor Tracker (SCT), 6 m long, 1.1 m radius (6 x 10^6 channels)
      - Transition Radiation Tracker (TRT) (4 x 10^5 channels)
      Precision tracking: Pixels and SCT
      Continuous tracking and electron identification: TRT
      The ID sits inside a 2 T solenoid field

  13. All four completed SCT barrel cylinders have been integrated into their
      thermal enclosure.
      Contribution from Taiwan: optical links for the Pixel and SCT detectors,
      developed by the team at the Institute of Physics, Academia Sinica.

  14. November 4th: Barrel toroid view after removal of the central support
      platform (ATLAS cavern)

  15. The ATLAS Data Rate and Computing Model

  16. Types and sizes of event data, processing times and operation parameters

      Parameter                               Unit             Value
      Raw data size                           MB               1.6
      ESD size                                MB               0.5
      AOD size                                kB               100
      TAG size                                kB               1
      Simulated data size                     MB               2.0
      Simulated ESD size                      MB               0.5
      Time for reconstruction                 kSI2k-sec/event  15
      Time for simulation                     kSI2k-sec/event  100
      Time for analysis                       kSI2k-sec/event  0.5
      Raw event data rate from online DAQ     Hz               200
      Operation time                          seconds/day      50,000
      Operation time                          days/year        200
      Operation time (2007)                   days/year        50
      Event statistics                        events/day       10^7
      Event statistics (from 2008 onwards)    events/year      2 x 10^9
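
A back-of-the-envelope sketch of what these parameters imply for daily and yearly data volumes (a Python sketch; the input numbers come from the table above, the variable names are mine):

```python
# Data-volume estimate from the table above (values from the slide).
RAW_EVENT_MB = 1.6          # raw event size
ESD_EVENT_MB = 0.5          # event summary data size
AOD_EVENT_KB = 100          # analysis object data size
DAQ_RATE_HZ = 200           # raw event rate from the online DAQ
SECONDS_PER_DAY = 50_000    # effective data-taking time per day
DAYS_PER_YEAR = 200         # nominal operation days per year

events_per_day = DAQ_RATE_HZ * SECONDS_PER_DAY               # 1e7 events/day
raw_tb_per_day = events_per_day * RAW_EVENT_MB / 1e6         # ~16 TB/day
raw_pb_per_year = raw_tb_per_day * DAYS_PER_YEAR / 1e3       # ~3.2 PB/year
esd_pb_per_year = events_per_day * ESD_EVENT_MB / 1e6 * DAYS_PER_YEAR / 1e3  # ~1 PB/year
aod_tb_per_year = events_per_day * AOD_EVENT_KB / 1e9 * DAYS_PER_YEAR        # ~200 TB/year

print(f"raw: {raw_tb_per_day:.0f} TB/day, {raw_pb_per_year:.1f} PB/year")
print(f"ESD: {esd_pb_per_year:.1f} PB/year, AOD: {aod_tb_per_year:.0f} TB/year")
```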

  17. ATLAS Computing Model
      - Worldwide distributed, computing-grid-based tier structure
      - Two copies of raw and ESD data are kept
      - Tier-0 at CERN: archiving and distribution of raw data, calibration and
        first-pass processing, export of raw, ESD and AOD data to the T1s
      - 10 Tier-1s: reprocessing of raw and MC data; storage of raw (1/10 each),
        ESD and AOD; storage of MC data produced at T2s; AOD data replication
        to the T2s
      - Tier-2s: MC production, analysis, calibration, etc.
      - Tier-3s: local user data analysis and storage, etc.
      - Requires MC data equivalent to 20% of the raw data events
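
As a rough illustration of what this tier model implies per site, a minimal sketch combining the fractions above with the yearly volumes estimated earlier (it assumes the ten T1s share the load evenly and that both ESD copies live at the T1s; both are my assumptions, not statements from the slides):

```python
# Rough per-Tier-1 storage share (assumptions noted in the lead-in).
N_TIER1 = 10
RAW_PB_PER_YEAR = 3.2    # from the earlier estimate; one copy spread over the T1s
ESD_PB_PER_YEAR = 1.0    # per ESD copy; two copies assumed to be shared by the T1s

raw_per_t1 = RAW_PB_PER_YEAR / N_TIER1        # each T1 keeps 1/10 of the raw data
esd_per_t1 = 2 * ESD_PB_PER_YEAR / N_TIER1    # two ESD copies across 10 T1s

print(f"per T1 and year: raw ~{raw_per_t1:.2f} PB, ESD ~{esd_per_t1:.2f} PB")
```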

  18. The ATLAS Production Systems on Three Grids
      - Globally distributed production using three computing grids:
        LCG, OSG and NorduGrid
      - With 4 different production systems:
        - LCG-Lexor (3 running instances: 2 in Italy, 1 in Spain)
        - LCG-CG (running instances in Canada, Spain and France; 2 more planned
          at DESY and CERN)
        - OSG-PANDA (1 instance)
        - NorduGrid Dulcinea (2 instances)
      - DQ2 Distributed Data Management (DDM) system:
        - Integrated in OSG-PANDA, LCG-CG and the CERN T0 operations
        - Being integrated in both LCG and NG now; ready to test soon
      - DDM operations & production software integration:
        - ATLAS VO-boxes & DQ2 servers are ready at CERN and the 10 T1s (ASGC,
          BNL, CNAF, FZK, CC-IN2P3, NG, PIC, RAL, SARA and TRIUMF); FTS channel
          configurations are done
        - Remaining task: configure FTS channels for all T2s, which requires
          knowing the T1-T2 associations (see the sketch below)
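
A minimal sketch of how such a T1-to-T2 association table might be kept and turned into a list of FTS channels to set up. The T1 names are the sites listed on the slide; the T2 entries and the channel-naming scheme are hypothetical placeholders, not the real ATLAS associations or FTS configuration:

```python
# Hypothetical T1 -> T2 association table used to enumerate FTS channels.
# T2 names are placeholders; only the T1 names come from the slide.
T1_T2_ASSOCIATIONS = {
    "ASGC":     ["T2-PLACEHOLDER-A", "T2-PLACEHOLDER-B"],
    "BNL":      ["T2-PLACEHOLDER-C"],
    "CC-IN2P3": ["T2-PLACEHOLDER-D"],
    # ... the remaining T1s (CNAF, FZK, NG, PIC, RAL, SARA, TRIUMF)
}

def channels_to_configure(associations):
    """Yield the T1<->T2 channel pairs that still need FTS configuration."""
    for t1, t2_sites in associations.items():
        for t2 in t2_sites:
            yield (t1, t2)   # one channel per direction in a real setup
            yield (t2, t1)

for src, dst in channels_to_configure(T1_T2_ASSOCIATIONS):
    print(f"configure channel {src} -> {dst}")
```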

  19. LCG World View
      Currently 51 sites with the ATLAS software installed, 9,300+ CPUs
      (shared with other VOs)

  20. OSG World View
      Currently ~50 sites, ~5,000 CPUs, of which ~1,000 CPUs are dedicated
      to ATLAS

  21. NorduGrid World View
      13 sites, 789 CPUs currently available for ATLAS

  22. The Worldwide Computing Operations

  23. Requirements & Deliverables
      - The ATLAS Computing TDR assumes:
        - 200 Hz event rate and 320 MB/sec
        - 50,000 seconds of effective data-taking per day, i.e. 10 million
          events per day
        - MC data production equivalent to 20% of the raw data rate:
          2 million MC events per day
      - ATLAS data-taking at full efficiency:
        - 17.28 million events per day
        - ~3.5 million MC events per day
      - Computing operation deliverables in 2007-2008:
        - T0 operation:
          - Detector calibrations finished and event reconstruction started
            within 24 hours of the start of data-taking
          - 15 kSI2k CPU seconds per event, about 3,000 CPUs in total
            (see the arithmetic check below)
          - Effects from RAW data streaming w.r.t. luminosity blocks
        - Worldwide distributed MC production (at the T2s):
          - 50 events per job for physics events, 100 events per job for
            single-particle events
          - Central MC productions: ~100k jobs/day
          - User analysis & productions: up to ~1 M jobs/day (job rate ~12 Hz)
        - Additional capacity for calibration, alignment and reprocessing
          at the T1s
        - Global operation with more than 100 sites
        - Requires stable 7x24 services
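
The headline numbers on this slide follow from the computing-model parameters; a small arithmetic check in Python (inputs from the slides; the ~1 kSI2k-per-CPU conversion is my assumption):

```python
# Consistency check of the requirement numbers above.
DAQ_RATE_HZ = 200
RAW_EVENT_MB = 1.6
EFFECTIVE_SEC_PER_DAY = 50_000
RECO_KSI2K_SEC_PER_EVENT = 15
MC_FRACTION = 0.20

throughput_mb_s = DAQ_RATE_HZ * RAW_EVENT_MB            # 320 MB/s
events_per_day = DAQ_RATE_HZ * EFFECTIVE_SEC_PER_DAY    # 10 million events/day
events_full_eff = DAQ_RATE_HZ * 86_400                  # 17.28 million at 100% efficiency
mc_events_per_day = MC_FRACTION * events_per_day        # 2 million MC events/day

# Reconstruction must keep up with the 200 Hz input:
# 200 events/s x 15 kSI2k-s/event = 3,000 kSI2k of continuous CPU power,
# i.e. ~3,000 CPUs if one CPU delivers ~1 kSI2k (assumed conversion).
cpus_needed = DAQ_RATE_HZ * RECO_KSI2K_SEC_PER_EVENT / 1.0

print(throughput_mb_s, events_per_day, events_full_eff, mc_events_per_day, cpus_needed)
```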
