Data Management in ATLAS

  1. Data Management in ATLAS. Angelos Molfetas, on behalf of the ATLAS DQ2 team

  2. ATLAS DDM Collaboration: A. Molfetas (CERN), F. Barreiro (CERN), A. Tykhonov (Jožef Stefan Institute), V. Garonne (CERN), S. Campana (CERN), M. Lassnig (CERN), M. Barisits (Vienna University of Technology), D. Zang (Institute of High Energy Physics, Chinese Academy of Sciences), C. Serfon (LMU Munich), P. Calfayan (LMU Munich), D. Oleynik (Joint Institute for Nuclear Research), D. Kekelidze (Joint Institute for Nuclear Research), A. Petrosyan (Joint Institute for Nuclear Research), S. Jezequel (IN2P3), I. Ueda (University of Tokyo), Gancho Dimitrov (Deutsches Elektronen-Synchrotron), Florbela Tique Aires Viegas (CERN)

  3. Introduction
     • Presentation intended for a general audience
     • Current issues and trends
     • Covers some of the issues we are facing in ATLAS Distributed Data Management (DDM)
     • The ATLAS grid:
       • over 800 end points
       • petabytes of data managed on the grid
     • The system responsible for this is the DQ2 middleware

  4. The ATLAS Computing Model
     • Grid sites are organised in Tiers
     • Tier-0:
       • records RAW detector data
       • distributes data to Tier-1s
       • calibration and first-pass reconstruction
     • Tier-1s:
       • permanent storage
       • capacity for reprocessing and bulk analysis
     • Tier-2s:
       • Monte Carlo simulation
       • user analysis
     [Diagram: RAW, ESD, and AOD data flowing from the Tier-0 online filter farm and reconstruction farm to Tier-1 analysis and re-reconstruction farms, and on to Tier-2 analysis and Monte Carlo farms]

  5. The ATLAS Computing Model (continued)
     • Sites are also organised in clouds
       • not the "computer science" definition of clouds, though!
     • Every cloud has a major Tier-1 and associated Tier-2s
     • Grouping is mostly geographical and/or political:
       • support
       • deployment
       • funding
     [Diagram: map of clouds and their sites, e.g. the FR cloud around LYON and the BNL cloud, with Tier-1s including NG, PIC, RAL, SARA, CNAF, ASGC, FZK, TRIUMF, and BNL]

  6. DQ2 (Don Quijote 2)
     • DQ2 enforces dataset:
       • placement
       • replication
       • deletion
       • access
       • consistency
       • monitoring
       • accounting
     • Serves physics production, analysis, data export, interactive use, and metadata access
     [Diagram: DQ2 clients and API on top of the central catalogs (repository, content, location, accounting, subscription, tracer) and a common modular framework; site services handle transfer, consistency, and deletion over WLCG, Open Science Grid, and NorduGrid]
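
A subscription is the mechanism the diagram hints at: it declares that a dataset should exist at a site, and the site services transfer whatever is missing. Below is a minimal, hypothetical Python sketch of that idea; the in-memory dicts stand in for DQ2's database-backed catalogs, and none of the names come from the real code.

    # Minimal sketch of DQ2-style subscriptions: a subscription says
    # "dataset X should be complete at site Y"; site services compare
    # the dataset's content against the site's replicas and transfer
    # the difference. All names here are hypothetical illustrations.

    content = {"data10.xxx.DAOD": {"f1", "f2", "f3"}}   # dataset -> files (content catalog)
    replicas = {"SITE-A_DATADISK": {"f1"}}              # site -> files present (location catalog)
    subscriptions = [("data10.xxx.DAOD", "SITE-A_DATADISK")]

    def missing_files(dataset, site):
        """Files the site still needs to satisfy its subscription."""
        return content[dataset] - replicas.get(site, set())

    for dataset, site in subscriptions:
        for f in sorted(missing_files(dataset, site)):
            print(f"queue transfer of {f} to {site}")
            replicas.setdefault(site, set()).add(f)  # record the new replica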

  7. Managing Heterogeneous Resources
     • Users need to be able to:
       • download/upload data from/to the grid
       • transfer data between sites
     • Users should not need to know about each storage system
     • Many different mass storage systems are in use, so we need a simplified interface that hides the grid's heterogeneity
       • not trivial
     • In ATLAS this is done by the DQ2 middleware and abstraction layers like SRM
     • For example, a user downloads a dataset from the CLI: "dq2-get user.angelos.xxxxxxx"
       • no specific knowledge of CASTOR, dCache, xrootd, etc. is required
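
The CLI example works because backend selection is hidden behind a common interface. The following Python sketch illustrates that abstraction pattern; the backend classes and the dq2_get signature are illustrative assumptions, not DQ2's actual implementation.

    # Sketch of the abstraction behind dq2-get: the user names a dataset,
    # and a storage-specific backend is chosen behind a uniform interface.

    class StorageBackend:
        def download(self, surl, dest):
            raise NotImplementedError

    class CastorBackend(StorageBackend):
        def download(self, surl, dest):
            print(f"CASTOR-specific copy {surl} -> {dest}")

    class DcacheBackend(StorageBackend):
        def download(self, surl, dest):
            print(f"dCache-specific copy {surl} -> {dest}")

    BACKENDS = {"castor": CastorBackend(), "dcache": DcacheBackend()}

    def dq2_get(dataset, site_storage_type, surls, dest="."):
        """The user supplies only the dataset name; backend choice is hidden."""
        backend = BACKENDS[site_storage_type]
        for surl in surls:
            backend.download(surl, dest)

    dq2_get("user.angelos.xxxxxxx", "dcache", ["srm://example.org/f1"])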

  8. Catalogs
     • Maintain global state of data (a central catalog of all datasets on the grid)
       • this has to scale
       • central point of failure
     • In ATLAS we also have Local File Catalogs (LFCs), which have to be maintained as well
     • For example, uploading data to the grid: "dq2-put -s files_location user.angelos.xxxxxxx"
       • has to handle different storage systems
       • has to register files in the central catalogs
       • has to register files in the LFC
     • Not trivial: e.g. the order of operations in dq2-put can create dark data (see the sketch below)
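
The dark-data hazard comes from ordering: if the physical copy succeeds but a catalog registration later fails, the file sits on storage with no catalog entry pointing at it. A hedged sketch of an upload that cleans up after itself; all helper functions are hypothetical stand-ins for real grid operations.

    # Why operation order matters in a dq2-put-style upload: a copied but
    # unregistered file is "dark data" (on disk, invisible to the catalogs).

    def copy_to_storage(path):
        print(f"copied {path}")
        return f"srm://SITE/{path}"

    def register_in_lfc(surl):
        print(f"LFC entry for {surl}")

    def register_in_central(surl):
        print(f"central catalog entry for {surl}")

    def delete_from_storage(surl):
        print(f"cleanup: removed {surl}")

    def dq2_put(path):
        surl = copy_to_storage(path)      # 1. physical copy first
        try:
            register_in_lfc(surl)         # 2. local file catalog
            register_in_central(surl)     # 3. central catalogs last
        except Exception:
            delete_from_storage(surl)     # undo the copy, or it turns dark
            raise

    dq2_put("user.angelos.xxxxxxx._00001.root")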

  9. Maintaining Consistency
     • A consistency service identifies data corruption on the grid
     • We have to maintain awareness of changing datasets on the grid: for example, if we replicate dataset user.angelos.xxxxx to sites A, B, and C, and the dataset then changes, the changes have to be propagated
     • At the ATLAS scale we need to enforce the concept of dataset immutability
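
One common way to detect such corruption is to compare each replica's checksum against the value recorded at registration time; ATLAS DDM uses adler32 checksums for this. A small illustrative sketch, with a hypothetical in-memory catalog:

    # Consistency check sketch: flag replicas whose checksum disagrees
    # with the catalog's reference value. Data layout is illustrative.

    import zlib

    catalog = {"f1": zlib.adler32(b"event data")}         # reference checksums
    replica_bytes = {"SITE-A": b"event data",
                     "SITE-B": b"event dat\x00"}          # one corrupted copy

    for site, data in replica_bytes.items():
        ok = zlib.adler32(data) == catalog["f1"]
        print(f"f1 at {site}: {'OK' if ok else 'CORRUPT, flag for re-transfer'}")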

  10. Replication Policy
     • Replication is largely driven by the ATLAS Computing Model
     • Datasets are marked as:
       • primary: mandated by the Computing Model
       • secondary: in excess of the Computing Model
     • Secondary replicas are reduced based on popularity
     • Determining the popularity of datasets:
       • collecting traces
       • aggregating traces
     • Problems with the current approach motivate more dynamic approaches (sketched below)
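
A rough sketch of the popularity idea: aggregate the collected traces per replica, then propose the least-accessed secondary replicas for deletion. The trace format and data structures here are illustrative assumptions, not the tracer service's real schema.

    # Popularity-driven cleanup sketch: count access traces per
    # (dataset, site) replica and rank secondaries for deletion.

    from collections import Counter

    traces = [  # access events, as a tracer service might record them
        ("data10.A", "SITE-1"), ("data10.A", "SITE-1"), ("data10.A", "SITE-2"),
        ("mc09.B", "SITE-2"),
    ]
    secondary = {("data10.A", "SITE-2"), ("mc09.B", "SITE-2")}  # non-mandated replicas

    popularity = Counter(traces)  # accesses per replica
    # Least-accessed secondary replicas are the first deletion candidates.
    for replica in sorted(secondary, key=lambda r: popularity[r]):
        print(f"deletion candidate {replica}: {popularity[replica]} accesses")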

  11. Scalability
     • At the grid level, scalability is a primary concern
     • New technologies
     • A seven-fold increase in file events over the year
     • Disk I/O is the bottleneck
     • Parkinson's Law: data grows to fill the storage made available
     [Chart: "File Events on the Grid", rising to roughly 8x10^7 over the year]

  12. Trends
     • Moving towards a metadata-driven model, rather than the hierarchical container -> dataset -> file model
     • Increased emphasis on searching by metadata (sketched below)
     • Simplification and consolidation of services (e.g. consolidation of LFCs)
     • Optimisation by simulation
     • A move to open protocols
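
The first trend can be illustrated with a toy example: instead of walking container -> dataset -> file, datasets are selected by querying their attributes. The metadata schema below is hypothetical.

    # Metadata-driven selection sketch: datasets carry attribute sets
    # and are found by query rather than by walking the hierarchy.

    datasets = {
        "data10_7TeV.xxx.AOD": {"project": "data10_7TeV", "type": "AOD", "stream": "physics"},
        "mc09_7TeV.yyy.ESD":   {"project": "mc09_7TeV",   "type": "ESD", "stream": "mc"},
    }

    def find(**criteria):
        """Return dataset names whose metadata match all given attributes."""
        return [name for name, meta in datasets.items()
                if all(meta.get(k) == v for k, v in criteria.items())]

    print(find(type="AOD", stream="physics"))   # query, not a hierarchy walk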

  13. Summary
     • Major issues:
       • scalability
       • consistency
       • replication policy
       • heterogeneity
     • Trends:
       • addressing scalability
       • metadata
       • simplification of services
       • simulation

  14. Backup slides

  15. SRM and Space Tokens
     • Storage systems implement a common interface:
       • Storage Resource Manager (SRM)
       • gridftp as the common transfer protocol
       • storage-specific access protocols
     • Space tokens:
       • partitioning of storage resources according to activities
     • Each ATLAS site is identified by a site name and a corresponding space token, e.g. DESY-ZN_PRODDISK:
       • 'srm': 'token:ATLASPRODDISK:srm://lcg-se0.ifh.de:8443/srm/managerv2?SFN=/pnfs/ifh.de/data/atlas/atlasproddisk/'
     [Diagram: requests entering through SRM, which fronts CASTOR, dCache, DPM, StoRM, and BeStMan, with gridFTP and local access protocols underneath]
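
For illustration, the space-token-qualified endpoint on this slide can be pulled apart into its three pieces: the space token, the SRM service URL, and the backend storage path. The parsing below is a sketch written for this presentation, not code taken from DQ2.

    # Decomposing a "token:<SPACETOKEN>:<srm URL>?SFN=<storage path>" endpoint.

    endpoint = ("token:ATLASPRODDISK:srm://lcg-se0.ifh.de:8443/srm/managerv2"
                "?SFN=/pnfs/ifh.de/data/atlas/atlasproddisk/")

    _, space_token, srm_url = endpoint.split(":", 2)
    srm_base, sfn = srm_url.split("?SFN=")

    print("space token:", space_token)   # storage partition for this activity
    print("SRM service:", srm_base)      # common management interface
    print("storage path:", sfn)          # backend-specific file namespace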
