LHC - CMS Tier2 facility at TIFR
http://indiacms.res.in
Kajari Mazumdar, Department of High Energy Physics, Tata Institute of Fundamental Research, Mumbai, India.
Plan
• Introduction
• Quick tour of LHC and the computing grid
• Issues related to network and CMS-T2 at Mumbai
EU-India Grid meeting, APAN conference, Delhi, August 24, 2011
e-Science and e-Research
• Collaborative research that is made possible by sharing resources across the internet (data, computation, people's expertise, ...)
  – Crosses organisational, national and international boundaries
  – Often very compute intensive and/or data intensive
  – The CERN-LHC project is an excellent example of all the above
  – HEP-LHC has been a driving force for GRID technology
  – The Worldwide LHC Computing GRID (WLCG) is a natural evolution of internet technology
• The WWW was born at CERN to satisfy the needs of the previous generation of HEP experiments
Large Hadron Collider (LHC)
Largest ever scientific project: 20 years to plan and build, 20 years to work with.
• 27 km circumference
• at 1.9 K
• at 10⁻¹³ Torr
• at 50-175 m below the surface
• > 10,000 magnets
4 big experiments: > 10,000 scientists, students, engineers.
Operational since 2009 Q4 ⇒ excellent performance ⇒ fast harvest of science!
[Cosmic timeline figure] LHC probes ~10⁻¹² seconds after the Big Bang (p-p collisions) and ~10⁻⁶ seconds (Pb-Pb collisions); experiments in astrophysics & cosmology, such as COBE (1989) and WMAP (2001), probe from ~300,000 years after the Big Bang up to today.
Enter a New Era in Fundamental Science
Exploration of a new energy frontier in p-p and Pb-Pb collisions.
[Aerial view of the LHC ring, 27 km circumference, with the four big experiments: ALICE, ATLAS, CMS, LHCb]
What happens in an LHC experiment (summer 2011, proton-proton running)
• Bunches per beam: 1400
• Protons per bunch: 2 × 10¹¹
• Beam energy: 3.5 TeV (1 TeV = 10¹² eV)
• Luminosity: 2 × 10³³ cm⁻² s⁻¹
• Crossing rate: 20 MHz
• Collision rate: ~10⁸ Hz
Mammoth detectors register signals for energetic, mostly (hard) inelastic collisions involving large momentum transfer.
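A back-of-the-envelope cross-check of these numbers, as a minimal sketch (the ~70 mb inelastic p-p cross-section is an assumed approximate value, not quoted on the slide):

```python
# Rough cross-check of the collision rate and pile-up implied by the slide numbers.
LUMINOSITY = 2e33          # cm^-2 s^-1, slide value for summer 2011
SIGMA_INELASTIC = 70e-27   # cm^2 (~70 mb): assumed approximate inelastic p-p cross-section
CROSSING_RATE = 20e6       # Hz, slide value

collision_rate = LUMINOSITY * SIGMA_INELASTIC   # ~1.4e8 Hz, i.e. the ~10^8 Hz on the slide
pileup = collision_rate / CROSSING_RATE         # ~7 overlapping collisions per bunch crossing

print(f"collision rate ~ {collision_rate:.1e} Hz, pile-up ~ {pileup:.0f} per crossing")
```

The ~7 overlapping collisions per crossing is consistent with the pile-up of ~10 vertices mentioned on a later slide.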
Motivation of the LHC experiments
LHC is meant to resolve some of the most puzzling issues in physics:
• Nature of elementary particles and interactions shortly after the Big Bang
  ⇒ how many interactions were there when the universe was much hotter?
  ⇒ which elementary particles existed, with what properties?
• We recreate the conditions of the very early universe at the LHC.
• Origin of mass
  ⇒ mass patterns among different particles in today's universe
  ⇒ why is the photon massless, while the carriers of the weak interaction are massive?
  ⇒ if the symmetry is broken spontaneously, what is the signature?
  ⇒ existence of the "God Particle"?
• The Higgs boson is yet to be discovered, but coming soon, stay tuned!
LHC is at the threshold of discovery ⇒ likely to change the way we are used to thinking of Nature!
Grand menu from the LHC
• Nature of dark matter: we know only 4% of the constituents of the universe.
  – A good 25% of the rest is observed to be massive enough to dictate the motion of galaxies
  – non-luminous, and hence "dark"
  – LHC can tell us the nature of this dark matter!
  [Figure label: expected from visible distribution of matter]
LHC will also shed light on:
• why there is only matter and no antimatter today
• properties of the 4th state of matter: the Quark-Gluon Plasma, which existed ~1 picosecond after the Big Bang, before the formation of neutrons and protons.
.... All this is possible because the LHC is essentially a microscope AND a telescope as well!
To begin at the end:
• The operations of the LHC machine and the experiments have been a great success.
• The experiments have produced fantastic results, often only days after the data was taken ⇒ more than 200 publications in < 2 years.
• This is partly because of the long lead-time the experiments had
  – This has had implications for the analysis patterns
• But there have been lessons to be learned
  – And we have just started on a treadmill which will require continual development
The LHC Computing Grid is the backbone of the success story.
Example of a modern detector: 3170 scientists and engineers, including ~800 students, from 169 institutes in 39 countries, India among them.
Data rates @ CMS as foreseen for design parameters
Presently: event size ~1 MB (beam flux lower than design), data collection rate ~400 Hz.
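These two numbers set the scale of the raw data stream to storage; a minimal arithmetic sketch (assuming continuous data taking and no compression, which overstates real running conditions):

```python
# Scale of the CMS raw data stream implied by the quoted numbers.
EVENT_SIZE_MB = 1.0      # ~1 MB per event (slide value)
TRIGGER_RATE_HZ = 400    # ~400 Hz written to storage (slide value)

rate_mb_per_s = EVENT_SIZE_MB * TRIGGER_RATE_HZ   # ~400 MB/s off the detector
tb_per_day = rate_mb_per_s * 86400 / 1e6          # ~35 TB per day of continuous running

print(f"~{rate_mb_per_s:.0f} MB/s, ~{tb_per_day:.0f} TB/day if running continuously")
```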
Challenges
• Versatile experiments, equipped with very specialized detectors: ~10⁷ electronic channels per experiment, ready every 25 ns to collect information on the debris from violent collisions.
  ⇒ Reconstruct ~20,000 charged tracks in a single event (lead-lead collisions at LHC)
• Event size related to flux/intensity
• 1.5 billion events recorded in 2010
• > 2 billion events, much more complicated, to be recorded during 2011
  ⇒ resource utilization to be prioritized by carefully throwing away soft, mundane collisions (see the sketch below).
[Figures: charged tracks from a heavy-ion collision vertex; ~10 vertices in a single p-p bunch crossing, to be discriminated from the interesting process]
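The scale of that prioritization follows from the rates quoted on the earlier slides; a minimal sketch:

```python
# Rejection power the trigger must provide: collisions occur at ~1e8 Hz,
# but only ~400 Hz of events can be written out (values from earlier slides).
COLLISION_RATE_HZ = 1e8
RECORDED_RATE_HZ = 400

rejection = COLLISION_RATE_HZ / RECORDED_RATE_HZ   # ~2.5e5
print(f"trigger keeps roughly 1 event in {rejection:,.0f}")
```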
The GRID Computing Goal
• Science without borders
• Provide resources and services to store/serve O(10) PB of data per year
• Provide access to all interesting physics events for O(1500) collaborators around the world
• Minimize constraints due to user localisation and resource variety
• Decentralize control and costs of the computing infrastructure
• Share resources with other LHC experiments
⇒ Solution through the Worldwide LHC Computing GRID (WLCG)
  – Delivery of physics should be fast
  – Workhorse for production data handling
• Today: > 140 sites, ~250k CPU cores, ~100 PB disk
Layered structure of the CMS GRID ⇒ connecting computers across the globe
• Experimental site ⇒ Tier 0 (CERN computer centre, Geneva): online data recording; several Petabytes/sec off the detector
• Tier 0 ⇒ Tier 1 national centres (Germany, USA, Italy, France, Asia (Taiwan), CERN, ...): 10 Gbps links
• Tier 1 ⇒ Tier 2 regional groups in a continent/nation (India: Indiacms, T2_IN_TIFR; China; Pakistan; Korea; Taiwan; ...): 1-2.5 Gbps links
• Tier 2 ⇒ different universities and institutes in a country (Panjab, Delhi, BARC, TIFR, universities), down to an individual scientist's PC or laptop.
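To give a feel for what the 1-2.5 Gbps Tier-2 links imply for data placement, a minimal sketch (the 10 TB dataset size is an illustrative assumption, and the full nominal bandwidth is assumed to be usable):

```python
# Time to pull a dataset into a Tier-2 over the nominal link speeds in the diagram.
def transfer_hours(dataset_tb: float, link_gbps: float) -> float:
    """Hours to move dataset_tb terabytes over a link of link_gbps gigabit/s."""
    seconds = dataset_tb * 8e12 / (link_gbps * 1e9)   # TB -> bits, divided by bits/s
    return seconds / 3600

for gbps in (1.0, 2.5, 10.0):   # T1->T2 links (1-2.5 Gbps) and a T0->T1 link (10 Gbps)
    print(f"{gbps:4.1f} Gbps: {transfer_hours(10, gbps):5.1f} h for a 10 TB dataset")
```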
CMS in total: 1 Tier-0 at CERN (Geneva), 7 Tier-1s on 3 continents, 50 Tier-2s on 4 continents.
CMS T2 in India: one of the 5 in the Asia-Pacific region.
Today: 6 collaborating institutes in CMS, ~50 scientists + students; 2.1% of signing authors in publications; contributing ~3% of the computing resources of CMS.
Quick description of the LHC grid tiers
• The first job of the offline system is to process and monitor the data at T0; processing time depends on the experiment (less than anticipated for CMS).
• T0: 1 M jobs/day + test jobs; traffic: 4 Gbps input, > 13 Gbps served. The CERN Tier-0 moves ~1 PB of data per day, with automated subscription.
• T1 processes the data further several times a year and coordinates with the T2s.
Challenges at T2
• T2s host specific data streams for physics analysis; as local resources demand, data is "tidied up" at T2.
• T2s get the main data from T1s; recently there is more communication among T2s.
• T2s are the real workhorses of the system, with growing roles.
• Analysis system/paths are working well; site readiness is at a high level (a lot of testing); availability keeps improving (with a lot of effort!).
• Typically 100k analysis jobs/day/experiment.
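As a feel for these volumes, a unit-conversion sketch (assuming the ~1 PB/day figure is an aggregate over all Tier-0 data movement, not just the 4 Gbps raw input line):

```python
# Average sustained rate corresponding to ~1 PB of data moved per day at the Tier-0.
PB_PER_DAY = 1.0
SECONDS_PER_DAY = 86400

avg_gb_per_s = PB_PER_DAY * 1e6 / SECONDS_PER_DAY   # ~11.6 GB/s aggregate
avg_gbit_per_s = avg_gb_per_s * 8                   # ~93 Gbit/s aggregate

print(f"~1 PB/day ≈ {avg_gb_per_s:.1f} GB/s ≈ {avg_gbit_per_s:.0f} Gbit/s sustained")
```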