Status of Grid Computing in India

Status of Grid Computing in India, P.S.Dhekne, India Coordinator, LCG (PowerPoint presentation)



  1. Status of Grid Computing in India. P.S.Dhekne, India Coordinator, LCG. May 2, 2006, BARC Talk at ISGS 2006, Taiwan.

  2. Mega Science Projects
     – Today's science is based on worldwide collaborations that share computation, data and equipment
     – India is participating in the LHC, STAR and PHENIX experiments
     – Researchers need more accurate and precise solutions to their problems in the shortest possible time
     – The related computational problems are so complex that they cannot be solved even at the most powerful single computing centre in the world

  3. Why High Performance Computing?
     • Mega-science projects need:
       – Huge computations
       – Good collaborative tools
       – Reliable, robust, fault-tolerant systems
     • Users want a cost-effective solution, with computing power good enough to complete 4-5 runs a day
     • Grid computing may satisfy this demand

  4. Resource sharing and coordinated problem solving in dynamic, multiple R&D units: millions of users, thousands of organizations, many countries. The goal: making Information Technology (IT) as easy to use as plugging into an electrical or TV socket. (The slide's diagram, showing users and visualization, did not survive extraction.)

  5. New Opportunities: "Resource sharing and coordinated problem solving in dynamic, multiple R&D units, virtual organizations."

  6. Collaborative Tools. With improved Web Services (SOAP, WSDL, UDDI, WSFL) and COM technology, it is easy to develop loosely coupled distributed applications. Collaborative tools include chat, email, video conferencing, white boards, web portals, digital libraries, video walls, VR, and the "laboratory without walls".
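As an illustration of the loosely coupled style the slide describes, the sketch below builds a minimal SOAP 1.1 request envelope with Python's standard library. The service namespace and the method and field names (`getRunStatus`, `runId`) are hypothetical, chosen only for illustration; they do not come from the slide.

```python
# Minimal sketch of a SOAP 1.1 request message, the kind of XML payload
# loosely coupled web services exchange. Service and method names are
# hypothetical (not from the presentation).
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"  # SOAP 1.1 envelope namespace
APP_NS = "urn:example:gridmonitor"                     # assumed application namespace

ET.register_namespace("soap", SOAP_NS)

# Build Envelope -> Body -> method call -> parameter.
envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
call = ET.SubElement(body, f"{{{APP_NS}}}getRunStatus")
ET.SubElement(call, f"{{{APP_NS}}}runId").text = "12345"

print(ET.tostring(envelope, encoding="unicode"))
```

In practice the WSDL document would describe this operation and its types, so a client can be generated without hand-writing the XML.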

  7. LHC Computing
     • The LHC (Large Hadron Collider) will begin taking data in 2006-2007 at CERN, Geneva.
     • Data rates per experiment of >100 Mbytes/sec.
     • >1 Pbyte/year of storage for raw data per experiment.
     • Worldwide collaboration and analysis:
       – It is desirable to share computing and analysis throughout the world
       – The computing requirement is so huge that it cannot be met by a single computing centre
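The two figures on this slide are mutually consistent, as a quick back-of-envelope check shows. The roughly one-third accelerator duty cycle used below is an assumption for illustration, not a number from the slide.

```python
# Back-of-envelope check: does >100 MB/s per experiment imply
# >1 PB/year of raw data? Duty cycle of ~1/3 is an assumed figure.
RATE_BYTES_PER_SEC = 100e6            # 100 Mbytes/sec, from the slide
SECONDS_PER_YEAR = 365 * 24 * 3600    # ~3.15e7 seconds

full_year = RATE_BYTES_PER_SEC * SECONDS_PER_YEAR   # continuous running
raw_per_year = full_year * (1 / 3)                  # assumed ~1/3 duty cycle

print(f"At 100% duty cycle: {full_year / 1e15:.1f} PB/year")
print(f"At ~1/3 duty cycle: {raw_per_year / 1e15:.1f} PB/year")
```

Even with generous downtime, 100 MB/s sustained comfortably exceeds a petabyte of raw data per experiment per year.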

  8. LHC Requirements
     • The computing challenge of the LHC lies in the real-time storage of a huge amount of data, the reconstruction of the tracks of particles released during collisions, and computational simulation for the physics experiments.
     • The performance required for the most rudimentary simulations is about 20 Teraflops sustained, equivalent to roughly 40,000 personal computers.
     • The storage requirements are about a million times the storage presently available on a desktop personal computer.
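The slide's equivalence between 20 Teraflops sustained and about 40,000 PCs can be sanity-checked with one division; the per-PC figure it implies is plausible for the 2006-era machines the slide has in mind.

```python
# Sanity check of the slide's figures: 20 Teraflops sustained ~= 40,000 PCs.
SUSTAINED_FLOPS = 20e12   # 20 Teraflops, from the slide
NUM_PCS = 40_000          # from the slide

per_pc = SUSTAINED_FLOPS / NUM_PCS  # implied sustained speed per PC
print(f"Implied sustained speed per PC: {per_pc / 1e6:.0f} MFLOPS")
```

That works out to 500 MFLOPS sustained per PC, a reasonable sustained (not peak) figure for a desktop of that period.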

  9. Data Grids for HEP: a tiered model. Tier 0 (CERN) stores the raw experiment data; Tier 1 regional centres, Tier 2 centres, Tier 3 institute servers and Tier 4 desktop workstations share the reconstruction and analysis load over high-bandwidth links. (The slide's tier diagram, with per-tier capacity and link-speed figures, did not survive extraction.)

  10. International Collaboration
      • India became a CERN observer state in 2002.
      • LHC Grid software development: a DAE-CERN Protocol agreement on computing for LHC data analysis, a data grid called LCG; ~10 people working in India for 5 years, amounting to 7.5 MCHF.
      • BARC-developed software deployed in the LCG at CERN:
        – Correlation engine and fabric management
        – Problem tracking system (SHIVA)
        – Grid operations (GridView)
        – Quattor enhancements (a system-administration toolkit)

  11. DAE-CERN LCG collaboration (slide title garbled in extraction)
      • Development of LCG software: agreement signed with CERN in 2002; 10 DAE people working in India for 5 years (7.5 MCHF).
      • Tier 2/3 centres in India, connected to the CERN Tier 0/1 centre: VECC (Tier 2, ALICE users) and TIFR (Tier 2, CMS users), with BARC and CAT as Tier 3 (CMS users), over 622 Mbps, 100 Mbps and 10 Mbps Internet links.
      • Software developed by BARC for CERN: SHIVA, GridView, fabric management, correlation engine.
      • DAE/DST/ERNET: GEANT link; Garuda: the C-DAC national grid.
      (The slide's network diagram did not survive extraction.)

  12. ERNET-GEANT Connectivity
      • A 45 Mbps IPLC-based link is planned between ERNET and GEANT.
      • The programme is funded by the European Union through DANTE, and by the Govt. of India through ERNET, TIFR and DST.
      • 10 research institutes/universities will use the link for collaborative research in High Energy Physics.
      • IPv6 will be run on this link.

  13. ERNET Connectivity with the European Grid
      • Institutions proposed to be connected in the 1st phase: Univ. of Jammu, Panjab Univ. Chandigarh, Delhi Univ., Univ. of Rajasthan Jaipur, AMU, IIT Kanpur, IIT Guwahati, CAT Indore, VECC Kolkata, IUCAA Pune, IOP Bhubaneshwar, TIFR and BARC Mumbai, Univ. of Hyderabad, IIT Chennai, IISc Bangalore.
      • A 622 Mbps IPLC terminates at Mumbai (TIFR, BARC).
      • GEANT: a multi-gigabit pan-European research network connecting 32 European countries and 28 NRENs, with backbone capacity in the range 622 Mb/s to 10 Gb/s.
      (The slide's map of ERNET PoPs, backbone links and the GEANT country legend did not survive extraction.)
