PDP focus session summary
Main outcomes of Vista25-NG
[Diagram: PDP activities grouped into three layers]
Essential (for Nikhef's impact, specifically as related to long-term benefit to physics computing):
• Stoomboot
• Advanced-Beta & Vendor Collaboration
• Infrastructure for Tier-1
Future Perceived Need:
• parallel: FPGA, GPU, Xeon Phi, …
• Machine/Deep Learning
• Specific expertise: algorithms / HP programming
• Training for PhD students
• tension: demands vs Moore
Good to have:
• Collaboration
• Nikhef contribution to experiments
• valorisation
Organized by links & funding
[Diagram: the Essential layer connected to funding sources; the "future perceived need" layer is excluded for now]
Essential:
• Infrastructure for Operations
• DNI Collaboration (including XENON & VIRGO)
• Tier-1
• Stoomboot
• Advanced Beta & Vendor Collaboration
Links & funding: AARC, EGI, AENEAS (SKA), SURF, EU Funding, LHC Roadmap, Large Discounts (= funding)
Conclusions of this part
• Participants agreed with the vision, including the importance of Tier-1
• Theory: Stoomboot is too small and too slow (grants?)
• Auger wants a "NL computing contribution"
Main outcomes of Vista25-NG (Future Perceived Need layer, annotated)
• Machine/Deep Learning: doubtful whether we could make an impact; many groups already working on this (academic, data science institutes, experiment ML fora, …)
• parallel: FPGA, GPU, Xeon Phi, …: an (important) niche right now, but lots of groups working on it (also academic)
• Specific expertise, algorithms / HP programming: this is what we should go for; FPGA/GPU etc. is a subset of this
• Training for PhD students: we do this in collaboration with existing training (e.g. the Verkerke C++ course)
• tension: demands vs Moore
• Aware of the challenge: be enough "in" the collaborations to have impact, while retaining PDP "independence" and tackling various projects
PDP and CT
• The actual PDP group is small
• Most of the PDP work is done by CT staff
• Vista25 choices in PDP have consequences for CT
• The same holds for the experiments and technical departments, but for PDP it is more acute
Recommendations
More recommendations