DUNE DAQ Testing
Jon Sensenig, Pierre Lasorak, Lukas Arnold
June 24, 2019
Parts of testing
- Hit sender
  - Is working
  - Memory throughput measurements
  - Latency measurements
- Software trigger
  - Test of software trigger
  - Trigger candidate board reader
  - Latency measurements
- MLT → DFO
  - Inclusion of trigger fragment
  - Module Level Trigger → DFO chain
  - Merging process...
Hit sender (Pierre Lasorak)
Hit sending (aka hit finder) BR status
- CPU hit sender BR "works":
  - Receives and sends hits via PTMP.
  - Closely follows the rates from the hit-finding algorithm.
  - Sends artdaq fragments when requested:
    - Managed to run with fragments built from the PTMP serialisation.
    - Did not decode the hits, since that requires some reshuffling of dunetpc dependencies.

[Figures: TP rates [Hz] vs. time [sec], link 509]

- CPU hit sender BR + Felix BR + hit-finding process together have a very high memory throughput.
  - Cannot cope with all the links enabled; works well with ≲ 4 links.
  - Leads to missing fragments.
- ZCU102 hit sender BR:
  - BR code similar to the Python script used for testing the ZCU102.
  - Integrated in the RC (partition 4).
Hit sender BR latency
- Latency of the BR seems to be < 1 sec.
- Latency = now - hit creation time.
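The latency definition above ("now minus hit creation time") can be sketched as a minimal calculation. The 50 MHz clock frequency, the function name, and the `clock_offset_ticks` parameter are all illustrative assumptions, not taken from the actual BR code or the PTMP message format:

```python
import time

# Assumed hardware clock frequency for hit timestamps (an assumption
# for illustration, not necessarily the value used in the real system).
CLOCK_HZ = 50_000_000

def hit_latency_sec(hit_creation_ticks: int, clock_offset_ticks: int = 0) -> float:
    """Latency = now - hit creation time, in seconds.

    `hit_creation_ticks` is the hit's hardware timestamp in clock ticks;
    `clock_offset_ticks` (hypothetical) aligns the hardware clock with
    the host's Unix epoch.
    """
    creation_sec = (hit_creation_ticks + clock_offset_ticks) / CLOCK_HZ
    return time.time() - creation_sec
```

A BR-side measurement would evaluate this for each received hit and histogram the result over time.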
Hit sender BR future
- Short term:
  - Make sure the decoder works with the PTMP serialisation; decide whether this is the preferred option (I think it is).
  - Test the ZCU reader.
- Medium term:
  - Investigate the RAM throughput.
  - Implement a hit sender BR "replay" mode, which allows testing downstream without real readout.
- Long term:
  - Save all the hits to offline (c.f. DAQ requirements)? A "prescale" version of that?
Software trigger (Jon Sensenig)
Things as they currently stand
[Figure credit: Philip Rodrigues]
Recent work
- Modified the MLT BR to stand in for the Trigger Candidate + Module Level Trigger BRs and read in the trigger primitives.
  - For clarity: no windowing or sorting is done here.
- Note: the previous setup ran with 1 to 2 Felix links.
- Added a Trigger Candidate BR (with TP windowing and TPSet sorting); the previous SWTrigger BR became the Module Level Trigger (MLT).
- Looked at the TP rate per channel and ignored high-rate channels. Helped a bit on one link.
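The "TP windowing and TPSet sorting" step above could look roughly like the sketch below. The fixed window length, the `(start_time, channel)` tuple layout, and the function name are all illustrative assumptions; this is not the actual Trigger Candidate BR code:

```python
from collections import defaultdict

# Illustrative window length in timestamp ticks (an assumption, not the
# value used in the real Trigger Candidate BR).
WINDOW_TICKS = 2500

def window_and_sort(tps):
    """Group trigger primitives (TPs) into fixed-length time windows and
    return the resulting TPSets in time order, each sorted internally.

    Each TP is represented here as a (start_time, channel) tuple.
    """
    windows = defaultdict(list)
    for tp in tps:
        windows[tp[0] // WINDOW_TICKS].append(tp)
    # Sort windows by window index, and TPs within each window by start time.
    return [sorted(windows[k]) for k in sorted(windows)]
```

Out-of-order arrival over the links is exactly why the sort is needed before candidate algorithms can consume the TPSets.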
Latency between TPSet creation time and now (all 10 Felix BRs running in both cases)
- Left: no hit-sending BRs running.
- Right: all 10 hit-sending BRs running.
- Related to the high memory throughput of "CPU hit sender BR + Felix BR + hit-finding process" mentioned in Pierre's slides.
Upcoming
- Merge the SWTrigger code with the latest DFO version and test it.
- Important to have the hit-finding/sending latencies fixed so we can start looking at trigger candidates.
- In the next test period we will focus on testing the actual trigger candidate algorithms at the Candidate Level Trigger (CLT) and MLT.
MLT → DFO
Implemented TriggerFragment:
- Count
- Part ID
- Sources
- Start time
- Time span
- Module ID: TBD
- Submodule ID: TBD
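The implemented fields listed above can be mirrored as a simple record for illustration. The Python types, defaults, and field names are assumptions (the real fragment payload is presumably a packed C++ structure); the two TBD fields are left optional here:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TriggerFragment:
    """Illustrative mirror of the MLT → DFO TriggerFragment fields.

    Types are assumptions; Module ID and Submodule ID are still TBD in
    the slides, so they default to None here.
    """
    count: int                          # Count
    part_id: int                        # Part ID
    sources: List[int]                  # Sources contributing to this fragment
    start_time: int                     # Start time (timestamp ticks assumed)
    time_span: int                      # Time span (ticks assumed)
    module_id: Optional[int] = None     # TBD
    submodule_id: Optional[int] = None  # TBD
```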
Outlook
- Process information through the dataflow chain
- Merge with the CLT (Candidate Level Trigger)
- Tests in the next DAQ testing period
- Include a physics trigger at the MLT level
Dataflow tests → Kurt's talk