

  1. Regions-of-Interest, Data Reduction and Trigger Rates for DUNE
     DocDB #16982
     Josh Klein, Penn, 11/25/2019

  2. The Question
     There are always questions about how high a trigger rate we could actually sustain:
     • Calibration data
     • ³⁹Ar monitor/calibration
     • Low-E physics (solar ν's, ?)
     So we asked Data Flow: if storage were not an issue, at what trigger rate does the Event Builder become the bottleneck?

  3. The Question
     There are always questions about how high a trigger rate we could actually sustain:
     • Calibration data
     • ³⁹Ar monitor/calibration
     • Low-E physics (solar ν's, ?)
     Kurt responded: at what trigger rate do you want the bottleneck to be?

  4. Big Picture
     Our overall data rate is > 9 Tb/s per SP module = 35.5 EB/year
     So about 1000 times bigger than our allocation (before any compression)
     Three choices to bring this down to acceptable levels:
     1. Bias the channels
     2. Bias the trigger
     3. Bias both
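
A rough cross-check of that raw-rate arithmetic, as a minimal sketch; the channel count, sampling rate, and ADC width below are assumed DUNE SP nominal values, not numbers taken from this talk:

```python
# Raw SP-module data rate, back of the envelope (assumed nominals):
channels = 150 * 2560                 # 150 APAs x 2560 channels each
sample_hz = 2.0e6                     # 2 MHz digitization
bits_per_sample = 12                  # 12-bit ADC

bits_per_s = channels * sample_hz * bits_per_sample
print(f"{bits_per_s / 1e12:.1f} Tb/s")           # ~9.2 Tb/s
eb_per_year = bits_per_s / 8 * 3.156e7 / 1e18
print(f"{eb_per_year:.1f} EB/year")              # ~36 EB/year (35.5 quoted)
```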

  5. Big Picture
     Our overall data rate is > 9 Tb/s per SP module = 35.5 EB/year
     So about 1000 times bigger than our allocation (before any compression)
     Three choices to bring this down to acceptable levels:
     1. Bias the channels
     2. Bias the trigger
     3. Bias both
     This choice (Option 2: bias the trigger) simplified our effort up to the TDR (and perhaps beyond)
     • And makes analysis simpler than Option 3
     • If trigger bias is small (e.g., low threshold, steep turn-on) then this is a good choice
     • And this follows the rest of the DUNE philosophy (no FE ZS, for example)

  6. Big Picture
     For high-energy physics, this appears to be true
     [Plot by D. Rivera: analysis threshold for LBL is 500 MeV (E_vis)]

  7. Physics Opportunities?
     Moving the threshold lower could catch ⁸B solar neutrinos, but there is a small window before we hit the neutrons.
     We need 4 orders-of-magnitude rejection of those unless their rate is a lot lower than we think.
     [Plot: rates vs. threshold, with the storage cap indicated]

  8. The Question
     • But maybe the neutrons will be shielded down to 10 Hz…
     • And Lasorak and Rivera have shown that cluster-finders are ~10x more efficient for neutrino events than for neutrons (3%).
     • Or we get even more creative
     So, what are the trigger rates if we start to bias the channels to reduce the data volume/event?

  9. The Question
     • But maybe the neutrons will be shielded down to 10 Hz…
     • And Lasorak and Rivera have shown that cluster-finders are ~10x more efficient for neutrino events than for neutrons (3%).
     • Or we get even more creative
     So, what are the trigger rates if we start to bias the channels to reduce the data volume/event?
     And then, Kurt: how much would it cost for the EB to handle it? Or other bottlenecks in the system?

  10. Big Picture
     • I assume x2 compression because that is the minimum required in the current scheme
     • Not every possible scheme we could imagine (so be patient)
     • Ignores supernova triggers
     (In fact, as Giovanna points out, the table does not include the overhead of headers and error bits, which push the 6.075 GB/event to > 7 GB/event.)
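
A minimal sketch of where a ~6 GB/event figure comes from, assuming the full module is read out over the 5.4 ms window of slide 13; the parameters are assumed SP nominals, and the talk's exact 6.075 GB presumably reflects slightly different bookkeeping:

```python
# Uncompressed event size for a full-module readout (assumed nominals):
channels = 150 * 2560                 # one SP module
sample_hz = 2.0e6                     # 2 MHz digitization
readout_s = 5.4e-3                    # 5.4 ms window (slide 13)
bits_per_sample = 12

event_b = channels * sample_hz * readout_s * bits_per_sample / 8
print(f"{event_b / 1e9:.2f} GB/event")   # ~6.2 GB, close to the 6.075 quoted
```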

  11. Big Picture
     Every additional bias requires an efficiency measurement:
     • Standard candle (none of these)
     • Calibration source (we may have none of these; PNS could work)
     • Zero-bias trigger or lower-threshold/prescaled studies
     • Complete and tested Monte Carlo
     • And possibly reconstruction/analysis studies
     We never get back what we lose (different than analysis efficiencies)
     Additional logic means more corner cases
     • There are always boundaries in the logic
     So more bias should come with a strong motivation or low risk

  12. Current Rate Limit
     With x2 compression and total volume = 4x the SP data volume,
     maximum trigger rate is 0.078 Hz.
     (With real overheads it is ~20% lower than this.)
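
Reproducing the 0.078 Hz limit requires an assumption about the storage allocation; a ~30 PB/year total with the SP module taking 1/4 of it matches the slide, but both inputs are guesses, not numbers stated in the talk:

```python
# Sketch of the 0.078 Hz rate limit (allocation and SP share assumed):
allocation_b_per_year = 30e15         # ~30 PB/year total (assumed)
sp_share = 0.25                       # "total volume = 4x SP data volume"
event_b = 6.075e9 / 2                 # 6.075 GB/event with x2 compression
seconds_per_year = 3.156e7

rate_hz = allocation_b_per_year * sp_share / (event_b * seconds_per_year)
print(f"{rate_hz:.3f} Hz")            # 0.078 Hz
```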

  13. Reducing Readout Window
     More bias is bad unless it really, really cuts nothing…
     • Actual drift time at full field is 2.25 ms
     • We multiplied by 1.2 to be extra safe
     • Then x2 because of start/end ambiguity for the trigger
     • But at 10 MeV threshold, the trigger ambiguity is << drift time
     • So 5.4 → 2.7 ms (keeps a 20% conservative buffer); see the sketch below
     Maximum trigger rate is 0.156 Hz.
     This change leverages no new known physics, but makes analysis faster (and maybe it is a little less embarrassing).
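
The window arithmetic from those bullets, as a sketch:

```python
# How the 5.4 ms readout window is built up, and what halving it buys:
drift_ms = 2.25                   # drift time at full field
window_ms = drift_ms * 1.2 * 2    # x1.2 safety, then x2 start/end ambiguity
print(f"{window_ms:.1f} ms")      # 5.4 ms

# Halving the window (5.4 -> 2.7 ms) halves the event size,
# so the 0.078 Hz limit doubles:
print(f"{0.078 * 2:.3f} Hz")      # 0.156 Hz
```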

  14. Trigger Candidate APA Localization
     We already create candidates on an APA-by-APA basis
     So why store data from APAs that have no coincident candidates?
     (Because the candidate threshold may be above interesting secondaries, etc.)
     • If cosmics still dominate (unlikely) then on average we would store 6 APAs (Rodrigues) → 1.95 Hz
     • If we lowered the threshold then we are dominated by single-APA events → 11.7 Hz
     This change might leverage ⁸B neutrinos (see Rivera memo)
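
The scaling behind those two rates, as a sketch; the 150 APAs per SP module is an assumed nominal count, not a number from this talk:

```python
# APA-localization scaling: storing only the n APAs with coincident
# candidates shrinks the event by n/150, so the 0.078 Hz limit scales up:
base_rate_hz = 0.078
n_apa_total = 150

for n_apa in (6, 1):   # cosmic-dominated average vs. single-APA events
    print(f"{base_rate_hz * n_apa_total / n_apa:.2f} Hz")   # 1.95, 11.70
```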

  15. (Effective) Zero Suppression
     (See Phil’s talk)
     • Take 100 µs around every “hit” and assume we are dominated by ³⁹Ar singles (at 10 MBq/10 ktonne)
     • Assume half of the ³⁹Ar is above threshold
     • And noise is much smaller than this rate
     • Then each 5.4 ms snapshot has 135,000 hits
     • And therefore 0.04 GB/event (not including overhead)
     Maximum trigger rate is ~12 Hz.
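
A sketch of those last two bullets; the hit count and 100 µs window are from the slide, while the per-sample size and the ~7.5 PB/year SP budget reuse the assumptions made earlier:

```python
# Event size and rate limit under effective zero suppression:
hits = 135_000                        # per 5.4 ms snapshot (from slide)
samples_per_hit = 200                 # 100 µs at 2 MHz
bytes_per_sample = 1.5                # 12-bit ADC

event_b = hits * samples_per_hit * bytes_per_sample
print(f"{event_b / 1e9:.2f} GB/event")           # ~0.04 GB
# Against the assumed ~7.5 PB/year SP share, with x2 compression:
rate_hz = 7.5e15 / (event_b / 2 * 3.156e7)
print(f"{rate_hz:.0f} Hz")                       # ~12 Hz
```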

  16. (Effective) Zero Suppression
     • “Natural” way to do this is to re-capture TPs
     • Use them to identify hits associated with TCs
     • PTMP can buffer TPs for post-trigger access
     • (See Brett’s very nice diagram of this “epicycle”)

  17. (Effective) Zero Suppression
     [Excerpt from “Toward(s) Data Selection Algorithms”, December 11, 2017]

  18. Trigger Candidate Localization and Zero Suppression
     What is the most extreme possibility…?
     • The high data volume for zero suppression is caused by singles
     • APAs are big; forget about the ~2400 wires without hits
     • So only keep trigger candidates, and only 100 µs around these
     Assume we are now dominated by radiologicals… maybe we hit 50 channels per event…?
     Event size is now ~15 kB; maximum trigger rate is 32.5 kHz
     Collect large samples of ⁸B ν's, neutrons, ⁴²Ar, ⁴⁰Cl, and (maybe) pep ν's
     This assumes signal ID on induction wires is efficient before processing
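
The same back-of-envelope machinery applied to this most extreme option; channel count and window are from the slide, the rest reuses the earlier assumptions:

```python
# Keep only the trigger candidates themselves:
channels_hit = 50                     # per event (from slide)
samples = 200                         # 100 µs at 2 MHz
bytes_per_sample = 1.5                # 12-bit ADC

event_b = channels_hit * samples * bytes_per_sample
print(f"{event_b / 1e3:.0f} kB/event")           # 15 kB
# Assumed ~7.5 PB/year SP share, x2 compression:
rate_hz = 7.5e15 / (event_b / 2 * 3.156e7)
print(f"{rate_hz / 1e3:.1f} kHz")                # ~31.7 kHz (32.5 quoted)
```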

  19. Summary
     • Plenty of possibilities to reduce the data set
     • Not a very steep curve for more physics
     • Not clear what this gets beyond just saving TPs (if we generate induction TPs)
     • But we should not design a system that precludes these (with future money and effort)
