Volume 378 - International Symposium on Grids & Clouds 2021 (ISGC2021) - Converging High Performance infrastructures: Supercomputers, clouds, accelerators
Deep Learning fast inference on FPGA for CMS Muon Level-1 Trigger studies
T. Diotalevi*, M. Lorusso, R. Travaglini, C. Battilana and D. Bonacorsi
Published on: October 22, 2021
Abstract
With the advent of the High-Luminosity phase of the LHC (HL-LHC), the instantaneous luminosity of the Large Hadron Collider at CERN is expected to increase up to $\approx 7.5 \cdot 10^{34}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}$. New strategies for data acquisition and processing will therefore be necessary to cope with the higher number of signals produced inside the detectors. In the context of the upgrade of the trigger system of the Compact Muon Solenoid (CMS), new reconstruction algorithms aiming for improved performance are being developed. As regards the online tracking of muons, one of the figures of merit being improved is the accuracy of the transverse momentum ($p_T$) measurement.
Machine Learning techniques have already been considered a promising solution to this problem, since, by exploiting more of the information collected by the detector, they make it possible to build models that predict the $p_T$ with improved precision.
This work aims to implement such models on an FPGA, which promises lower latency than traditional inference algorithms running on a CPU, an important requirement for a trigger system. The analysis carried out in this work uses data obtained from Monte Carlo simulations of muons crossing the barrel region of the CMS muon chambers, and compares the results with the $p_T$ assigned by the current CMS Level-1 Barrel Muon Track Finder (BMTF) trigger system.
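As an illustration only, and not the model actually used in the paper, a $p_T$ regression model of the kind the abstract describes can be sketched as a small fully connected network evaluated on per-muon detector features. The layer sizes, the eight input features (e.g. bending angles and hit positions across muon stations), and the random weights below are all hypothetical; a real trigger model would be trained on the simulated muons and then synthesized into FPGA firmware:

```python
import numpy as np

# Hypothetical sketch: a small multilayer perceptron regressing the muon
# transverse momentum (p_T) from detector-level features such as bending
# angles between muon stations. Sizes and weights are illustrative only.
rng = np.random.default_rng(seed=0)

def relu(x):
    return np.maximum(x, 0.0)

class TinyMLP:
    def __init__(self, sizes):
        # sizes, e.g. [8, 16, 16, 1]: 8 input features -> 1 p_T estimate
        self.weights = [rng.normal(scale=0.1, size=(m, n))
                        for m, n in zip(sizes[:-1], sizes[1:])]
        self.biases = [np.zeros(n) for n in sizes[1:]]

    def predict(self, x):
        # Hidden layers use ReLU; the output layer is linear (regression).
        for w, b in zip(self.weights[:-1], self.biases[:-1]):
            x = relu(x @ w + b)
        return x @ self.weights[-1] + self.biases[-1]

model = TinyMLP([8, 16, 16, 1])
features = rng.normal(size=(4, 8))   # 4 simulated muons, 8 features each
pt_estimates = model.predict(features)
print(pt_estimates.shape)            # (4, 1)
```

Networks of roughly this size are attractive for trigger applications precisely because their fixed sequence of small matrix multiplications maps naturally onto FPGA resources with deterministic, low latency.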
DOI: https://doi.org/10.22323/1.378.0005