Machine Learning for Real-Time Processing of ATLAS Liquid Argon Calorimeter Signals with FPGAs
Abstract
The Large Hadron Collider (LHC) will increase its instantaneous luminosity by a factor of 5-7 for the High Luminosity LHC (HL-LHC) upgrade. At the HL-LHC, the number of proton–proton (p-p) collisions per bunch crossing (pileup) will rise significantly, imposing stringent requirements on detector electronics and real-time data processing. The ATLAS Liquid Argon (LAr) calorimeter, which measures the energy of electrons and photons, will be upgraded to prepare it for the high rates expected at the HL-LHC. The full LAr readout electronics chain will be replaced; in particular, a new off-detector board (LASP) will be installed to compute the energy deposited in the detector. The LASP board is equipped with state-of-the-art FPGAs offering greater processing power and memory, which enables the deployment of more advanced algorithms to compute the energy and replace the optimal filtering (OF) algorithm. Four neural network architectures are presented, and their improvements in energy resolution compared to the legacy OF algorithm are discussed. A Bayesian optimisation of the neural network hyperparameters is used to ensure the best performance while limiting the network size for FPGA implementation. Moreover, Deep Evidential Regression (DER) is used to compute the uncertainty on the network prediction without a significant increase in the network size.
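As a minimal illustration of the Deep Evidential Regression technique mentioned above (not the authors' implementation), the sketch below follows the formulation of Amini et al. (2020): the network predicts four Normal-Inverse-Gamma parameters per energy value, and the loss is the corresponding negative log-likelihood plus an evidence regulariser. All function and variable names are illustrative assumptions.

```python
# Minimal sketch of a Deep Evidential Regression (DER) loss and the derived
# uncertainties, assuming the Normal-Inverse-Gamma parameterisation of
# Amini et al. (2020). Names and the regulariser weight `lam` are illustrative.
import numpy as np
from scipy.special import gammaln


def der_loss(y, gamma, nu, alpha, beta, lam=0.01):
    """Negative log-likelihood of the Normal-Inverse-Gamma evidential prior
    plus an evidence regulariser weighted by `lam`."""
    omega = 2.0 * beta * (1.0 + nu)
    nll = (0.5 * np.log(np.pi / nu)
           - alpha * np.log(omega)
           + (alpha + 0.5) * np.log(nu * (y - gamma) ** 2 + omega)
           + gammaln(alpha) - gammaln(alpha + 0.5))
    # Penalise high evidence (large nu, alpha) on poorly predicted samples.
    reg = np.abs(y - gamma) * (2.0 * nu + alpha)
    return np.mean(nll + lam * reg)


def der_uncertainties(nu, alpha, beta):
    """Aleatoric and epistemic variances implied by the predicted parameters."""
    aleatoric = beta / (alpha - 1.0)
    epistemic = beta / (nu * (alpha - 1.0))
    return aleatoric, epistemic
```

Because the uncertainty is obtained analytically from the four predicted parameters, no sampling or ensembling is needed at inference time, which is consistent with the abstract's point that the network size does not grow significantly.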