When very-high-energy gamma rays interact high in the Earth’s atmosphere, they produce cascades of particles that induce flashes of Cherenkov light. Imaging Atmospheric Cherenkov Telescopes (IACTs) detect these flashes and convert them into shower images, which are analyzed to extract the properties of the primary gamma ray. The dominant background for IACTs consists of air-shower images produced by cosmic hadrons, with typical noise-to-signal ratios of several orders of magnitude. The standard technique for differentiating between images initiated by gamma rays and those initiated by hadrons relies on classical machine learning algorithms, such as Random Forests, that operate on a set of handcrafted parameters extracted from the images. The energy and arrival direction of the primary gamma ray are likewise inferred from those parameters. State-of-the-art deep learning techniques based on convolutional neural networks (CNNs) have the potential to improve the event reconstruction performance, since they autonomously extract features from raw images and can thus exploit the pixel-wise information that is washed out during the parametrization process.
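To make the "handcrafted parameters" concrete: the classical parametrization reduces each shower image to a few moment-based quantities (the Hillas parameters), such as the image centroid and the length and width of the light ellipse. The following is a minimal, hypothetical sketch of that idea using second moments on a 2D pixel array; it is an illustration only, not the actual analysis code of any IACT pipeline, and the function name and toy image are assumptions.

```python
import numpy as np

def hillas_parameters(image):
    """Hypothetical helper: compute simple Hillas-style moments (size,
    centre of gravity, length, width) from a 2D pixel-intensity array."""
    y, x = np.indices(image.shape)
    w = image.astype(float)
    size = w.sum()
    # First moments: the centre of gravity of the light distribution.
    cx = (w * x).sum() / size
    cy = (w * y).sum() / size
    # Second central moments form the covariance matrix of the light pool.
    dx, dy = x - cx, y - cy
    cov = np.array([
        [(w * dx * dx).sum(), (w * dx * dy).sum()],
        [(w * dx * dy).sum(), (w * dy * dy).sum()],
    ]) / size
    # The eigenvalues are the squared semi-axes of the image ellipse:
    # 'length' along the shower axis, 'width' transverse to it.
    eigvals = np.sort(np.linalg.eigvalsh(cov))
    width, length = np.sqrt(eigvals[0]), np.sqrt(eigvals[1])
    return {"size": size, "cog": (cx, cy), "width": width, "length": length}

# Toy elongated "shower image": an elliptical Gaussian blob on a 40x40 camera.
yy, xx = np.indices((40, 40))
img = np.exp(-((xx - 20) ** 2) / (2 * 6.0 ** 2)
             - ((yy - 20) ** 2) / (2 * 2.0 ** 2))
params = hillas_parameters(img)
```

Because gamma-ray images tend to be narrow ellipses while hadronic images are broader and more irregular, parameters like width and length carry most of the discrimination power used by the Random Forest approach.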
Here we present the results of applying deep learning techniques to the reconstruction of Monte Carlo simulated events from a single, next-generation IACT, the Large-Sized Telescope (LST) of the Cherenkov Telescope Array (CTA). We use CNNs to separate gamma-ray-induced events from hadronic ones and to reconstruct the properties of the former, comparing their performance to that of the standard reconstruction technique. Three independent implementations of CNN-based event reconstruction models were used in this work and produced consistent results.
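The CNN-based alternative operates directly on the raw pixel image rather than on derived parameters. As a toy illustration of that pipeline stage, the sketch below runs a single random convolutional layer, a ReLU, global average pooling, and a sigmoid to produce a gamma/hadron score; all weights, shapes, and names are assumptions for illustration, and the real models in this work are of course trained, multi-layer networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernels):
    """Valid 2D cross-correlation of a single-channel image with a bank
    of kernels; returns a (n_kernels, H', W') feature stack."""
    kh, kw = kernels.shape[1:]
    H, W = img.shape
    out = np.empty((kernels.shape[0], H - kh + 1, W - kw + 1))
    for i in range(out.shape[1]):
        for j in range(out.shape[2]):
            patch = img[i:i + kh, j:j + kw]
            out[:, i, j] = (kernels * patch).sum(axis=(1, 2))
    return out

def forward(img, kernels, w, b):
    """Toy CNN forward pass: conv -> ReLU -> global average pooling ->
    sigmoid, yielding a scalar gamma/hadron score in (0, 1)."""
    feat = np.maximum(conv2d(img, kernels), 0.0)    # ReLU activation
    pooled = feat.mean(axis=(1, 2))                 # one value per feature map
    return 1.0 / (1.0 + np.exp(-(pooled @ w + b)))  # sigmoid "gammaness"

# Untrained random weights on a random 16x16 "image", for illustration only.
kernels = rng.normal(size=(4, 3, 3))
w = rng.normal(size=4)
score = forward(rng.normal(size=(16, 16)), kernels, w, 0.0)
```

Training such a network end to end on labeled simulated showers is what lets it discover discriminating pixel-level features that the fixed parametrization discards.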