Volume 358 - 36th International Cosmic Ray Conference (ICRC2019) - GRI - Gamma Ray Indirect
GammaLearn: A Deep Learning Framework for IACT Data
M. Jacquemont*, T. Vuillaume, A. Benoit, G. Maurin, P. Lambert, G. Lamanna and A. Brill
Pre-published on: July 22, 2019
Published on: July 02, 2021
Abstract
Imaging atmospheric Cherenkov telescope (IACT) data require substantial analysis to reconstruct events and obtain a photon list. The state-of-the-art reconstruction consists of several steps, including image analysis, feature extraction and machine learning. Since the 2012 ImageNet breakthrough, deep learning advances have brought dramatic improvements in data analysis across a variety of fields. Convolutional neural networks look particularly suited to the task of analysing IACT camera images for event reconstruction, as they provide a way to reconstruct the photon list directly from raw images, skipping the pre-processing steps. Moreover, although they demand substantial computing resources to be trained and optimised, neural networks perform very well at inference time, making them viable for the real-time analysis of the future generation of IACTs. Here we present GammaLearn, a Python framework providing the tools and environment to easily train neural networks on IACT data. Relying on PyTorch, it allows the use of indexed convolutions for the low-level operations on images with the non-Cartesian pixel lattices predominant in IACT cameras, and it offers a simple configuration-file-based workflow producing the trained model and training estimators as well as higher-level results. The proposed framework is modular and straightforward to customise by end users. It has been tested and validated on the analysis of Cherenkov Telescope Array simulated data.
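The indexed convolution mentioned in the abstract replaces the implicit neighbourhood of a square pixel grid with an explicit, precomputed index matrix, so the same gather-and-multiply operation works on the hexagonal lattices of IACT cameras. The following is a minimal PyTorch sketch of that idea; the class name, the neighbour-matrix convention and the zero-padding trick are illustrative assumptions, not the actual GammaLearn/IndexedConv API.

```python
import torch
import torch.nn as nn

class IndexedConv(nn.Module):
    """Indexed convolution over a flattened, non-Cartesian pixel lattice.

    `neighbours` is a LongTensor of shape (n_pixels, kernel_size): for each
    output pixel, the flat indices of the pixels under the kernel footprint.
    Missing neighbours (camera edge) point to the extra zero-valued pixel
    appended at index n_pixels. Hypothetical sketch, not the real API.
    """

    def __init__(self, in_channels, out_channels, neighbours):
        super().__init__()
        self.register_buffer("neighbours", neighbours)
        kernel_size = neighbours.shape[1]
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels * kernel_size) * 0.01
        )
        self.bias = nn.Parameter(torch.zeros(out_channels))

    def forward(self, x):
        # x: (batch, in_channels, n_pixels)
        b, c, n = x.shape
        # Append one zero-valued pixel so out-of-camera neighbours read 0.
        x = torch.cat([x, x.new_zeros(b, c, 1)], dim=2)
        # Gather neighbour columns: (batch, in_channels, n_pixels, kernel)
        cols = x[:, :, self.neighbours]
        # Rearrange to (batch, n_pixels, in_channels * kernel) and apply the
        # shared kernel weights as a single matrix multiplication.
        cols = cols.permute(0, 2, 1, 3).reshape(b, n, -1)
        out = cols @ self.weight.t() + self.bias
        return out.permute(0, 2, 1)  # (batch, out_channels, n_pixels)

# Toy usage: 7 hexagonal pixels (centre + 6 neighbours), 7-point kernel.
neighbours = torch.tensor([[0, 1, 2, 3, 4, 5, 6]] * 7)  # dummy footprint
conv = IndexedConv(in_channels=2, out_channels=8, neighbours=neighbours)
out = conv(torch.randn(4, 2, 7))  # -> shape (4, 8, 7)
```

Because the neighbour matrix is an input rather than an assumption baked into the operation, the same layer covers square, hexagonal or arbitrary camera geometries.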
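The configuration-file-based workflow can likewise be pictured with a short, self-contained sketch: one file declares the model, the training hyper-parameters and the output paths, and a run produces the trained model plus its training estimators. All keys, paths and the toy model below are assumptions for illustration only and do not reflect GammaLearn's actual configuration schema.

```python
import yaml
import torch
import torch.nn as nn

# Illustrative configuration (in practice, read from a file on disk).
config_text = """
model: {hidden: 64, out: 3}
training: {epochs: 2, lr: 1.0e-3, batch_size: 32}
outputs: {model_path: model.pt, metrics_path: metrics.yml}
"""
cfg = yaml.safe_load(config_text)
batch_size = cfg["training"]["batch_size"]

# Toy stand-in for IACT camera data: 1855 pixels per image (an LST-like
# camera), three regression targets (e.g. energy and direction).
x = torch.randn(256, 1855)
y = torch.randn(256, cfg["model"]["out"])

model = nn.Sequential(
    nn.Linear(1855, cfg["model"]["hidden"]),
    nn.ReLU(),
    nn.Linear(cfg["model"]["hidden"], cfg["model"]["out"]),
)
opt = torch.optim.Adam(model.parameters(), lr=cfg["training"]["lr"])
loss_fn = nn.MSELoss()

metrics = []
for epoch in range(cfg["training"]["epochs"]):
    for i in range(0, len(x), batch_size):
        xb, yb = x[i:i + batch_size], y[i:i + batch_size]
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
    metrics.append({"epoch": epoch, "loss": float(loss)})

# The workflow's advertised outputs: the trained model and its training
# estimators, written to the paths named in the configuration file.
torch.save(model.state_dict(), cfg["outputs"]["model_path"])
with open(cfg["outputs"]["metrics_path"], "w") as f:
    yaml.safe_dump(metrics, f)
```

The point of the pattern is that experiments become reproducible artefacts: changing the network, the optimiser or the outputs means editing the configuration file, not the code.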
DOI: https://doi.org/10.22323/1.358.0705
Open Access
Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.