Dataset-wide Graph Neural Networks for BSM Searches at the LHC
Abstract
We present a new application of Graph Neural Networks (GNNs) for LHC searches that aims to improve event classification by representing entire datasets as graphs, with events as nodes and edges connecting kinematically similar events. The strategy builds on our development of graph convolutions and graph attention mechanisms, applying scalable solutions for training GNN models on large graphs with robust background validation. By virtue of the search design and graph construction, the GNN draws extensive information from topological network structures such as clusters, helping to distinguish signal from background through their distinct connectivity. This work extends our previous proof of concept for dataset-wide graphs in beyond-Standard-Model (BSM) searches, which demonstrated a promising baseline of signal-background separation. With the recent extension to GNNs, the strategy yields further sensitivity improvements for a leptoquark BSM benchmark beyond a conventional Deep Neural Network (DNN) approach. We further apply the method to anomaly detection with autoencoders, exploiting the dataset-wide GNN format in an example unsupervised search and calculating an event-by-event anomaly score. The graph-based autoencoder achieves stronger signal-background separation than the baseline non-graph autoencoder in all tested cases.
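To make the dataset-wide graph construction described above concrete, the following minimal sketch connects events (nodes) to their k nearest neighbours in a kinematic feature space. It is an illustration only, not the authors' pipeline: the toy features, the choice of k, and the scikit-learn-based neighbour search are assumptions for the example.

```python
# Minimal sketch (assumed, not the authors' implementation): build a
# dataset-wide graph where each event is a node and edges connect
# kinematically similar events via a k-nearest-neighbour search.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Toy event-level kinematic features (e.g. MET, leading-jet pT, HT);
# in a real search these would come from reconstructed events.
n_events = 1000
features = rng.normal(size=(n_events, 3))

# Standardise features so no single scale dominates the distance metric.
features = (features - features.mean(axis=0)) / features.std(axis=0)

# Connect each event to its k nearest neighbours.
k = 10
nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
_, indices = nn.kneighbors(features)  # first neighbour is the event itself

# Edge list in COO format (source, target), the input expected by
# common GNN libraries such as PyTorch Geometric.
src = np.repeat(np.arange(n_events), k)
dst = indices[:, 1:].reshape(-1)
edge_index = np.stack([src, dst])  # shape (2, n_events * k)

print(edge_index.shape)
```

A GNN classifier or graph autoencoder can then be trained on this single large graph, producing a per-node (per-event) score such as the anomaly score mentioned in the abstract.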