The ATLAS experiment at CERN is constructing an upgraded system for the High-Luminosity LHC (HL-LHC), with collisions due to start in 2029. In order to deliver an order of magnitude more data than in previous LHC runs, protons will collide at a centre-of-mass energy of 14 TeV with an instantaneous luminosity of up to $7.5\times10^{34}\ \mathrm{cm}^{-2}\,\mathrm{s}^{-1}$, resulting in much higher pile-up and data rates than the current experiment was designed to handle. While this is essential to realise the physics programme, it presents a huge challenge for the detector, trigger, data acquisition and computing. The detector upgrades themselves also present new requirements and opportunities for the Trigger and Data Acquisition (TDAQ) system.
The design of the TDAQ upgrade comprises: a hardware-based low-latency real-time Trigger operating at 40 MHz; Data Acquisition, which combines custom readout with commodity hardware and networking to handle 4.6 TB/s of input data; and an Event Filter running at 1 MHz, which combines offline-like algorithms on a large commodity compute service with the potential to be augmented by commercial accelerators. Commodity servers and networks are used as far as possible, with custom ATCA boards, high-speed links and powerful FPGAs deployed in the low-latency parts of the system. Offline-style clustering and jet-finding in FPGAs, and accelerated track reconstruction, are designed to combat pile-up in the Trigger and Event Filter, respectively.
An overview of the planned Phase-II TDAQ system is provided, followed by a more detailed description of recent progress on the design, technology and construction of the system.
