Developments in Architectures and Services for using High Performance Computing in Energy Frontier Experiments
2017 February 06
2017 April 19
The integration of HPC resources into the standard computing toolkit of high-energy physics (HEP) experiments is becoming important as the needs of experiments outpace traditional resources. We describe solutions that address some of the difficulties of running data-intensive pipelines on HPC systems. Users of NERSC supercomputers are benefiting from the newly developed "Shifter" service, which enables Docker container images to be run at scale on HPC systems, and from the deployment of the new "Burst Buffer" NVRAM file system, designed to deliver extreme I/O performance: terabyte-per-second bandwidth and over 10 million I/O operations per second. These tools have enabled particle physicists from multiple experiments to routinely run their entire multi-TB CVMFS-based software stacks across tens of thousands of compute cores. In addition, an Edge Service has been developed to provide a uniform interface through which HEP job management systems can access supercomputers. It is based on the Python Django framework and is composed of two processes, one running inside the supercomputing environment and one outside. It has been used to run over 100 million core-hours of LHC experiment jobs on the Mira supercomputer at the Argonne Leadership Computing Facility.
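To make the Shifter and Burst Buffer mechanisms concrete, the sketch below shows how both are requested in a Slurm batch script on a NERSC Cray system. The image name, sizes, node counts, and payload script are illustrative placeholders, not values from this work; the `--image` directive and the `#DW jobdw` Burst Buffer request follow the documented Slurm/Shifter and Cray DataWarp syntax.

```shell
#!/bin/bash
# Hypothetical batch script: run a containerized HEP payload with a
# Burst Buffer scratch allocation (all sizes/names are example values).
#SBATCH --nodes=64
#SBATCH --time=02:00:00
#SBATCH --image=docker:example/hep-software-stack:latest   # Shifter: Docker image to run
#DW jobdw capacity=1TiB access_mode=striped type=scratch   # Burst Buffer (Cray DataWarp) request

# Launch the payload inside the Shifter container on every allocated core,
# pointing its scratch output at the job's striped Burst Buffer mount,
# which DataWarp exposes via the $DW_JOB_STRIPED environment variable.
srun -n 2048 shifter ./run_analysis.sh --scratch "$DW_JOB_STRIPED"
```

Because the Burst Buffer allocation is declared per job, the NVRAM space is provisioned before the job starts and torn down afterwards, so the I/O-intensive intermediate files never touch the slower parallel file system.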