PoS - Proceedings of Science
Volume 390 - 40th International Conference on High Energy Physics (ICHEP2020) - Parallel: Computing and Data Handling
Resource provisioning and workload scheduling of CMS Offline Computing
A. Perez-Calero Yzquierdo* on behalf of the CMS collaboration
*corresponding author
Pre-published on: February 17, 2021
Abstract
The CMS experiment requires vast amounts of computational capacity in order to generate, process, and analyze the data coming from proton-proton collisions at the Large Hadron Collider, as well as Monte Carlo simulations. CMS computing needs have so far been satisfied mostly by the supporting Worldwide LHC Computing Grid (WLCG), a joint collaboration of more than a hundred computing centers distributed around the world. However, as CMS faces the challenges of Run 3 and the High-Luminosity LHC (HL-LHC), with increasing luminosity and event complexity, CPU demands are estimated to grow substantially. In these future scenarios, additional contributions from more diverse types of resources, such as Cloud and High-Performance Computing (HPC) clusters, will be required to complement the limited growth of WLCG capacity. This paper describes a number of strategies being evaluated for accessing and using WLCG and non-WLCG processing capacity as part of a combined infrastructure: successfully exploiting an increasingly heterogeneous pool of resources, efficiently scheduling computing workloads according to their requirements and priorities, and delivering analysis results to the collaboration in a timely manner.
Open Access
Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.