Volume 270 - International Symposium on Grids and Clouds (ISGC) 2016 - Infrastructure Clouds and Virtualisation
Opportunistic usage of the CMS online cluster using a cloud overlay
O. Chaze,* J.M. André, A. Andronidis, U. Behrens, J. Branson, P. Brummer, C. Contescu, S. Cittolin, B. Craigs, G.L. Darlea, C. Deldicque, Z. Demiragli, M. Dobson, N. Doualot, S. Erhan, J.R. Fulcher, D. Gigi, F. Glege, G. Gomez-Ceballos, J. Hegeman, A.G. Holzner, R. Jimenez-Estupiñán, L. Masetti, F. Meijers, E. Meschi, R. Mommsen, S. Morovic, V. O'Dell, L. Orsini, C. Paus, M. Pieri, A. Racz, H. Sakulin, C. Schwick, T. Reis, D. Simelevicius, P. Zejdl
*corresponding author
Published on: 2017 January 11
After two years of maintenance and upgrade, the Large Hadron Collider (LHC), the
largest and most powerful particle accelerator in the world, has started its second three-year run.
Around 1500 computers make up the CMS (Compact Muon Solenoid) Online cluster. This cluster
is used for Data Acquisition of the CMS experiment at CERN, selecting and sending to storage
around 20 TBytes of data per day that are then analysed by the Worldwide LHC Computing Grid
(WLCG) infrastructure, which links hundreds of data centres worldwide. Some 3000 CMS physicists
can access and process these data, and they constantly seek more computing power. The backbone
of the CMS Online cluster is composed of 16000 cores which provide as much computing power
as all CMS WLCG Tier1 sites (352K HEP-SPEC-06 score in the CMS cluster versus 300K across
CMS Tier1 sites). The computing power available in the CMS cluster can significantly speed up
the processing of data, so an effort has been made to allocate the resources of the CMS Online
cluster to the grid when they are not fully used for data acquisition. This is the case during
the maintenance periods when the LHC is non-operational, which amounted to 117 days in
2015. During 2016, the aim is to increase the availability of the CMS Online cluster for data
processing by making the cluster accessible in the periods between two LHC fills, while
the machine and beams are being prepared. This is usually the case for a few hours every day, which
would vastly increase the computing power available for data processing. Work has already been
undertaken to provide this functionality, as an OpenStack cloud layer has been deployed as a
minimal overlay that leaves the primary role of the cluster untouched. This overlay also abstracts
the different hardware and networks that the cluster is composed of. The operation of the cloud
(starting and stopping the virtual machines) is another challenge that has been overcome as the
cluster has only a few hours to spare during the aforementioned beam preparation. By improving
the virtual image deployment and integrating the OpenStack services with the core services of
the Data Acquisition on the CMS Online cluster it is now possible to start a thousand virtual
machines within 10 minutes and to turn them off within seconds. This document explains
the architectural choices made to reach a fully redundant and scalable cloud, with
minimal impact on the running cluster configuration and maximal segregation between
the services. It also presents how to cold-start 1000 virtual machines 25 times faster, using
tools commonly found in data centres.
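The batch start and stop of virtual machines described above can be sketched with the standard OpenStack CLI. This is a minimal, hypothetical illustration only: the image, flavour and network names are invented, and the script echoes the `openstack` calls it would issue rather than contacting a real cloud.

```shell
#!/bin/sh
# Hypothetical sketch of batch-booting worker VMs on an OpenStack overlay.
# The image/flavor/network names are invented; the actual CMS deployment
# differs. Calls are echoed instead of executed, so the script can be
# inspected without access to a cloud.

start_workers() {
    n=$1    # number of virtual machines to boot
    i=1
    while [ "$i" -le "$n" ]; do
        echo openstack server create \
            --image cms-worker --flavor m1.large \
            --network daq-overlay "worker-$i"
        i=$((i + 1))
    done
}

stop_workers() {
    n=$1    # number of virtual machines to tear down
    i=1
    while [ "$i" -le "$n" ]; do
        echo openstack server delete "worker-$i"
        i=$((i + 1))
    done
}

start_workers 3
```

In a real deployment the create calls would be issued in parallel against a redundant set of controllers; the figure of a thousand machines in ten minutes quoted in the abstract comes from improved image deployment and integration with the DAQ core services, not from the CLI itself.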
Open Access
Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.