Integration of a Heterogeneous Compute Resource in the ATLAS Workflow

Authors: Felix Bührer, Anton J. Gamel, Benoit Roland, Ulrike Schnoor, Markus Schumacher, ATLAS Collaboration, CERN

Publication date: 2019

Publisher: Universität

Number of pages: Not available

Summary

Abstract: With the ever-growing amount of data collected by the experiments at the Large Hadron Collider (LHC), the need for computing resources that can handle the analysis of this data is also rapidly increasing. This increase will be amplified further after the upgrade to the High Luminosity LHC [1]. High-Performance Computing (HPC) and other cluster computing resources provided by universities can be useful supplements to the resources dedicated to the experiment as part of the Worldwide LHC Computing Grid (WLCG) for data analysis and the production of simulated event samples. Freiburg operates a combined Tier2/Tier3 centre, the ATLAS-BFG [2]. The shared HPC cluster "NEMO" at the University of Freiburg has been made available to local ATLAS [3] users through the provisioning of virtual machines incorporating the ATLAS software environment, analogously to the bare-metal system of the local ATLAS Tier2/Tier3 centre. In addition to the provisioning of the virtual environment, the on-demand, dynamic integration of these resources into the Tier3 scheduler is described. In order to provide the external NEMO resources to the user in a transparent way, an intermediate layer connecting the two batch systems is put into place. This resource scheduler monitors requirements on the user-facing system and requests resources on the backend system accordingly.
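The decision logic of such an intermediate resource scheduler can be sketched as a simple control loop: poll the demand on the user-facing batch system, compare it to the capacity currently provided by backend virtual machines, and request or drain VMs accordingly. The following is a minimal, hypothetical illustration of that idea only; the function names, the cores-per-VM value, and the VM limit are assumptions for the sketch and do not reflect the actual implementation used in the ATLAS-BFG/NEMO setup.

```python
def plan_backend_vm_change(pending_jobs, running_vms, cores_per_vm=4, max_vms=10):
    """Decide how many backend VMs to request (positive) or drain (negative).

    pending_jobs : number of single-core jobs waiting on the user-facing system
    running_vms  : VMs currently provided by the backend (HPC) system
    cores_per_vm : job slots each VM offers (hypothetical value)
    max_vms      : quota of VMs the scheduler may request (hypothetical value)
    """
    # VMs needed to absorb the current queue, rounded up (ceiling division).
    needed_vms = -(-pending_jobs // cores_per_vm)
    # Never exceed the backend quota.
    target_vms = min(needed_vms, max_vms)
    # Positive: request this many new VMs; negative: drain this many idle VMs.
    return target_vms - running_vms


# Example decisions for a few demand situations:
print(plan_backend_vm_change(9, 1))    # 9 jobs need 3 VMs, 1 running -> request 2
print(plan_backend_vm_change(0, 3))    # empty queue -> drain all 3 VMs
print(plan_backend_vm_change(100, 0))  # demand beyond quota -> capped at 10 VMs
```

In a production setting this decision function would run periodically, with the demand figures obtained from the user-facing batch system and the VM requests submitted as jobs to the backend HPC batch system.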
