Probing the nature of the universe with big data at the ATLAS experiment
Speaker: Dr Steven Schramm (CERN)
Venue: MANDEC Lecture Theatre, 3rd Floor, University Dental Hospital, Higher Cambridge Street, Manchester, M15 6FH
Abstract: The Large Hadron Collider is the most powerful particle accelerator ever built, allowing us to study the first nanoseconds of the lifetime of the universe with unprecedented precision. This accelerator has already discovered the highly anticipated Higgs boson, providing an explanation for the origin of mass, and has now turned its sights on the search for new physics and on measurements of the fundamental properties of the universe.
In order to search for new physics, or to measure the properties of rare particles, it is necessary to sift through an enormous dataset. From the up to 40 million collisions per second delivered by the Large Hadron Collider, the ATLAS experiment currently records approximately ten petabytes of new data each year. On top of this large dataset, the experiment creates advanced simulations of numerous known and predicted physical processes, resulting in even larger simulated datasets.
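As a rough illustration of the scale involved, the figures above can be combined in a back-of-envelope estimate. The trigger output rate (~1 kHz), yearly live time (~10^7 s), and per-event size (~1 MB) below are illustrative assumptions, not figures from the abstract; only the 40 million collisions per second and the ~10 PB/year total come from the text.

```python
# Back-of-envelope sketch of the ATLAS data volume.
# Assumed figures are marked; they are chosen to be plausible, not authoritative.
collisions_per_s = 40e6       # bunch-crossing rate quoted in the abstract
recorded_per_s = 1e3          # ASSUMED trigger output rate (~1 kHz)
live_seconds_per_year = 1e7   # ASSUMED ~10^7 s of data-taking per year
event_size_bytes = 1e6        # ASSUMED ~1 MB per recorded event

volume_pb = recorded_per_s * live_seconds_per_year * event_size_bytes / 1e15
fraction_kept = recorded_per_s / collisions_per_s

print(f"~{volume_pb:.0f} PB recorded per year")      # ~10 PB, matching the abstract
print(f"fraction of collisions kept: {fraction_kept:.1e}")
```

Under these assumptions only a few collisions in every hundred thousand can be kept, which is why fast, accurate event selection is central to the experiment.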
Analyzing all of this data is a massive task, and a natural place to exploit the latest advances in data science. The use of such techniques within ATLAS is growing rapidly, and machine learning has found a home in many corners of the experiment. We will examine some of the ways in which machine learning techniques are already driving the sensitivity to rare physical processes, and discuss some of the more recent developments that are expected to become critical in the years ahead.