Algorithms23
from Monday 24 April 2023 (09:00) to Thursday 27 April 2023 (17:00)
Monday 24 April 2023
09:30 - 10:00 (Elm Lecture Theatre)
Coffee break
10:00 - 11:00 (Elm Lecture Theatre)
Multilevel strategies for full-QCD simulations
Martin Lüscher
11:00 - 12:00 (Elm Lecture Theatre)
Variance reduction in the leading hadronic contribution to $g_\mu-2$ from a two-level QCD simulation
Tim Harris (University of Edinburgh)
I will review the signal-to-noise ratio problem for the electromagnetic current correlator, which hampers the accurate determination of the leading-order hadronic vacuum polarization contribution to the muon anomaly from first principles, among other physically interesting matrix elements. Thanks to the recent factorization of the fermionic determinant, multi-level integration can be performed in QCD, offering an exponential improvement in the precision of many correlation functions at long distances. In work with Dalla Brida, Giusti and Pepe, we investigated the variance reduction of the current correlator in a two-level QCD simulation with light quarks corresponding to a pion mass of about 270 MeV and in a large volume of about 3 fm. The best estimate is obtained by using translation averaging for the correlator, and some time will be devoted to discussing its implementation in our simulation.
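As a toy illustration of the translation averaging mentioned above (a self-contained numpy sketch on synthetic data, not the actual QCD correlator; all names and parameters here are illustrative): averaging the two-point function over all source positions, instead of fixing a single source, reduces the variance of the estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_cfg, rho = 32, 500, 0.9

def ar1(rng, T, rho):
    """Toy 'configuration': a stationary AR(1) series with C(t) = rho**t."""
    x = np.empty(T)
    x[0] = rng.normal()
    for t in range(1, T):
        x[t] = rho * x[t - 1] + np.sqrt(1.0 - rho**2) * rng.normal()
    return x

cfgs = np.array([ar1(rng, T, rho) for _ in range(n_cfg)])

t = 4
# Single-source estimator: correlate timeslice 0 with timeslice t only.
single = cfgs[:, 0] * cfgs[:, t]
# Translation-averaged estimator: average the product over all sources s.
avg = (cfgs[:, : T - t] * cfgs[:, t:]).mean(axis=1)

print(f"exact C({t}) = {rho**t:.3f}")
print(f"single source      : {single.mean():.3f} +/- {single.std(ddof=1) / np.sqrt(n_cfg):.3f}")
print(f"translation average: {avg.mean():.3f} +/- {avg.std(ddof=1) / np.sqrt(n_cfg):.3f}")
```

In the simulation itself the average runs over space-time translations of the current insertions; the toy AR(1) series merely mimics an exponentially decaying correlator C(t) = rho**t.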
12:00 - 12:30 (Elm Lecture Theatre)
Transfer matrices and temporal factorization of the Wilson fermion determinant
Urs Wenger
12:30 - 14:30 (Elm Lecture Theatre)
Lunch break
14:30 - 16:00 (Elm Lecture Theatre)
Discussion on master-field simulations & multilevel algorithms
Patrick Fritzsch
16:00 - 16:30 (Elm Lecture Theatre)
Coffee break
Tuesday 25 April 2023
09:30 - 10:00 (Elm Lecture Theatre)
Coffee break
10:00 - 11:00 (Elm Lecture Theatre)
QCD software and algorithms for the exascale
Peter Boyle
I will give an overview of the development directions of Grid on current and future US exascale computers. I will also give an overview of the USQCD SciDAC-5 algorithm project, which aims to develop multiscale algorithms that exploit these machines.
11:00 - 12:00 (Elm Lecture Theatre)
Learning Trivializing Gradient Flows
Simone Bacchio
In our recent work [arXiv:2212.08469], we presented a new approach to trivializing flows that starts from the perturbative construction of trivializing maps by Lüscher and improves on it by learning. The resulting continuous normalizing flow model can be implemented using common tools of lattice field theory and requires several orders of magnitude fewer parameters than state-of-the-art deep-learning approaches. Specifically, our model can achieve competitive performance with as few as 14 parameters, while existing deep-learning models have around 1 million parameters for SU(3).
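For context (a standard identity from Lüscher's construction referenced in the abstract, added here as background, not part of the abstract): a field transformation $\phi = \mathcal{F}(\chi)$ is trivializing when the transformed action is constant,

```latex
S[\mathcal{F}(\chi)] \;-\; \ln \det \frac{\partial \mathcal{F}(\chi)}{\partial \chi} \;=\; \text{const},
```

so the new variable $\chi$ is uniformly distributed; a learned flow only approximates $\mathcal{F}$, and the residual variation is corrected exactly by reweighting or an accept/reject step.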
12:00 - 12:30 (Elm Lecture Theatre)
Optimisation of the energy efficiency of lattice simulations
Antonin Portelli (The University of Edinburgh)
12:30 - 14:30 (Elm Lecture Theatre)
Lunch break
14:30 - 16:00 (Elm Lecture Theatre)
Discussion on exascale computing
Kate Clark
16:00 - 16:30 (Elm Lecture Theatre)
Coffee break
18:30 - 20:30
Workshop dinner
Wednesday 26 April 2023
09:30 - 10:00 (Elm Lecture Theatre)
Coffee break
10:00 - 11:00 (Elm Lecture Theatre)
Quantum Machine Learning in High Energy Physics
Sofia Vallecorsa
Theoretical and algorithmic advances, the availability of data, and computing power have opened the door to exceptional perspectives for the application of classical Machine Learning in the most diverse fields of science, business and society at large, and notably in High Energy Physics (HEP). In particular, Machine Learning is among the most promising techniques for analysing and understanding the data that the next generation of HEP detectors will produce. Machine Learning is also a promising application for near-term quantum devices, which can leverage compressed high-dimensional representations and use the stochastic nature of quantum measurements as a source of randomness. Several architectures are being investigated. Quantum implementations of Boltzmann Machines, classifiers and Auto-Encoders, among the most popular ones, are being proposed for different applications. Born machines are purely quantum models that can generate probability distributions in a unique way, inaccessible to classical computers. One-class Support Vector Machines have proven to be very powerful tools in anomaly detection problems. This talk will give an overview of the current state of the art in Machine Learning on quantum computers, with a focus on applications to HEP.
11:00 - 12:00 (Elm Lecture Theatre)
The Physics of Deep Learning
David Barrett
Throughout the last decade, deep learning has provided state-of-the-art results across a wide variety of disciplines, from image recognition and game play through to protein folding and physics. The connection to physics is particularly interesting: deep learning can be used for physics and, vice versa, we can use ideas from physics to improve our understanding of deep learning. In this talk, we will derive equations of motion for deep learning using backward error analysis, use dynamical systems theory to analyse these equations, and review recent applications of deep learning in physics and beyond.
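As a pointer to the kind of result meant by equations of motion from backward error analysis (stated here for plain gradient descent with learning rate $h$; a sketch, not the talk's derivation): the discrete update $\theta_{k+1} = \theta_k - h\,\nabla E(\theta_k)$ follows, up to $O(h^2)$ corrections, the modified gradient flow

```latex
\dot{\theta} = -\nabla \widetilde{E}(\theta),
\qquad
\widetilde{E}(\theta) = E(\theta) + \frac{h}{4}\,\bigl\|\nabla E(\theta)\bigr\|^{2},
```

so the discretization itself implicitly penalizes large gradients.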
12:00 - 14:00 (Elm Lecture Theatre)
Lunch break
14:00 - 15:00 (Elm Lecture Theatre)
Normalizing Flows for Lattice QCD
Ryan Abbott
Normalizing flows have recently emerged as a new approach to sampling in lattice field theories. In this talk I will give an overview of how normalizing flows have been developed for this application, and discuss the current status as well as future directions for these tools.
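A minimal sketch of the flow-based sampling idea (a toy zero-dimensional "theory" with action S(x) = x**4, and a fixed scale map standing in for a trained flow; everything here is illustrative): the flow produces proposals together with their exact log-density, and an independence Metropolis step corrects them to the target distribution exp(-S).

```python
import numpy as np

rng = np.random.default_rng(2)

def action(x):
    """Zero-dimensional 'phi^4' toy action S(x) = x**4."""
    return x**4

# Trivially simple 'flow': x = a*z with z ~ N(0, 1).  A trained flow would
# replace this map; the key point is that the flow yields samples together
# with their exact log-density log q(x).
a = 0.6

def sample_flow(n):
    z = rng.normal(size=n)
    x = a * z
    logq = -0.5 * z**2 - 0.5 * np.log(2.0 * np.pi) - np.log(a)
    return x, logq

# Independence Metropolis: accept x' with probability
# min(1, exp(-S(x') + S(x)) * q(x) / q(x')), giving exact samples of e^{-S}.
n = 20000
xs, logqs = sample_flow(n)
chain = np.empty(n)
x, logq = xs[0], logqs[0]
n_acc = 0
for i in range(n):
    log_ratio = (action(x) - action(xs[i])) + (logq - logqs[i])
    if np.log(rng.uniform()) < log_ratio:
        x, logq = xs[i], logqs[i]
        n_acc += 1
    chain[i] = x

print(f"acceptance = {n_acc / n:.2f}, <x^2> = {np.mean(chain**2):.3f}")
```

The quality of the flow controls only the acceptance rate, not the correctness of the sampled distribution, which is the property that makes these methods attractive for lattice field theory.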
15:00 - 16:30 (Elm Lecture Theatre)
Discussion on machine learning in lattice field theories
Gert Aarts
16:30 - 17:00 (Elm Lecture Theatre)
Coffee break
Thursday 27 April 2023
09:30 - 10:00 (Elm Lecture Theatre)
Coffee break
10:00 - 11:00 (Elm Lecture Theatre)
Improving coarsest level solves in multigrid for lattice QCD
Andreas Frommer
Aggregation-based, adaptive algebraic multigrid has established itself as the most efficient linear solver for lattice QCD discretizations such as the (clover-improved) Wilson discretization or the twisted-mass discretization. Now that we are able to simulate at the physical point, the resulting systems are severely ill-conditioned, and the multigrid approach shifts this ill-conditioning to the coarsest level of the multigrid hierarchy. As a result, by far the largest amount of time is now often spent in the coarsest-level solves, where standard multigrid implementations for lattice QCD rely on just (restarted) GMRES as a solver. In this talk we present three ways of accelerating the coarsest-level solves, namely (1) polynomial preconditioning, (2) deflation and (3) an (incomplete) LU factorization. We explain the heuristics underlying all three improvements and present numerical results on large lattices. It turns out that the twisted-mass calculations profit the most from these improvements, and that a technique called agglomeration is particularly beneficial in case (3). Agglomeration means that we reduce the number of cores used on the coarsest level in order to reduce communication.
11:00 - 11:30 (Elm Lecture Theatre)
Gauge-equivariant multigrid neural networks I
Tilo Wettig
11:30 - 12:00 (Elm Lecture Theatre)
Gauge-equivariant multigrid neural networks II
Christoph Lehner
12:00 - 12:30 (Elm Lecture Theatre)
On the geometric convergence of HMC on Riemannian manifolds
Xinhao Yu
In this presentation we apply Harris' ergodic theorem on Markov chains to prove the geometric convergence of Hamiltonian Monte Carlo: first on compact Riemannian manifolds, and secondly on a large class of non-compact Riemannian manifolds by introducing an extra Metropolis step in the radial direction. We shall use $\phi^4$ theory as an explicit example of the latter case.
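For reference, a minimal HMC sketch on a single-site $\phi^4$-type action (flat field space; the talk's Riemannian and radial-update constructions are not implemented in this illustration, and all parameters are illustrative): a leapfrog trajectory followed by a Metropolis test, the sampler whose convergence is the subject of the theorem.

```python
import numpy as np

rng = np.random.default_rng(4)

def S(x):
    """Single-site phi^4-type action, a toy stand-in for the lattice action."""
    return 0.5 * x**2 + x**4

def dS(x):
    return x + 4.0 * x**3

def hmc_step(x, eps=0.1, n_leap=10):
    """One HMC update: leapfrog trajectory plus Metropolis accept/reject."""
    p = rng.normal()
    x_new, p_new = x, p
    p_new -= 0.5 * eps * dS(x_new)            # initial half kick
    for _ in range(n_leap - 1):
        x_new += eps * p_new                  # drift
        p_new -= eps * dS(x_new)              # kick
    x_new += eps * p_new
    p_new -= 0.5 * eps * dS(x_new)            # final half kick
    dH = (S(x_new) + 0.5 * p_new**2) - (S(x) + 0.5 * p**2)
    if np.log(rng.uniform()) < -dH:           # Metropolis test gives exactness
        return x_new, True
    return x, False

n = 20000
chain = np.empty(n)
x, n_acc = 0.0, 0
for i in range(n):
    x, accepted = hmc_step(x)
    n_acc += accepted
    chain[i] = x

print(f"acceptance = {n_acc / n:.2f}, <x^2> = {np.mean(chain[1000:]**2):.3f}")
```

Geometric ergodicity concerns how fast the distribution of such a chain approaches $e^{-S}$; the quartic growth of the action at large $|x|$ is exactly the kind of non-compact tail behaviour the radial Metropolis step in the talk is designed to control.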
12:30 - 14:00 (Elm Lecture Theatre)
Lunch break