PoS - Proceedings of Science
Volume 453 - The 40th International Symposium on Lattice Field Theory (LATTICE2023) - Algorithms and Artificial Intelligence
Equivariant transformer is all you need
A. Tomiya* and Y. Nagai
Pre-published on: December 27, 2023
Abstract
Machine learning, particularly deep learning, has been accelerating computational physics, where it is used to simulate systems on a lattice. Equivariance is essential for simulating a physical system because it imposes a strong inductive bias on the probability distribution described by a machine learning model.
This reduces the risk of erroneous extrapolation that deviates from the symmetries of the data and from physical laws.
However, imposing symmetry on the model sometimes leads to a poor acceptance rate in self-learning Monte Carlo (SLMC). On the other hand, the attention mechanism used in Transformers such as GPT provides large model capacity. We introduce symmetry-equivariant attention to SLMC. To evaluate our architecture, we apply it to a spin-fermion model on a two-dimensional lattice. We find that it overcomes the poor acceptance rates of linear models, and we observe a scaling law for the acceptance rate, as seen in large language models built on Transformers.
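To make the role of the acceptance rate concrete, the following is a minimal sketch of the SLMC idea the abstract refers to: a cheap effective model drives the updates, and an exact Metropolis test against the true action corrects for the mismatch. This is an illustration only — the toy Ising-like `target_action` and linear `effective_action` below are stand-ins (the paper's target involves a fermion determinant, and its effective model is an equivariant Transformer rather than a linear model).

```python
import numpy as np

rng = np.random.default_rng(0)

def target_action(spins):
    # Toy stand-in for the expensive target action (illustration only;
    # the paper's spin-fermion model involves a fermion determinant).
    J = 1.0
    return -J * np.sum(spins * np.roll(spins, 1, axis=0)
                       + spins * np.roll(spins, 1, axis=1))

def effective_action(spins, coupling):
    # Cheap trainable effective model; in the paper this role is played
    # by an equivariant Transformer instead of a linear ansatz.
    return -coupling * np.sum(spins * np.roll(spins, 1, axis=0)
                              + spins * np.roll(spins, 1, axis=1))

def slmc_step(spins, coupling, n_inner=10):
    """One SLMC step: evolve with the cheap model, then correct exactly."""
    proposal = spins.copy()
    for _ in range(n_inner):  # inner Metropolis updates under the effective action
        i = rng.integers(proposal.shape[0])
        j = rng.integers(proposal.shape[1])
        trial = proposal.copy()
        trial[i, j] *= -1
        dS_eff = effective_action(trial, coupling) - effective_action(proposal, coupling)
        if rng.random() < np.exp(-dS_eff):
            proposal = trial
    # Exact accept/reject restores detailed balance w.r.t. the target action;
    # the closer the effective model, the higher the acceptance rate.
    dS = target_action(proposal) - target_action(spins)
    dS_eff = effective_action(proposal, coupling) - effective_action(spins, coupling)
    if rng.random() < np.exp(-(dS - dS_eff)):
        return proposal, True
    return spins, False

spins = rng.choice([-1, 1], size=(8, 8))
spins, accepted = slmc_step(spins, coupling=0.9)
```

If the effective action matched the target action exactly, the final test would accept with probability one; the acceptance rate thus measures how well the model has learned the target distribution.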
DOI: https://doi.org/10.22323/1.453.0001
Open Access
Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.