Discovering faster matrix multiplication algorithms with reinforcement learning

October 2022

Abstract

Improving the efficiency of algorithms for fundamental computations can have a widespread impact, as it can affect the overall speed of a large body of computations. Matrix multiplication is one such primitive task, occurring in many systems, from neural networks to scientific computing routines. The automatic discovery of algorithms using machine learning offers the prospect of reaching beyond human intuition and outperforming the current best human-designed algorithms. However, automating the algorithm discovery procedure is intricate, as the space of possible algorithms is enormous. Here we report a deep reinforcement learning approach based on AlphaZero for discovering efficient and provably correct algorithms for the multiplication of arbitrary matrices. Our agent, AlphaTensor, is trained to play a single-player game where the objective is finding tensor decompositions within a finite factor space. AlphaTensor discovered algorithms that outperform the state-of-the-art complexity for many matrix sizes. Particularly relevant is the case of 4 × 4 matrices in a finite field, where AlphaTensor's algorithm improves on Strassen's two-level algorithm for the first time, to our knowledge, since its discovery 50 years ago. We further showcase the flexibility of AlphaTensor through different use cases: algorithms with state-of-the-art complexity for structured matrix multiplication and improved practical efficiency by optimizing matrix multiplication for runtime on specific hardware. Our results highlight AlphaTensor's ability to accelerate the process of algorithmic discovery on a range of problems, and to optimize for different criteria.
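The correspondence the abstract relies on can be made concrete: a rank-R decomposition of the matrix multiplication tensor into factor matrices (U, V, W) yields a bilinear algorithm that multiplies matrices using R scalar multiplications. The sketch below is illustrative only (it is not AlphaTensor's output, and the NumPy usage and variable names are our own); it checks this correspondence for Strassen's classical rank-7 factors for 2 × 2 matrices.

```python
# Illustrative sketch (not from the paper): a bilinear matrix-multiplication
# algorithm is specified by factor matrices U, V, W. Each of the R rows defines
# one scalar multiplication; here R = 7 gives Strassen's classical 2x2 algorithm.
import numpy as np

# Strassen's rank-7 factors, acting on A and B flattened as [x11, x12, x21, x22].
U = np.array([[ 1, 0, 0, 1],    # m1 uses A11 + A22
              [ 0, 0, 1, 1],    # m2 uses A21 + A22
              [ 1, 0, 0, 0],    # m3 uses A11
              [ 0, 0, 0, 1],    # m4 uses A22
              [ 1, 1, 0, 0],    # m5 uses A11 + A12
              [-1, 0, 1, 0],    # m6 uses A21 - A11
              [ 0, 1, 0, -1]])  # m7 uses A12 - A22
V = np.array([[ 1, 0, 0, 1],    # m1 uses B11 + B22
              [ 1, 0, 0, 0],    # m2 uses B11
              [ 0, 1, 0, -1],   # m3 uses B12 - B22
              [-1, 0, 1, 0],    # m4 uses B21 - B11
              [ 0, 0, 0, 1],    # m5 uses B22
              [ 1, 1, 0, 0],    # m6 uses B11 + B12
              [ 0, 0, 1, 1]])   # m7 uses B21 + B22
W = np.array([[ 1, 0, 0, 1],    # m1 contributes to C11 and C22
              [ 0, 0, 1, -1],   # m2 contributes to C21 and -C22
              [ 0, 1, 0, 1],    # m3 contributes to C12 and C22
              [ 1, 0, 1, 0],    # m4 contributes to C11 and C21
              [-1, 1, 0, 0],    # m5 contributes to -C11 and C12
              [ 0, 0, 0, 1],    # m6 contributes to C22
              [ 1, 0, 0, 0]])   # m7 contributes to C11

def multiply_2x2(A, B):
    """Multiply two 2x2 matrices using only 7 scalar multiplications."""
    m = (U @ A.reshape(4)) * (V @ B.reshape(4))  # the 7 products m1..m7
    return (W.T @ m).reshape(2, 2)               # linear combinations assemble C

rng = np.random.default_rng(0)
A, B = rng.integers(-5, 5, (2, 2)), rng.integers(-5, 5, (2, 2))
assert np.array_equal(multiply_2x2(A, B), A @ B)  # agrees with the standard product
```

Finding a lower-rank decomposition of the same tensor, with factor entries restricted to a small discrete set as in the paper's game formulation, directly translates into a matrix multiplication algorithm with fewer multiplications.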
