Tag: planning

Discovering Matrix Multiplication Algorithms with AlphaTensor

Matrix multiplication is at the foundation of modern machine learning - whether transformers or convolutional networks, diffusion models or GANs, they all boil down to matrix multiplications, executed efficiently on GPUs and TPUs. So far, the best known algorithms have been discovered manually by humans, often optimized for specific use cases.

The most famous is probably the Strassen algorithm, which multiplies two 2x2 matrices using only 7 instead of the naive 8 multiplications:

[Illustration of the Strassen matrix multiplication algorithm]

Through clever additions and subtractions of the individual elements of the a and b matrices, this algorithm is ...
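
For the 2x2 base case, the construction is compact enough to write out in full. A minimal numpy sketch, following the standard presentation of Strassen's algorithm (applied recursively to matrix blocks, it gives the well-known O(n^2.807) bound):

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 multiplications instead of 8."""
    a11, a12, a21, a22 = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    b11, b12, b21, b22 = B[0, 0], B[0, 1], B[1, 0], B[1, 1]

    # The 7 products, each built from sums/differences of matrix entries.
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)

    # Recombine into the result using only additions and subtractions.
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])

A, B = np.random.rand(2, 2), np.random.rand(2, 2)
assert np.allclose(strassen_2x2(A, B), A @ B)
```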


Planning in Stochastic Environments with a Learned Model

After extending to arbitrary action spaces and offline RL, we recently published our next step in generalizing MuZero: Stochastic MuZero, which learns and plans with a stochastic model of the environment - one that captures not just a single deterministic outcome of our actions, but a full distribution of possible outcomes.

Previous approaches such as AlphaZero and MuZero have achieved very good results in fully observable, deterministic environments, but this type of environment is insufficiently general to model many problems and real-world tasks. Beyond the stochasticity inherent in many domains (think roll ...
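
The excerpt is cut off here, but the factorization at the heart of the approach is easy to sketch: each model step splits into a deterministic "afterstate" transition caused by our action, followed by a distribution over discrete chance outcomes. The toy Python below illustrates this split; the functions are placeholder stand-ins for the learned networks, not the actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def afterstate_dynamics(state, action):
    """Deterministic part: the world immediately after our action."""
    return np.tanh(state + action)               # placeholder for a network

def chance_outcome_probs(afterstate, n_outcomes=4):
    """Distribution over a fixed set of discrete chance outcomes."""
    logits = afterstate[:n_outcomes]             # placeholder for a network
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def chance_dynamics(afterstate, outcome):
    """Next state given which chance outcome actually occurred."""
    return afterstate + outcome                  # placeholder for a network

def model_step(state, action):
    afterstate = afterstate_dynamics(state, action)
    probs = chance_outcome_probs(afterstate)
    outcome = rng.choice(len(probs), p=probs)    # a chance node in the search
    return chance_dynamics(afterstate, outcome)  # tree branches over outcomes

state = rng.normal(size=4)
next_state = model_step(state, action=0.5)
```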


Online and Offline Reinforcement Learning by Planning with a Learned Model

After extending to arbitrary action spaces, our next step in generalizing MuZero was to work on data efficiency, extending it all the way to the offline RL case. We've now published this work as MuZero Unplugged at NeurIPS; below I give a brief summary of the main ideas.

Environment interactions are often expensive or raise safety concerns, while existing datasets frequently already demonstrate good behaviour. We want to learn efficiently from any such data source, without being restricted by off-policy issues or limited to the performance of the policy ...
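
As a taste of the main idea (Reanalyse), here is a toy sketch: stored trajectories are re-processed with the latest network to produce fresh policy and value targets, so the same algorithm applies whether the data comes from live interaction or a fixed dataset. All names below are illustrative placeholders, not the paper's actual code:

```python
def run_search(model, observation):
    """Stand-in for MCTS with the current model: returns (policy, value) targets."""
    policy = model["policy"](observation)
    value = model["value"](observation)
    return policy, value

def reanalyse(trajectory, model):
    """Recompute training targets for an old trajectory with the current model.

    Because the targets come from search with the latest network, rather than
    from whatever policy originally generated the data, off-policy data is no
    longer a problem."""
    return [run_search(model, obs) for obs in trajectory]

# Toy usage: a fixed offline dataset of observation trajectories.
model = {"policy": lambda obs: [0.7, 0.3], "value": lambda obs: 0.5 * obs}
dataset = [[1.0, 2.0, 3.0], [4.0, 5.0]]
targets = [reanalyse(trajectory, model) for trajectory in dataset]
```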


Mastering Atari Games with Limited Data

Another interesting paper based on MuZero was published at NeurIPS 2021: Mastering Atari Games with Limited Data, aka EfficientZero. This paper by Weirui Ye, Shaohuai Liu, Thanard Kurutach, Pieter Abbeel and Yang Gao focuses on applying MuZero to very low-data tasks, such as Atari 100k (only two hours of gameplay!) or DMControl 100k.

To tackle these tasks, the authors propose three main techniques:

First, they introduce a Self-Supervised Consistency Loss to ensure that the embeddings produced by MuZero's dynamics function are consistent with the embeddings ...
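
The gist of this loss is SimSiam-style self-supervision: the latent state predicted by the dynamics function should match the encoding of the observation that actually followed. A toy numpy sketch with placeholder linear networks (the paper's projection and prediction heads are omitted here):

```python
import numpy as np

def cosine_similarity(u, v, eps=1e-8):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

rng = np.random.default_rng(0)
# Placeholder linear "networks" for the representation and dynamics functions.
W_repr = rng.normal(size=(8, 8))
W_dyn = rng.normal(size=(8, 8))

obs_t, obs_t1 = rng.normal(size=8), rng.normal(size=8)

s_t = W_repr @ obs_t        # latent state from the current observation
s_hat_t1 = W_dyn @ s_t      # next latent predicted by the dynamics function
s_t1 = W_repr @ obs_t1      # target: encoding of the observation that followed
                            # (treated as a constant, i.e. stop-gradient)

# Consistency loss: pull the predicted latent towards the observed one.
loss = -cosine_similarity(s_hat_t1, s_t1)
```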


Learning and Planning in Complex Action Spaces

We've been working on making MuZero as easy to use as possible, allowing it to be applied to any RL problem of interest. First and foremost, this required us to make some algorithmic extensions.

The original MuZero algorithm achieved good results in discrete-action environments with a small action space - classical board games and Atari, with up to a few thousand different actions. However, many interesting problems have action spaces that cannot be easily enumerated, or are not discrete at all - just think of controlling a robot arm with many degrees of freedom, smoothly ...
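
The excerpt breaks off here, but the core idea of the paper (Sampled MuZero) can be sketched: instead of enumerating the full action space, sample a small set of candidate actions from the policy network and let the search branch only over those samples. A toy sketch with placeholder functions standing in for the learned networks and the search:

```python
import numpy as np

rng = np.random.default_rng(0)

def policy_sample(state, k=8):
    """Stand-in for the policy network: propose k candidate continuous actions."""
    mean = np.tanh(state)                        # placeholder network output
    return mean + 0.1 * rng.normal(size=(k, state.shape[0]))

def search_value(state, action):
    """Stand-in for the value estimate obtained by searching this candidate."""
    return -np.linalg.norm(action - np.tanh(state))

state = rng.normal(size=3)
candidates = policy_sample(state)                # the tree branches only over these
best_action = max(candidates, key=lambda a: search_value(state, a))
```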
