Tag: go

MuZero talk - ICAPS 2020

I gave a detailed talk about MuZero at ICAPS 2020, at the workshop "Bridging the Gap Between AI Planning and Reinforcement Learning".

In addition to giving an overview of the algorithm, I also went into more detail about reanalyse - the technique that allows MuZero to use its model-based search to repeatedly learn more from the same episode data.
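For readers curious how this looks in practice, here is a minimal Python sketch of the reanalyse idea. The names - `run_mcts`, the `replay_buffer` methods, the episode fields - are hypothetical placeholders chosen for illustration, not the actual MuZero implementation:

```python
def reanalyse(network, replay_buffer):
    """Continuously refresh the training targets of stored episodes.

    Because the learned model keeps improving during training, re-running
    MCTS over an old trajectory yields better policy and value targets
    than the ones computed when the episode was first played.
    """
    while True:
        episode = replay_buffer.sample_episode()
        for t, observation in enumerate(episode.observations):
            # Search again, this time with the latest network weights.
            root = run_mcts(network, observation)
            # Overwrite the stale targets with the fresher search results.
            episode.policy_targets[t] = root.visit_distribution()
            episode.value_targets[t] = root.value()
        replay_buffer.update(episode)
```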

I hope you find the talk useful! I've also uploaded my slides for easy reference.


MuZero - Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model

I'm excited to finally share some more details on what we've been working on since AlphaZero.

Recently, we made our latest paper - Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model, aka MuZero - available on arXiv:

Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess and Go, where a perfect simulator is available. However, in real-world problems the dynamics governing the environment are often complex and unknown. In this work we present the MuZero …


AlphaGo Documentary

As some of you may have noticed, the AlphaGo documentary is now available on Play Movies and Netflix!

It's a great documentary and really captures the history of AlphaGo very well - every time I watch it, it takes me right back to the excitement of those months! If you are interested in AI, Go, or just like documentaries in general, I really recommend you give it a try.


AlphaGo Zero

Usually in software, version numbers tend to go up, not down. With AlphaGo Zero, we did the opposite - by taking out handcrafted human knowledge, we ended up with both a simpler and more beautiful algorithm and a stronger Go program.

We provide a full description in our paper, Mastering the game of Go without human knowledge, which you can also read online.

At the core is a self-improvement loop based on self-play and Monte Carlo Tree Search (MCTS): we start with a randomly initialized network, then use this network to guide MCTS in the first self-play games. The network is …
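To make the shape of that loop concrete, here is a minimal Python sketch. Helpers such as `initial_position`, `run_mcts`, `sample_action` and the `replay_buffer` API are assumed names for illustration, not the actual AlphaGo Zero code:

```python
def self_play_game(network):
    """Play one game in which every move is chosen by network-guided MCTS."""
    history = []
    state = initial_position()
    while not state.is_terminal():
        # The search policy (visit counts at the root) is stronger than the
        # raw network policy - this gap is the source of improvement.
        root = run_mcts(network, state)
        search_policy = root.visit_distribution()
        history.append((state, search_policy))
        state = state.play(sample_action(search_policy))
    # The final game outcome becomes the value target for every position.
    return history, state.result()

def training_loop(network, replay_buffer):
    while True:
        history, outcome = self_play_game(network)
        replay_buffer.save(history, outcome)
        states, policies, outcomes = replay_buffer.sample_batch()
        # Training the network to predict the search policy and the game
        # outcome closes the loop: a better network yields a stronger search.
        network.update(states, policy_targets=policies, value_targets=outcomes)
```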


AlphaGo in China

You might have heard about our recent games with AlphaGo in China, at the Future of Go Summit. Not only did we play the legendary Ke Jie, but there were also two new and exciting formats: Team Go and Pair Go.

This match was also very exciting on the technical side because we had improved AlphaGo to the point where we ran it on a single machine in the Google Cloud - that's one tenth of the computational power compared to the distributed version we used in the last match!

Personally, I also really enjoyed the Pair Go …


AlphaGo - Lee Se-dol

After 5 long and exciting games, AlphaGo finally managed to win 4:1 against the legendary Lee Se-dol - the first time in history a computer program defeated a 9 dan professional player in an even match. And not just any 9 dan player, but probably the best player of the decade. AlphaGo was even awarded an honorary rank of 9 dan professional itself!

Obviously we are all extremely proud of this achievement; you can find out more about the details in our Nature paper. Most importantly, we still used roughly the same amount of hardware! This was a true …


Mastering the Game of Go with Deep Neural Networks and Tree Search

Today we published our paper on beating the human state of the art in Go, the only major board game where humans (or at least top professionals) could still beat computers. No more. Our program AlphaGo achieved a 99% winning rate against the strongest existing Go programs, and defeated the human European champion by 5 games to 0.

(That's me playing at 0:10)

The first major breakthrough in computer Go - after remaining at weak amateur level for decades - came with the advent of Monte Carlo Tree Search (MCTS) around 2007, massively improving playing strength. Still, Go programs …
