Deep Learning and the Game of Go
Max Pumperla and Kevin Ferguson
  • MEAP began November 2017
  • Publication in Summer 2018 (estimated)
  • ISBN 9781617295324
  • 325 pages (estimated)
  • printed in black & white

At the beginning of 2016, most serious Go players would have told you that a machine would never beat a Go world champion. Then Google's AlphaGo AI defeated 18-time world champion Lee Sedol, and in 2017 it beat the world's top-ranked player, Ke Jie, 3-0. A few months after that, AlphaGo Zero destroyed the version of AlphaGo that beat Ke Jie, defeating it 89 games to 11. AlphaGo was an incredible accomplishment for deep learning systems, and it's a fascinating story.

Deep Learning and the Game of Go opens up the world of deep learning and AI by teaching you to build your own Go-playing machine. You'll explore key deep learning ideas like neural networks and reinforcement learning and maybe even step up your Go game a notch or two. AI experts and Go enthusiasts Max Pumperla and Kevin Ferguson take you every step of the way as you build your Go bot and train it from eternal loser to hardened Go player.

"The book is fascinating. I have always wondered how to apply machine learning to games, especially Go."

~ Sean Lindsay

"It is about time for a new book on game AI, exploiting the most recent achievements in machine learning and AI."

~ Ursin Stauss

"The book serves as an excellent introduction to Deep Learning using the popular game of Go."

~ Jasba Simpson

Table of Contents

Part 1: AI and Go

1. Toward deep learning: a machine learning introduction

1.1. What is machine learning?

1.1.1. How does machine learning relate to AI?

1.1.2. What you can and cannot do with machine learning

1.2. Machine learning by example

1.2.1. Using machine learning in software applications

1.2.2. Supervised learning

1.2.3. Unsupervised learning

1.2.4. Reinforcement learning

1.3. Deep learning

1.4. What you will learn in this book

1.5. Summary

2. Go as a machine learning problem

2.1. Why games?

2.2. A lightning introduction to the game of Go

2.2.1. The board

2.2.2. Placing and capturing stones

2.2.3. Ending the game and counting

2.2.4. Ko

2.3. Handicaps

2.4. Where to learn more

2.5. What can we teach a machine?

2.5.1. Selecting moves in the opening

2.5.2. Searching game states

2.5.3. Reducing the number of moves to consider

2.5.4. Evaluating game states

2.6. How to measure our Go AI’s strength

2.6.1. Traditional Go ranks

2.6.2. Benchmarking our Go AI

2.7. Summary

3. Implementing our first Go bot

3.1. Representing a game of Go in Python

3.1.1. Implementing the Go board

3.1.2. Connected groups of stones in Go: Strings

3.1.3. Placing and capturing stones on a Go board

3.2. Go game state and checking for illegal moves

3.2.1. Self-capture

3.2.2. Ko

3.3. Ending a game

3.4. Your first bot: the weakest Go AI imaginable

3.5. Speeding up gameplay with Zobrist hashing

3.6. Playing against your bot

3.7. Summary

Part 2: Way to go

4. Playing games with tree search

4.1. Classifying games

4.3. Solving tic-tac-toe: a minimax example

4.4. Reducing search space with pruning

4.4.1. Reducing search depth with position evaluation

4.4.2. Reducing search width with alpha-beta pruning

4.5. Evaluating game states with the Monte Carlo tree search algorithm

4.5.1. Implementing Monte Carlo tree search in Python

4.5.2. How to select which branch to explore

4.5.3. Practical considerations for applying Monte Carlo tree search to Go

4.6. Summary

5. Getting started with neural networks

5.1. A simple use case: Classifying handwritten digits

5.1.1. The MNIST data set of handwritten digits

5.1.2. MNIST data preprocessing

5.2. The basics of neural networks

5.2.1. Logistic regression as a simple artificial neural network

5.2.2. Networks with more than one output dimension

5.3. Feed-forward networks

5.4. How good are our predictions? Loss functions and optimization

5.4.1. What is a loss function?

5.4.2. Mean-squared error

5.4.3. Finding minima in loss functions

5.4.4. Gradient descent to find minima

5.4.5. Stochastic gradient descent for loss functions

5.4.6. Propagate gradients back through our network

5.5. Training a neural network step-by-step in Python

5.5.1. Neural network layers in Python

5.5.2. Activation layers in neural networks

5.5.3. Dense layers in Python as a building block for feed-forward networks

5.5.4. Sequential neural networks with Python

5.5.5. Applying our network to handwritten digit classification

5.6. Summary

6. Designing a neural network for Go data

6.1. Encoding a Go game position for neural networks

6.2. Generating tree search games as network training data

6.3. The Keras deep learning library

6.3.1. Keras design principles

6.3.2. Installing the Keras deep learning library

6.3.3. Running a familiar first example with Keras

6.3.4. Go move prediction with feed-forward neural networks in Keras

6.4. Analyzing space with convolutional networks

6.4.1. What convolutions do intuitively

6.4.2. Building convolutional neural networks with Keras

6.4.3. Reducing space with pooling layers

6.5. Predicting Go move probabilities

6.5.1. Using the softmax activation function in the last layer

6.5.2. Cross-entropy loss for classification problems

6.6. Building deeper networks with dropout and rectified linear units

6.6.1. Dropping neurons for regularization

6.6.2. The rectified linear unit activation function

6.7. Putting it all together for a stronger Go move prediction network

6.8. Summary

7. Learning from data: a deep learning bot

8. Deploying bots in the wild

9. Enter deep reinforcement learning

10. Reinforcement learning with policy gradients

11. Reinforcement learning with value methods

12. Beyond data: a self-playing Go bot

Part 3: Bringing it all together

13. AlphaGo: Combining approaches

14. AlphaGo Zero and AlphaZero: Combining approaches


Appendix A: Mathematical foundations with Python

Appendix B: The backpropagation algorithm

B.1. A bit of notation

B.2. The backpropagation algorithm for feed-forward networks

B.3. Backpropagation for sequential neural networks

B.4. Backpropagation for neural networks in general

B.5. Computational challenges with backpropagation

Appendix C: Sample games and resources

Appendix D: Go servers and data

About the Technology

Go is an ancient strategy game. It's much simpler to learn than chess and at the same time infinitely harder to master because players have many more potential moves with each turn. (Chess has 20 possible opening moves. Go has 361!) It's nearly impossible to build a competent Go-playing machine using conventional programming techniques, let alone have it win. By applying advanced AI techniques, in particular deep learning and reinforcement learning, you can train your Go-bot in the rules and tactics of the game. Because deep learning systems get better the more they're used, you'll see it grow from perpetual loser to unbeatable strategist!
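The branching-factor comparison above can be checked with quick back-of-the-envelope arithmetic in Python (the numbers follow from the standard rules of each game; this snippet is illustrative, not from the book's code):

```python
# A Go board is a 19x19 grid of intersections, and Black may open on
# any empty intersection, so there are 19 * 19 = 361 possible first moves.
go_opening_moves = 19 * 19
print(go_opening_moves)  # 361

# In chess, each of the 8 pawns can advance one or two squares (16 moves),
# and each of the 2 knights has 2 legal first moves (4 moves): 20 in total.
chess_opening_moves = 8 * 2 + 2 * 2
print(chess_opening_moves)  # 20
```

That gap compounds with every turn, which is why brute-force search alone cannot master Go.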

About the book

Deep Learning and the Game of Go teaches you how to apply the power of deep learning to complex human-flavored reasoning tasks by building a Go-playing AI. After exposing you to the foundations of machine and deep learning, you'll use Python to build a bot and then teach it the rules of the game. Everything you need to know about Go is covered, from how the game works, to checking for illegal moves, learning from losses, and implementing winning strategies.
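To give a taste of the starting point, a bot that knows the board but picks moves at random might be sketched as follows. The class and function names here are purely illustrative assumptions, not the book's (or betago's) actual API:

```python
import random

# Stone states for each intersection on the board.
EMPTY, BLACK, WHITE = 0, 1, 2

class Board:
    """A bare-bones Go board: a grid of intersections, no capture logic."""
    def __init__(self, size=9):
        self.size = size
        self.grid = {(r, c): EMPTY for r in range(size) for c in range(size)}

    def legal_moves(self):
        # Ignores self-capture and ko for simplicity; a real bot must check both.
        return [pt for pt, stone in self.grid.items() if stone == EMPTY]

    def play(self, point, color):
        assert self.grid[point] == EMPTY, "point already occupied"
        self.grid[point] = color

def random_bot_move(board):
    """The weakest possible strategy: choose any open point uniformly at random."""
    moves = board.legal_moves()
    return random.choice(moves) if moves else None

board = Board(size=9)
move = random_bot_move(board)
board.play(move, BLACK)
```

The book's journey is essentially about replacing `random.choice` with ever-smarter move selection: tree search, supervised deep learning, and finally reinforcement learning.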

With the rules down, you'll turn your bot into a master with the help of Keras and deep reinforcement learning. You'll see, in real time, your bot become a better player as you apply new learning techniques and more complex strategies. You'll be amazed as your fledgling AI arms itself with the skills it needs to win. Before long, you'll have a Go-playing AI sure to beat you every time!

What's inside

  • Getting started with neural networks
  • Building your Go AI
  • Improving how your Go-bot plays and reacts
  • Reinforcement learning with actor-critic and value methods

About the reader

No deep learning experience required. All you need is high school-level math and basic Python skills. This book even teaches you how to play Go!

About the authors

Max Pumperla is a data scientist and engineer specializing in deep learning at an artificial intelligence company, and the co-founder of a deep learning platform. Kevin Ferguson has 18 years of experience in distributed systems and data science. He is a data scientist at Honor and has worked at companies such as Google and Meebo. Together, Max and Kevin are co-authors of betago, one of the few open source Go bots developed in Python.

Manning Early Access Program (MEAP) Read chapters as they are written, get the finished eBook as soon as it’s ready, and receive the pBook long before it's in bookstores.
MEAP combo $54.99 pBook + eBook + liveBook
MEAP eBook $43.99 pdf + ePub + kindle + liveBook

FREE domestic shipping on three or more pBooks