Grokking Deep Learning
Andrew W. Trask
  • MEAP began August 2016
  • Publication in Early 2018 (estimated)
  • ISBN 9781617293702
  • 325 pages (estimated)
  • printed in black & white

Artificial Intelligence is one of the most exciting technologies of the century, and Deep Learning is in many ways the "brain" behind some of the world's smartest Artificial Intelligence systems. Loosely based on the behavior of neurons in the human brain, these systems are rapidly catching up with the intelligence of their human creators: defeating the world champion Go player, achieving superhuman performance on video games, driving cars, translating languages, and sometimes even helping law enforcement fight crime. Deep Learning is a revolution that is changing every industry across the globe.

Grokking Deep Learning is the perfect place to begin your deep learning journey. Rather than just learning the "black box" API of some library or framework, you will actually understand how to build these algorithms completely from scratch. You will understand how Deep Learning is able to learn at levels greater than humans, and you will come to understand the "brain" behind state-of-the-art Artificial Intelligence. Furthermore, unlike other courses that assume advanced knowledge of Calculus and leverage complex mathematical notation, if you're a Python hacker who passed high-school algebra, you're ready to go. And at the end, you'll even build an A.I. that will learn to defeat you in a classic Atari game.

Table of Contents

Part 1: Neural Network Basics

1. Introducing Deep Learning

1.1. Welcome to Grokking Deep Learning

1.2. Why should you learn Deep Learning?

1.3. Why you should read this book!

1.4. What you need to get started

1.5. You’ll probably need some Python knowledge

1.6. Conclusion and Primer for Chapter 2

2. Fundamental Concepts

2.1. What is Deep Learning?

2.2. What is Machine Learning?

2.3. Supervised Machine Learning

2.4. Unsupervised Machine Learning

2.5. Parametric vs Non-Parametric Learning

2.6. Supervised Parametric Learning

2.7. Step 1: Predict

2.8. Step 2: Compare to Truth Pattern

2.9. Step 3: Learn the Pattern

2.10. Unsupervised Parametric Learning

2.11. Non-Parametric Learning

2.12. Conclusion

3. Introduction to Neural Prediction: Forward Propagation

3.1. Step 1: Predict

3.2. A Simple Neural Network Making a Prediction

3.3. What is a Neural Network?

3.4. What does this Neural Network do?

3.5. Making a Prediction with Multiple Inputs

3.6. Multiple Inputs - What does this Neural Network do?

3.7. Multiple Inputs - Complete Runnable Code

3.8. Making a Prediction with Multiple Outputs

3.9. Predicting with Multiple Inputs & Outputs

3.10. Multiple Inputs & Outputs - How does it work?

3.11. Predicting on Predictions

3.12. NumPy Version

3.13. A Quick Primer on NumPy

3.14. Conclusion

4. Introduction to Neural Learning: Gradient Descent

4.1. Predict, Compare, and Learn

4.2. Compare

4.3. Learn

4.4. Compare: Does our network make good predictions?

4.5. Why measure error?

4.6. What’s the Simplest Form of Neural Learning?

4.7. Hot and Cold Learning

4.8. Characteristics of Hot and Cold Learning

4.9. Calculating Both Direction and Amount from Error

4.10. One Iteration of Gradient Descent

4.11. Learning Is Just Reducing Error

4.12. Let’s Watch Several Steps of Learning

4.13. Why does this work? What really is weight_delta?

4.14. Tunnel Vision on One Concept

4.15. A Box With Rods Poking Out of It

4.16. Derivatives… Take Two

4.17. What you really need to know…

4.18. What you don’t really need to know…

4.19. How to use a derivative to learn

4.20. Look Familiar?

4.21. Breaking Gradient Descent

4.22. Visualizing the Overcorrections

4.23. Divergence

4.24. Introducing… Alpha

4.25. Alpha In Code

4.26. Memorizing

5. Learning Multiple Weights at a Time: Generalizing Gradient Descent

5.1. Gradient Descent Learning with Multiple Inputs

5.2. Gradient Descent with Multiple Inputs - Explained

5.3. Let’s Watch Several Steps of Learning

5.4. Freezing One Weight - What Does It Do?

5.5. Gradient Descent Learning with Multiple Outputs

5.6. Gradient Descent with Multiple Inputs & Outputs

5.7. What do these weights learn?

5.8. Visualizing Weight Values

5.9. Visualizing Dot Products (weighted sums)

5.10. Conclusion

6. Building Your First "Deep" Neural Network

6.1. The Street Light Problem

6.2. Preparing our Data

6.3. Matrices and the Matrix Relationship

6.4. Creating a Matrix or Two in Python

6.5. Building Our Neural Network

6.6. Learning the whole dataset!

6.7. Full / Batch / Stochastic Gradient Descent

6.8. Neural Networks Learn Correlation

6.9. Up and Down Pressure

6.10. Up and Down Pressure (cont.)

6.11. Edge Case: Overfitting

6.12. Edge Case: Conflicting Pressure

6.13. Edge Case: Conflicting Pressure (cont.)

6.14. Learning Indirect Correlation

6.15. Creating Our Own Correlation

6.16. Stacking Neural Networks - A Review

6.17. Backpropagation: Long Distance Error Attribution

6.18. Backpropagation: Why does this work?

6.19. Linear vs Non-Linear

6.20. Why The Neural Network Still Doesn’t Work

6.21. The Secret to "Sometimes Correlation"

6.22. A Quick Break

6.23. Our First "Deep" Neural Network

6.24. Backpropagation in Code

6.25. One Iteration of Backpropagation

6.26. Putting it all together

6.27. Why do deep networks matter?

7. How to Picture Neural Networks: In Your Head and on Paper

7.1. It’s Time to Simplify

7.2. Correlation Summarization

7.3. Our Previously Overcomplicated Visualization

7.4. Our Simplified Visualization

7.5. Simplifying Even Further

7.6. Let’s See This Network Predict

7.7. Visualizing Using Letters Instead of Pictures

7.8. Linking Our Variables

7.9. Everything Side-by-Side

7.10. The Importance of Visualization Tools

8. Learning Signal and Ignoring Noise

8.1. A 3-Layer Network on MNIST

8.2. Well… that was easy!

8.3. Memorization vs Generalization

8.4. Overfitting in Neural Networks

8.5. Where Overfitting Comes From

8.6. The Simplest Regularization: Early Stopping

8.7. Industry Standard Regularization: Dropout

8.8. Why Dropout Works: Ensembling Works

8.9. Dropout In Code

8.10. Dropout Evaluated on MNIST

8.11. Batch Gradient Descent

8.12. Conclusion

9. Modeling Probabilities and Non-Linearities

9.1. What is an Activation Function?

9.2. Standard Hidden Layer Activation Functions

9.3. Standard Output Layer Activation Functions

9.4. The Core Issue: Inputs Have Similarity

9.5. Softmax Computation

9.6. Activation Installation Instructions

9.7. Multiplying Delta By The Slope

9.8. Converting Output to Slope (derivative)

9.9. Upgrading our MNIST Network

Part 2: Advanced Layers and Architectures

10. Neural Networks that Understand Edges and Corners

11. Neural Network Word Math (King - Man + Woman = Queen)

12. Writing Like Shakespeare + Translating English to Spanish

13. Neural Networks that Read & Answer Questions

14. Building a Neural Network that Destroys You in Pong

15. Where to go from here

About the Technology

Artificial Intelligence is one of the most exciting technologies of the century, and Deep Learning is in many ways the "brain" behind some of the world's smartest Artificial Intelligence systems.

What's inside

  • How neural networks "learn"
  • How to build neural networks that can see and understand images
  • How to build neural networks that can translate text between languages and even write like Shakespeare
  • How to build neural networks that learn to play videogames

About the reader

Written for readers with high school-level math and intermediate programming skills. Experience with Calculus is helpful but NOT required.

About the author

Andrew Trask is a PhD student at Oxford University, funded by the Oxford-DeepMind Graduate Scholarship, where he researches Deep Learning approaches with a special emphasis on human language. Previously, Andrew was a researcher and analytics product manager at Digital Reasoning, where he trained the world's largest artificial neural network, with over 160 billion parameters, and helped guide the analytics roadmap for Synthesys, the company's cognitive computing platform, which tackles some of the most complex analysis tasks across the government intelligence, finance, and healthcare industries.

Manning Early Access Program (MEAP): Read chapters as they are written, get the finished eBook as soon as it's ready, and receive the pBook long before it's in bookstores.
  • MEAP combo: $49.99 (pBook + eBook)
  • MEAP eBook: $39.99 (PDF only)
