An excellent introduction and overview of deep learning by a masterful teacher who guides, illuminates, and encourages you along the way.
Deep learning, a branch of artificial intelligence, teaches computers to learn by using neural networks, technology inspired by the human brain. Online text translation, self-driving cars, personalized product recommendations, and virtual voice assistants are just a few of the exciting modern advancements possible thanks to deep learning.
Grokking Deep Learning teaches you to build deep learning neural networks from scratch! In his engaging style, seasoned deep learning expert Andrew Trask shows you the science under the hood, so you grok for yourself every detail of training neural networks. Using only Python and its math-supporting library, NumPy, you’ll train your own neural networks to see and understand images, translate text into different languages, and even write like Shakespeare! When you’re done, you’ll be fully prepared to move on to mastering deep learning frameworks.
Andrew Trask is a PhD student at Oxford University and a research scientist at DeepMind. Previously, Andrew was a researcher and analytics product manager at Digital Reasoning, where he trained the world’s largest artificial neural network and helped guide the analytics roadmap for the Synthesys cognitive computing platform.
All concepts are clearly explained with excellent visualizations and examples.
Excels at navigating the reader through the introductory details of deep learning in a simple and intuitive way.
Your step-by-step guide to learning AI.
A complex topic simplified!
To current and future readers of Grokking Deep Learning,
After nearly three years of effort, I’ve delivered the final chapters of Grokking Deep Learning. It’s been a monumental challenge, and I’m very grateful for your patience. I’d like to tell you why I think this is a fantastic book. First, though, let me tell you why it took so long to write.
Grokking Deep Learning is just over 300 pages long. To get to those 300 pages, though, I wrote at least twice that number. Half a dozen chapters were rewritten from scratch three or four times before they were ready to publish, and along the way we added some important chapters that weren’t in the original Table of Contents.
More significantly, we arrived at two expensive decisions early on that make Grokking Deep Learning uniquely valuable: This book requires no math background beyond basic arithmetic, and it doesn’t rely on a high-level library that might hide what’s going on. In other words, anyone can read this book, and understand how deep learning really works. To accomplish this, we had to invent new ways to describe and teach the core ideas and techniques without falling back on advanced mathematics or sophisticated code that someone else wrote.
My goal in writing Grokking Deep Learning was to create the lowest possible barrier to entry to the practice of Deep Learning. You won’t just read the theory, you’ll discover it yourself. To help you get there, I had to write a lot of code, and the book had to explain it all in the right order so that the code snippets required for the working demos all made sense.
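In that spirit, here is a minimal sketch of the kind of from-scratch training code the book builds up to. This is not code from the book itself, just a generic illustration of the premise: a tiny two-layer network trained on the XOR problem with plain gradient descent, using only Python and NumPy.

```python
import numpy as np

# Minimal sketch (not from the book): a tiny two-layer network
# trained with plain gradient descent, using only NumPy.
np.random.seed(0)

# Toy dataset: the four XOR input/output pairs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

hidden = 8
w1 = np.random.randn(2, hidden) * 0.5
w2 = np.random.randn(hidden, 1) * 0.5
lr = 0.5

def relu(x):
    return np.maximum(x, 0)

for step in range(2000):
    # Forward pass.
    h = relu(X @ w1)
    pred = h @ w2
    error = pred - y                 # mean-squared-error gradient (up to a constant)

    # Backward pass: the chain rule, applied layer by layer.
    grad_w2 = h.T @ error
    grad_h = error @ w2.T
    grad_h[h <= 0] = 0               # derivative of relu
    grad_w1 = X.T @ grad_h

    # Gradient-descent update.
    w1 -= lr * grad_w1 / len(X)
    w2 -= lr * grad_w2 / len(X)

print(np.round(pred.ravel(), 2))     # predictions for the four XOR inputs
```

Everything here is visible: no framework hides the forward pass, the chain rule, or the weight update, which is exactly the kind of transparency the book is after.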
Now, let me tell you about three changes we’re especially proud of:
Instead, I wound up creating possibly the most valuable piece I've ever done on the subject of Deep Learning, and easily the most valuable chapter in the whole book: building a deep learning framework from scratch.
In the real world, you will spend 5% of your time coming up with a "cool new idea" to tackle a problem and 95% of your time wrestling with a framework (PyTorch, Keras, TensorFlow, etc.), trying to bring your idea to life. It is my great hope that the new Chapter 13 will fast-track you to becoming a power user of Deep Learning frameworks by giving you a mental model of what actually happens within them.
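To make the idea concrete, here is a generic miniature of the core mechanism inside such frameworks (this is an illustrative sketch, not the book's actual Chapter 13 code): each tensor records how it was created, so gradients can flow backward through the resulting graph automatically.

```python
import numpy as np

# Illustrative sketch of reverse-mode autograd (not the book's code):
# every Tensor remembers its parents and how to push gradients to them.
class Tensor:
    def __init__(self, data, parents=(), backward_fn=None):
        self.data = np.asarray(data, dtype=float)
        self.grad = np.zeros_like(self.data)
        self.parents = parents
        self.backward_fn = backward_fn   # propagates self.grad to parents

    def __add__(self, other):
        out = Tensor(self.data + other.data, (self, other))
        def backward_fn():
            self.grad += out.grad        # d(a+b)/da = 1
            other.grad += out.grad       # d(a+b)/db = 1
        out.backward_fn = backward_fn
        return out

    def __mul__(self, other):
        out = Tensor(self.data * other.data, (self, other))
        def backward_fn():
            self.grad += other.data * out.grad   # d(a*b)/da = b
            other.grad += self.data * out.grad   # d(a*b)/db = a
        out.backward_fn = backward_fn
        return out

    def backward(self):
        # Walk the graph in topological order, then run each node's
        # backward_fn from the output back toward the inputs.
        order, seen = [], set()
        def visit(t):
            if id(t) not in seen:
                seen.add(id(t))
                for p in t.parents:
                    visit(p)
                order.append(t)
        visit(self)
        self.grad = np.ones_like(self.data)
        for t in reversed(order):
            if t.backward_fn is not None:
                t.backward_fn()

# d(a*b + a)/da = b + 1 = 4,  d(a*b + a)/db = a = 2
a, b = Tensor(2.0), Tensor(3.0)
loss = a * b + a
loss.backward()
print(a.grad, b.grad)
```

Holding a mental model like this, even in miniature, is what turns a framework from a black box into a tool you can reason about.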
This knowledge, combined with all the theory, code, and examples you explore in this book, will make you much faster at iterating through experiments. You'll have quick successes and better job opportunities, and you'll pick up more advanced Deep Learning concepts more rapidly.
I sincerely hope you enjoy Grokking Deep Learning!
Andrew Trask
August 31, 2018