Deep Learning with Python, Second Edition
François Chollet
  • MEAP began March 2020
  • Publication in Spring 2021 (estimated)
  • ISBN 9781617296864
  • 400 pages (estimated)
  • printed in black & white
Free previous edition eBook included
An eBook copy of the previous edition of this book is included at no additional cost. It will be automatically added to your Manning Bookshelf within 24 hours of purchase.

"The first edition of Deep Learning with Python is one of the best books on the subject. The 2nd edition made it even better."

Todd Cook

The bestseller revised! Deep Learning with Python, Second Edition is a comprehensive introduction to the field of deep learning using Python and the powerful Keras library. Written by Google AI researcher François Chollet, the creator of Keras, this revised edition has been updated with new chapters, new tools, and cutting-edge techniques drawn from the latest research. You’ll build your understanding through practical examples and intuitive explanations that make the complexities of deep learning accessible and understandable.

About the Technology

Machine learning has made remarkable progress in recent years. We’ve gone from near-unusable speech recognition to near-human accuracy, and from machines that couldn’t beat a serious Go player to ones that defeat world champions. Medical imaging diagnostics, weather forecasting, and natural language question answering have suddenly become tractable problems. Behind this progress is deep learning: a combination of engineering advances, best practices, and theory that enables a wealth of previously impossible smart applications across every industry sector.

About the book

Deep Learning with Python introduces the field of deep learning using the Python language and the powerful Keras library. You’ll learn directly from the creator of Keras, François Chollet, building your understanding through intuitive explanations and practical examples. Updated from the original bestseller with over 50% new content, this second edition includes new chapters, cutting-edge innovations, and coverage of the very latest deep learning tools. You'll explore challenging concepts and practice with applications in computer vision, natural-language processing, and generative models. By the time you finish, you'll have the knowledge and hands-on skills to apply deep learning in your own projects.
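To give a taste of the book's hands-on style: chapter 2 culminates in reimplementing a training loop from scratch. The sketch below is illustrative only (it is not taken from the book) and uses plain NumPy rather than TensorFlow; it shows the kind of gradient-descent step that chapter covers, fitting a toy linear model y = 2x + 1.

```python
import numpy as np

# Toy data: noisy samples of y = 2x + 1.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(100, 1))
y = 2 * x + 1 + rng.normal(0, 0.05, size=(100, 1))

# Parameters of the linear model y_hat = x @ w + b.
w = np.zeros((1, 1))
b = np.zeros(1)
lr = 0.1  # learning rate

for step in range(200):
    y_hat = x @ w + b                   # forward pass
    err = y_hat - y
    loss = np.mean(err ** 2)            # mean squared error
    grad_w = 2 * (x.T @ err) / len(x)   # dLoss/dw
    grad_b = 2 * err.mean(axis=0)       # dLoss/db
    w -= lr * grad_w                    # gradient-descent update
    b -= lr * grad_b

print(float(w[0, 0]), float(b[0]))  # close to the true slope 2 and intercept 1
```

The book develops exactly this loop, then shows how TensorFlow's GradientTape and Keras's fit() method automate the gradient computation and update steps.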
Table of Contents

1 What is deep learning?

1.1 Artificial intelligence, machine learning, and deep learning

1.1.1 Artificial intelligence

1.1.2 Machine learning

1.1.3 Learning rules and representations from data

1.1.4 The “deep” in deep learning

1.1.5 Understanding how deep learning works, in three figures

1.1.6 What deep learning has achieved so far

1.1.7 Don’t believe the short-term hype

1.1.8 The promise of AI

1.2 Before deep learning: a brief history of machine learning

1.2.1 Probabilistic modeling

1.2.2 Early neural networks

1.2.3 Kernel methods

1.2.4 Decision trees, random forests, and gradient boosting machines

1.2.5 Back to neural networks

1.2.6 What makes deep learning different

1.2.7 The modern machine-learning landscape

1.3 Why deep learning? Why now?

1.3.1 Hardware

1.3.2 Data

1.3.3 Algorithms

1.3.4 A new wave of investment

1.3.5 The democratization of deep learning

1.3.6 Will it last?

2 The mathematical building blocks of neural networks

2.1 A first look at a neural network

2.2 Data representations for neural networks

2.2.1 Scalars (rank-0 tensors)

2.2.2 Vectors (rank-1 tensors)

2.2.3 Matrices (rank-2 tensors)

2.2.4 Rank-3 tensors and higher-rank tensors

2.2.5 Key attributes

2.2.6 Manipulating tensors in NumPy

2.2.7 The notion of data batches

2.2.8 Real-world examples of data tensors

2.2.9 Vector data

2.2.10 Timeseries data or sequence data

2.2.11 Image data

2.2.12 Video data

2.3 The gears of neural networks: tensor operations

2.3.1 Element-wise operations

2.3.2 Broadcasting

2.3.3 Tensor product

2.3.4 Tensor reshaping

2.3.5 Geometric interpretation of tensor operations

2.3.6 A geometric interpretation of deep learning

2.4 The engine of neural networks: gradient-based optimization

2.4.1 What’s a derivative?

2.4.2 Derivative of a tensor operation: the gradient

2.4.3 Stochastic gradient descent

2.4.4 Chaining derivatives: the Backpropagation algorithm

2.5 Looking back at our first example

2.5.1 Reimplementing our first example from scratch in TensorFlow

2.5.2 Running one training step

2.5.3 The full training loop

2.5.4 Evaluating the model

2.6 Chapter summary

3 Introduction to Keras and TensorFlow

3.1 What’s TensorFlow?

3.2 What’s Keras?

3.3 Keras and TensorFlow: a brief history

3.4 Setting up a deep-learning workspace

3.4.1 Jupyter notebooks: the preferred way to run deep-learning experiments

3.4.2 Using Colaboratory

3.5 First steps with TensorFlow

3.5.1 Constant tensors and Variables

3.5.2 Tensor operations: doing math in TensorFlow

3.5.3 A second look at the GradientTape API

3.5.4 An end-to-end example: a linear classifier in pure TensorFlow

3.6 Anatomy of a neural network: understanding core Keras APIs

3.6.1 Layers: the building blocks of deep learning

3.6.2 From layers to models

3.6.3 The "compile" step: configuring the learning process

3.6.4 Picking a loss function

3.6.5 Understanding the "fit" method

3.6.6 Monitoring loss & metrics on validation data

3.6.7 Inference: using a model after training

3.7 Chapter summary

4 Getting started with neural networks: classification and regression

4.1 Classifying movie reviews: a binary classification example

4.1.1 The IMDB dataset

4.1.2 Preparing the data

4.1.3 Building your model

4.1.4 Validating your approach

4.1.5 Using a trained model to generate predictions on new data

4.1.6 Further experiments

4.1.7 Wrapping up

4.2 Classifying newswires: a multiclass classification example

4.2.1 The Reuters dataset

4.2.2 Preparing the data

4.2.3 Building your model

4.2.4 Validating your approach

4.2.5 Generating predictions on new data

4.2.6 A different way to handle the labels and the loss

4.2.7 The importance of having sufficiently large intermediate layers

4.2.8 Further experiments

4.2.9 Wrapping up

4.3 Predicting house prices: a regression example

4.3.1 The Boston Housing Price dataset

4.3.2 Preparing the data

4.3.3 Building your model

4.3.4 Validating your approach using K-fold validation

4.3.5 Generating predictions on new data

4.3.6 Wrapping up

4.4 Chapter summary

5 Fundamentals of machine learning

5.1 Generalization: the goal of machine learning

5.1.1 Underfitting and overfitting

5.1.2 The nature of generalization in deep learning

5.2 Evaluating machine-learning models

5.2.1 Training, validation, and test sets

5.2.2 Beating a common-sense baseline

5.2.3 Things to keep in mind about model evaluation

5.3 Improving model fit

5.3.1 Tuning key gradient descent parameters

5.3.2 Leveraging better architecture priors

5.3.3 Increasing model capacity

5.4 Improving generalization

5.4.1 Dataset curation

5.4.2 Feature engineering

5.4.3 Using early stopping

5.4.4 Regularizing your model

5.5 Chapter summary

6 The universal workflow of machine learning

6.1 Define the task

6.1.1 Frame the problem

6.1.2 Collect a dataset

6.1.3 Understand your data

6.1.4 Choose a measure of success

6.2 Develop a model

6.2.1 Prepare the data

6.2.2 Choose an evaluation protocol

6.2.3 Beat a baseline

6.2.4 Scale up: develop a model that overfits

6.2.5 Regularize and tune your model

6.3 Deploy your model

6.3.1 Explain your work to stakeholders and set expectations

6.3.2 Ship an inference model

6.3.3 Monitor your model in the wild

6.3.4 Maintain your model

6.4 Chapter summary

7 Working with Keras

8 Introduction to deep learning for computer vision

9 Advanced computer vision

10 Deep learning for timeseries

11 Deep learning for text

12 Generative deep learning

13 Best practices for the real world

14 Deep learning in production

15 Conclusions

What's inside

  • Deep learning from first principles
  • Image classification, image segmentation, and object detection
  • Deep learning for natural language processing
  • Timeseries forecasting
  • Neural style transfer, text generation, and image generation

About the reader

Readers need intermediate Python skills. No previous experience with Keras, TensorFlow, or machine learning is required.

About the author

François Chollet works on deep learning at Google in Mountain View, CA. He is the creator of the Keras deep-learning library, as well as a contributor to the TensorFlow machine-learning framework. He also does AI research, with a focus on abstraction and reasoning. His papers have been published at major conferences in the field, including the Conference on Computer Vision and Pattern Recognition (CVPR), the Conference and Workshop on Neural Information Processing Systems (NIPS), the International Conference on Learning Representations (ICLR), and others.

Manning Early Access Program (MEAP): Read chapters as they are written, get the finished eBook as soon as it’s ready, and receive the pBook long before it's in bookstores.
print book $35.99 (list price $59.99), pBook + eBook + liveBook
includes previous edition eBook
Additional shipping charges may apply

eBook $33.59 (list price $47.99), 3 formats + liveBook
includes previous edition eBook
