Deep Learning with PyTorch
Eli Stevens, Luca Antiga, and Thomas Viehmann
Foreword by Soumith Chintala
  • July 2020
  • ISBN 9781617295263
  • 520 pages
  • printed in black & white
ePub + Kindle available Jul 17, 2020

With this publication, we finally have a definitive treatise on PyTorch. It covers the basics and abstractions in great detail.

From the Foreword by Soumith Chintala, Cocreator of PyTorch
Every other day we hear about new ways to put deep learning to good use: improved medical imaging, accurate credit card fraud detection, long-range weather forecasting, and more. PyTorch puts these superpowers in your hands, providing a comfortable Python experience that gets you started quickly and then grows with you as you—and your deep learning skills—become more sophisticated. Deep Learning with PyTorch will make that journey engaging and fun.

About the Technology

Although many deep learning tools use Python, the PyTorch library is truly Pythonic. Instantly familiar to anyone who knows PyData tools like NumPy and scikit-learn, PyTorch simplifies deep learning without sacrificing advanced features. It minimizes cognitive overhead so you can focus on what matters most: building and training the latest and greatest deep learning models and making a dent in the world. It's excellent for building quick models, it scales smoothly from laptop to enterprise, and it's a snap to extend and to pair with other Python tooling. Because companies like Apple, Facebook, and JPMorgan Chase rely on PyTorch, and because it has been adopted by first-class players like FAIR, OpenAI, FastAI, and Purdue, it's a great skill to have as you expand your career options.
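To give a taste of that NumPy familiarity, here is a minimal sketch (our own illustration, not one of the book's listings) showing that tensors behave much like ndarrays and, on the CPU, can share memory with NumPy arrays:

    import numpy as np
    import torch

    a = np.ones((2, 3), dtype=np.float32)
    t = torch.from_numpy(a)     # shares memory with the NumPy array
    t *= 2                      # in-place ops mirror NumPy semantics
    print(a)                    # the NumPy array sees the change
    print(t.mean(), t.shape)    # familiar reductions and attributes

Chapter 3 covers tensors, their storage, and NumPy interoperability in depth.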

About the book

Deep Learning with PyTorch teaches you to create neural networks and deep learning systems with PyTorch. This practical book quickly gets you to work building a real-world example from scratch: a tumor image classifier. Along the way, it covers best practices for the entire DL pipeline, including the PyTorch Tensor API, loading data in Python, monitoring training, and visualizing results. After covering the basics, the book takes you on a journey through larger projects. The centerpiece of the book is a neural network designed for cancer detection. You'll discover ways to train networks with limited inputs and start processing data to get some results. You'll sift through the unreliable initial results and focus on how to diagnose and fix the problems in your neural network. Finally, you'll look at ways to improve your results by training with augmented data, improving the model architecture, and performing other fine-tuning.
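As a flavor of the training loops the book builds up, here is a generic, hedged sketch (the model and data below are stand-ins, not the book's tumor-classifier code):

    import torch
    from torch import nn

    model = nn.Sequential(nn.Linear(16, 8), nn.Tanh(), nn.Linear(8, 2))
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    inputs = torch.randn(100, 16)           # placeholder features
    targets = torch.randint(0, 2, (100,))   # placeholder labels

    for epoch in range(5):
        optimizer.zero_grad()               # clear old gradients
        loss = loss_fn(model(inputs), targets)
        loss.backward()                     # backpropagate
        optimizer.step()                    # update the parameters
        print(epoch, loss.item())

Chapters 5 and 6 derive this pattern from first principles, and part 2 scales it up to real CT data.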
Table of Contents

Part 1: Core PyTorch

1 Introducing Deep Learning and the PyTorch Library

1.1 What is PyTorch?

1.2 What is this book?

1.3 Why PyTorch

1.3.1 The Deep Learning Revolution

1.3.2 Immediate vs. deferred execution

1.3.3 The deep learning competitive landscape

1.4 PyTorch has the batteries included

1.4.1 Hardware for deep learning

1.4.2 Using Jupyter notebooks

1.5 Conclusion

1.6 Exercises

1.7 Summary

2 Pre-Trained Networks

2.1 A pre-trained network that recognizes the subject of an image

2.1.1 Obtaining a pre-trained network for image recognition

2.1.2 AlexNet

2.1.3 ResNet

2.1.4 Ready, set, almost run

2.1.5 Run!

2.2 A pre-trained model that fakes it until it makes it

2.2.1 The GAN game

2.2.2 CycleGAN

2.2.3 A network that turns horses into zebras

2.3 A pre-trained network that describes scenes

2.3.1 NeuralTalk2

2.4 Torch Hub

2.5 Conclusion

2.6 Exercises

2.7 Summary

3 It Starts with a Tensor

3.1 Tensors are multi-dimensional arrays

3.1.1 From Python lists to PyTorch tensors

3.1.2 Constructing our first tensors

3.1.3 The essence of tensors

3.2 Indexing Tensors

3.3 Named Tensors

3.4 Tensor element types

3.4.1 Specifying the numeric type with dtype

3.4.2 A dtype for every occasion

3.4.3 Managing a tensor’s dtype attribute

3.5 The tensor API

3.6 Tensors — scenic views of storage

3.6.1 Indexing into storage

3.6.2 Modifying Stored Values — In-place Operations

3.7 Tensor metadata: size, offset, stride

3.7.1 Views over another tensor’s storage

3.7.2 Transposing without copying

3.7.3 Transposing in higher dimensions

3.7.4 Contiguous tensors

3.8 NumPy interoperability

3.9 Moving tensors to the GPU

3.9.1 Managing a tensor’s device attribute

3.10 Generalized Tensors are Tensors, too

3.11 Serializing tensors

3.11.1 Serializing to HDF5 with h5py

3.12 Conclusion

3.13 Exercises

3.14 Summary

4 Real-World Data Representation Using Tensors

4.1 Images

4.2 Volumetric Data

4.3 Tabular Data

4.4 Time Series

4.5 Text

4.5.1 Text embeddings

4.5.2 Text embeddings as a blueprint

4.6 Conclusion

4.7 Exercises

4.8 Summary

5 The Mechanics of Learning

5.1 Learning is just parameter estimation

5.1.1 A Hot Problem

5.1.2 Choosing a linear model as a first try

5.1.3 Less loss is what we want

5.1.4 From Problem to PyTorch

5.1.5 Down Along the Gradient

5.1.6 Getting Analytical

5.1.7 The Training Loop

5.2 PyTorch’s Autograd: Back-propagate all things

5.2.1 Optimizers à la Carte

5.2.2 Training, Validation, and Overfitting

5.2.3 Autograd Nits and Switching it Off

5.3 Conclusion

5.4 Exercises

5.5 Summary

6 Using A Neural Network To Fit the Data

6.1 Artificial Neurons

6.1.1 All We Need is Activation

6.1.2 What learning means for a neural network

6.2 The PyTorch nn module

6.2.1 Finally a Neural Network

6.3 Subclassing nn.Module

6.3.1 The Functional API

6.4 Conclusion

6.5 Exercises

6.6 Summary

7 Telling Birds from Airplanes: Learning from Images

7.1 A dataset of tiny images

7.1.1 Downloading CIFAR10

7.1.2 The Dataset class

7.1.3 Dataset transforms

7.1.4 Normalizing data

7.2 Distinguishing birds from airplanes

7.2.1 Building the dataset

7.2.2 A fully connected classifier

7.2.3 A loss for classifying

7.2.4 Training the classifier

7.2.5 The limits of going fully connected

7.3 Conclusion

7.4 Exercises

7.5 Summary

8 Using Convolutions To Generalize

8.1 The case for convolutions

8.2 Convolutions in action

8.2.1 Looking further with depth and pooling

8.3 Subclassing nn.Module

8.3.1 The Functional API

8.4 Training our Convnet

8.5 Model Design

8.5.1 Width

8.5.2 Depth

8.5.3 Building very deep models in PyTorch

8.5.4 Now it’s already outdated

8.6 Conclusion

8.7 Exercises

8.8 Summary

Part 2: Learning from Images in the Real World: Early Detection of Lung Cancer

9 Using PyTorch To Fight Cancer

9.1 What is a CT scan, exactly?

9.2 The project: an end-to-end malignancy detector for lung cancer

9.2.1 Why can’t we just throw data at a neural network until it works?

9.2.2 What is a nodule?

9.2.3 Our data source: the LUNA Grand Challenge

9.2.4 How to download the LUNA data

9.3 Conclusion

9.4 Summary

10 Ready, Dataset, Go!

10.1 Parsing LUNA’s annotation data

10.2 Loading individual CT scans

10.2.1 Hounsfield Units

10.3 Locating a nodule using the patient coordinate system

10.3.1 Extracting a nodule from a CT scan

10.4 A straightforward Dataset implementation

10.4.1 Caching nodule arrays with the getCtRawNodule function

10.4.2 Constructing our dataset in LunaDataset.__init__

10.4.3 A Training / Validation Split

10.4.4 Rendering the data

10.5 Conclusion

10.6 Exercises

10.7 Summary

11 Training A Classification Model To Detect Suspected Tumors

11.1 The main entrypoint for our application

11.2 Pre-training setup and initialization

11.2.1 Initializing the model and optimizer

11.2.2 Care and feeding of DataLoaders

11.3 Our first-pass neural network design

11.3.1 The Core Convolutions

11.3.2 The Full Model

11.4 Training and validating the model

11.4.1 Deleting the loss variable

11.4.2 The computeBatchLoss function

11.4.3 The validation loop is similar

11.5 Outputting performance metrics

11.5.1 The logMetrics function

11.6 Running the training script

11.6.1 Needed data for training

11.6.2 Interlude: the enumerateWithEstimate function

11.7 Evaluating the model: Getting 99.7% correct means we’re done, right?

11.8 Graphing training metrics with TensorBoard

11.8.1 Running TensorBoard

11.8.2 Adding TensorBoard support to our metrics logging function

11.9 Why is the model not learning to detect malignant tumors?

11.10 Conclusion

11.11 Exercises

11.12 Summary

12 Monitoring Metrics: Precision, Recall, and Pretty Pictures

12.1 Good dogs versus bad guys: false positives and false negatives

12.2 Graphing the positives and negatives

12.2.1 Recall

12.2.2 Precision

12.2.3 Implementing precision and recall in logMetrics

12.2.4 Our ultimate performance metric: the F1 score

12.2.5 How does our model perform with our new metrics?

12.3 What does an ideal data set look like?

12.3.1 Making the data look less like the actual and more like the "ideal"

12.3.2 Changes to training.py, dset.py to balance benign and malignant samples

12.3.3 Contrasting training with a balanced LunaDataset to previous runs

12.4 Revisiting the problem of overfitting

12.4.1 An overfit face-to-age prediction model

12.4.2 Detecting overfitting

12.5 Data Augmentation

12.5.1 Specific Data Augmentation Techniques

12.5.2 Seeing the improvement from data augmentation

12.6 Conclusion

12.7 Exercises

12.8 Summary

13 Using Segmentation To Find Suspected Nodules

13.1 Segmentation is per-pixel classification

13.1.1 The UNet architecture

13.1.2 An off-the-shelf model: adding UNet to our project

13.2 A 3D Dataset in 2D

13.2.1 UNet has very specific input size requirements

13.2.2 UNet in 3D would use too much RAM

13.2.3 Building the ground truth data

13.2.4 Implementing the Luna2dSegmentationDataset

13.3 Updating the training script

13.3.1 Getting images into TensorBoard

13.3.2 Dice loss

13.3.3 Updating our metrics logging

13.3.4 Saving our model

13.4 Conclusion

13.5 Exercises

13.6 Summary

14 End-to-end nodule analysis, and where to go next

14.1 Towards the finish line

14.2 Independence of the validation set

14.3 Bridging CT segmentation and nodule candidate classification

14.3.1 Segmentation

14.3.2 Grouping voxels into nodule candidates

14.3.3 Did we find a nodule? Classification to reduce false positives

14.4 Quantitative validation

14.5 Predicting malignancy

14.5.1 Getting malignancy information

14.5.2 An area under the curve baseline: Classifying by diameter

14.5.3 Reusing preexisting weights: Fine-tuning

14.5.4 More output in TensorBoard

14.6 What we see when we diagnose

14.6.1 Training, validation, and test sets

14.7 What next? Additional sources of inspiration (and data)

14.7.1 Preventing overfitting: Better regularization

14.7.2 Refined training data

14.7.3 Competition results and research papers

14.8 Conclusion

14.8.1 Behind the curtain

14.9 Exercises

14.10 Summary

Part 3: Deploying PyTorch Models

15 Deploying to production

15.1 Serving PyTorch models

15.1.1 Our model behind a Flask server

15.1.2 What we want from deployment

15.1.3 Request batching

15.2 Exporting models

15.2.1 Interoperability beyond PyTorch with ONNX

15.2.2 PyTorch’s own export: Tracing

15.2.3 Our server with a traced model

15.3 Interacting with the PyTorch JIT

15.3.1 What to expect from moving beyond classic Python/PyTorch

15.3.2 The dual nature of PyTorch as interface and backend

15.3.3 TorchScript

15.3.4 Scripting the gaps of traceability

15.4 LibTorch: PyTorch in C++

15.4.1 Running JITed models from C++

15.4.2 C++ from the start: The C++ API

15.5 Going mobile

15.5.1 Improving efficiency: Model design and quantization

15.6 Emerging technology: Enterprise serving of PyTorch models

15.7 Conclusion

15.8 Exercises

15.9 Summary

What's inside

  • Training deep neural networks
  • Implementing modules and loss functions
  • Utilizing pretrained models from PyTorch Hub (see the sketch below)
  • Exploring code samples in Jupyter Notebooks
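For instance, loading a pretrained model from Torch Hub (covered in section 2.4) takes a single call. This is a hedged sketch using a common torchvision example; the repo tag and model name here are illustrative, not necessarily the ones the book uses:

    import torch

    # downloads resnet18 weights from the torchvision repo on first use
    model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet18',
                           pretrained=True)
    model.eval()                        # switch to inference mode

    x = torch.randn(1, 3, 224, 224)     # placeholder image batch
    with torch.no_grad():
        scores = model(x)               # 1,000 ImageNet class scores
    print(scores.shape)                 # torch.Size([1, 1000])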

About the reader

For Python programmers with an interest in machine learning.

About the authors

Eli Stevens has held roles ranging from software engineer to CTO, and is currently working on machine learning in the self-driving-car industry. Luca Antiga is cofounder of an AI engineering company and an AI tech startup, as well as a former PyTorch contributor. Thomas Viehmann is a machine learning trainer and consultant based in Munich, Germany, and a PyTorch core developer.
Photo caption: Deep Learning with PyTorch authors Luca Antiga (L) and Eli Stevens (R) eating dessert in San Francisco's Mission District with the book's editor, Frances Lefkowitz. Luca is from Bergamo, Italy, Eli lives in San Jose, and Frances hails from San Francisco.

Print book: $24.99 (list price $49.99), includes pBook + eBook + liveBook
Additional shipping charges may apply

eBook: $19.99 (list price $39.99), 3 formats + liveBook


FREE domestic shipping on three or more pBooks