GANs in Action
Deep learning with Generative Adversarial Networks
Jakub Langr and Vladimir Bok
  • September 2019
  • ISBN 9781617295560
  • 240 pages
  • printed in black & white
ePub + Kindle available Sep 27, 2019

“Comprehensive and in-depth coverage of the future of AI.”

Simeon Leyzerzon, Excelsior Software

GANs in Action teaches you how to build and train your own Generative Adversarial Networks, one of the most important innovations in deep learning. In this book, you’ll learn how to start building your own simple adversarial system as you explore the foundation of GAN architecture: the generator and discriminator networks.
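
To make that foundation concrete, here is a minimal sketch of the two networks in Keras, the high-level API the book’s tutorials build on. The layer sizes, the NOISE_DIM and IMG_SHAPE constants, and the helper names are illustrative choices for MNIST-sized images, not the book’s exact listings.

import numpy as np
from tensorflow.keras import layers, models

NOISE_DIM = 100          # length of the random vector fed to the generator
IMG_SHAPE = (28, 28, 1)  # MNIST-sized grayscale images

def build_generator():
    # Map a noise vector to a 28x28 image with pixel values in [-1, 1].
    return models.Sequential([
        layers.Input(shape=(NOISE_DIM,)),
        layers.Dense(128, activation="relu"),
        layers.Dense(int(np.prod(IMG_SHAPE)), activation="tanh"),
        layers.Reshape(IMG_SHAPE),
    ])

def build_discriminator():
    # Flatten an image and output the probability that it is real rather than generated.
    return models.Sequential([
        layers.Input(shape=IMG_SHAPE),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])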

Table of Contents

Part 1: Introduction to GANs and generative modeling

1 Introduction to GANs

1.1 What are Generative Adversarial Networks?

1.2 How do GANs work?

1.3 GANs in action

1.3.1 GAN training

1.3.2 Reaching equilibrium

1.4 Why study GANs?

Summary

2 Intro to generative modeling with autoencoders

2.1 Introduction to generative modeling

2.2 How do autoencoders function on a high level?

2.3 What are autoencoders to GANs?

2.4 What is an autoencoder made of?

2.5 Usage of autoencoders

2.6 Unsupervised learning

2.6.1 New take on an old idea

2.6.2 Generation using an autoencoder

2.6.3 Variational autoencoder

2.7 Code is life

2.8 Why did we try a GAN?

Summary

3 Your first GAN: Generating handwritten digits

3.1 Foundations of GANs: Adversarial training

3.1.1 Cost functions

3.1.2 Training process

3.2 The Generator and the Discriminator

3.2.1 Conflicting objectives

3.2.2 Confusion matrix

3.3 GAN training algorithm

3.4 Tutorial: Generating handwritten digits

3.4.1 Importing statements

3.4.2 Implementing the Generator

3.4.3 Implementing the Discriminator

3.4.4 Building the model

3.4.5 Training

3.4.6 Outputting sample images

3.4.7 Running the model

3.4.8 Inspecting the results

3.5 Conclusion

Summary

4 Deep Convolutional GAN

4.1 Convolutional neural networks

4.1.1 Convolutional filters

4.1.2 Parameter sharing

4.1.3 ConvNets visualized

4.2 Brief history of the DCGAN

4.3 Batch normalization

4.3.1 Understanding normalization

4.3.2 Computing batch normalization

4.4 Tutorial: Generating handwritten digits with DCGAN

4.4.1 Importing modules and specifying model input dimensions

4.4.2 Implementing the Generator

4.4.3 Implementing the Discriminator

4.4.4 Building and running the DCGAN

4.4.5 Model output

4.5 Conclusion

Summary

Part 2: Advanced topics in GANs

5 Training and common challenges: GANing for success

5.1 Evaluation

5.1.1 Evaluation framework

5.1.2 Inception score

5.1.3 Fréchet inception distance

5.2 Training challenges

5.2.1 Adding network depth

5.2.2 Game setups

5.2.3 Min-Max GAN

5.2.4 Non-Saturating GAN

5.2.5 When to stop training

5.2.6 Wasserstein GAN

5.3 Summary of game setups

5.4 Training hacks

5.4.1 Normalizations of inputs

5.4.2 Batch normalization

5.4.3 Gradient penalties

5.4.4 Train Discriminator more

5.4.5 Avoid sparse gradients

5.4.6 Soft and noisy labels

Summary

6 Progressing with GANs

6.1 Latent space interpolation

6.2 They grow up so fast

6.2.1 Progressive growing and smoothing in of higher-resolution layers

6.2.2 Example implementation

6.2.3 Mini-batch standard deviation

6.2.4 Equalized learning rate

6.2.5 Pixel-wise feature normalization in the generator

6.3 Summary of key innovations

6.4 TensorFlow Hub and hands-on

6.5 Practical applications

Summary

7 Semi-Supervised GAN

7.1 Introducing the Semi-Supervised GAN

7.1.1 What is a Semi-Supervised GAN?

7.1.2 Architecture

7.1.3 Training process

7.1.4 Training objective

7.2 Tutorial: Implementing a Semi-Supervised GAN

7.2.1 Architecture diagram

7.2.2 Implementation

7.2.3 Setup

7.2.4 The dataset

7.2.5 The Generator

7.2.6 The Discriminator

7.2.7 Build the model

7.2.8 Training

7.3 Comparison to a fully supervised classifier

7.4 Conclusion

Summary

8 Conditional GAN

8.1 Motivation

8.2 What is Conditional GAN?

8.2.1 CGAN Generator

8.2.2 CGAN Discriminator

8.2.3 Summary table

8.2.4 Architecture diagram

8.3 Tutorial: Implementing Conditional GAN

8.3.1 Implementation

8.3.2 Setup

8.3.3 CGAN Generator

8.3.4 CGAN Discriminator

8.3.5 Build the model

8.3.6 Training

8.3.7 Outputting sample images

8.3.8 Train the model

8.3.9 Inspecting the output: Targeted data generation

8.4 Conclusion

Summary

9 CycleGAN

9.1 Image-to-image translation

9.2 Cycle-consistency loss: There and back aGAN

9.3 Adversarial loss

9.4 Identity loss

9.5 Architecture

9.5.1 CycleGAN architecture: building the network

9.5.2 Generator architecture

9.5.3 Discriminator architecture

9.6 Object-oriented design of GANs

9.7 Tutorial: CycleGAN

9.7.1 Building the network

9.7.2 Building the Discriminator

9.7.3 Running CycleGAN

9.8 Expansions, augmentations, and applications

9.8.1 Augmented CycleGAN

9.8.2 Applications

Summary

Part 3: Where to go from here

10 Adversarial examples

10.1 Context of adversarial examples

10.2 Lies, damned lies, and distributions

10.3 Use and abuse of training

10.4 Signal and the noise

10.5 Not all hope is lost

10.6 Adversaries to GANs

10.7 Conclusion

Summary

11 Practical applications of GANs

11.1 GANs in medicine

11.1.1 Using GANs to improve diagnostic accuracy

11.1.2 Methodology

11.1.3 Results

11.2 GANs in fashion

11.2.1 Using GANs to design fashion

11.2.2 Methodology

11.2.3 Creating new items matching individual preferences

11.2.4 Adjusting existing items to better match individual preferences

11.3 Conclusion

Summary

12 Looking ahead

12.1 Ethics

12.2 GAN innovations

12.2.1 Relativistic GAN

12.2.2 Self-Attention GAN

12.3 Further reading

12.4 Looking back and closing thoughts

Summary

About the Technology

Generative Adversarial Networks (GANs) are an incredible AI technology capable of creating images, sound, and video that are indistinguishable from the “real thing.” By pitting two neural networks against each other (one to generate fakes and one to spot them), GANs rapidly learn to produce photo-realistic faces and other media. With the potential to produce stunningly realistic animations or shocking deepfakes, GANs are a huge step forward in deep learning.
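
That two-player game reduces to a short alternating update: the discriminator is trained to score real images as real and generated images as fake, while the generator is trained to make its fakes pass as real. The sketch below shows one such training step written against the TensorFlow 2 API; it assumes the NOISE_DIM constant and the build_generator and build_discriminator helpers from the earlier sketch, and it is a simplified illustration rather than the book’s tutorial code.

import tensorflow as tf

# Assumes NOISE_DIM, build_generator, and build_discriminator from the earlier sketch.
generator = build_generator()
discriminator = build_discriminator()
g_optimizer = tf.keras.optimizers.Adam(1e-4)
d_optimizer = tf.keras.optimizers.Adam(1e-4)
bce = tf.keras.losses.BinaryCrossentropy()  # the discriminator outputs probabilities

def train_step(real_images):
    # real_images: a batch shaped (batch, 28, 28, 1) with pixel values scaled to [-1, 1].
    noise = tf.random.normal((real_images.shape[0], NOISE_DIM))
    with tf.GradientTape() as d_tape, tf.GradientTape() as g_tape:
        fake_images = generator(noise, training=True)
        real_scores = discriminator(real_images, training=True)
        fake_scores = discriminator(fake_images, training=True)
        # Discriminator objective: push real images toward label 1 and fakes toward 0.
        d_loss = (bce(tf.ones_like(real_scores), real_scores)
                  + bce(tf.zeros_like(fake_scores), fake_scores))
        # Generator objective: make the discriminator label its fakes as real.
        g_loss = bce(tf.ones_like(fake_scores), fake_scores)
    d_grads = d_tape.gradient(d_loss, discriminator.trainable_variables)
    g_grads = g_tape.gradient(g_loss, generator.trainable_variables)
    d_optimizer.apply_gradients(zip(d_grads, discriminator.trainable_variables))
    g_optimizer.apply_gradients(zip(g_grads, generator.trainable_variables))
    return d_loss, g_loss

Repeating train_step over batches of real images is, at its core, the whole training procedure; the architectures, losses, and stabilization tricks covered in the book are refinements layered on top of this loop.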

About the book

GANs in Action teaches you to build and train your own Generative Adversarial Networks. You’ll start by creating simple generator and discriminator networks that are the foundation of GAN architecture. Then, following numerous hands-on examples, you’ll train GANs to generate high-resolution images, translate images from one domain to another, and produce targeted synthetic data. Along the way, you’ll find pro tips for making your system smart, effective, and fast.

What's inside

  • Building your first GAN
  • Handling the progressive growing of GANs
  • Practical applications of GANs
  • Troubleshooting your system

About the reader

For data professionals with intermediate Python skills and a grasp of the basics of deep learning–based image processing.

About the author

Jakub Langr is a Computer Vision Cofounder at Founders Factory (YEPIC.AI). Vladimir Bok is a Senior Product Manager overseeing machine learning infrastructure and research teams at a New York–based startup.


  • combo $49.99 pBook + eBook + liveBook
  • eBook $39.99 pdf + ePub + kindle + liveBook
