Real-World Natural Language Processing
Practical applications with deep learning
Masato Hagiwara
  • MEAP began July 2019
  • Publication in Early 2021 (estimated)
  • ISBN 9781617296420
  • 500 pages (estimated)
  • printed in black & white

"Really interesting topic, well written and practical. It’s nice to see the practical side to the theory of NLP too!"

Stuart Perks
Voice assistants, automated customer service agents, and other cutting-edge human-to-computer interactions rely on accurately interpreting language as it is written and spoken. Real-World Natural Language Processing teaches you how to create practical NLP applications without getting bogged down in complex language theory and the mathematics of deep learning. In this engaging book, you’ll explore the core tools and techniques required to build a huge range of powerful NLP apps.

About the Technology

Natural language processing is the part of AI dedicated to understanding and generating human text and speech. NLP covers a wide range of algorithms and tasks, from classic functions such as spell checkers, machine translation, and search engines to emerging innovations like chatbots, voice assistants, and automatic text summarization. Wherever there is text, NLP can be useful for extracting meaning and bridging the gap between humans and machines.

About the book

Real-World Natural Language Processing teaches you how to create practical NLP applications using Python and open source NLP libraries such as AllenNLP and Fairseq. In this practical guide, you’ll begin by creating a complete sentiment analyzer, then dive deep into each component to unlock the building blocks you’ll use in all kinds of NLP programs. By the time you’re done, you’ll have the skills to create named entity taggers, machine translation systems, spelling correctors, and language generation systems.
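
To give a flavor of the kind of model the book's first project builds, here is a minimal sketch of an LSTM-based sentiment classifier. It is written in plain PyTorch rather than the AllenNLP pipeline the book uses, and the class name, dimensions, and dummy data below are illustrative assumptions, not code from the book:

import torch
import torch.nn as nn

class SimpleSentimentClassifier(nn.Module):
    """Illustrative sketch: embed tokens, encode with an LSTM, classify."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128, num_labels=5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)  # token IDs -> vectors
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_labels)   # SST uses 5 sentiment labels

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)       # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.encoder(embedded)    # final hidden state summarizes the sentence
        return self.classifier(hidden[-1])         # (batch, num_labels) logits

model = SimpleSentimentClassifier(vocab_size=10000)
dummy_batch = torch.randint(0, 10000, (2, 12))    # two "sentences" of 12 token IDs each
print(model(dummy_batch).shape)                   # torch.Size([2, 5])

In the book itself, dataset loading, vocabulary handling, and the training loop around a model like this are handled by AllenNLP.
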
Table of Contents

Part 1: Basics

1 Introduction to Natural Language Processing

1.1 What is natural language processing (NLP)?

1.1.1 What is NLP?

1.1.2 What is not NLP?

1.1.3 AI, ML, DL, and NLP

1.1.4 Why NLP?

1.2 How NLP is used

1.2.1 NLP applications

1.2.2 NLP tasks

1.3 Building NLP applications

1.3.1 Development of NLP applications

1.3.2 Structure of NLP applications

1.4 Summary

2 Your First NLP Application

2.1 Introduction to sentiment analysis

2.2 Working with NLP datasets

2.2.1 What is a dataset?

2.2.2 Stanford Sentiment Treebank

2.2.3 Train, validation, and test sets

2.2.4 Loading SST datasets using AllenNLP

2.3 Using word embeddings

2.3.1 What word embeddings are

2.3.2 How to use word embeddings for sentiment analysis

2.4 Neural networks

2.4.1 What neural networks are

2.4.2 Recurrent neural networks (RNNs) and linear layers

2.4.3 Architecture for sentiment analysis

2.5 Loss functions and optimization

2.6 Training your own classifier

2.6.1 Batching

2.6.2 Putting everything together

2.7 Evaluating your classifier

2.8 Deploying your application

2.8.1 Making predictions

2.8.2 Serving predictions

2.9 Summary

3 Word and Document Embeddings

3.1 Introduction to embeddings

3.1.1 What embeddings are

3.1.2 Why embeddings are important

3.2 Building blocks of language: characters, words, and phrases

3.2.1 Characters

3.2.2 Words, tokens, morphemes, and phrases

3.2.3 N-grams

3.3 Tokenization, stemming, and lemmatization

3.3.1 Tokenization

3.3.2 Stemming

3.3.3 Lemmatization

3.4 Skip-gram and continuous bag-of-words (CBOW)

3.4.1 Where word embeddings come from

3.4.2 Using word associations

3.4.3 Linear layers

3.4.4 Softmax

3.4.5 Implementing Skip-gram in AllenNLP

3.4.6 Continuous bag-of-words (CBOW) model

3.5 GloVe

3.5.1 How GloVe learns word embeddings

3.5.2 Using pre-trained GloVe vectors

3.6 FastText

3.6.1 Making use of subword information

3.6.2 Using the FastText toolkit

3.7 Document-level embeddings

3.8 Visualizing embeddings

3.9 Summary

4 Sentence Classification

4.1 Recurrent neural networks (RNNs)

4.1.1 Handling variable-length input

4.1.2 RNN abstraction

4.1.3 Simple RNN and nonlinearity

4.2 Long Short-Term Memory Units (LSTMs) and Gated Recurrent Units (GRUs)

4.2.1 Vanishing gradients problem

4.2.2 Long Short-Term Memory (LSTM)

4.2.3 Gated Recurrent Units (GRUs)

4.3 Accuracy, precision, recall, and F-measure

4.3.1 Accuracy

4.3.2 Precision and recall

4.3.3 F-measure

4.4 Building AllenNLP training pipelines

4.4.1 Instances and fields

4.4.2 Vocabulary and token indexers

4.4.3 Token embedders and RNNs

4.4.4 Building your own model

4.4.5 Putting it all together

4.5 Configuring AllenNLP training pipelines

4.6 Case study: language detection

4.6.1 Using characters as input

4.6.2 Creating a dataset reader

4.6.3 Building the training pipeline

4.6.4 Running the detector on unseen instances

4.7 Summary

5 Sequential Labeling and Language Modeling

5.1 Introduction to sequential labeling

5.1.1 What is sequential labeling?

5.1.2 Using RNNs to encode sequences

5.1.3 Implementing a Seq2Seq encoder in AllenNLP

5.2 Building a part-of-speech tagger

5.2.1 Reading a dataset

5.2.2 Defining the model and the loss

5.2.3 Building the training pipeline

5.3 Multi-layer and bidirectional RNNs

5.3.1 Multi-layer RNNs

5.3.2 Bidirectional RNNs

5.4 Named entity recognition

5.4.1 What is named entity recognition?

5.4.2 Tagging spans

5.4.3 Implementing a named entity recognizer

5.5 Modeling a language

5.5.1 What a language model is

5.5.2 Why language models are useful

5.5.3 Training an RNN language model

5.6 Text generation using RNNs

5.6.1 Feeding characters to an RNN

5.6.2 Evaluating text using a language model

5.6.3 Generating text using a language model

5.7 Summary

Part 2: Advanced Models

6 Sequence-to-Sequence Models

6.1 Introduction to sequence-to-sequence models

6.2 Machine Translation 101

6.3 Building your first translator

6.3.1 Preparing the datasets

6.3.2 Training the model

6.3.3 Running the translator

6.4 How Seq2Seq models work

6.4.1 Encoder

6.4.2 Decoder

6.4.3 Greedy decoding

6.4.4 Beam search decoding

6.5 Evaluating translation systems

6.5.1 Human evaluation

6.5.2 Automatic evaluation

6.6 Case study: building a chatbot

6.6.1 Introduction to dialog systems

6.6.2 Preparing a dataset

6.6.3 Training and running a chatbot

6.6.4 Next steps

6.7 Summary

7 Convolutional Neural Networks

7.1 Introduction to convolutional neural networks

7.1.1 RNNs and their shortcomings

7.1.2 Pattern matching for sentence classification

7.1.3 Convolutional neural networks (CNNs)

7.2 Convolutional layers

7.2.1 Pattern matching using filters

7.2.2 Rectified linear unit (ReLU)

7.2.3 Combining scores

7.3 Pooling layers

7.4 Case study: text classification

7.4.1 Review: text classification

7.4.2 Using CnnEncoder

7.4.3 Training and running the classifier

7.5 Summary

8 Attention and Transformer

8.1 What is attention?

8.1.1 Limitations of vanilla Seq2Seq models

8.1.2 Attention mechanism

8.2 Sequence to sequence with attention

8.2.1 Encoder-decoder attention

8.2.2 Building a Seq2Seq machine translation system with attention

8.3 Transformer and self-attention

8.3.1 Self-attention

8.3.2 Transformer

8.3.3 Experiments

8.4 Transformer-based language models

8.4.1 Transformer as a language model

8.4.2 Transformer-XL

8.4.3 GPT-2

8.4.4 XLM

8.5 Case study: spell checker

8.5.1 Spell correction as machine translation

8.5.2 Training a spell checker

8.5.3 Improving a spell checker

8.6 Summary

9 Transfer and Multitask Learning

Part 3: Putting NLP into Production

10 Best Practices in Developing NLP Applications

11 Deploying and Serving NLP Applications

What's inside

  • Design, develop, and deploy basic NLP applications
  • NLP libraries such as AllenNLP and Fairseq
  • Advanced NLP concepts such as attention and transfer learning

About the reader

Aimed at intermediate Python programmers. No mathematical or machine learning knowledge required.

About the author

Masato Hagiwara received his PhD in computer science from Nagoya University in 2009, focusing on natural language processing and machine learning. He has interned at Google and Microsoft Research and has worked at Baidu Japan, Duolingo, and the Rakuten Institute of Technology. He now runs his own consultancy, advising clients including startups and research institutions.

Manning Early Access Program (MEAP): read chapters as they are written, get the finished eBook as soon as it’s ready, and receive the pBook long before it's in bookstores.

print book: $59.99 (pBook + eBook + liveBook; additional shipping charges may apply)

eBook: $19.99 (regularly $47.99; 3 formats + liveBook)