11.1 What does it mean to understand language?
What kinds of predictions do people make about language?
11.2 Natural language processing (NLP)
NLP is divided into a collection of tasks or challenges.
11.3 Supervised NLP
Words go in, and predictions come out.
11.4 IMDB movie reviews dataset
You can predict whether a review is positive or negative from the words it contains.
11.5 Capturing word correlation in input data
Bag of words: Given a review’s vocabulary, predict the sentiment.
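A minimal sketch of a bag-of-words encoding, using a toy vocabulary and review (the words here are illustrative, not the IMDB data itself):

```python
# Bag-of-words: each review becomes a fixed-length 0/1 vector over the vocabulary.
vocab = ["bad", "boring", "good", "great", "movie"]
word2index = {word: i for i, word in enumerate(vocab)}

def bag_of_words(review_tokens):
    vector = [0] * len(vocab)
    for token in review_tokens:
        if token in word2index:
            vector[word2index[token]] = 1  # presence, not count
    return vector

print(bag_of_words(["great", "movie", "great"]))  # -> [0, 0, 0, 1, 1]
```

Note that repeated words flip the same slot only once; the vector records which words occur, not how often.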
11.6 Predicting movie reviews
With the encoding strategy and the previous network, you can predict sentiment.
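A tiny stand-in for such a network, assuming a bag-of-words input and a sigmoid output (the layer sizes and initialization are illustrative, not the book's exact values):

```python
import numpy as np

np.random.seed(1)

# Bag-of-words in, probability of "positive" out.
vocab_size, hidden_size = 5, 3
weights_0_1 = 0.2 * np.random.random((vocab_size, hidden_size)) - 0.1
weights_1_2 = 0.2 * np.random.random((hidden_size, 1)) - 0.1

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def predict(x):
    layer_1 = x.dot(weights_0_1)              # hidden representation of the review
    return sigmoid(layer_1.dot(weights_1_2))  # squashed into a (0, 1) probability

x = np.array([[0, 0, 1, 1, 0]])  # a review containing words 2 and 3
print(float(predict(x)))         # a probability between 0 and 1
```

Training would nudge both weight matrices so this probability moves toward each review's label.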
11.7 Intro to an embedding layer
One more trick, an embedding layer, makes the network faster.
11.8 Interpreting the output
11.9 Neural architecture
How did the choice of architecture affect what the network learned?
11.10 Comparing word embeddings
How can you visualize weight similarity?
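One simple approach is to rank words by the distance between their weight vectors. A sketch using Euclidean distance, with made-up embeddings standing in for rows of the trained weight matrix:

```python
import numpy as np

# Illustrative embeddings; in the chapter these would come from the network's weights.
embeddings = {
    "good":  np.array([0.90, 0.10, 0.20]),
    "great": np.array([0.85, 0.15, 0.25]),
    "bad":   np.array([-0.80, 0.70, 0.10]),
}

def similar(target, k=2):
    # Rank other words by Euclidean distance to the target (closest first).
    anchor = embeddings[target]
    scores = {w: np.linalg.norm(v - anchor)
              for w, v in embeddings.items() if w != target}
    return sorted(scores, key=scores.get)[:k]

print(similar("good"))  # "great" ranks ahead of "bad"
```

Words the network treats interchangeably end up with nearby weight vectors, so this ranking makes the learned similarity visible.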
11.11 What is the meaning of a neuron?
A neuron's meaning depends entirely on the target labels it's trained to predict.
11.12 Filling in the blank
Learn richer meanings for words by having a richer signal to learn from.
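The richer signal comes from turning raw text into fill-in-the-blank training pairs: predict a removed word from its surroundings. A sketch of the data preparation, with an illustrative sentence and window size:

```python
# Turn a token sequence into (context, blank) training pairs.
tokens = ["the", "movie", "was", "really", "great"]
window = 1  # how many words of context on each side

pairs = []
for i, blank in enumerate(tokens):
    context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
    pairs.append((context, blank))

print(pairs[1])  # (['the', 'was'], 'movie')
```

Because every position in every sentence yields a training example, the network sees far more signal per word than a single sentiment label provides.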
11.13 Meaning is derived from loss
Neural networks don’t really learn data; they minimize the loss function.
The choice of loss function determines the neural network’s knowledge.
11.14 King - Man + Woman ~= Queen
Word analogies are an interesting consequence of the previously built network.
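The analogy amounts to simple vector arithmetic on the embeddings: subtract, add, then find the nearest word. A sketch with toy 2-D embeddings chosen so the gender offset is linear (illustrative only, not learned values):

```python
import numpy as np

embeddings = {
    "king":  np.array([0.9, 0.8]),
    "man":   np.array([0.5, 0.8]),
    "woman": np.array([0.5, 0.2]),
    "queen": np.array([0.9, 0.2]),
}

def analogy(a, b, c):
    # Compute a - b + c, then return the nearest word (excluding the inputs).
    query = embeddings[a] - embeddings[b] + embeddings[c]
    candidates = {w: np.linalg.norm(v - query)
                  for w, v in embeddings.items() if w not in (a, b, c)}
    return min(candidates, key=candidates.get)

print(analogy("king", "man", "woman"))  # -> queen
```

Here "king" minus "man" isolates a royalty offset, and adding "woman" lands on "queen" because the embedding space encodes that property along a consistent direction.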
11.15 Word analogies
Analogies are a linear compression of an existing property in the data.
You’ve learned a lot about neural word embeddings and the impact of loss on learning.