Look inside
In this liveProject, you’ll use DistilBERT, a distilled variant of the BERT Transformer, to detect and block spam emails in a data set. You’ll use binary classification to determine whether each email is spam or legitimate. DistilBERT uses knowledge distillation to significantly reduce the size of the Transformer model, saving time and computing resources. You’ll learn to load your data set with the Hugging Face library and fine-tune a model for your task with PyTorch Lightning. You’ll also explore alternative training approaches that use novel APIs in the transformers library to fine-tune pretrained DistilBERT models. Every part of an NLP pipeline is covered, from preprocessing your data to remove symbols and numbers, to training and validating the model with F1 scoring to assess the robustness of your pipeline.
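As a rough illustration of the early pipeline steps, the sketch below cleans email text with a simple regular expression and tokenizes it with a pre-trained DistilBERT tokenizer from Hugging Face. The file name, column names (`text`, `label`), and the cleaning rule are assumptions for illustration, not the project’s exact data set or solution.

```python
import re
import pandas as pd
from transformers import DistilBertTokenizerFast

# Assumed CSV layout: a "text" column with the email body and a "label"
# column where 1 = spam and 0 = legitimate (hypothetical file name).
df = pd.read_csv("spam_emails.csv")

def clean(text: str) -> str:
    # Drop digits and non-alphabetic symbols, then collapse whitespace.
    text = re.sub(r"[^a-zA-Z\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip().lower()

df["text"] = df["text"].astype(str).map(clean)

# Pre-trained DistilBERT tokenizer; padding and truncation give
# fixed-length inputs ready for tensor data sets and dataloaders.
tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
encodings = tokenizer(
    df["text"].tolist(),
    padding=True,
    truncation=True,
    max_length=128,
    return_tensors="pt",
)
```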
This project is designed for learning purposes and is not a complete, production-ready application or solution.
prerequisites
This liveProject is for intermediate Python and NLP practitioners who are interested in implementing pretrained BERT architectures and customizing them to solve real-world NLP problems. To begin this liveProject, you will need to be familiar with:
TOOLS
- Intermediate Python
- Intermediate PyTorch
- Basics of Google Colab
TECHNIQUES
- Basics of machine learning
- Basics of neural networks
- Basics of natural language processing
you will learn
In this liveProject, you will develop hands-on experience in building a text classifier using PyTorch Lightning and Hugging Face. You’ll also get practical experience working on GPUs in the Google Colab environment.
- Working with Jupyter Notebook on Google Colab
- Loading and preprocessing a text data set
- Tokenizing data using pre-trained tokenizers
- Creating dataloaders and tensor data sets
- Loading and configuring a pre-trained DistilBERT model using Hugging Face
- Building and training a text classifier using PyTorch Lightning
- Validating the performance of the model using F1 scoring (see the sketch after this list)
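The outline below is a minimal, hedged sketch of the kind of classifier the project builds: a pre-trained DistilBERT backbone wrapped in a PyTorch Lightning module, trained with the model’s built-in cross-entropy loss and validated with an F1 score. The class name, hyperparameters, batch keys, and the use of torchmetrics are illustrative assumptions rather than the project’s exact solution.

```python
import torch
import pytorch_lightning as pl
from torchmetrics.classification import BinaryF1Score
from transformers import DistilBertForSequenceClassification

class SpamClassifier(pl.LightningModule):
    """Hypothetical Lightning wrapper around a pre-trained DistilBERT."""

    def __init__(self, lr: float = 2e-5):
        super().__init__()
        self.model = DistilBertForSequenceClassification.from_pretrained(
            "distilbert-base-uncased", num_labels=2
        )
        self.lr = lr
        self.val_f1 = BinaryF1Score()

    def training_step(self, batch, batch_idx):
        # Hugging Face models return the loss when labels are passed in.
        outputs = self.model(
            input_ids=batch["input_ids"],
            attention_mask=batch["attention_mask"],
            labels=batch["labels"],
        )
        self.log("train_loss", outputs.loss)
        return outputs.loss

    def validation_step(self, batch, batch_idx):
        outputs = self.model(
            input_ids=batch["input_ids"],
            attention_mask=batch["attention_mask"],
            labels=batch["labels"],
        )
        preds = outputs.logits.argmax(dim=-1)
        # Accumulate F1 over the validation set and log it per epoch.
        self.val_f1(preds, batch["labels"])
        self.log("val_f1", self.val_f1, prog_bar=True)

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=self.lr)

# Training would then be a single call, for example:
# trainer = pl.Trainer(max_epochs=3, accelerator="auto")
# trainer.fit(SpamClassifier(), train_dataloader, val_dataloader)
```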