In this liveProject, you’ll use the RoBERTa variation of the BERT Transformer to detect occurrences of fake news in a data set. Political news can be tricky to validate for accuracy, as sources report the same events from different biased angles. RoBERTa uses different pre-training methods than traditional BERT and has highly optimized hyperparameters, meaning it tends to perform better than its predecessor. You’ll start out by loading the model using the Hugging Face library and fine-tuning it on your data with PyTorch Lightning. You’ll also train a custom tokenizer from scratch and use it to tokenize the data. A successful model will maximize true positives — that is, miss as few fake-news articles as possible — so model evaluation will be based on achieving a high recall score.
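To make the recall objective concrete, here is a minimal plain-Python sketch (with hypothetical labels) showing why recall rewards a model that catches every fake-news article, even at the cost of a few false alarms:

```python
def recall(y_true, y_pred, positive=1):
    """Recall = true positives / (true positives + false negatives).

    High recall means few fake-news articles slip through undetected.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp / (tp + fn) if (tp + fn) else 0.0

# Hypothetical labels: 1 = fake, 0 = real
y_true = [1, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1]  # one fake article missed, one false alarm
print(recall(y_true, y_pred))  # → 0.666... (2 of 3 fakes caught)
```

Note that the false alarm at the last position does not lower recall at all; only the missed fake article does. In the project itself you would compute this metric over the validation set with a library function rather than by hand.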
This project is designed for learning purposes and is not a complete, production-ready application or solution.
This liveProject is for intermediate Python and NLP practitioners who are interested in implementing pretrained BERT architectures and customizing them to solve real-world NLP problems. To begin this liveProject, you’ll need to be familiar with:
- Intermediate Python
- Intermediate PyTorch
- Basics of Google Colab
- Basics of machine learning
- Basics of neural networks
- Basics of natural language processing
What you will learn
In this liveProject, you will develop hands-on experience in building a text classifier using PyTorch Lightning and Hugging Face. You’ll also get practical experience working on GPUs in the Google Colab environment.
- Working with Jupyter Notebook on Google Colab
- Loading and preprocessing a text data set
- Tokenizing data using pretrained tokenizers
- Creating dataloaders and tensor data sets
- Loading and configuring a pretrained RoBERTa model using Hugging Face
- Building and training a text classifier using PyTorch Lightning
- Evaluating the performance of the model using recall as the key metric
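One of the steps above — training a custom tokenizer from scratch — can be illustrated with a toy example. In the project you would use Hugging Face’s tokenizers library (for example, a BPE tokenizer); the pure-Python word-level sketch below (all class and method names are hypothetical) only shows the underlying idea: “training” means building a vocabulary from a corpus, and encoding maps words to integer IDs:

```python
from collections import Counter

class WordTokenizer:
    """Toy word-level tokenizer: 'training' builds a vocabulary from a corpus.

    Real projects would train a subword tokenizer (e.g. BPE) with the
    Hugging Face tokenizers library instead of splitting on whitespace.
    """
    def __init__(self, pad_token="<pad>", unk_token="<unk>"):
        self.unk_token = unk_token
        self.vocab = {pad_token: 0, unk_token: 1}  # reserved special tokens

    def train(self, corpus, vocab_size=1000):
        # Count word frequencies and keep the most common words
        counts = Counter(w for text in corpus for w in text.lower().split())
        for word, _ in counts.most_common(vocab_size - len(self.vocab)):
            self.vocab[word] = len(self.vocab)

    def encode(self, text):
        # Map each word to its ID; out-of-vocabulary words become <unk>
        unk_id = self.vocab[self.unk_token]
        return [self.vocab.get(w, unk_id) for w in text.lower().split()]

corpus = ["Fake news spreads fast", "Real news is verified"]
tok = WordTokenizer()
tok.train(corpus)
print(tok.encode("fake news"))
print(tok.encode("unseen words"))  # unseen words map to <unk> = 1
```

The resulting lists of IDs are what you would wrap in tensor data sets and dataloaders before feeding them to the classifier.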