In this series of liveProjects, you’ll step into the role of a forensics consultant. You’re investigating a ring of cyber criminals who are blackmailing prominent social media personalities with scandalous “deepfake” videos. These videos use a deep learning model to splice a victim’s face onto an actor, creating highly realistic content that can be indistinguishable from the real thing. Your boss wants you to develop a method to efficiently detect these deepfakes in a huge data set of online videos. The method needs to be fast, and it must run without GPU resources, which are in short supply. Each project in this series covers a different aspect of building this deepfake detection solution, teaching essential computer vision and machine learning skills you can easily transfer to other tasks.
In this liveProject, you’ll develop a machine learning solution that can tell the difference between faces in deepfake videos and real faces. You’ll train a support-vector machine (SVM) classifier to recognize the artifacts associated with deepfakes, then combine face detection, feature extraction, and the SVM classifier into a single pipeline to create your detection system.
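As a rough illustration of what that pipeline stage might look like, here is a minimal sketch of training an SVM classifier with scikit-learn. The feature matrix and labels are random placeholders standing in for the artifact features you will extract from detected faces; the library choice and parameters are assumptions, not the project’s prescribed implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: 200 faces, 16 artifact features each; 1 = deepfake, 0 = real.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
y = rng.integers(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Scale the features, then fit an RBF-kernel SVM classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```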
Feature extraction is an essential part of detecting deepfake videos. In this liveProject, you’ll analyze both deepfaked and real video data in order to determine what features are common in faked videos. You’ll then compute those features for faces detected in the videos to determine which are fake and which are real.
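One possible shape for that feature-extraction step is sketched below, assuming OpenCV and NumPy are available. The specific features shown (sharpness via Laplacian variance and a coarse intensity histogram) are illustrative stand-ins for whichever artifact features your analysis identifies, and the synthetic face crop is a hypothetical placeholder.

```python
import cv2
import numpy as np

def extract_features(face_bgr: np.ndarray) -> np.ndarray:
    """Return a 1-D feature vector for a normalized face crop (BGR image)."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    # Sharpness: face-swap blending often leaves unusually smooth or blurry regions.
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    # Coarse grayscale histogram, normalized so crops of any size are comparable.
    hist = cv2.calcHist([gray], [0], None, [16], [0, 256]).flatten()
    hist = hist / (hist.sum() + 1e-8)
    return np.concatenate([[sharpness], hist])

# Example with a synthetic crop standing in for a detected face.
fake_face = np.random.randint(0, 256, size=(128, 128, 3), dtype=np.uint8)
print(extract_features(fake_face).shape)  # (17,)
```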
Face detection has numerous applications in software and is an essential part of the pipeline for detecting deepfake videos. In this liveProject, you’ll implement a component that detects faces in videos and normalizes them to a consistent size and visual appearance. This component is a vital foundation for a deepfake detection system.
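For a sense of what such a component involves, here is a minimal sketch using OpenCV’s bundled Haar cascade to detect faces frame by frame and resize each crop to a common size. The video path, the 128×128 output size, and the detector choice are assumptions for illustration, not the project’s required approach.

```python
import cv2

# OpenCV ships a pretrained frontal-face Haar cascade with the library.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def normalized_faces(video_path: str, size=(128, 128)):
    """Yield face crops from a video, resized to a common size."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(
            gray, scaleFactor=1.1, minNeighbors=5
        ):
            crop = frame[y:y + h, x:x + w]
            # Resize so every face has the same dimensions for feature extraction.
            yield cv2.resize(crop, size)
    cap.release()

# Hypothetical usage: hand each normalized face to the feature-extraction step.
for face in normalized_faces("video.mp4"):
    pass
```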