Five-Project Series

Adversarial Machine Learning

prerequisites
intermediate Python (NumPy, array merging) • basics of CNN models • intermediate scikit-learn • intermediate Matplotlib • intermediate Keras/TensorFlow
skills learned
CNN model building with Keras • basics of untargeted adversarial attacks (FGSM, PGD) • attack generators (ART and CleverHans libraries) • basics of targeted adversarial attacks (PGD, BIM and Carlini & Wagner) • mitigation methods • adversarial training • defensive distillation basics
Earn a Certificate of Completion with this liveProject series
Ferhat Özgur Catak
5 weeks · 4-6 hours per week average · INTERMEDIATE
includes 5 liveProjects
liveProject $49.99 $69.99 self-paced learning

Step into the realm of machine learning, where adversarial attacks are a growing concern. In each of the liveProjects in this series, you’ll play the role of either an attacker penetrating a classification model or a cybersecurity professional protecting the model from malicious attacks. Using Convolutional Neural Network (CNN) architecture, you’ll build a deep learning model to predict patterns in images. You’ll generate untargeted and targeted adversarial ML attacks using the highly popular attack libraries CleverHans and Adversarial Robustness Toolbox (ART). Then, you’ll implement mitigation based on adversarial training and defensive distillation strategies. Throughout this series, you’ll gain firsthand experience of what goes into malicious ML attacks and into building models that defend against them.

These projects are designed for learning purposes and are not complete, production-ready applications or solutions.
How to get your FREE
Certificate of Completion
  • Finish all the projects in this liveProject series
  • Take a short online test
  • Answer questions from the liveProject mentor
That's it!

here's what's included

Project 1 Traffic Sign Classifier

Tackle a fundamental step in many AI applications: building a simple image classification model. Using Convolutional Neural Network (CNN) layers, you’ll create the deep learning model that will serve as the victim of the adversarial machine learning attacks in later projects, train it on a publicly accessible traffic sign dataset, and implement it using Python.
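In the project you’ll build the classifier with Keras layers; as a rough illustration of what those layers compute, here is a minimal NumPy sketch of a single convolution + ReLU + max-pooling stage. The image and the edge-detecting filter are made up for the example; the real model learns its filters from the traffic sign data.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation: what a single Conv2D filter computes."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling, as in a MaxPooling2D layer."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
img = rng.random((32, 32))                   # one grayscale "traffic sign"
edge = np.array([[1.0, -1.0], [1.0, -1.0]])  # hand-made vertical-edge filter
feat = np.maximum(conv2d(img, edge), 0.0)    # convolution + ReLU activation
pooled = max_pool(feat)                      # 2x2 downsampling
```

Stacking several such stages, then flattening into dense layers with a softmax output, gives the CNN architecture you’ll express in a few lines of Keras.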

$29.99 FREE
try now
Project 2 Untargeted Attacks on Your Classifier

Play the villain! Your goal is to mislead an existing DL model into making incorrect predictions. First, you’ll load your dataset, learn its structure, and examine a few random samples using OpenCV or Matplotlib. Using NumPy, you’ll prepare your dataset for training. Then, it’s attack time: using FGSM and PGD, you’ll generate malicious inputs that push the model to predict any class other than the correct one. Finally, you’ll enlist NumPy again to evaluate the success ratio of your attacks.
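In the project you’ll call FGSM and PGD from an attack library, but the core idea fits in a few lines. This is a minimal NumPy sketch of an untargeted FGSM step, with a toy linear model standing in for the CNN: perturb each input in the direction that increases the loss, then measure the fraction of predictions that flip.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy linear "model" (logits = x @ W) stands in for the trained CNN.
n, d, k = 200, 10, 3
W = rng.normal(size=(d, k))
x = rng.normal(size=(n, d))
y = softmax(x @ W).argmax(axis=1)   # treat the model's own predictions as labels

def fgsm_untargeted(x, y, W, eps):
    """One FGSM step: move x in the sign of the loss gradient."""
    onehot = np.eye(W.shape[1])[y]
    # Cross-entropy gradient wrt x for a linear model: (p - onehot) @ W.T
    grad = (softmax(x @ W) - onehot) @ W.T
    return x + eps * np.sign(grad)

x_adv = fgsm_untargeted(x, y, W, eps=0.5)
y_adv = softmax(x_adv @ W).argmax(axis=1)
success_ratio = (y_adv != y).mean()   # fraction of inputs now misclassified
```

PGD is the same idea applied iteratively with a smaller step size and a projection back into the allowed perturbation budget; the library attack classes handle the gradient computation through the real CNN for you.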

$29.99 $19.99
add to cart
Project 3 Targeted Attacks on Your Classifier

Mount a targeted attack! Your goal is to mislead an existing DL model into predicting a specific incorrect target class. First, you’ll load your dataset, learn its structure, and examine a few random samples using OpenCV or Matplotlib. Next, you’ll prepare your dataset for training using NumPy. Then you’ll generate malicious input using three different attack classes (PGD, BIM, and Carlini & Wagner) from the highly popular CleverHans attack library. Finally, you’ll enlist NumPy again to evaluate the success ratio of your attacks.
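A targeted attack flips the sign of the untargeted update: instead of increasing the loss for the true class, you decrease the loss with respect to the class you want the model to predict. This NumPy sketch shows a BIM-style iterated targeted attack on the same toy linear model (a stand-in for the CNN and the CleverHans attack classes):

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

n, d, k = 200, 10, 3
W = rng.normal(size=(d, k))
x = rng.normal(size=(n, d))
y = softmax(x @ W).argmax(axis=1)
target = (y + 1) % k            # push every input toward the "next" class

def bim_targeted(x, target, W, eps=0.1, steps=10):
    """Iterated targeted attack: step *down* the loss wrt the target class."""
    onehot = np.eye(W.shape[1])[target]
    x_adv = x.copy()
    for _ in range(steps):
        grad = (softmax(x_adv @ W) - onehot) @ W.T
        x_adv = x_adv - eps * np.sign(grad)   # minus: unlike the untargeted case
    return x_adv

x_adv = bim_targeted(x, target, W)
hit_ratio = (softmax(x_adv @ W).argmax(axis=1) == target).mean()
```

The success ratio here counts only predictions that land on the chosen target class, which is a stricter criterion than the untargeted “anything but the truth.”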

$29.99 $19.99
add to cart
Project 4 Adversarial Training

Protect your model by implementing adversarial training, the easiest method of safeguarding against adversarial attacks. You’ll load your dataset, learn its structure, and examine a few random samples using OpenCV or Matplotlib. Using NumPy, you’ll prepare your dataset for training, then you’ll use FGSM to generate malicious input for both untargeted and targeted attacks on a trained DL model. For each type of attack, you’ll evaluate your model before and after you apply adversarial training-based mitigation methods, gauging the success of your defense.
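The core loop of adversarial training is simple: at each training step, generate adversarial examples against the current model and train on them alongside the clean batch. This NumPy sketch shows the procedure on a made-up two-cluster dataset with a softmax-regression stand-in for the CNN; in the project you’ll do the equivalent with Keras and FGSM, comparing accuracy on adversarial inputs before and after the defense.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Two Gaussian clusters stand in for the image classes.
n = 200
x = np.vstack([rng.normal(loc=(-2.0, 0.0), size=(n, 2)),
               rng.normal(loc=(2.0, 0.0), size=(n, 2))])
y = np.array([0] * n + [1] * n)
onehot = np.eye(2)[y]

def fgsm(x, onehot, W, eps):
    return x + eps * np.sign((softmax(x @ W) - onehot) @ W.T)

def train(x, onehot, epochs=200, lr=0.1, adversarial=False, eps=1.0):
    W = np.zeros((2, 2))
    for _ in range(epochs):
        xb, yb = x, onehot
        if adversarial:
            # Augment the batch with FGSM examples built from the current W.
            xb = np.vstack([x, fgsm(x, onehot, W, eps)])
            yb = np.vstack([onehot, onehot])
        grad = xb.T @ (softmax(xb @ W) - yb) / len(xb)
        W -= lr * grad
    return W

W_plain = train(x, onehot)
W_robust = train(x, onehot, adversarial=True)

def acc(W, xs):
    return (softmax(xs @ W).argmax(axis=1) == y).mean()

acc_clean = acc(W_plain, x)
acc_adv_plain = acc(W_plain, fgsm(x, onehot, W_plain, 1.0))
acc_adv_robust = acc(W_robust, fgsm(x, onehot, W_robust, 1.0))
```

On the real CNN, the before/after comparison of `acc_adv_plain` versus `acc_adv_robust` is how you’ll gauge the success of the defense for each attack type.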

$29.99 $19.99
add to cart
Project 5 Defensive Distillation

Make your model less vulnerable to exploitation with defensive distillation, a mitigation strategy in which a smaller, less complex student model learns the critical features captured by a larger, more capable teacher model, using the teacher’s softened outputs to improve the student’s accuracy and robustness. In this liveProject, you’ll use a pre-trained model to train your student model without distillation, generate malicious input using FGSM, and evaluate the undefended model. Then, you’ll train a teacher model on the same dataset, train your student model with the same training set and the teacher model using distillation, generate malicious input, and evaluate the defended student model, comparing the results with and without distillation.
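The mechanism behind distillation is the temperature-scaled softmax: dividing the teacher’s logits by a temperature T > 1 produces softened class probabilities for the student to learn from, and deploying the student back at T = 1 saturates its softmax, which blunts gradient-based attacks such as FGSM. A minimal NumPy sketch, with made-up teacher logits standing in for the trained teacher network:

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(z, T=1.0):
    z = z / T                            # temperature scaling of the logits
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Pretend these are the teacher's logits for a batch of 100 images, 3 classes.
teacher_logits = rng.normal(scale=5.0, size=(100, 3))

T = 20.0
hard = softmax(teacher_logits)        # ordinary predictions (T = 1)
soft = softmax(teacher_logits, T=T)   # softened labels used to train the student
```

Note that temperature changes the confidence, not the ranking: `soft` and `hard` agree on the predicted class for every sample, but the softened labels carry information about class similarities that the student distills.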

$29.99 $19.99
add to cart

book resources

When you start each of the projects in this series, you'll get full access to the following books for 90 days.

The free project does not include full access to these Manning books. Purchase the full series to unlock this access in the free project, too!

project author

Ferhat Özgur Catak

Ferhat Ozgur Catak is an associate professor of computer science at the University of Stavanger, Norway. He has experience developing machine/deep learning models for cybersecurity, security for deep learning models, and data privacy using statistical and cryptographic methods. He has also been involved in several national, international, and NATO-wide security and research activities.

Prerequisites

This liveProject series is for intermediate Python programmers who know the basics of data science. To begin this series, you’ll need to be familiar with the following:

TOOLS
  • Intermediate Python (file processing, data frames, data processing)
  • Basics of Jupyter Notebook
  • Basics of Matplotlib
  • Basics of scikit-learn
  • Basics of Keras/TensorFlow
  • Basic NumPy
TECHNIQUES
  • Basic knowledge of neural networks
  • Basic concepts in machine learning
  • Basic data visualization

you will learn

In this liveProject series, you’ll learn to generate malicious input that targets deep learning models, and to mitigate those attacks using adversarial training and defensive distillation.

  • Implementing data preprocessing for image data
  • Training deep learning models on the preprocessed data
  • Generating targeted and untargeted malicious inputs
  • Hardening deep learning models against malicious inputs

features

Self-paced
You choose the schedule and decide how much time to invest as you build your project.
Project roadmap
Each project is divided into several achievable steps.
Get Help
While within the liveProject platform, get help from other participants and our expert mentors.
Compare with others
For each step, compare your deliverable to the solutions by the author and other participants.
Certificate of Completion
Earn a certificate of completion, including a badge to display on your resume, LinkedIn page, and other social media, after you complete this series.
book resources
Get full access to select books for 90 days. Permanent access to excerpts from Manning products is also included, as well as references to other resources.