Help wildlife organizations automatically identify and monitor elephants in images! In this liveProject series, you’ll build a custom CNN to classify Asian vs. African elephants, boost accuracy with transfer learning using Xception and MobileNet, and implement YOLOv8 segmentation for precise detection and localization. By the end, you’ll have a complete elephant image analysis pipeline and hands-on experience with CNNs, transfer learning, and object segmentation—skills you can apply to wildlife monitoring and real-world computer vision projects.
In this liveProject, you’ll join up with WildVision AI, a startup specializing in wildlife monitoring and conservation solutions. WildVision AI needs your help developing a camera system that can spot the difference between Asian and African elephants in order to track conservation efforts and migratory patterns. You’ve decided to build a deep learning-powered image classifier using convolutional neural networks. You’ll start by curating a well-labeled dataset of elephant images, making sure the Asian and African elephant classes are clearly distinguished and balanced for training. Next, you’ll preprocess the data. With clean inputs ready, you’ll design and build a CNN-based model tailored for binary classification. Finally, you’ll evaluate the model and iterate for improvement by tuning hyperparameters, fine-tuning layers, and adjusting augmentations.
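A CNN for this kind of binary task can be quite compact. The sketch below, assuming a Keras/TensorFlow setup, shows one plausible architecture; the layer sizes, dropout rate, and `build_elephant_cnn` name are illustrative choices, not the project's prescribed solution.

```python
# Minimal sketch of a CNN binary classifier for Asian vs. African elephants.
# Assumes Keras/TensorFlow; layer widths and dropout are illustrative, not tuned.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_elephant_cnn(img_size=(224, 224)):
    """Small convolutional network for binary classification (hypothetical)."""
    model = models.Sequential([
        layers.Input(shape=(*img_size, 3)),
        layers.Rescaling(1.0 / 255),             # preprocessing: scale pixels to [0, 1]
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),                     # light regularization
        layers.Dense(1, activation="sigmoid"),   # single sigmoid unit for two classes
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_elephant_cnn()
```

A single sigmoid output with binary cross-entropy is the standard setup when there are exactly two classes; the model's output is then interpreted as the probability of one class.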
In this liveProject, you’ll work alongside EcoSanAI, a startup dedicated to wildlife monitoring and conservation. Their goal is to develop a smart wildlife monitoring system that can tell Asian and African elephants apart, helping track migration routes and conservation progress. To achieve this, you’ll develop a deep learning-based image classifier that uses transfer learning to accurately differentiate between Asian and African elephants. Begin by preparing a labeled dataset of Asian and African elephants with proper preprocessing and splits. Next, select a pre-trained CNN like Xception or MobileNet and adapt it for binary classification by adjusting the final layers and adding regularization. Train and fine-tune the model with callbacks to optimize performance, and finish by testing on a held-out set to evaluate accuracy and identify improvements.
In this liveProject, you’ll team up with WildVisionTech, a startup focused on wildlife conservation and monitoring. WildVisionTech needs a system that can measure and monitor aspects of elephants photographed by its camera traps, including tusk length, size, and skin texture. To assist them, you’ll build a deep learning-based segmentation system using the powerful YOLOv8 segmentation model. Start by preparing a YOLO-compatible dataset with annotated images, label files, and a data.yaml configuration, ensuring proper structure and verification. Next, install and configure Ultralytics YOLOv8 to support segmentation tasks on your dataset. Train the model while monitoring validation performance and saving the best weights, then test it on unseen samples to evaluate generalization. Finally, visualize segmentation results by comparing the model’s predictions with ground truth annotations.
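The dataset-verification step above is easy to automate. The sketch below, using only the standard library, checks the conventional Ultralytics layout (`images/train`, `labels/train`, and so on, with a `data.yaml` at the root); the `verify_yolo_dataset` helper and the exact directory names are assumptions, and your project's layout may differ.

```python
# Hedged sketch of a structure check for a YOLO-style segmentation dataset.
# Standard library only; directory layout follows the usual Ultralytics
# convention but is an assumption, not a requirement of the project.
from pathlib import Path

def verify_yolo_dataset(root):
    """Return a list of problems found in a YOLO-style dataset directory."""
    root = Path(root)
    problems = []
    if not (root / "data.yaml").is_file():
        problems.append("missing data.yaml")
    for split in ("train", "val"):
        images = root / "images" / split
        labels = root / "labels" / split
        if not images.is_dir():
            problems.append(f"missing directory: images/{split}")
            continue
        for img in sorted(images.glob("*.jpg")):
            # every image needs a matching YOLO label file with the same stem
            label = labels / (img.stem + ".txt")
            if not label.is_file():
                problems.append(f"no label for {img.name}")
    return problems
```

Once the layout checks out, training with the `ultralytics` package is typically a short call along the lines of `YOLO("yolov8n-seg.pt").train(data="data.yaml", epochs=50, imgsz=640)`, which tracks validation metrics and saves the best weights for you.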
This liveProject series is for aspiring machine learning engineers, AI-curious developers, and students looking to dive deep into deep learning through a fun, real-world project.