Voice assistants and speech-based applications need a reliable understanding of human commands. Deep learning models trained on audio data can learn directly from sound, making them a strong fit for classifying what humans are saying.
In this liveProject, you’ll become a data scientist prototyping a speech command system for a medical organization. Your company’s application will allow hospital patients to adjust their room TVs with their voice, and you’ve been tasked with building the core: a machine learning pipeline that labels audio inputs with the correct command words. You’ll tackle preprocessing and feature engineering methods for audio signals, extend your existing machine learning skills to audio data, and get hands-on experience with TensorFlow, Keras, and other powerful Python deep learning tools. A rough sketch of this kind of pipeline follows below.
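As a preview of the kind of pipeline you’ll build, the sketch below turns a raw waveform into a log-magnitude spectrogram and feeds it to a small Keras CNN that predicts a command label. The command vocabulary, clip length, and network layout here are illustrative assumptions for this sketch, not the project’s actual specification or solution.

```python
# Minimal sketch: waveform -> spectrogram -> CNN classifier (assumed setup).
import tensorflow as tf

COMMANDS = ["up", "down", "on", "off"]  # hypothetical command vocabulary
SAMPLE_RATE = 16000                     # assume 1-second clips at 16 kHz


def waveform_to_spectrogram(waveform):
    """Convert a 1-D waveform tensor into a log-magnitude spectrogram."""
    spectrogram = tf.signal.stft(waveform, frame_length=255, frame_step=128)
    spectrogram = tf.abs(spectrogram)
    # Add a channel dimension so the result fits a Conv2D input.
    return tf.math.log(spectrogram + 1e-6)[..., tf.newaxis]


def build_model(input_shape):
    """Small CNN that maps a spectrogram to one of the command labels."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(len(COMMANDS), activation="softmax"),
    ])


if __name__ == "__main__":
    # Dummy one-second waveform standing in for a recorded patient command.
    waveform = tf.random.normal([SAMPLE_RATE])
    spectrogram = waveform_to_spectrogram(waveform)

    model = build_model(spectrogram.shape)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # Probabilities over the assumed COMMANDS list for this one clip.
    print(model(spectrogram[tf.newaxis, ...]))
```

In the project itself you would train this kind of model on labeled recordings of the command words rather than random noise; the sketch only shows how the audio preprocessing and Keras model fit together.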
This project is designed for learning purposes and is not a complete, production-ready application or solution.