Voice assistants and speech-based applications need a reliable understanding of human commands. Deep learning models trained on audio data can learn directly from raw sound and accurately classify what users are saying.
In this liveProject, you’ll become a data scientist prototyping a speech command system for a medical organization. Your company’s application will allow hospital patients to adjust their room TVs with their voice, and you’ve been tasked with building the core: a machine learning pipeline that labels audio inputs with the corresponding command words. You’ll tackle preprocessing and feature engineering methods for audio signals, extend your existing machine learning skills to audio data, and get hands-on experience with TensorFlow, Keras, and other powerful Python deep learning tools.
This project is designed for learning purposes and is not a complete, production-ready application or solution.
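To give a feel for the kind of pipeline involved, here is a minimal sketch of a Keras convolutional classifier that maps a spectrogram "image" to a command label. The command vocabulary, input shape, and layer sizes below are illustrative assumptions, not the project's actual specification:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical command vocabulary; the real project defines its own labels.
COMMANDS = ["on", "off", "up", "down", "mute"]

def build_model(input_shape=(124, 129, 1), num_classes=len(COMMANDS)):
    """A small CNN that treats a spectrogram as a 2-D image."""
    return keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Training such a model requires labeled spectrograms, which is exactly the preprocessing and feature engineering work this project walks through.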
This liveProject is for confident Python programmers with existing experience in machine learning. It will hone your skills in handling audio, a less common data type, and in constructing deep learning pipelines. To begin this liveProject, you will need to be familiar with:
- Intermediate Python
- Intermediate NumPy
- Intermediate pandas
- Intermediate Matplotlib
- Intermediate TensorFlow
- Intermediate Keras
- Basics of data analysis
- Basics of supervised machine learning
- Classification problems and suitable quality metrics
- Supervised training of artificial neural networks
What you will learn
In this liveProject, you’ll master foundational skills in audio data processing that provide an excellent starting point for voice-controlled assistants, conversational UIs, and other speech apps.
- Audio processing with Python
- Representing audio data
- Feature engineering for machine learning with audio
- Computing and visualizing spectrograms
- Convolutional neural network architectures for audio
- GPU-accelerated deep learning
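As a taste of the spectrogram work listed above, here is a minimal sketch of computing a magnitude spectrogram with plain NumPy via a windowed short-time Fourier transform. The frame length, hop size, and synthetic test tone are illustrative choices, not values mandated by the project:

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram: windowed frames -> real FFT per frame."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # One row per time frame, one column per frequency bin (0 .. Nyquist).
    return np.abs(np.fft.rfft(frames, axis=1))

# A 1-second synthetic 440 Hz tone sampled at 16 kHz stands in for a recording.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
spec = spectrogram(tone)
print(spec.shape)  # (n_frames, frame_len // 2 + 1)
```

The resulting 2-D array can be plotted with Matplotlib’s `imshow` for visualization, or stacked into batches as input features for a convolutional network.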