In this series of liveProjects, you’ll build a custom search engine that’s capable of quickly and accurately sourcing documents from the CDC’s document database. Your search engine will improve the CDC’s ability to handle future pandemics, with the capability to aggregate and search unstructured text data from records of earlier outbreaks. Each liveProject in this series tackles a different aspect of searching with natural language processing so you can pick and choose the specific skills you need.
These projects are designed for learning purposes and are not complete, production-ready applications or solutions.
How to get your FREE Certificate of Completion
Finish all the projects in this liveProject series
Take a short online test
Answer questions from the liveProject mentor
That's it!
liveProject mentor Rohit Agarwal shares what he likes about the Manning liveProject platform.
here's what's included
Project 1 Text Search with spaCy and scikit-learn
In this liveProject, you’ll explore and assess essential methods for unstructured text search in order to identify which is the best for building a search engine. You’ll preprocess the data for this task using the spaCy library, and then experiment with implementing both a TF-IDF search and an inverted index search to find relevant information.
Project 2 Implement Semantic Search with ML and BERT
In this liveProject, you’ll apply premade transfer learning models to improve the context understanding of your search. You’ll implement BERT (Bidirectional Encoder Representations from Transformers) to create a semantic search engine. BERT excels at many traditional NLP tasks, like search, summarization, and question answering. It will allow your search engine to find documents with terms that are contextually related to what your user is searching for, rather than just exact word matches.
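The core ranking idea can be sketched independently of any particular model: embed the query and each document as vectors, then rank documents by cosine similarity. The `encode()` below is a toy stand-in (a tiny hand-made vocabulary, not a real model); in the project it would be replaced by a BERT encoder.

```python
# Semantic-search ranking sketch. encode() is a toy stand-in for a BERT
# encoder: it maps "flu" and "influenza" to the same dimension, mimicking
# how learned embeddings place contextually related terms close together.
import numpy as np

VOCAB = {"flu": 0, "influenza": 0, "outbreak": 1, "vaccine": 2, "budget": 3}

def encode(text):
    vec = np.zeros(4)
    for word in text.lower().split():
        if word in VOCAB:
            vec[VOCAB[word]] += 1.0
    return vec

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def semantic_search(query, docs):
    """Return docs ranked by cosine similarity to the query embedding."""
    q = encode(query)
    scored = [(cosine(q, encode(d)), d) for d in docs]
    return [d for score, d in sorted(scored, reverse=True)]

docs = ["influenza outbreak response", "annual budget report"]
print(semantic_search("flu vaccine", docs))
```

Note that the query "flu vaccine" ranks the "influenza" document first even though the word "flu" never appears in it, which is exactly the behavior a BERT-based semantic search provides at scale.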
Project 3 Building a Search API with Elasticsearch and BERT
In this liveProject, you’ll bring together multiple tools and models to construct a production-grade smart search engine. You’ll combine off-the-shelf Elasticsearch keyword search with your own semantic search API built on transformers. You’ll start by setting up an Elasticsearch Docker container and indexing documents into it, then quickly move on to boosting search relevance with BERT. Finally, you’ll set up a Flask API to serve a BERT model and look into customizing your search engine for your own particular topics of interest.
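One common way to combine keyword search with BERT-based relevance in Elasticsearch is a `script_score` query over a `dense_vector` field holding document embeddings. The sketch below only constructs the query body; the field names (`text`, `embedding`) are assumptions for illustration, not the project's exact schema.

```python
# Sketch of an Elasticsearch query body combining a keyword match with
# BERT-based scoring. cosineSimilarity is Elasticsearch's built-in script
# function for dense_vector fields; adding 1.0 keeps scores non-negative.
# Field names "text" and "embedding" are illustrative assumptions.

def build_search_body(query_text, query_vector, size=10):
    return {
        "size": size,
        "query": {
            "script_score": {
                "query": {"match": {"text": query_text}},
                "script": {
                    "source": "cosineSimilarity(params.query_vector, 'embedding') + 1.0",
                    "params": {"query_vector": query_vector},
                },
            }
        },
    }

body = build_search_body("influenza outbreak", [0.1, 0.2, 0.3])
print(body["query"]["script_score"]["script"]["source"])
```

In use, a body like this would be passed to the Elasticsearch client's `search` call, with the Flask API computing `query_vector` from the user's query via the BERT model.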
Project 4 UI for a Search API with Flask and Bootstrap
In this liveProject, you’ll use Docker, the Docker Compose command-line tool, and Bootstrap to construct an easy-to-use UI for your search engine. You’ll set up a single-node Elasticsearch cluster, put together your Bootstrap UI, and then create a customized Docker image of the Flask API that serves your transformers model and frontend.
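A Docker Compose setup for this kind of stack typically pairs a single-node Elasticsearch service with the Flask app. The sketch below is illustrative only; image tags, service names, and ports are placeholders, not the project's exact configuration.

```yaml
# Illustrative docker-compose.yml for a single-node Elasticsearch cluster
# plus the Flask app serving the model and UI. Values are placeholders.
version: "3.8"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
  app:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - elasticsearch
```

With a file like this in place, `docker compose up` brings up both services together.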
Olesya Bondarenko has a multidisciplinary background and experience in natural language processing (NLP), machine learning, deep learning, statistics, time-series analysis, process automation, engineering R&D and new product prototyping. Currently, she is a data scientist at Strong Analytics, a leading provider of customized AI solutions, where she specializes in developing NLP systems. Prior to joining Strong, she worked with several startups leading research and development efforts in the areas of conversational AI, data-leveraged scientific discovery solutions, and a variety of automated analytic and data collection tools. Olesya received her PhD in electrical engineering from the University of California San Diego where she designed and prototyped novel optical devices, as well as custom instrumentation for their analysis.
Prerequisites
This liveProject is for intermediate Python programmers familiar with basic manipulation of strings, lists, and dictionaries. To begin this liveProject, you will need to be familiar with the following:
TOOLS
Intermediate Python
Basic understanding of conda environments
Basic scikit-learn
Basic NumPy
TECHNIQUES
Reading data from and writing to JSON files
Manipulating tuples, lists, and dictionaries using loops and comprehensions
Natural language processing: tokenization, lemmatization, and cleaning of text data
Basic NumPy array operations
you will learn
In this liveProject you will learn to implement the simple but effective term frequency-inverse document frequency (TF-IDF) search method, which scores documents by how often query terms appear in them, weighted by how rare those terms are across the whole collection.
Use Python’s built-in JSON library to store multi-level text data
Create, update and transform lists and dictionaries with text data
Apply Python’s spaCy library to perform essential natural language processing steps
Compute TF-IDF tables and apply term frequency search to them
Calculate cosine similarity with scikit-learn
Build an inverted index, an essential element of a search engine
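The last three skills above can be sketched together with scikit-learn: build a TF-IDF table, rank documents against a query by cosine similarity, and construct a simple inverted index. This is a minimal sketch on toy documents, not the project's solution.

```python
# TF-IDF search sketch: a TF-IDF matrix via scikit-learn, cosine
# similarity for ranking, and an inverted index mapping each term
# to the ids of the documents containing it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "influenza outbreak in the region",
    "vaccine distribution report",
    "report on influenza vaccine trials",
]

# TF-IDF table: rows are documents, columns are terms.
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)

# Rank documents against a query by cosine similarity.
query_vec = vectorizer.transform(["influenza vaccine"])
scores = cosine_similarity(query_vec, tfidf).ravel()
ranked = scores.argsort()[::-1]

# Inverted index: term -> set of document ids containing it.
inverted = {}
for doc_id, doc in enumerate(docs):
    for term in doc.lower().split():
        inverted.setdefault(term, set()).add(doc_id)

print([docs[i] for i in ranked])
```

The document matching both query terms ranks first, and the inverted index lets the engine jump straight to candidate documents for a term instead of scanning the whole collection.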
features
Self-paced
You choose the schedule and decide how much time to invest as you build your project.
Project roadmap
Each project is divided into several achievable steps.
Get Help
While within the liveProject platform, get help from other participants and our expert mentors.
Compare with others
For each step, compare your deliverable to the solutions by the author and other participants.
Certificate of Completion
Earn a certificate of completion, including a badge to display on your resume, LinkedIn page, and other social media, after you complete this series.
book resources
Get full access to select books for 90 days. Permanent access to excerpts from Manning products is also included, as well as references to other resources.