In this series of liveProjects, you’ll build a custom search engine that can quickly and accurately retrieve documents from the CDC’s document database. Your search engine will improve the CDC’s ability to handle future pandemics by aggregating and searching unstructured text data from records of earlier outbreaks. Each liveProject in the series tackles a different aspect of searching with natural language processing, so you can pick and choose the specific skills you need.
In this liveProject, you’ll use Docker, the Docker Compose command-line tool, and Bootstrap to construct an easy-to-use UI for your search engine. You’ll set up a single-node Elasticsearch cluster, put together your Bootstrap UI, and then create a customized Docker image of the Flask API that serves your transformers model and frontend.
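As a rough sketch of the kind of Flask API the Docker image would package, the snippet below serves a Bootstrap-based page plus a small search endpoint backed by a transformer model. The route names, the templates/ layout, the tiny in-memory corpus, and the sentence-transformers checkpoint are illustrative assumptions, not the liveProject’s actual code.

```python
# Minimal sketch of a containerized Flask API: a Bootstrap frontend at "/"
# and a model-backed "/search" endpoint. All names here are assumptions.
from flask import Flask, jsonify, render_template, request
from sentence_transformers import SentenceTransformer, util

app = Flask(__name__)
model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint

# In the real project the corpus would come from the Elasticsearch cluster.
documents = ["Example CDC bulletin one.", "Example CDC bulletin two."]
doc_embeddings = model.encode(documents, convert_to_tensor=True)

@app.route("/")
def home():
    # templates/index.html would hold the Bootstrap search form (assumed path).
    return render_template("index.html")

@app.route("/search")
def search():
    query = request.args.get("q", "")
    query_emb = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_emb, doc_embeddings, top_k=3)[0]
    return jsonify(
        [{"text": documents[h["corpus_id"]], "score": float(h["score"])} for h in hits]
    )

if __name__ == "__main__":
    # Binding to 0.0.0.0 lets the app accept requests from outside the container.
    app.run(host="0.0.0.0", port=5000)
```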
In this liveProject, you’ll bring together multiple tools and models to construct a production-grade smart search engine. You’ll combine Elasticsearch’s off-the-shelf keyword search with your own semantic search API built on transformers. You’ll start by setting up and indexing an Elasticsearch Docker container, then move on to boosting search relevance with BERT. Finally, you’ll set up a Flask API to serve a BERT model and look into customizing your search engine for your own topics of interest.
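To give a sense of how the two pieces might fit together, the sketch below pulls keyword candidates from Elasticsearch and re-ranks them with a BERT-style model. The index name (cdc-documents), the text field, the model checkpoint, and the elasticsearch-py 8.x client API are assumptions made for illustration.

```python
# Hybrid search sketch: Elasticsearch supplies keyword candidates,
# a BERT-family model re-ranks them by semantic similarity.
from elasticsearch import Elasticsearch
from sentence_transformers import SentenceTransformer, util

es = Elasticsearch("http://localhost:9200")          # assumed local cluster
model = SentenceTransformer("all-MiniLM-L6-v2")       # assumed checkpoint

def hybrid_search(query: str, top_k: int = 5):
    # Step 1: keyword retrieval from an assumed "cdc-documents" index.
    resp = es.search(
        index="cdc-documents",
        query={"match": {"text": query}},
        size=50,
    )
    candidates = [hit["_source"]["text"] for hit in resp["hits"]["hits"]]
    if not candidates:
        return []
    # Step 2: semantic re-ranking with transformer embeddings.
    query_emb = model.encode(query, convert_to_tensor=True)
    cand_embs = model.encode(candidates, convert_to_tensor=True)
    hits = util.semantic_search(query_emb, cand_embs, top_k=top_k)[0]
    return [(candidates[h["corpus_id"]], h["score"]) for h in hits]
```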
In this liveProject, you’ll apply pretrained transfer learning models to improve your search engine’s understanding of context. You’ll implement BERT (Bidirectional Encoder Representations from Transformers) to create a semantic search engine. BERT excels at many traditional NLP tasks, such as search, summarization, and question answering. It will allow your search engine to find documents containing terms that are contextually related to what your user is searching for, rather than relying on exact word matches.
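The sketch below shows the basic idea of semantic search with a pretrained BERT-family encoder: documents and queries are embedded into the same vector space, and results are ranked by cosine similarity rather than word overlap. The sentence-transformers library, the all-MiniLM-L6-v2 checkpoint, and the sample documents are assumptions for illustration.

```python
# Semantic search sketch with a pretrained sentence encoder (assumed library/model).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Hand washing reduces transmission of respiratory viruses.",
    "The agency published new quarantine guidelines.",
    "Flu season typically peaks between December and February.",
]
doc_embeddings = model.encode(documents, convert_to_tensor=True)

def semantic_search(query: str, top_k: int = 2):
    """Return the documents whose embeddings are closest to the query."""
    query_embedding = model.encode(query, convert_to_tensor=True)
    # Cosine similarity captures contextual relatedness, not exact word occurrence.
    hits = util.semantic_search(query_embedding, doc_embeddings, top_k=top_k)[0]
    return [(documents[h["corpus_id"]], h["score"]) for h in hits]

# "sanitizing hands" surfaces the hand-washing document even though the words differ.
print(semantic_search("sanitizing hands"))
```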
In this liveProject, you’ll explore and assess essential methods for searching unstructured text in order to identify which is best suited to building a search engine. You’ll preprocess the data with the spaCy library, then experiment with implementing both a TF-IDF search and an inverted-index search to find relevant information.
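As a minimal sketch of the two approaches being compared, the snippet below preprocesses a toy corpus with spaCy, ranks documents with TF-IDF cosine similarity, and looks up documents through a simple inverted index. The en_core_web_sm model, scikit-learn’s TfidfVectorizer, and the sample documents are assumed for illustration and may differ from the liveProject’s setup.

```python
# Sketch: spaCy preprocessing, TF-IDF ranking, and an inverted-index lookup.
from collections import defaultdict

import spacy
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

nlp = spacy.load("en_core_web_sm")  # assumed model; install it separately

def preprocess(text: str) -> list[str]:
    """Lemmatize, lowercase, and drop stop words and punctuation."""
    doc = nlp(text)
    return [tok.lemma_.lower() for tok in doc if not (tok.is_stop or tok.is_punct)]

documents = [
    "Influenza outbreak reported in several states.",
    "CDC guidance on coronavirus testing and quarantine.",
    "Vaccination campaign reduces measles cases.",
]

# --- TF-IDF search: rank documents by cosine similarity to the query ---
vectorizer = TfidfVectorizer(tokenizer=preprocess, token_pattern=None)
doc_matrix = vectorizer.fit_transform(documents)

def tfidf_search(query: str):
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    return sorted(zip(documents, scores), key=lambda x: x[1], reverse=True)

# --- Inverted index: map each token to the set of documents containing it ---
index = defaultdict(set)
for doc_id, text in enumerate(documents):
    for token in preprocess(text):
        index[token].add(doc_id)

def inverted_index_search(query: str):
    hits = set().union(*(index.get(tok, set()) for tok in preprocess(query)))
    return [documents[i] for i in sorted(hits)]

print(tfidf_search("coronavirus quarantine")[0])
print(inverted_index_search("outbreak"))
```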