Man Wai Winnie Yeung

Winnie Yeung is a full-stack senior data scientist at Visa in the San Francisco Bay Area, working on developing and deploying risk-related machine learning solutions. She earned her master’s in analytics at Georgia Institute of Technology and has 3 years of experience working on natural language processing projects in the investment industry. She actively contributes to the open-source community by creating a neural machine translation package on PyPI, as well as giving talks at PyCon Hong Kong.

projects by Man Wai Winnie Yeung

End-to-End Deep Learning for Opinion Mining

4 weeks · 4-5 hours per week average · INTERMEDIATE

In this series of liveProjects, you’ll use data science and natural language processing techniques to perform the kind of real-world work routinely conducted by data scientists in the marketing sector. You’ll build an effective solution that can scrape, analyze, and monitor chatter on a Reddit forum to determine the opinions of your company’s customers. Each project in this series can stand alone or be worked through together, as you get hands-on experience with data collection, data exploration, transfer learning, and building effective data dashboards.

Deploy a Streamlit Dashboard

1 week · 4-6 hours per week · INTERMEDIATE

In this liveProject, you’ll build an interactive dashboard that will allow the marketing team at your company to monitor any mention of your company’s products on Reddit. You’ll start by visualizing relevant subreddit data, then build a model to monitor your mentions on Reddit. Finally, you’ll deploy the Streamlit dashboard on Heroku. Streamlit offers a simple and easy way to build a highly interactive and beautiful dashboard with just a few lines of code, while Heroku offers free web hosting for data apps.
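The kind of data a mention-monitoring dashboard displays can be sketched in plain Python. This is a minimal, hypothetical example (the comment data, product name, and function names are illustrative, not from the project): it groups scraped comments that mention a product by UTC date, producing the table a Streamlit chart would plot.

```python
from collections import Counter
from datetime import datetime, timezone

# Hypothetical sample of scraped Reddit comments: (UTC timestamp, text).
comments = [
    (1618880000, "Loving the new FooPhone camera"),
    (1618966400, "FooPhone battery drains too fast"),
    (1618970000, "Switched back from FooPhone to my old phone"),
]

def daily_mentions(comments, keyword):
    """Count comments mentioning `keyword`, grouped by UTC date."""
    counts = Counter()
    for ts, text in comments:
        if keyword.lower() in text.lower():
            day = datetime.fromtimestamp(ts, tz=timezone.utc).date().isoformat()
            counts[day] += 1
    return dict(counts)

mentions = daily_mentions(comments, "FooPhone")
```

In a Streamlit app, a result like `mentions` could feed a chart call such as `st.bar_chart`, with the whole script then deployed to Heroku behind a small Procfile.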

Transfer Learning with Transformers

1 week · 4-6 hours per week · INTERMEDIATE

In this liveProject, you’ll use transformer-based deep learning models to predict the tags of subreddit posts, helping your company understand what its customers are saying about it. Transformers are state-of-the-art, large-scale language models pretrained on huge corpora of text, and are capable of capturing the complexities of grammar remarkably well. You’ll fine-tune a pretrained model on your own data set and tune its hyperparameters for the best results.
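The underlying task, assigning a tag to a piece of text, can be illustrated with a toy bag-of-words classifier before reaching for transformers. This sketch is purely illustrative (the labelled posts and tag names are invented); a transformer replaces the raw word counts below with pretrained contextual representations that are then fine-tuned on your data.

```python
from collections import Counter, defaultdict

# Toy labelled posts standing in for tagged subreddit data (hypothetical).
train = [
    ("battery life is terrible and it overheats", "complaint"),
    ("the screen cracked after one drop", "complaint"),
    ("best camera I have ever used, love it", "praise"),
    ("super fast shipping and great support", "praise"),
]

def fit(examples):
    """Count word frequencies per tag (a bag-of-words model)."""
    counts = defaultdict(Counter)
    for text, tag in examples:
        counts[tag].update(text.lower().split())
    return counts

def predict(counts, text):
    """Score each tag by summed word counts; return the highest-scoring tag."""
    words = text.lower().split()
    return max(counts, key=lambda tag: sum(counts[tag][w] for w in words))

model = fit(train)
prediction = predict(model, "battery overheats constantly")
```

Where this toy model only matches exact words, a fine-tuned transformer generalizes to paraphrases and word order, which is why it is the tool of choice in the project.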

Cleaning and Exploring Text Data

1 week · 2-4 hours per week · INTERMEDIATE

In this liveProject, you’ll clean and analyze data scraped from Reddit to determine customer opinions of your products within a set time period. You’ll utilize common natural language processing techniques such as stemming, tokenization, and latent Dirichlet allocation (LDA) to discover patterns in people’s opinions, and then visualize your results and summarize your findings.
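The first two preprocessing steps mentioned above can be sketched with the standard library alone. This is a rough stand-in, not the project's implementation: the suffix list is deliberately crude (a real project would use a proper stemmer such as NLTK's `PorterStemmer`), and the resulting token counts are what would later feed an LDA topic model.

```python
import re

SUFFIXES = ("ing", "edly", "ed", "ly", "es", "s")  # crude, illustrative only

def tokenize(text):
    """Lowercase the text and split it into alphabetic tokens."""
    return re.findall(r"[a-z]+", text.lower())

def stem(word):
    """Strip the first matching suffix (a rough stand-in for a real stemmer)."""
    for suf in SUFFIXES:
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[: -len(suf)]
    return word

comment = "Customers are complaining loudly about delayed shipments."
tokens = [stem(w) for w in tokenize(comment)]
```

Normalizing "complaining" and "delayed" to their stems lets LDA see them as the same terms across comments, which is what makes topic patterns emerge.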

Web-scraping for Text Threads

1 week · 4-6 hours per week · INTERMEDIATE

In this liveProject, you’ll harvest customer opinions about your company’s products from the comments left on the subreddit for your company, and store them in a database for future analysis. You’ll connect to the Reddit API, identify and clean up the data fields you need, and store the data in MongoDB.
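The "identify and clean up the data fields" step can be sketched as follows. The listing shape and field names (`id`, `author`, `body`, `created_utc`) follow Reddit's public JSON API, but the sample payload and function name here are hypothetical, and a real project might use a wrapper library such as PRAW rather than parsing raw JSON.

```python
import json

def extract_comments(listing):
    """Pull only the needed fields from a Reddit-style comment listing."""
    records = []
    for child in listing["data"]["children"]:
        d = child["data"]
        records.append({
            "comment_id": d["id"],
            "author": d["author"],
            "body": d["body"].strip(),
            "created_utc": d["created_utc"],
        })
    return records

# Hypothetical API response, trimmed to the fields this sketch uses.
raw = json.loads("""
{"data": {"children": [
  {"kind": "t1", "data": {"id": "abc1", "author": "user1",
   "body": "  The new model is great!  ", "created_utc": 1618880000}}
]}}
""")

records = extract_comments(raw)
# With pymongo, the cleaned records could then be persisted, e.g.:
#   MongoClient()["reddit"]["comments"].insert_many(records)
```

Keeping only a few well-named fields per comment makes the MongoDB collection easy to query later, for example when filtering mentions by `created_utc` in the dashboard project.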