Nicole Koenigstein

Nicole Königstein currently works as data science and technology lead at impactvise, an ESG analytics company, and as a quantitative researcher and technology lead at Quantmate, an innovative FinTech startup that leverages alternative data as part of its predictive modeling strategy. She’s a regular speaker, sharing her expertise at conferences such as ODSC Europe. In addition, she teaches Python, machine learning, and deep learning, and holds workshops at conferences including the Women in Tech Global Conference.

books & projects by Nicole Koenigstein

Transformers in Action

  • MEAP began August 2023
  • Last updated May 2024
  • Publication in October 2025 (estimated)
  • ISBN 9781633437883
  • 325 pages (estimated)

Transformers in Action adds the revolutionary transformers architecture to your AI toolkit. You’ll dive into the essential details of the model’s architecture, with all complex concepts explained through easy-to-understand examples and clever analogies, from sock sorting to skiing! Even complex foundational concepts start with practical applications, so you never have to struggle with abstract theory. The book includes an extensive code repository that lets you start exploring and experimenting with different LLMs right away.

In this practical guide, you’ll start by applying transformers to fundamental NLP tasks like text summarization and text classification. Then, you’ll push transformers further with tasks like generating text, honing text generation with reinforcement learning, developing multimodal models, and few-shot learning. You’ll discover one-of-a-kind advice on prompt engineering, as well as tried-and-tested methods for optimizing and tuning large language models. Plus, you’ll find unique coverage of AI ethics topics such as mitigating bias and responsible usage.
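
For a feel of how quickly you can get going, here is a minimal sketch of the kind of starting point the book’s code repository builds on, using Hugging Face pipelines for summarization and classification; the checkpoint names below are common defaults chosen for illustration, not necessarily the ones the book uses.

```python
# Minimal sketch: summarization and text classification with Hugging Face pipelines.
# The checkpoints are illustrative defaults, not necessarily the ones used in the book.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
classifier = pipeline("sentiment-analysis")  # falls back to a default DistilBERT checkpoint

report = (
    "The company cut carbon emissions by 12% year over year while expanding "
    "renewable energy use across all of its data centers and supply chain."
)

summary = summarizer(report, max_length=30, min_length=10)[0]["summary_text"]
print(summary)
print(classifier(report)[0])  # e.g. {'label': 'POSITIVE', 'score': 0.99}
```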

Math for Machine Learning

4 weeks · 6-8 hours per week on average · INTERMEDIATE

Put on your data scientist hat for this series of liveProjects, where you’ll work at Finative, an analytics company that uses environmental, social, and governance (ESG) factors to measure companies’ sustainability, a brand new, eco-focused trend that's changing the way businesses think about investing. In each liveProject, you’ll focus on different mathematical approaches used in machine learning (ML) and deep learning (DL), including Bayes' theorem, principal component analysis (PCA), cosine similarity, latent semantic analysis, and backpropagation, as you help Finative accomplish its goal of increasing its own sustainability.

You’ll develop a method to reduce the runtime of ML models, and you’ll save digital storage space by finding relevant keywords in order to determine whether documents should be discarded or saved. To increase efficiency, you’ll save training time by using a pre-trained language model to classify a sustainability report. Then, you’ll analyze the sentiment of tweets in order to detect greenwashing, the practice of spreading disinformation about a company’s sustainability. When you’re finished with these liveProjects, you’ll have a solid understanding of the mathematical basics of machine learning, strong programming and data science skills, and familiarity with sustainability.
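
As a taste of the first of those building blocks, here is a small worked example of Bayes’ theorem applied to the greenwashing scenario; every probability below is an assumed illustration value, not a figure from the liveProjects.

```python
# Bayes' theorem sketch: P(greenwashing | negative tweet sentiment).
# All numbers are illustrative assumptions, not data from the liveProjects.
p_greenwashing = 0.10          # prior: share of companies assumed to greenwash
p_neg_given_green = 0.70       # P(negative sentiment | greenwashing)
p_neg_given_clean = 0.20       # P(negative sentiment | no greenwashing)

# Total probability of observing negative sentiment
p_neg = (p_neg_given_green * p_greenwashing
         + p_neg_given_clean * (1 - p_greenwashing))

# Posterior via Bayes' theorem
p_green_given_neg = p_neg_given_green * p_greenwashing / p_neg
print(f"P(greenwashing | negative sentiment) = {p_green_given_neg:.2f}")  # 0.28
```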

Detect Sentiment with Transformers

1 week · 6-8 hours per week · INTERMEDIATE

Finative, the environmental, social, and governance (ESG) analytics company you work for, analyzes a high volume of data using advanced natural language processing (NLP) techniques to provide its clients with valuable insights about their sustainability. Your CEO has concerns that some of the companies Finative analyzes may be greenwashing: spreading disinformation about their sustainability in order to appear more environmentally conscious than they actually are.

As a data scientist for Finative, your task is to validate the sustainability claims these companies make by gathering and analyzing tweets about them. You’ll compute conditional probability with Bayes’ theorem, by hand, to better understand your model’s performance through metrics such as recall and precision. You’ll learn an efficient way to prepare your data from different sources and merge it into a single dataset of tweets ready for classification. To successfully classify the tweets, you’ll use a pre-trained large language model and fine-tune it using the Hugging Face ecosystem as well as hyperopt and Ray Tune. You’ll use TensorBoard and Weights & Biases to analyze and track your experiments, and you’ll analyze the tweets to determine whether enough negative sentiment exists to indicate that the company you analyzed has been greenwashing its data.
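
The sketch below shows the overall shape of that fine-tuning step with the Hugging Face Trainer; the checkpoint and the two toy tweets are placeholders rather than the liveProject’s data, and hyperparameter search and experiment tracking hook into the same objects.

```python
# Sketch: fine-tuning a pre-trained model to classify tweet sentiment with the
# Hugging Face Trainer. The checkpoint and the toy tweets are placeholders.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

tweets = Dataset.from_dict({
    "text": ["Proud of our new zero-emission delivery fleet!",
             "Their 'green' marketing doesn't match their emissions record."],
    "label": [1, 0],
})
tweets = tweets.map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length", max_length=64),
    batched=True,
)

args = TrainingArguments(
    output_dir="greenwashing-sentiment",
    num_train_epochs=1,
    per_device_train_batch_size=2,
    logging_steps=1,
    report_to=["tensorboard"],  # or ["wandb"] to log to Weights & Biases
)

trainer = Trainer(model=model, args=args, train_dataset=tweets)
trainer.train()
# Hyperparameter search plugs in here: Trainer.hyperparameter_search(backend="ray")
# drives Ray Tune (which in turn supports hyperopt-style search algorithms); it
# expects the Trainer to be built with a model_init function instead of a model.
```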

Analyze Reports with Hugging Face

1 week · 6-8 hours per week · INTERMEDIATE

You’re a data scientist at Finative, an environmental, social, and governance (ESG) analytics company that analyzes a high volume of data using advanced natural language processing (NLP) techniques in order to provide its clients with insights for sustainable investing. Recently, your CEO has decided that Finative should increase its own financial sustainability. Your task is to classify the sustainability reports of a publicly traded company in an efficient and sustainable way.

You’ll learn the fundamental mathematics of transformers, including backpropagation, matrix multiplication, and attention mechanisms, empowering you to optimize your model’s performance, improve its efficiency, and handle undesirable model predictions. You’ll use Python’s pdfplumber library to extract text from a sustainability report for quick delivery to your CEO. To further increase efficiency, you’ll save training time by using a language model pre-trained on ESG data, building a pipeline around it to classify the sustainability report.
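
A minimal sketch of that extract-then-classify flow is shown below; the PDF path and the ESG-domain checkpoint name are illustrative placeholders, not the exact ones used in the liveProject.

```python
# Sketch: extract text from a sustainability report with pdfplumber, then
# classify it with a Hugging Face pipeline. Path and checkpoint are placeholders.
import pdfplumber
from transformers import pipeline

with pdfplumber.open("sustainability_report.pdf") as pdf:
    text = "\n".join(page.extract_text() or "" for page in pdf.pages)

# A classifier pre-trained on ESG-related text; swap in whichever checkpoint
# the project actually provides.
classifier = pipeline("text-classification", model="nbroad/ESG-BERT")

# Classify paragraph by paragraph to stay within the model's input limit.
for paragraph in filter(None, (p.strip() for p in text.split("\n\n"))):
    print(classifier(paragraph, truncation=True)[0])
```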

Latent Semantic Analysis for NLP

1 week · 8-10 hours per week · INTERMEDIATE

At Finative, an ESG analytics company, you’re a data scientist who helps measure the sustainability of publicly traded companies by analyzing environmental, social, and governance (ESG) factors so Finative can report back to its clients. Recently, the CEO has decided that Finative should increase its own sustainability. You’ve been assigned the task of saving digital storage space by storing only relevant data. You’ll test different methods, including keyword retrieval with TF-IDF, computing cosine similarity, and latent semantic analysis, to find relevant keywords in documents and determine whether the documents should be discarded or saved for use in training your ML models.
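
The scikit-learn sketch below shows how those pieces fit together on two toy documents (not Finative’s corpus): TF-IDF keyword weights, cosine similarity between documents, and truncated SVD as the latent semantic analysis step.

```python
# Sketch: TF-IDF keywords, cosine similarity, and latent semantic analysis (LSA).
# The two documents are toy examples, not Finative's corpus.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "The company cut emissions and invested in renewable energy.",
    "Quarterly revenue grew while marketing costs stayed flat.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)

# Top TF-IDF keywords for the first document
terms = vectorizer.get_feature_names_out()
weights = tfidf[0].toarray().ravel()
print(sorted(zip(weights, terms), reverse=True)[:3])

# Pairwise cosine similarity between the documents
print(cosine_similarity(tfidf))

# LSA: project the TF-IDF matrix onto a low-dimensional "topic" space
lsa = TruncatedSVD(n_components=2, random_state=0)
print(lsa.fit_transform(tfidf))
```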

Principal Component Analysis

1 week · 6-8 hours per week · INTERMEDIATE

Step into the role of data scientist at Finative, an analytics company that uses environmental, social, and governance (ESG) factors to measure companies’ sustainability, a brand new, eco-focused trend that's changing the way businesses think about investing. To provide its clients with the valuable insights they need in order to develop their investment strategies, Finative analyzes a high volume of data using advanced natural language processing (NLP) techniques.

Recently, your CEO has decided that Finative should increase its own sustainability. Your task is to develop a method to optimize the runtime for the company’s machine learning models. You’ll apply principal component analysis (PCA) to the data in order to speed up the ML models. To classify handwritten digits and prove your theory that PCA speeds up ML algorithms, you’ll implement logistic regression with scikit-learn. You’ll use the explained variance ratio to gain an understanding of the trade-offs between speed and accuracy. When you’re done, you’ll be able to present your CEO with proof of PCA’s efficiency in optimizing runtime.
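
A sketch of that experiment with scikit-learn’s bundled handwritten-digits dataset might look like the following; the split, solver settings, and variance threshold are illustrative choices rather than the liveProject’s exact setup.

```python
# Sketch: PCA to compress the handwritten-digit features, then logistic regression.
# Dataset split and parameters are illustrative, not the liveProject's exact setup.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Keep enough principal components to explain roughly 95% of the variance
pca = PCA(n_components=0.95).fit(X_train)
X_train_pca, X_test_pca = pca.transform(X_train), pca.transform(X_test)
print(f"components kept: {pca.n_components_} of {X_train.shape[1]}")
print(f"explained variance captured: {pca.explained_variance_ratio_.sum():.3f}")

clf = LogisticRegression(max_iter=5000).fit(X_train_pca, y_train)
print(f"test accuracy on PCA features: {clf.score(X_test_pca, y_test):.3f}")
```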