Interpretable AI

Building explainable machine learning systems
Ajay Thampi
  • MEAP began June 2020
  • Publication in Early 2022 (estimated)
  • ISBN 9781617297649
  • 275 pages (estimated)
  • printed in black & white

print book: $49.99 (pBook + eBook + liveBook). Receive a print copy shipped to your door, plus the eBook in Kindle, ePub, and PDF formats, plus liveBook, our enhanced eBook format accessible from any web browser.
Additional shipping charges may apply
FREE domestic shipping on orders of three or more print books

eBook: $39.99 (3 formats + liveBook). Our eBooks come in Kindle, ePub, and DRM-free PDF formats, plus liveBook, our enhanced eBook format accessible from any web browser.

I think this is a valuable book, both for beginners and for more experienced users.

Kim Falk Jørgensen
AI models can become so complex that even experts have difficulty understanding them—and forget about explaining the nuances of a cluster of novel algorithms to a business stakeholder! Fortunately, there are techniques and best practices that will help make your AI systems transparent and interpretable. Interpretable AI is filled with cutting-edge techniques that will improve your understanding of how your AI models function. Focused on practical methods that you can implement with Python, it teaches you to open up the black box of machine learning so that you can combat data leakage and bias, improve trust in your results, and ensure compliance with legal requirements. You’ll learn to identify when to utilize models that are inherently transparent, and how to mitigate opacity when you’re facing a problem that demands the predictive power of a hard-to-interpret deep learning model.

about the technology

How deep learning models produce their results is often a complete mystery, even to their creators. These AI "black boxes" can hide unknown issues—including data leakage, the replication of human bias, and difficulties complying with legal requirements such as the EU’s "right to explanation." State-of-the-art interpretability techniques have been developed to understand even the most complex deep learning models, allowing humans to follow an AI’s methods and to better detect when it has made a mistake.

about the book

Interpretable AI is a hands-on guide to interpretability techniques that open up the black box of AI. This practical guide simplifies cutting-edge research into transparent and explainable AI, delivering practical methods you can easily implement with Python and open source libraries. With examples from all major machine learning approaches, this book demonstrates why some approaches to AI are so opaque, teaches you to identify the patterns your model has learned, and presents best practices for building fair and unbiased models. When you’re done, you’ll be able to improve your AI’s performance during training, and build robust systems that counteract errors from bias, data leakage, and concept drift.

what's inside

  • Why AI models are hard to interpret
  • Interpreting white box models such as linear regression, decision trees, and generalized additive models
  • Partial dependence plots, LIME, SHAP, and Anchors, plus techniques such as saliency mapping, network dissection, and representation learning (a minimal SHAP sketch follows this list)
  • What fairness means and how to mitigate bias in AI systems
  • Implementing robust, GDPR-compliant AI systems
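
The techniques listed above all have open source Python implementations. As a purely illustrative sketch (not taken from the book), and assuming the shap and scikit-learn packages are installed, this is roughly how SHAP can summarize which features drive a tree ensemble's predictions:

# Illustrative sketch only: uses the open source shap and scikit-learn
# libraries; the book's own worked examples may differ.
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor

# Train an opaque tree-ensemble model on a standard tabular dataset
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Compute SHAP values, which attribute each prediction to individual features
explainer = shap.TreeExplainer(model)
sample = X.sample(500, random_state=0)  # subsample to keep the example fast
shap_values = explainer.shap_values(sample)

# Global summary: which features matter most, and in which direction
shap.summary_plot(shap_values, sample)

The same pattern, fitting an opaque model and then probing it with an explainer, carries over to the other model-agnostic techniques in the list.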

about the reader

For data scientists and engineers familiar with Python and machine learning.

about the author

Ajay Thampi is a machine learning engineer at a large tech company, focused primarily on responsible AI and fairness. He holds a PhD, with research focused on signal processing and machine learning. He has published papers in leading conferences and journals on reinforcement learning, convex optimization, and classical machine learning techniques applied to 5G cellular networks.


This book provides great insight into the interpretability step of developing robust AI systems.

Izhar Haq

A really great introduction to interpretability of ML models, as well as great examples of how you can apply it to your own models.

Jonathan Wood

Techniques are consistently presented with excellent examples.

James J. Byleckie

A fine book towards making ML models less opaque.

Alain Couniot

Read this to understand what the model actually says about the underlying data.

Shashank Polasa

Everybody working with ML models should be able to interpret (and check) results. This book will help you with that.

Kai Gellien