AI models can become so complex that even experts have difficulty understanding them—and forget about explaining the nuances of a cluster of novel algorithms to a business stakeholder! Fortunately, there are techniques and best practices that will help make your AI systems transparent and interpretable. Interpretable AI is filled with cutting-edge techniques that will improve your understanding of how your AI models function. Focused on practical methods that you can implement with Python, it teaches you to open up the black box of machine learning so that you can combat data leakage and bias, improve trust in your results, and ensure compliance with legal requirements. You’ll learn to identify when to utilize models that are inherently transparent, and how to mitigate opacity when you’re facing a problem that demands the predictive power of a hard-to-interpret deep learning model.
about the technology
How deep learning models produce their results is often a complete mystery, even to their creators. These AI "black boxes" can hide unknown issues—including data leakage, the replication of human bias, and difficulties complying with legal requirements such as the EU’s "right to explanation." State-of-the-art interpretability techniques have been developed to understand even the most complex deep learning models, allowing humans to follow an AI’s methods and to better detect when it has made a mistake.
about the book
Interpretable AI is a hands-on guide to interpretability techniques that open up the black box of AI. This practical guide simplifies cutting-edge research into transparent and explainable AI, delivering practical methods you can easily implement with Python and open source libraries. With examples from all major machine learning approaches, this book demonstrates why some approaches to AI are so opaque, teaches you to identify the patterns your model has learned, and presents best practices for building fair and unbiased models. When you’re done, you’ll be able to improve your AI’s performance during training, and build robust systems that counteract errors from bias, data leakage, and concept drift.
what's inside
Why AI models are hard to interpret
Interpreting white box models such as linear regression, decision trees, and generalized additive models
Techniques such as partial dependence plots, LIME, SHAP, Anchors, saliency mapping, network dissection, and representation learning
What fairness is and how to mitigate bias in AI systems
Implementing robust AI systems that are GDPR-compliant
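To give a flavor of the techniques listed above, here is a minimal, dependency-free sketch of partial dependence. The tiny model and dataset are invented for illustration and are not taken from the book; the idea is simply to average a model's predictions over the data while sweeping one feature, revealing that feature's marginal effect.

```python
def model(x1, x2):
    # An opaque model we want to interpret: the prediction
    # depends on both inputs, including an interaction term.
    return 3 * x1 + x1 * x2

# A small "dataset" of observed (x1, x2) pairs (illustrative only).
data = [(0, 1), (1, 2), (2, 0), (3, 1)]

def partial_dependence_x1(grid):
    """For each candidate value of x1, average the model's prediction
    over the observed values of x2. The resulting curve shows x1's
    marginal effect on the output."""
    x2_values = [x2 for _, x2 in data]
    return [
        sum(model(v, x2) for x2 in x2_values) / len(x2_values)
        for v in grid
    ]

curve = partial_dependence_x1([0, 1, 2, 3])
print(curve)  # marginal effect of x1, averaged over the observed x2 values
```

Here the averaged x2 happens to equal 1, so the curve works out to `4 * x1`: the partial dependence plot would show a straight line, telling us the model's output rises roughly four units per unit of x1 on this data.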
about the reader
For data scientists and engineers familiar with Python and machine learning.
about the author
Ajay Thampi is a machine learning engineer at a large tech company, where he focuses primarily on responsible AI and fairness. He holds a PhD, with doctoral research in signal processing and machine learning, and has published papers at leading conferences and journals on reinforcement learning, convex optimization, and classical machine learning techniques applied to 5G cellular networks.