Manning Early Access Program (MEAP)
Read chapters as they are written, get the finished eBook as soon as it’s ready, and receive the pBook long before it's in bookstores.
By default, general-purpose LLMs are not optimized for specific domains and business goals. Using techniques like specialized fine-tuning, pruning unnecessary neural components, and knowledge distillation, you can rearchitect your models to cost less, run faster, and deliver more accurate results.
Rearchitecting LLMs: Structural techniques for efficient models turns research from the latest AI papers into production-ready practices for domain-specific model optimization. As you work through this practical book, you’ll perform hands-on surgery on popular open-source models like Llama-3, Gemma, and Qwen to create cost-effective local Small Language Models (SLMs). Along the way, you’ll learn how to combine behavioral analysis with structural modifications, identify and remove parts that don’t contribute to your model’s goals, and even use “fair pruning” to reduce model bias at the neuron level.
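To give a flavor of the kind of model surgery involved, the sketch below prunes decoder layers from a small open-source model with Hugging Face transformers. It is a minimal illustration under stated assumptions, not code from the book: the model name and the keep-every-other-layer heuristic were chosen only to keep the example short.

# A minimal, illustrative sketch of one structural technique: depth pruning,
# i.e. dropping decoder layers while keeping the rest of the model intact.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B"  # assumption: any small decoder-only model will do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Keep a subset of the decoder layers (here, every second one; a real pipeline
# would pick layers based on how little they contribute to the target task) ...
kept_layers = torch.nn.ModuleList(
    layer for i, layer in enumerate(model.model.layers) if i % 2 == 0
)
# ... re-index the attention modules so the KV cache still lines up, and update the config.
for new_idx, layer in enumerate(kept_layers):
    layer.self_attn.layer_idx = new_idx
model.model.layers = kept_layers
model.config.num_hidden_layers = len(kept_layers)

# The pruned model still generates text, but it typically needs fine-tuning to recover quality.
inputs = tokenizer("Structural pruning removes", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))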
what's inside
Universal techniques for customizing model architecture
End-to-end pipelines for model rearchitecting
Reducing bias and improving explainability with model “cleanup”
Replacing external LLMs with local SLMs
about the reader
For practicing AI, ML, and data engineers who know Python.
about the author
Pere Martra is an Applied AI Engineer, the creator of the Optipfair model efficiency library, an international AI speaker, and a maintainer of widely used LLM courses and popular open-source tools. He is the author of Large Language Models Projects.
Introductory offer: save 50% for a limited time!
eBook (pdf, ePub, online): $23.99 (regular price $47.99; you save $24.00, 50%)
print (includes eBook): $29.99 (regular price $59.99; you save $30.00, 50%)
with subscription: free or 50% off
pro: $24.99 per month ($599.99)
access to all Manning books, MEAPs, liveVideos, liveProjects, and audiobooks!