The authoritative guide for Reinforcement Learning From Human Feedback, alignment, and post-training LLMs.
Aligning AI models to human preferences helps them become safer, smarter, easier to use, and tuned to the exact style the creator desires. Reinforcement Learning From Human Feedback (RLHF) is the process of using human responses to a model’s output to shape its alignment, and therefore its behavior. In The RLHF Book, author Nathan Lambert blends diverse perspectives from fields like philosophy and economics with the core mathematics and computer science of RLHF to provide a practical guide you can use to apply RLHF to your own models.
In The RLHF Book you’ll discover:
- How today’s most advanced AI models are taught from human feedback
- How large-scale preference data is collected and how to improve your data pipelines
- A comprehensive overview with derivations and implementations for the core policy-gradient methods used to train AI models with reinforcement learning (RL)
- Direct Preference Optimization (DPO), direct alignment algorithms, and simpler methods for preference finetuning
- How RLHF methods led to the current reinforcement learning from verifiable rewards (RLVR) renaissance
- Tricks used in industry to round out models, from product, character, or personality training to AI feedback and more
- How to approach evaluation and how evaluation has changed over the years
- Standard recipes for post-training that combine methods like instruction tuning with RLHF
- Behind-the-scenes stories from building open models like Llama-Instruct, Zephyr, Olmo, and Tülu
After ChatGPT used RLHF to become production-ready, this foundational technique exploded in popularity. In The RLHF Book, AI expert Nathan Lambert gives a true industry insider's perspective on modern RLHF training pipelines and their trade-offs. Using hands-on experiments and mini-implementations, Nathan clearly and concisely introduces the alignment techniques that can transform a generic base model into a human-friendly tool.