It’s Green Tuesday: SAVE HALF on all eBooks, including MEAPs. Everything else is 35% off! Only at manning.com.
Why wait for Black Friday and Cyber Monday? Every year, Manning celebrates Green Tuesday, when you can SAVE HALF on all earth-friendly eBooks!
New MEAP! The RLHF Book
The authoritative guide to reinforcement learning from human feedback, alignment, and LLM post-training.
Aligning AI models to human preferences helps them become safer, smarter, easier to use, and tuned to the exact style the creator desires. Reinforcement Learning from Human Feedback (RLHF) is the process of using human responses to a model’s output to shape its alignment, and therefore its behavior. In The RLHF Book, author Nathan Lambert blends diverse perspectives from fields like philosophy and economics with the core mathematics and computer science of RLHF to provide a practical guide you can use to apply RLHF to your own models.
A comprehensive overview, with derivations and implementations, of the core policy-gradient methods used to train AI models with reinforcement learning (RL)
Direct Preference Optimization (DPO), direct alignment algorithms, and simpler methods for preference fine-tuning (a brief illustrative sketch follows this list)
How RLHF methods led to the current reinforcement learning from verifiable rewards (RLVR) renaissance
And much more!
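To give a flavor of the direct alignment methods above, here is a minimal, hypothetical sketch of a DPO-style preference loss in PyTorch. It is not code from the book; the function name, the beta value, and the made-up log-probabilities are all illustrative assumptions.

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Hypothetical DPO-style loss over a batch of preference pairs.

    Each argument is a 1-D tensor of summed token log-probabilities for the
    chosen or rejected response, under the policy being trained or under a
    frozen reference model.
    """
    # Implicit reward: how much more the policy favors a response than the reference does.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Push the margin between chosen and rejected rewards to be large and positive.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Example with made-up numbers for a batch of four preference pairs.
pc = torch.tensor([-12.3, -8.1, -15.0, -9.7])
pr = torch.tensor([-14.0, -9.5, -14.2, -11.1])
rc = torch.tensor([-12.8, -8.4, -14.6, -10.0])
rr = torch.tensor([-13.5, -9.2, -14.5, -10.8])
print(dpo_loss(pc, pr, rc, rr))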
Altogether, you’ll be at the front of the line as cutting-edge AI training moves from the top AI companies into the hands of everyone interested in AI for their business or personal use cases! [Read more]