arXiv paper – Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate


In this episode, we discuss Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate by Yubo Wang, Xiang Yue, and Wenhu Chen. The paper introduces Critique Fine-Tuning (CFT), an approach inspired by human critical thinking in which language models are trained to critique noisy responses rather than simply imitate correct ones. Trained on a 50K-sample dataset whose critiques were generated by GPT-4o, CFT delivers consistent improvements of 4–10% over traditional supervised fine-tuning (SFT) across various math benchmarks and datasets. The results show that CFT is both efficient and competitive, matching or outperforming models trained with much larger datasets and more compute, thereby effectively enhancing the reasoning capabilities of language models.
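To make the contrast with standard SFT concrete, here is a minimal sketch of how the two kinds of training examples might be assembled. The field names (`query`, `reference_answer`, `noisy_response`, `critique`) and the prompt wording are illustrative assumptions, not the paper's exact data schema.

```python
# Sketch: SFT imitates a reference answer; CFT critiques a noisy candidate.
# Field names below are hypothetical, not taken from the paper's dataset.

def build_sft_example(record: dict) -> dict:
    # Standard SFT: the model learns to reproduce the correct answer.
    return {
        "input": record["query"],
        "target": record["reference_answer"],
    }

def build_cft_example(record: dict) -> dict:
    # CFT: the model sees the query plus a noisy candidate response
    # and learns to produce a critique of that response instead.
    prompt = (
        f"Question:\n{record['query']}\n\n"
        f"Candidate response:\n{record['noisy_response']}\n\n"
        "Critique the candidate response above."
    )
    return {
        "input": prompt,
        "target": record["critique"],  # e.g. a critique generated by GPT-4o
    }


if __name__ == "__main__":
    record = {
        "query": "What is 17 * 24?",
        "reference_answer": "408",
        "noisy_response": "17 * 24 = 398",
        "critique": "The multiplication is wrong: 17 * 24 = 408, not 398.",
    }
    print(build_sft_example(record))
    print(build_cft_example(record))
```

Under this framing, the only change from SFT is the supervision signal: the loss is computed on the critique tokens rather than on the correct answer itself.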

