How can we enhance the rule-following ability of LLMs? 🤔 We propose Meta Rule-Following Fine-Tuning (Meta-RFFT) to improve the cross-task transferability of rule-following abilities. We construct a dataset of 88 length-generalization tasks and show that Meta-RFFT helps models outperform baselines in both downstream fine-tuning and few-shot prompting scenarios. 👋 Check out our new paper: Training Large Language Models to be Better Rule Followers!