arXiv preprint – One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation

In this episode, we discuss One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation by Fabian Paischer, Lukas Hauzenberger, Thomas Schmied, Benedikt Alkin, Marc Peter Deisenroth, Sepp Hochreiter. The paper introduces Explained Variance Adaptation (EVA), a method that improves fine-tuning of foundation models by using singular value decomposition to initialize LoRA matrices more effectively. EVA also redistributes ranks so that the initialization captures maximum explained variance before task-specific fine-tuning begins. This leads to faster convergence and better performance across diverse domains such as language, vision, and reinforcement learning.
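To make the idea concrete, here is a minimal sketch of SVD-based LoRA initialization in the spirit of EVA. It assumes a single linear layer, a hypothetical helper `eva_init_lora`, and a plain batched SVD of collected input activations; the actual method in the paper (incremental SVD over minibatches and cross-layer rank redistribution) is more involved.

```python
import torch

def eva_init_lora(activations: torch.Tensor, rank: int):
    """Sketch: initialize the LoRA A matrix from the top right-singular
    vectors of a layer's input activations.

    activations: (num_tokens, in_features) batch of layer inputs.
    Returns A with shape (rank, in_features) and the explained-variance
    ratio of each retained component.
    """
    # SVD of the activation matrix; Vh rows are directions in input space.
    U, S, Vh = torch.linalg.svd(activations, full_matrices=False)
    explained = (S ** 2) / (S ** 2).sum()  # variance explained per component
    A = Vh[:rank]                          # top-r directions initialize LoRA A
    return A, explained[:rank]

# Hypothetical usage: collect activations for one linear layer, then
# initialize the adapter so it starts aligned with the dominant input directions.
if __name__ == "__main__":
    torch.manual_seed(0)
    acts = torch.randn(512, 768)           # e.g. 512 tokens, hidden size 768
    A, var_ratio = eva_init_lora(acts, rank=8)
    B = torch.zeros(768, 8)                 # B starts at zero, so the adapter is initially a no-op
    print(A.shape, var_ratio.sum().item())
```

In the paper, the explained-variance scores computed this way are also used to decide how many ranks each weight matrix receives, concentrating capacity where the input activations have the most structure.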

