arXiv preprint – The Truth is in There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction


In this episode, we discuss The Truth is in There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction by Pratyusha Sharma, Jordan T. Ash, and Dipendra Misra. The paper presents Layer-Selective Rank Reduction (LASER), a method that improves Transformer-based Large Language Models (LLMs) by selectively removing the higher-order components of their weight matrices after training, replacing chosen matrices with low-rank approximations without adding parameters or training data. Extensive experiments show that LASER significantly boosts the performance of several LLMs across multiple datasets. The authors also work toward a theoretical understanding of LASER, examining the conditions under which the intervention is most beneficial and why it works.
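To make the core idea concrete, here is a minimal sketch of SVD-based rank reduction applied to a single weight matrix, assuming PyTorch. The function name, the layer path in the usage comment, and the rank fraction are illustrative assumptions, not the authors' implementation or their reported best settings.

```python
import torch

def low_rank_approx(weight: torch.Tensor, rank_fraction: float = 0.01) -> torch.Tensor:
    """Return a low-rank approximation of `weight`, keeping only the top
    singular components (a fraction of the maximum possible rank)."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    k = max(1, int(rank_fraction * S.numel()))
    # Reconstruct using only the k largest singular values/vectors.
    return (U[:, :k] * S[:k]) @ Vh[:k, :]

# Hypothetical usage: replace one MLP projection matrix in a chosen layer.
# layer = model.transformer.h[20].mlp.c_proj
# layer.weight.data = low_rank_approx(layer.weight.data, rank_fraction=0.01)
```

In the paper's framing, both the target layer and the degree of rank reduction are treated as choices to be searched over, which is where the "layer-selective" part of LASER comes from.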

