arXiv Preprint – Low-rank Adaptation of Large Language Model Rescoring for Parameter-Efficient Speech Recognition


In this episode we discuss Low-rank Adaptation of Large Language Model Rescoring for Parameter-Efficient Speech Recognition
by Yu Yu, Chao-Han Huck Yang, Jari Kolehmainen, Prashanth G. Shivakumar, Yile Gu, Sungho Ryu, Roger Ren, Qi Luo, Aditya Gourav, I-Fan Chen, Yi-Chieh Liu, Tuan Dinh, Ankur Gandhe, Denis Filimonov, Shalini Ghosh, Andreas Stolcke, Ariya Rastrow, Ivan Bulyko. The paper presents LoRB, a low-rank adaptation method for adapting a pretrained neural language model used for speech recognition rescoring to new domains. Rather than fine-tuning all of the model's weights, LoRB trains only small low-rank decomposition matrices, so far fewer parameters are updated. The experimental results show that LoRB trains faster than full fine-tuning while maintaining performance on the target domain.
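For readers unfamiliar with low-rank adaptation, the sketch below illustrates the general idea the summary refers to: the pretrained weights are frozen and only a small trainable low-rank correction is learned on top of them. This is a minimal, generic illustration in PyTorch; the class name `LoRALinear` and the default rank and scaling values are assumptions for the example and are not taken from the authors' LoRB implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative sketch: frozen linear layer plus a trainable low-rank update W + (alpha/r) * B @ A."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # Low-rank factors: A projects the input down to `rank`, B projects back up.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen pretrained path plus the small trainable low-rank correction.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# Hypothetical usage: wrap a projection layer of a pretrained rescoring LM,
# then train only the LoRA parameters on the new domain.
layer = LoRALinear(nn.Linear(768, 768))
```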

