arXiv preprint – SliceGPT: Compress Large Language Models by Deleting Rows and Columns

In this episode, we discuss SliceGPT: Compress Large Language Models by Deleting Rows and Columns by Saleh Ashkboos, Maximilian L. Croci, Marcelo Gennari do Nascimento, Torsten Hoefler, and James Hensman. The paper introduces SliceGPT, a post-training sparsification method that shrinks large language models by replacing each weight matrix with a smaller dense one, thereby reducing the embedding dimension. The method can remove up to 25% of parameters in certain models with minimal loss in task performance. It builds on a computational-invariance property of transformer networks: the authors show that sliced models run faster and on fewer GPUs without any additional optimization, and they release their code in a GitHub repository.
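The two ideas the summary mentions, computational invariance and slicing the embedding dimension, can be illustrated with a toy NumPy sketch. This is not the paper's implementation; the layer, the PCA-based choice of rotation, and the 25% slicing fraction are illustrative assumptions standing in for the full method.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, n = 8, 16, 32                  # toy embedding, hidden, and batch sizes
W = rng.standard_normal((d, h))      # stand-in for one linear layer's weights
X = rng.standard_normal((n, d))      # stand-in for calibration activations

# Computational invariance: for any orthogonal Q, rotating the activations
# by Q and the weights by Q.T leaves the layer's output unchanged.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # random orthogonal matrix
assert np.allclose(X @ W, (X @ Q) @ (Q.T @ W))

# Slicing: pick the rotation from a PCA of the activations, then delete the
# embedding directions carrying the least variance (here, 25% of them).
_, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
keep = int(d * 0.75)                 # keep 75% of the embedding dimension
Q_k = Vt.T[:, :keep]                 # d x keep projection matrix

W_sliced = Q_k.T @ W                 # smaller weight matrix: keep x h
X_sliced = X @ Q_k                   # activations in the sliced space
approx = X_sliced @ W_sliced         # approximates X @ W with fewer parameters
print(W.shape, "->", W_sliced.shape)
```

Because the rotation is folded into the weights, the sliced model is just a smaller dense network, which is why it needs no extra optimization to run faster.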
