arXiv preprint – LLM in a flash: Efficient Large Language Model Inference with Limited Memory


In this episode, we discuss LLM in a flash: Efficient Large Language Model Inference with Limited Memory by Keivan Alizadeh, Iman Mirzadeh, Dmitry Belenko, Karen Khatamifard, Minsik Cho, Carlo C Del Mundo, Mohammad Rastegari, and Mehrdad Farajtabar. The paper presents an approach for running large language models (LLMs) efficiently on devices with limited DRAM by storing model parameters in flash memory and selectively loading them on demand. It builds an inference cost model tailored to flash memory behavior to optimize data transfers, and introduces two techniques, "windowing" and "row-column bundling", to make data reads more efficient. With these strategies, the paper demonstrates that models up to twice the size of the available DRAM can be run 4-5 times faster on CPU and 20-25 times faster on GPU compared to naive loading approaches, while also exploiting activation sparsity and context-aware loading for further gains.
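To make the two techniques concrete, here is a minimal Python sketch of the general idea, not the authors' implementation: "windowing" keeps the neurons active over the last few tokens resident in DRAM so each new token only triggers flash reads for newly activated neurons, and "row-column bundling" stores a neuron's up-projection column together with its down-projection row so one contiguous read serves both. All names here (WindowedNeuronCache, load_from_flash, window_size, the toy matrix sizes) are illustrative assumptions.

```python
# Sketch of windowing + row-column bundling, assuming a toy FFN layer.
# Not the paper's code; names and sizes are made up for illustration.

from collections import deque

import numpy as np


class WindowedNeuronCache:
    def __init__(self, window_size: int):
        self.window = deque(maxlen=window_size)  # active neuron sets of the last k tokens
        self.resident = {}                       # neuron id -> weights held in "DRAM"

    def step(self, active_ids, load_from_flash):
        """Load only neurons not already resident; evict ones outside the window."""
        needed = set(active_ids)
        to_load = needed - self.resident.keys()
        for nid in to_load:
            self.resident[nid] = load_from_flash(nid)

        self.window.append(needed)
        keep = set().union(*self.window)
        for nid in list(self.resident):
            if nid not in keep:
                del self.resident[nid]
        return len(to_load)  # flash reads incurred for this token


# Toy usage: pretend flash holds one bundled chunk per FFN neuron.
rng = np.random.default_rng(0)
d_model, d_ff = 8, 32
up = rng.standard_normal((d_model, d_ff)).astype(np.float32)
down = rng.standard_normal((d_ff, d_model)).astype(np.float32)


def load_from_flash(nid):
    # Row-column bundling: the neuron's up-projection column and
    # down-projection row are fetched together as one contiguous read.
    return np.concatenate([up[:, nid], down[nid, :]])


cache = WindowedNeuronCache(window_size=4)
for t in range(10):
    active = rng.choice(d_ff, size=6, replace=False)  # stand-in for predicted sparsity
    loads = cache.step(active, load_from_flash)
    print(f"token {t}: loaded {loads} neurons from flash, {len(cache.resident)} resident")
```

Because consecutive tokens tend to reuse many of the same active neurons, the per-token flash traffic in this sketch drops well below the cost of reloading the full layer, which is the intuition behind the paper's reported speedups.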

