arXiv preprint – KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization

In this episode, we discuss KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization by Coleman Hooper, Sehoon Kim, Hiva Mohammadzadeh, Michael W. Mahoney, Yakun Sophia Shao, Kurt Keutzer, and Amir Gholami. The paper introduces KVQuant, a method for reducing the memory footprint of Large Language Model (LLM) inference by quantizing key-value (KV) cache activations to sub-4-bit precision. KVQuant preserves accuracy at these ultra-low precisions through several techniques: per-channel key quantization, pre-RoPE key quantization (quantizing keys before the rotary positional embedding is applied), non-uniform datatypes, per-vector dense-and-sparse quantization, and normalization of quantization centroids. Applying KVQuant yields negligible accuracy loss, substantially longer maximum context lengths on a given GPU memory budget, and faster inference, with the code made publicly available.
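For listeners who want a concrete sense of two of the ideas mentioned above, here is a minimal sketch in PyTorch of per-channel key quantization combined with dense-and-sparse quantization: a small fraction of outlier values is kept at full precision in a sparse matrix, and the remaining values are quantized with per-channel statistics. The function names, the outlier fraction, and the simple uniform quantizer are illustrative assumptions for this sketch; KVQuant itself uses non-uniform datatypes, and the authors' actual implementation is in their released code.

```python
import torch


def quantize_keys_dense_and_sparse(keys: torch.Tensor, bits: int = 3,
                                   outlier_frac: float = 0.01):
    """Quantize a [tokens, channels] key matrix per channel, isolating outliers.

    Returns (codes, scales, zeros, outliers), where `outliers` is a sparse COO
    tensor holding the full-precision values excluded from quantization.
    """
    # Treat the largest-magnitude entries as outliers and store them sparsely.
    n_outliers = max(1, int(outlier_frac * keys.numel()))
    threshold = keys.abs().flatten().kthvalue(keys.numel() - n_outliers).values
    outlier_mask = keys.abs() > threshold
    outliers = torch.where(outlier_mask, keys, torch.zeros_like(keys)).to_sparse()

    # Quantize the remaining ("dense") values with a per-channel scale and
    # zero-point, i.e. statistics are computed along the token dimension.
    dense = keys.masked_fill(outlier_mask, 0.0)
    qmax = 2 ** bits - 1
    cmin = dense.min(dim=0, keepdim=True).values
    cmax = dense.max(dim=0, keepdim=True).values
    scales = (cmax - cmin).clamp(min=1e-8) / qmax
    zeros = cmin
    codes = ((dense - zeros) / scales).round().clamp(0, qmax).to(torch.uint8)
    return codes, scales, zeros, outliers


def dequantize_keys(codes, scales, zeros, outliers):
    """Reconstruct keys: per-channel dequantization plus the sparse outliers."""
    dense = codes.to(scales.dtype) * scales + zeros
    out = dense.clone()
    idx = outliers.coalesce()
    out[idx.indices()[0], idx.indices()[1]] = idx.values()  # restore outliers
    return out


# Tiny usage example on random "keys" (pre-RoPE in KVQuant's formulation).
keys = torch.randn(16, 128)
codes, scales, zeros, outliers = quantize_keys_dense_and_sparse(keys)
print((dequantize_keys(codes, scales, zeros, outliers) - keys).abs().max())
```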

