arXiv preprint – LazyLLM: Dynamic Token Pruning for Efficient Long Context LLM Inference

In this episode, we discuss LazyLLM: Dynamic Token Pruning for Efficient Long Context LLM Inference by Qichen Fu, Minsik Cho, Thomas Merth, Sachin Mehta, Mohammad Rastegari, and Mahyar Najibi. The paper introduces LazyLLM, a method that computes the Key-Value (KV) cache only for the tokens essential to the next-token prediction, in both the prefilling and decoding stages of transformer-based language models, addressing the bottleneck caused by long prompts. Unlike static pruning approaches that prune the prompt once up front, LazyLLM dynamically adapts which subset of tokens to consider at each generation step, so tokens pruned earlier can still be selected later. This significantly accelerates generation without sacrificing accuracy, as demonstrated in experiments such as multi-document question answering with the Llama 2 7B model, where it achieves a 2.34× speedup.
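
To make the dynamic-selection idea more concrete, here is a minimal, hypothetical Python sketch (not the authors' implementation): at each generation step, attention scores over the prompt are used to pick a fresh top fraction of tokens, and only those tokens' KV entries would be computed, so a token dropped at one step can be re-selected at a later step. The function name `select_tokens`, the `keep_ratio` parameter, and the random scores are illustrative assumptions.

```python
import numpy as np

def select_tokens(attn_scores: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Return indices of the prompt tokens whose attention score is in the
    top `keep_ratio` fraction; only these tokens' KV entries would be
    computed for the current step (hypothetical helper, not the paper's code)."""
    k = max(1, int(len(attn_scores) * keep_ratio))
    return np.sort(np.argsort(attn_scores)[-k:])

# Toy example: attention of the current query over a 10-token prompt.
rng = np.random.default_rng(0)
for step in range(3):
    attn = rng.random(10)            # stand-in for real attention scores
    kept = select_tokens(attn, 0.5)  # selection is redone at every step,
    print(f"step {step}: compute KV only for tokens {kept.tolist()}")
    # so tokens excluded at one step can reappear in a later step's subset
```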
