arXiv Paper – LongVU: Spatiotemporal Adaptive Compression for Long Video-Language Understanding

In this episode, we discuss LongVU: Spatiotemporal Adaptive Compression for Long Video-Language Understanding by Xiaoqian Shen, Yunyang Xiong, Changsheng Zhao, Lemeng Wu, Jun Chen, Chenchen Zhu, Zechun Liu, Fanyi Xiao, Balakrishnan Varadarajan, Florian Bordes, Zhuang Liu, Hu Xu, Hyunwoo J. Kim, Bilge Soran, Raghuraman Krishnamoorthi, Mohamed Elhoseiny, and Vikas Chandra. LongVU is a spatiotemporal adaptive compression method that lets Multimodal Large Language Models process long videos by reducing redundancy while preserving important visual information. It manages spatial and temporal information with techniques such as cross-modal queries, DINOv2 features, and token reduction. The approach achieves superior performance on video understanding benchmarks, handles lengthy videos effectively, and scales down well even to smaller models.
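To make the temporal side of this concrete, here is a minimal sketch of one way frame-level redundancy reduction can work: compare each frame's feature vector against the last kept frame and drop near-duplicates. This is a simplified illustration, not the paper's actual pipeline; the function name, threshold, and toy features are hypothetical, and LongVU's real method additionally involves DINOv2 features and cross-modal queries.

```python
import numpy as np

def prune_redundant_frames(frame_features, sim_threshold=0.95):
    """Illustrative sketch (not LongVU's exact algorithm): keep a frame
    only if its feature vector differs enough from the last kept frame,
    i.e. cosine similarity falls below the threshold."""
    kept = [0]  # always keep the first frame
    for i in range(1, len(frame_features)):
        a = frame_features[kept[-1]]
        b = frame_features[i]
        sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
        if sim < sim_threshold:
            kept.append(i)
    return kept

# Toy example: three near-identical "frames" followed by a distinct one.
feats = np.array([
    [1.0, 0.0],
    [0.99, 0.01],
    [1.0, 0.001],
    [0.0, 1.0],
])
print(prune_redundant_frames(feats))  # prints [0, 3]
```

Only the first frame and the visually distinct last frame survive; the two near-duplicates are discarded, shrinking the token budget the language model must spend on the video.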
