arxiv – MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI


In this episode, we discuss MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI by Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, Cong Wei, Botao Yu, Ruibin Yuan, Renliang Sun, Ming Yin, Boyuan Zheng, Zhenzhu Yang, Yibo Liu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen. MMMU is a new benchmark for evaluating multimodal models on college-level questions drawn from a wide range of disciplines, testing both advanced reasoning and subject-specific knowledge. The benchmark contains 11.5K questions spanning six core disciplines and 30 subjects, with diverse visual content such as charts, diagrams, and music sheets. Evaluation of 14 models found that even the advanced GPT-4V reached only 56% accuracy, indicating substantial room for improvement on the path toward expert-level artificial general intelligence.
