-
Arxiv paper – ImplicitQA: Going beyond frames towards Implicit Video Reasoning
In this episode, we discuss ImplicitQA: Going beyond frames towards Implicit Video Reasoning by Sirnam Swetha, Rohit Gupta, Parth Parag Kulkarni, David G Shatwell, Jeffrey A Chan Santiago, Nyle Siddiqui, Joseph Fioresi, Mubarak Shah. The paper introduces ImplicitQA, a new VideoQA benchmark designed to evaluate models on implicit reasoning in creative and cinematic videos, requiring…
-
Arxiv paper – BlenderFusion: 3D-Grounded Visual Editing and Generative Compositing
In this episode, we discuss BlenderFusion: 3D-Grounded Visual Editing and Generative Compositing by Jiacheng Chen, Ramin Mehran, Xuhui Jia, Saining Xie, Sanghyun Woo. BlenderFusion is a generative visual compositing framework that enables scene synthesis by segmenting inputs into editable 3D elements, editing them in Blender, and recomposing them with a generative compositor. The compositor uses…
-
Arxiv paper – Strategic Intelligence in Large Language Models: Evidence from evolutionary Game Theory
In this episode, we discuss Strategic Intelligence in Large Language Models: Evidence from evolutionary Game Theory by Kenneth Payne, Baptiste Alloui-Cros. The paper investigates whether Large Language Models (LLMs) can engage in strategic decision-making by testing them in evolutionary Iterated Prisoner’s Dilemma tournaments against classic strategies. Results show that LLMs are highly competitive and exhibit…
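To make the tournament setting concrete, here is a minimal sketch of an Iterated Prisoner's Dilemma match between two of the classic strategies the LLMs were pitted against. The payoff matrix and strategy names are the textbook defaults, not necessarily the paper's exact configuration.

```python
# Minimal Iterated Prisoner's Dilemma sketch using the classic payoff
# matrix (C = cooperate, D = defect). Illustrative only; the paper's
# tournament setup may differ.

PAYOFFS = {  # (my move, their move) -> my score
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    """Defect unconditionally."""
    return "D"

def play_match(strat_a, strat_b, rounds=10):
    """Play a fixed-length match and return both players' total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play_match(tit_for_tat, always_defect))  # → (9, 14)
```

An evolutionary tournament repeats such matches round-robin and reweights the population toward higher-scoring strategies; the paper drops LLM-driven players into this mix.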
-
Blog post – Project Vend: Can Claude run a small shop? (And why does that matter?)
In this episode, we discuss Project Vend: Can Claude run a small shop? (And why does that matter?) The post describes a month-long experiment in which the AI model Claude autonomously managed an office store as a small business. The study reveals both how close the AI came to successfully running the business and the unexpected…
-
Arxiv paper – Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens
In this episode, we discuss Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens by Zeyuan Yang, Xueyang Yu, Delin Chen, Maohao Shen, Chuang Gan. The paper proposes Mirage, a framework that enables vision-language models to perform internal visual reasoning by generating latent visual tokens alongside text, without producing explicit images. Mirage is trained…
-
Arxiv paper – SuperEdit: Rectifying and Facilitating Supervision for Instruction-Based Image Editing
In this episode, we discuss SuperEdit: Rectifying and Facilitating Supervision for Instruction-Based Image Editing by Ming Li, Xin Gu, Fan Chen, Xiaoying Xing, Longyin Wen, Chen Chen, Sijie Zhu. The paper addresses the issue of noisy supervision in instruction-based image editing datasets by rectifying editing instructions to better align with image pairs and introducing contrastive…
-
Arxiv paper – OMEGA: Can LLMs Reason Outside the Box in Math? Evaluating Exploratory, Compositional, and Transformative Generalization
In this episode, we discuss OMEGA: Can LLMs Reason Outside the Box in Math? Evaluating Exploratory, Compositional, and Transformative Generalization by Yiyou Sun, Shawn Hu, Georgia Zhou, Ken Zheng, Hannaneh Hajishirzi, Nouha Dziri, Dawn Song. The paper introduces OMEGA, a new benchmark to evaluate large language models’ out-of-distribution generalization on math problems along three creativity-inspired…
-
Arxiv paper – Long-Context State-Space Video World Models
In this episode, we discuss Long-Context State-Space Video World Models by Ryan Po, Yotam Nitzan, Richard Zhang, Berlin Chen, Tri Dao, Eli Shechtman, Gordon Wetzstein, Xun Huang. The paper introduces a novel video diffusion model architecture that uses state-space models (SSMs) to extend temporal memory efficiently for causal sequence modeling. It employs a block-wise SSM…
-
Arxiv paper – From Bytes to Ideas: Language Modeling with Autoregressive U-Nets
In this episode, we discuss From Bytes to Ideas: Language Modeling with Autoregressive U-Nets by Mathurin Videau, Badr Youbi Idrissi, Alessandro Leite, Marc Schoenauer, Olivier Teytaud, David Lopez-Paz. The paper introduces an autoregressive U-Net model that dynamically learns its own token embeddings from raw bytes instead of relying on fixed tokenization schemes like BPE. This…
-
Arxiv paper – Reinforcement Pre-Training
In this episode, we discuss Reinforcement Pre-Training by Qingxiu Dong, Li Dong, Yao Tang, Tianzhu Ye, Yutao Sun, Zhifang Sui, Furu Wei. The paper introduces Reinforcement Pre-Training (RPT), a method that applies reinforcement learning to next-token prediction by rewarding correct predictions as a reasoning task. This approach leverages large text datasets without needing domain-specific annotations,…
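The core idea, reframing next-token prediction as a task with a verifiable reward, can be sketched in a few lines. This is a hypothetical illustration of the reward signal only, not the paper's training code; all names are invented for the example.

```python
# Hypothetical sketch of the verifiable reward behind Reinforcement
# Pre-Training (RPT): the model reasons before committing to a
# next-token guess, and is rewarded only when the guess matches the
# actual corpus token. Function names are illustrative.

def next_token_reward(predicted_token: str, ground_truth_token: str) -> float:
    """Verifiable reward: 1.0 for a correct next-token prediction, else 0.0."""
    return 1.0 if predicted_token == ground_truth_token else 0.0

def rollout_rewards(predictions, corpus_tokens):
    """Score a sequence of guesses against the corpus continuation."""
    return [next_token_reward(p, t) for p, t in zip(predictions, corpus_tokens)]

print(rollout_rewards(["the", "cat", "sat"], ["the", "dog", "sat"]))  # → [1.0, 0.0, 1.0]
```

Because the reward is checkable against the raw text itself, any large corpus supplies training signal with no domain-specific annotation.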
-
Arxiv paper – Token-Efficient Long Video Understanding for Multimodal LLMs
In this episode, we discuss Token-Efficient Long Video Understanding for Multimodal LLMs by Jindong Jiang, Xiuyu Li, Zhijian Liu, Muyang Li, Guo Chen, Zhiqi Li, De-An Huang, Guilin Liu, Zhiding Yu, Kurt Keutzer, Sungjin Ahn, Jan Kautz, Hongxu Yin, Yao Lu, Song Han, Wonmin Byeon. The paper introduces STORM, a new architecture that incorporates a…
-
Arxiv paper – The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
In this episode, we discuss The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity by Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, Mehrdad Farajtabar. This paper examines the reasoning abilities of…
-
Arxiv paper – Vibe-Eval: A hard evaluation suite for measuring progress of multimodal language models
In this episode, we discuss Vibe-Eval: A hard evaluation suite for measuring progress of multimodal language models by Piotr Padlewski, Max Bain, Matthew Henderson, Zhongkai Zhu, Nishant Relan, Hai Pham, Donovan Ong, Kaloyan Aleksiev, Aitor Ormazabal, Samuel Phua, Ethan Yeo, Eugenie Lamprecht, Qi Liu, Yuqi Wang, Eric Chen, Deyu Fu, Lei Li, Che Zheng, Cyprien…
-
Arxiv paper – How much do language models memorize?
In this episode, we discuss How much do language models memorize? by John X. Morris, Chawin Sitawarin, Chuan Guo, Narine Kokhlikyan, G. Edward Suh, Alexander M. Rush, Kamalika Chaudhuri, Saeed Mahloujifar. The paper introduces a method to quantify how much a language model memorizes versus generalizes from data, defining model capacity as total memorization excluding…
-
Arxiv paper – MMaDA: Multimodal Large Diffusion Language Models
In this episode, we discuss MMaDA: Multimodal Large Diffusion Language Models by Ling Yang, Ye Tian, Bowen Li, Xinchen Zhang, Ke Shen, Yunhai Tong, Mengdi Wang. MMaDA is a unified multimodal diffusion foundation model that leverages a modality-agnostic architecture, a mixed long chain-of-thought fine-tuning strategy, and a novel unified policy-gradient reinforcement learning algorithm to excel…
-
Arxiv paper – Superhuman performance of a large language model on the reasoning tasks of a physician
In this episode, we discuss Superhuman performance of a large language model on the reasoning tasks of a physician by Peter G. Brodeur, Thomas A. Buckley, Zahir Kanjee, Ethan Goh, Evelyn Bin Ling, Priyank Jain, Stephanie Cabral, Raja-Elie Abdulnour, Adrian D. Haimovich, Jason A. Freed, Andrew Olson, Daniel J. Morgan, Jason Hom, Robert Gallo, Liam…
-
Arxiv paper – The BiGGen Bench: A Principled Benchmark for Fine-grained Evaluation of Language Models with Language Models
In this episode, we discuss The BiGGen Bench: A Principled Benchmark for Fine-grained Evaluation of Language Models with Language Models by Seungone Kim, Juyoung Suk, Ji Yong Cho, Shayne Longpre, Chaeeun Kim, Dongkeun Yoon, Guijin Son, Yejin Cho, Sheikh Shafayat, Jinheon Baek, Sue Hyun Park, Hyeonbin Hwang, Jinkyung Jo, Hyowon Cho, Haebin Shin, Seongyun Lee,…
-
Arxiv paper – DanceGRPO: Unleashing GRPO on Visual Generation
In this episode, we discuss DanceGRPO: Unleashing GRPO on Visual Generation by Zeyue Xue, Jie Wu, Yu Gao, Fangyuan Kong, Lingting Zhu, Mengzhao Chen, Zhiheng Liu, Wei Liu, Qiushan Guo, Weilin Huang, Ping Luo. The paper presents DanceGRPO, a unified reinforcement learning framework that adapts Group Relative Policy Optimization to various generative paradigms, including diffusion…
-
Arxiv paper – Visual Planning: Let’s Think Only with Images
In this episode, we discuss Visual Planning: Let’s Think Only with Images by Yi Xu, Chengzu Li, Han Zhou, Xingchen Wan, Caiqi Zhang, Anna Korhonen, Ivan Vulić. This paper proposes Visual Planning, a new approach that uses purely visual sequences to perform reasoning and planning without relying on text. They introduce a reinforcement learning framework,…
-
Arxiv paper – A Preliminary Study for GPT-4o on Image Restoration
In this episode, we discuss A Preliminary Study for GPT-4o on Image Restoration by Hao Yang, Yan Yang, Ruikun Zhang, Liyuan Pan. This paper presents the first comprehensive evaluation of OpenAI’s GPT-4o model on various image restoration tasks, revealing that while its outputs are visually appealing, they often lack pixel-level structural accuracy. The authors demonstrate…
-
Arxiv paper – DiffusionSfM: Predicting Structure and Motion via Ray Origin and Endpoint Diffusion
In this episode, we discuss DiffusionSfM: Predicting Structure and Motion via Ray Origin and Endpoint Diffusion by Qitao Zhao, Amy Lin, Jeff Tan, Jason Y. Zhang, Deva Ramanan, Shubham Tulsiani. The paper introduces DiffusionSfM, a novel data-driven framework that directly infers 3D scene geometry and camera poses from multi-view images using a transformer-based denoising diffusion…
-
Arxiv paper – RayZer: A Self-supervised Large View Synthesis Model
In this episode, we discuss RayZer: A Self-supervised Large View Synthesis Model by Hanwen Jiang, Hao Tan, Peng Wang, Haian Jin, Yue Zhao, Sai Bi, Kai Zhang, Fujun Luan, Kalyan Sunkavalli, Qixing Huang, Georgios Pavlakos. RayZer is a self-supervised multi-view 3D vision model that learns 3D scene understanding without any 3D supervision, including camera poses…
-
Arxiv paper – Reinforcement Learning for Reasoning in Large Language Models with One Training Example
In this episode, we discuss Reinforcement Learning for Reasoning in Large Language Models with One Training Example by Yiping Wang, Qing Yang, Zhiyuan Zeng, Liliang Ren, Lucas Liu, Baolin Peng, Hao Cheng, Xuehai He, Kuan Wang, Jianfeng Gao, Weizhu Chen, Shuohang Wang, Simon Shaolei Du, Yelong Shen. The paper demonstrates that reinforcement learning with verifiable…
-
Arxiv paper – MINERVA: Evaluating Complex Video Reasoning
In this episode, we discuss MINERVA: Evaluating Complex Video Reasoning by Arsha Nagrani, Sachit Menon, Ahmet Iscen, Shyamal Buch, Ramin Mehran, Nilpa Jha, Anja Hauth, Yukun Zhu, Carl Vondrick, Mikhail Sirotenko, Cordelia Schmid, Tobias Weyand. The paper introduces MINERVA, a new video reasoning dataset featuring complex multi-step questions with detailed reasoning traces to evaluate multimodal…
-
Arxiv paper – The Leaderboard Illusion
In this episode, we discuss The Leaderboard Illusion by Shivalika Singh, Yiyang Nan, Alex Wang, Daniel D’Souza, Sayash Kapoor, Ahmet Üstün, Sanmi Koyejo, Yuntian Deng, Shayne Longpre, Noah Smith, Beyza Ermis, Marzieh Fadaee, Sara Hooker. The paper reveals that Chatbot Arena’s leaderboard rankings are biased due to undisclosed private testing, allowing some providers to selectively…