Arxiv paper – MatAnyone: Stable Video Matting with Consistent Memory Propagation
In this episode, we discuss MatAnyone: Stable Video Matting with Consistent Memory Propagation by Peiqing Yang, Shangchen Zhou, Jixin Zhao, Qingyi Tao, Chen Change Loy. The paper introduces MatAnyone, a robust framework for target-assigned video matting that overcomes challenges posed by complex or ambiguous backgrounds without relying on auxiliary inputs. It employs a memory-based approach…
-
Arxiv paper – Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate
In this episode, we discuss Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate by Yubo Wang, Xiang Yue, Wenhu Chen. The paper introduces Critique Fine-Tuning (CFT), a novel approach where language models are trained to critique noisy responses instead of simply imitating correct ones, inspired by human critical thinking. Using a…
-
Arxiv paper – Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs
In this episode, we discuss Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs by Yue Wang, Qiuzhi Liu, Jiahao Xu, Tian Liang, Xingyu Chen, Zhiwei He, Linfeng Song, Dian Yu, Juntao Li, Zhuosheng Zhang, Rui Wang, Zhaopeng Tu, Haitao Mi, Dong Yu. The paper identifies “underthinking” in large language models like…
-
Arxiv paper – MetaMorph: Multimodal Understanding and Generation via Instruction Tuning
In this episode, we discuss MetaMorph: Multimodal Understanding and Generation via Instruction Tuning by Shengbang Tong, David Fan, Jiachen Zhu, Yunyang Xiong, Xinlei Chen, Koustuv Sinha, Michael Rabbat, Yann LeCun, Saining Xie, Zhuang Liu. The paper introduces Visual-Predictive Instruction Tuning (VPiT), which enhances pretrained large language models to generate both text and visual tokens by…
-
Arxiv paper – Improving Video Generation with Human Feedback
In this episode, we discuss Improving Video Generation with Human Feedback by Jie Liu, Gongye Liu, Jiajun Liang, Ziyang Yuan, Xiaokun Liu, Mingwu Zheng, Xiele Wu, Qiulin Wang, Wenyu Qin, Menghan Xia, Xintao Wang, Xiaohong Liu, Fei Yang, Pengfei Wan, Di Zhang, Kun Gai, Yujiu Yang, Wanli Ouyang. The paper introduces a pipeline that utilizes…
-
Arxiv paper – Janus-Pro: Unified Multimodal Understanding and Generation with Data and Model Scaling
In this episode, we discuss Janus-Pro: Unified Multimodal Understanding and Generation with Data and Model Scaling by Xiaokang Chen, Zhiyu Wu, Xingchao Liu, Zizheng Pan, Wen Liu, Zhenda Xie, Xingkai Yu, Chong Ruan. The paper introduces Janus-Pro, an enhanced version of…
-
Arxiv paper – DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
In this episode, we discuss DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning by DeepSeek-AI. The paper introduces DeepSeek-R1-Zero, a reasoning model trained solely with large-scale reinforcement learning, which exhibits strong reasoning abilities but struggles with readability and language mixing. To overcome these limitations, the authors developed DeepSeek-R1 by adding multi-stage training and cold-start…
-
Arxiv paper – Can We Generate Images with CoT? Let’s Verify and Reinforce Image Generation Step by Step
In this episode, we discuss Can We Generate Images with CoT? Let’s Verify and Reinforce Image Generation Step by Step by Ziyu Guo, Renrui Zhang, Chengzhuo Tong, Zhizheng Zhao, Peng Gao, Hongsheng Li, Pheng-Ann Heng. The paper investigates the use of Chain-of-Thought (CoT) reasoning to improve autoregressive image generation through techniques like test-time computation scaling,…
-
Arxiv paper – Improving Factuality with Explicit Working Memory
In this episode, we discuss Improving Factuality with Explicit Working Memory by Mingda Chen, Yang Li, Karthik Padthe, Rulin Shao, Alicia Sun, Luke Zettlemoyer, Gargi Ghosh, Wen-tau Yih. The paper presents Ewe, a novel method that incorporates explicit working memory into large language models to improve factuality in long-form text generation by updating memory in…
-
Arxiv paper – Diffusion as Shader: 3D-aware Video Diffusion for Versatile Video Generation Control
In this episode, we discuss Diffusion as Shader: 3D-aware Video Diffusion for Versatile Video Generation Control by Zekai Gu, Rui Yan, Jiahao Lu, Peng Li, Zhiyang Dou, Chenyang Si, Zhen Dong, Qifeng Liu, Cheng Lin, Ziwei Liu, Wenping Wang, Yuan Liu. The paper introduces “Diffusion as Shader” (DaS), a novel approach that supports various video…
-
Arxiv paper – FaceLift: Single Image to 3D Head with View Generation and GS-LRM
In this episode, we discuss FaceLift: Single Image to 3D Head with View Generation and GS-LRM by Weijie Lyu, Yi Zhou, Ming-Hsuan Yang, Zhixin Shu. FaceLift is a feed-forward approach for rapid and high-quality 360-degree head reconstruction using a single image, utilizing a multi-view latent diffusion model followed by a GS-LRM reconstructor to create 3D…
-
Arxiv paper – GenHMR: Generative Human Mesh Recovery
In this episode, we discuss GenHMR: Generative Human Mesh Recovery by Muhammad Usama Saleem, Ekkasit Pinyoanuntapong, Pu Wang, Hongfei Xue, Srijan Das, Chen Chen. The paper introduces GenHMR, a novel generative framework for human mesh recovery (HMR) that addresses uncertainties in lifting 2D images to 3D meshes. It employs a pose tokenizer and an image-conditional…
-
Arxiv paper – Video Creation by Demonstration
In this episode, we discuss Video Creation by Demonstration by Yihong Sun, Hao Zhou, Liangzhe Yuan, Jennifer J. Sun, Yandong Li, Xuhui Jia, Hartwig Adam, Bharath Hariharan, Long Zhao, Ting Liu. The paper introduces Video Creation by Demonstration, utilizing a method called 𝛿-Diffusion to generate videos that smoothly continue from a given context image, integrating…
-
Arxiv paper – Byte Latent Transformer: Patches Scale Better Than Tokens
In this episode, we discuss Byte Latent Transformer: Patches Scale Better Than Tokens by Artidoro Pagnoni, Ram Pasunuru, Pedro Rodriguez, John Nguyen, Benjamin Muller, Margaret Li, Chunting Zhou, Lili Yu, Jason Weston, Luke Zettlemoyer, Gargi Ghosh, Mike Lewis, Ari Holtzman, Srinivasan Iyer. The Byte Latent Transformer (BLT) presents a novel approach to large language models…
-
Arxiv paper – Align3R: Aligned Monocular Depth Estimation for Dynamic Videos
In this episode, we discuss Align3R: Aligned Monocular Depth Estimation for Dynamic Videos by Jiahao Lu, Tianyu Huang, Peng Li, Zhiyang Dou, Cheng Lin, Zhiming Cui, Zhen Dong, Sai-Kit Yeung, Wenping Wang, Yuan Liu. Align3R is introduced as a method for achieving temporally consistent depth maps in videos using monocular inputs, addressing the challenge of…
-
Arxiv paper – FreeScale: Unleashing the Resolution of Diffusion Models via Tuning-Free Scale Fusion
In this episode, we discuss FreeScale: Unleashing the Resolution of Diffusion Models via Tuning-Free Scale Fusion by Haonan Qiu, Shiwei Zhang, Yujie Wei, Ruihang Chu, Hangjie Yuan, Xiang Wang, Yingya Zhang, Ziwei Liu. The paper introduces FreeScale, a tuning-free inference method that enhances visual diffusion models’ ability to generate high-resolution images by combining data from…
-
Arxiv paper – ViewCrafter: Taming Video Diffusion Models for High-fidelity Novel View Synthesis
In this episode, we discuss ViewCrafter: Taming Video Diffusion Models for High-fidelity Novel View Synthesis by Wangbo Yu, Jinbo Xing, Li Yuan, Wenbo Hu, Xiaoyu Li, Zhipeng Huang, Xiangjun Gao, Tien-Tsin Wong, Ying Shan, Yonghong Tian. ViewCrafter introduces a new method for synthesizing high-fidelity novel views from single or sparse images, using video diffusion models…
-
Arxiv paper – o1-Coder: an o1 Replication for Coding
In this episode, we discuss o1-Coder: an o1 Replication for Coding by Yuxiang Zhang, Shangxi Wu, Yuqi Yang, Jiangming Shu, Jinlin Xiao, Chao Kong, Jitao Sang. The paper presents “O1-CODER,” an effort to replicate OpenAI’s o1 model with a focus on coding tasks, using reinforcement learning and Monte Carlo Tree Search to strengthen System-2 thinking. The framework…
-
Arxiv paper – DigiRL: Training In-The-Wild Device-Control Agents with Autonomous Reinforcement Learning
In this episode, we discuss DigiRL: Training In-The-Wild Device-Control Agents with Autonomous Reinforcement Learning by Hao Bai, Yifei Zhou, Mert Cemri, Jiayi Pan, Alane Suhr, Sergey Levine, Aviral Kumar. DigiRL is an innovative autonomous reinforcement learning approach designed to train device control agents by refining pre-trained vision language models through a two-stage process involving offline…
-
ICLR 2025 submission – CYCLE-CONSISTENT LEARNING FOR JOINT LAYOUT-TO-IMAGE GENERATION AND OBJECT DETECTION
In this episode, we discuss CYCLE-CONSISTENT LEARNING FOR JOINT LAYOUT-TO-IMAGE GENERATION AND OBJECT DETECTION by anonymous authors (the paper is under double-blind review). The paper introduces a new generation-detection cycle consistent (GDCC) learning framework that simultaneously optimizes layout-to-image generation and object detection, highlighting the inherent duality of these tasks.…
-
Arxiv Paper – WonderWorld: Interactive 3D Scene Generation from a Single Image
In this episode, we discuss WonderWorld: Interactive 3D Scene Generation from a Single Image by Hong-Xing Yu, Haoyi Duan, Charles Herrmann, William T. Freeman, Jiajun Wu. WonderWorld is an innovative framework designed for rapid, interactive 3D scene generation, allowing users to specify and view scene contents and layouts with minimal delay. The primary challenge addressed…
-
Arxiv Paper – Hymba: A Hybrid-head Architecture for Small Language Models
In this episode, we discuss Hymba: A Hybrid-head Architecture for Small Language Models by Xin Dong, Yonggan Fu, Shizhe Diao, Wonmin Byeon, Zijia Chen, Ameya Sunil Mahabaleshwarkar, Shih-Yang Liu, Matthijs Van Keirsbilck, Min-Hung Chen, Yoshi Suhara, Yingyan Lin, Jan Kautz, Pavlo Molchanov. The paper introduces Hymba, a new family of small language models that combines…
-
Arxiv Paper – Covert Malicious Finetuning: Challenges in Safeguarding LLM Adaptation
In this episode, we discuss Covert Malicious Finetuning: Challenges in Safeguarding LLM Adaptation by Danny Halawi, Alexander Wei, Eric Wallace, Tony T. Wang, Nika Haghtalab, Jacob Steinhardt. The paper highlights security risks in black-box finetuning interfaces for large language models and introduces covert malicious finetuning, a method to compromise a model’s safety undetected. This involves…
-
Arxiv Paper – Video Instruction Tuning With Synthetic Data
In this episode, we discuss Video Instruction Tuning With Synthetic Data by Yuanhan Zhang, Jinming Wu, Wei Li, Bo Li, Zejun Ma, Ziwei Liu, Chunyuan Li. The paper proposes a high-quality synthetic dataset, LLaVA-Video-178K, to address the challenge of developing large multimodal video models by improving video instruction-following tasks through detailed captioning and question-answering. Using…
-
Arxiv Paper – Generative Agent Simulations of 1,000 People
In this episode, we discuss Generative Agent Simulations of 1,000 People by Joon Sung Park, Carolyn Q. Zou, Aaron Shaw, Benjamin Mako Hill, Carrie Cai, Meredith Ringel Morris, Robb Willer, Percy Liang, Michael S. Bernstein. The paper introduces a new agent architecture that simulates the behaviors and attitudes of over 1,000 individuals using large language…