-
Arxiv paper – Story-Adapter: A Training-free Iterative Framework for Long Story Visualization
In this episode, we discuss Story-Adapter: A Training-free Iterative Framework for Long Story Visualization by Jiawei Mao, Xiaoke Huang, Yunfei Xie, Yuanqi Chang, Mude Hui, Bingjie Xu, Yuyin Zhou. The paper tackles the challenge of generating coherent image sequences for long narratives using text-to-image diffusion models. It introduces Story-Adapter, a training-free and efficient framework that…
-
Arxiv paper – ReCamMaster: Camera-Controlled Generative Rendering from A Single Video
In this episode, we discuss ReCamMaster: Camera-Controlled Generative Rendering from A Single Video by Jianhong Bai, Menghan Xia, Xiao Fu, Xintao Wang, Lianrui Mu, Jinwen Cao, Zuozhu Liu, Haoji Hu, Xiang Bai, Pengfei Wan, Di Zhang. ReCamMaster is a generative framework that modifies camera trajectories in existing videos by re-rendering scenes from new perspectives. It…
-
Arxiv paper – Vision-R1: Incentivizing Reasoning Capability in Multimodal Large Language Models
In this episode, we discuss Vision-R1: Incentivizing Reasoning Capability in Multimodal Large Language Models by Wenxuan Huang, Bohan Jia, Zijie Zhai, Shaosheng Cao, Zheyu Ye, Fei Zhao, Zhe Xu, Yao Hu, Shaohui Lin. The paper aims to enhance the reasoning abilities of Multimodal Large Language Models (MLLMs) using reinforcement learning (RL). To overcome the lack…
-
Arxiv paper – MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks
In this episode, we discuss MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks by Jiacheng Chen, Tianhao Liang, Sherman Siu, Zhengqing Wang, Kai Wang, Yubo Wang, Yuansheng Ni, Wang Zhu, Ziyan Jiang, Bohan Lyu, Dongfu Jiang, Xuan He, Yuan Liu, Hexiang Hu, Xiang Yue, Wenhu Chen. The paper introduces MEGA-BENCH, a comprehensive evaluation suite…
-
Arxiv paper – TrajectoryCrafter: Redirecting Camera Trajectory for Monocular Videos via Diffusion Models
In this episode, we discuss TrajectoryCrafter: Redirecting Camera Trajectory for Monocular Videos via Diffusion Models by Mark YU, Wenbo Hu, Jinbo Xing, Ying Shan. TrajectoryCrafter is a new method that precisely redirects camera paths in monocular videos by separating view changes from content generation. It uses a dual-stream conditional video diffusion model that combines point…
-
Arxiv paper – PlanGEN: A Multi-Agent Framework for Generating Planning and Reasoning Trajectories for Complex Problem Solving
In this episode, we discuss PlanGEN: A Multi-Agent Framework for Generating Planning and Reasoning Trajectories for Complex Problem Solving by Mihir Parmar, Xin Liu, Palash Goyal, Yanfei Chen, Long Le, Swaroop Mishra, Hossein Mobahi, Jindong Gu, Zifeng Wang, Hootan Nakhost, Chitta Baral, Chen-Yu Lee, Tomas Pfister, Hamid Palangi. The paper introduces **PlanGEN**, a versatile agent…
-
Arxiv paper – VideoGrain: Modulating Space-Time Attention for Multi-grained Video Editing
In this episode, we discuss VideoGrain: Modulating Space-Time Attention for Multi-grained Video Editing by Xiangpeng Yang, Linchao Zhu, Hehe Fan, Yi Yang. The paper introduces VideoGrain, a zero-shot method that enhances multi-grained video editing by modulating space-time attention mechanisms for class-, instance-, and part-level modifications. It addresses challenges like semantic misalignment and feature coupling by…
-
Arxiv paper – ZeroBench: An Impossible Visual Benchmark for Contemporary Large Multimodal Models
In this episode, we discuss ZeroBench: An Impossible Visual Benchmark for Contemporary Large Multimodal Models by Jonathan Roberts, Mohammad Reza Taesiri, Ansh Sharma, Akash Gupta, Samuel Roberts, Ioana Croitoru, Simion-Vlad Bogolin, Jialu Tang, Florian Langer, Vyas Raina, Vatsal Raina, Hanyi Xiong, Vishaal Udandarao, Jingyi Lu, Shiyang Chen, Sam Purkis, Tianshuo Yan, Wenye Lin, Gyungin Shin,…
-
Arxiv paper – Teaching Language Models to Critique via Reinforcement Learning
In this episode, we discuss Teaching Language Models to Critique via Reinforcement Learning by Zhihui Xie, Jie Chen, Liyu Chen, Weichao Mao, Jingjing Xu, Lingpeng Kong. The paper presents CTRL, a framework that uses reinforcement learning to train critic models which provide feedback for improving code generated by large language models without needing human input.…
-
Arxiv paper – PANDAS: Improving Many-shot Jailbreaking via Positive Affirmation, Negative Demonstration, and Adaptive Sampling
In this episode, we discuss PANDAS: Improving Many-shot Jailbreaking via Positive Affirmation, Negative Demonstration, and Adaptive Sampling by Avery Ma, Yangchen Pan, Amir-massoud Farahmand. The paper introduces PANDAS, a hybrid technique that enhances many-shot jailbreaking by altering fabricated dialogues with positive affirmations, negative demonstrations, and optimized adaptive sampling tailored to specific prompts. Experimental results on…
-
Arxiv paper – VidCRAFT3: Camera, Object, and Lighting Control for Image-to-Video Generation
In this episode, we discuss VidCRAFT3: Camera, Object, and Lighting Control for Image-to-Video Generation by Sixiao Zheng, Zimian Peng, Yanpeng Zhou, Yi Zhu, Hang Xu, Xiangru Huang, Yanwei Fu. The paper presents VidCRAFT3, a new framework for image-to-video generation that allows simultaneous control over camera motion, object movement, and lighting direction. It addresses previous limitations…
-
Arxiv paper – Heuristically Adaptive Diffusion-Model Evolutionary Strategy
In this episode, we discuss Heuristically Adaptive Diffusion-Model Evolutionary Strategy by Benedikt Hartl, Yanbo Zhang, Hananel Hazan, Michael Levin. The paper explores the connection between diffusion models and evolutionary algorithms, highlighting that both generate high-quality samples through iterative refinement of random initial states. By integrating deep learning-based diffusion models into evolutionary processes, the authors enhance…
-
Arxiv paper – Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach
In this episode, we discuss Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach by Jonas Geiping, Sean McLeish, Neel Jain, John Kirchenbauer, Siddharth Singh, Brian R. Bartoldson, Bhavya Kailkhura, Abhinav Bhatele, Tom Goldstein. The paper presents a new language model architecture that enhances test-time computation by iteratively reasoning in latent space using…
-
Arxiv paper – EmbodiedBench: Comprehensive Benchmarking Multi-modal Large Language Models for Vision-Driven Embodied Agents
In this episode, we discuss EmbodiedBench: Comprehensive Benchmarking Multi-modal Large Language Models for Vision-Driven Embodied Agents by Rui Yang, Hanyang Chen, Junyu Zhang, Mark Zhao, Cheng Qian, Kangrui Wang, Qineng Wang, Teja Venkat Koripella, Marziyeh Movahedi, Manling Li, Heng Ji, Huan Zhang, Tong Zhang. The paper presents **EMBODIEDBENCH**, a comprehensive benchmark with 1,128 tasks across…
-
Arxiv paper – VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection
In this episode, we discuss VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection by Songhao Han, Wei Huang, Hairong Shi, Le Zhuo, Xiu Su, Shifeng Zhang, Xu Zhou, Xiaojuan Qi, Yue Liao, Si Liu. The paper introduces VideoEspresso, a high-quality, large-scale VideoQA dataset that maintains essential spatial and temporal details…
-
Arxiv paper – VideoJAM: Joint Appearance-Motion Representations for Enhanced Motion Generation in Video Models
In this episode, we discuss VideoJAM: Joint Appearance-Motion Representations for Enhanced Motion Generation in Video Models by Hila Chefer, Uriel Singer, Amit Zohar, Yuval Kirstain, Adam Polyak, Yaniv Taigman, Lior Wolf, Shelly Sheynin. Generative video models typically prioritize appearance accuracy over motion coherence, limiting their ability to capture realistic dynamics. The paper presents VideoJAM, a…
-
Arxiv paper – HunyuanVideo: A Systematic Framework For Large Video Generative Models
In this episode, we discuss HunyuanVideo: A Systematic Framework For Large Video Generative Models by Weijie Kong, Qi Tian, Zijian Zhang, Rox Min, Zuozhuo Dai, Jin Zhou, Jiangfeng Xiong, Xin Li, Bo Wu, Jianwei Zhang, Kathrina Wu, Qin Lin, Junkun Yuan, Yanxin Long, Aladdin Wang, Andong Wang, Changlin Li, Duojun Huang, Fang Yang, Hao Tan,…
-
Arxiv paper – s1: Simple test-time scaling
In this episode, we discuss s1: Simple test-time scaling by Niklas Muennighoff, Zitong Yang, Weijia Shi, Xiang Lisa Li, Li Fei-Fei, Hannaneh Hajishirzi, Luke Zettlemoyer, Percy Liang, Emmanuel Candès, Tatsunori Hashimoto. The paper introduces a straightforward method for test-time scaling in language models to enhance reasoning performance by utilizing additional computational resources during inference. The…
-
Arxiv paper – Hunyuan3D 2.0: Scaling Diffusion Models for High Resolution Textured 3D Assets Generation
In this episode, we discuss Hunyuan3D 2.0: Scaling Diffusion Models for High Resolution Textured 3D Assets Generation by the **Hunyuan3D Team**; individual contributor names are listed at the end of the full report. Hunyuan3D 2.0 is a large-scale 3D synthesis system featuring Hunyuan3D-DiT for generating detailed…
-
Arxiv paper – MatAnyone: Stable Video Matting with Consistent Memory Propagation
In this episode, we discuss MatAnyone: Stable Video Matting with Consistent Memory Propagation by Peiqing Yang, Shangchen Zhou, Jixin Zhao, Qingyi Tao, Chen Change Loy. The paper introduces **MatAnyone**, a robust framework for target-assigned video matting that overcomes challenges posed by complex or ambiguous backgrounds without relying on auxiliary inputs. It employs a memory-based approach…
-
Arxiv paper – Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate
In this episode, we discuss Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate by Yubo Wang, Xiang Yue, Wenhu Chen. The paper introduces Critique Fine-Tuning (CFT), a novel approach where language models are trained to critique noisy responses instead of simply imitating correct ones, inspired by human critical thinking. Using a…
-
Arxiv paper – Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs
In this episode, we discuss Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs by Yue Wang, Qiuzhi Liu, Jiahao Xu, Tian Liang, Xingyu Chen, Zhiwei He, Linfeng Song, Dian Yu, Juntao Li, Zhuosheng Zhang, Rui Wang, Zhaopeng Tu, Haitao Mi, Dong Yu. The paper identifies “underthinking” in large language models like…
-
Arxiv paper – MetaMorph: Multimodal Understanding and Generation via Instruction Tuning
In this episode, we discuss MetaMorph: Multimodal Understanding and Generation via Instruction Tuning by Shengbang Tong, David Fan, Jiachen Zhu, Yunyang Xiong, Xinlei Chen, Koustuv Sinha, Michael Rabbat, Yann LeCun, Saining Xie, Zhuang Liu. The paper introduces Visual-Predictive Instruction Tuning (VPiT), which enhances pretrained large language models to generate both text and visual tokens by…
-
Arxiv paper – Improving Video Generation with Human Feedback
In this episode, we discuss Improving Video Generation with Human Feedback by Jie Liu, Gongye Liu, Jiajun Liang, Ziyang Yuan, Xiaokun Liu, Mingwu Zheng, Xiele Wu, Qiulin Wang, Wenyu Qin, Menghan Xia, Xintao Wang, Xiaohong Liu, Fei Yang, Pengfei Wan, Di Zhang, Kun Gai, Yujiu Yang, Wanli Ouyang. The paper introduces a pipeline that utilizes…
-
Arxiv paper – Janus-Pro: Unified Multimodal Understanding and Generation with Data and Model Scaling
In this episode, we discuss Janus-Pro: Unified Multimodal Understanding and Generation with Data and Model Scaling by Xiaokang Chen, Zhiyu Wu, Xingchao Liu, Zizheng Pan, Wen Liu, Zhenda Xie, Xingkai Yu, and Chong Ruan. The paper introduces Janus-Pro, an enhanced version of…