-
arxiv preprint – NOLA: Compressing LoRA using Linear Combination of Random Basis
In this episode, we discuss NOLA: Compressing LoRA using Linear Combination of Random Basis by Soroush Abbasi Koohpayegani, KL Navaneet, Parsa Nooralinejad, Soheil Kolouri, Hamed Pirsiavash. The paper introduces a novel technique called NOLA for fine-tuning and deploying large language models (LLMs) like GPT-3 more efficiently by addressing the limitations of existing Low-Rank Adaptation (LoRA)…
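To make the idea concrete, here is a minimal PyTorch sketch of a NOLA-style adapter (our own illustration; the class name, argument names, and shapes are assumptions, not the authors' code). The low-rank factors of the weight update are formed as linear combinations of frozen random basis matrices that can be regenerated from a seed, so only the small coefficient vectors are trained and stored.

```python
import torch

class NOLALinearDelta(torch.nn.Module):
    """Sketch of a NOLA-style adapter: the LoRA factors A and B are linear
    combinations of frozen random basis matrices, so only the mixing
    coefficients (alpha, beta) are trainable and need to be stored."""

    def __init__(self, d_in, d_out, rank=4, k=64, seed=0):
        super().__init__()
        g = torch.Generator().manual_seed(seed)  # basis is recreatable from the seed
        self.register_buffer("basis_A", torch.randn(k, d_out, rank, generator=g))
        self.register_buffer("basis_B", torch.randn(k, rank, d_in, generator=g))
        # alpha random, beta zero: the update starts at zero, as in LoRA
        self.alpha = torch.nn.Parameter(torch.randn(k) / k ** 0.5)
        self.beta = torch.nn.Parameter(torch.zeros(k))

    def delta_weight(self):
        A = torch.einsum("k,kor->or", self.alpha, self.basis_A)  # (d_out, rank)
        B = torch.einsum("k,kri->ri", self.beta, self.basis_B)   # (rank, d_in)
        return A @ B                                             # low-rank update

    def forward(self, x, frozen_weight):
        # frozen_weight: (d_out, d_in) pretrained weight, left untouched
        return x @ (frozen_weight + self.delta_weight()).T
```

Under this parameterization a checkpoint only needs the seed plus 2k scalars per adapted layer, which is where the compression over storing full LoRA factors comes from.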
-
arxiv preprint – StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation
In this episode, we discuss StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation by Yupeng Zhou, Daquan Zhou, Ming-Ming Cheng, Jiashi Feng, Qibin Hou. The paper introduces advanced techniques to improve diffusion-based generative models used for creating consistent and continuous sequences in image and video generation. It presents “Consistent Self-Attention” for maintaining content consistency…
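As a hedged sketch of the Consistent Self-Attention idea (our reconstruction, with an assumed token-sampling scheme), each image in a generated batch attends not only to its own tokens but also to tokens borrowed from the other images, which encourages the subject to stay consistent across frames:

```python
import torch
import torch.nn.functional as F

def consistent_self_attention(q, k, v, sample_ratio=0.5):
    """Illustrative sketch: augment each image's keys/values with tokens
    sampled from the other images in the batch before attending.
    q, k, v: (batch, tokens, dim); assumes a batch of at least two images."""
    b, n, d = k.shape
    n_sample = int(n * sample_ratio)
    aug_k, aug_v = [], []
    for i in range(b):
        # tokens borrowed from all *other* images in the batch
        others_k = torch.cat([k[j] for j in range(b) if j != i], dim=0)
        others_v = torch.cat([v[j] for j in range(b) if j != i], dim=0)
        idx = torch.randperm(others_k.shape[0])[:n_sample]
        aug_k.append(torch.cat([k[i], others_k[idx]], dim=0))
        aug_v.append(torch.cat([v[i], others_v[idx]], dim=0))
    k_aug = torch.stack(aug_k)            # (batch, tokens + n_sample, dim)
    v_aug = torch.stack(aug_v)
    return F.scaled_dot_product_attention(q, k_aug, v_aug)
```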
-
arxiv preprint – Iterative Reasoning Preference Optimization
In this episode, we discuss Iterative Reasoning Preference Optimization by Richard Yuanzhe Pang, Weizhe Yuan, Kyunghyun Cho, He He, Sainbayar Sukhbaatar, Jason Weston. This study explores a new iterative method aimed at improving how AI models generate step-by-step logical reasoning, or Chain-of-Thought (CoT), to reach correct answers by optimizing preferences between competing generated reasoning chains. The technique…
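A rough sketch of the data step in one iteration (field names are illustrative): chain-of-thought generations that reach the right answer become winners, incorrect ones become losers, the resulting pairs feed a DPO-style update plus an NLL term on the winning responses, and the whole cycle is repeated with the updated model.

```python
import random

def build_preference_pairs(candidates, max_pairs=8):
    """candidates: list of dicts {"cot": str, "answer": str, "correct": bool}
    sampled from the current model for one prompt. Returns (winner, loser)
    pairs for a preference-optimization update."""
    winners = [c for c in candidates if c["correct"]]
    losers = [c for c in candidates if not c["correct"]]
    pairs = []
    for _ in range(min(max_pairs, len(winners) * len(losers))):
        pairs.append((random.choice(winners), random.choice(losers)))
    return pairs
```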
-
arxiv preprint – Better & Faster Large Language Models via Multi-token Prediction
In this episode, we discuss Better & Faster Large Language Models via Multi-token Prediction by Fabian Gloeckle, Badr Youbi Idrissi, Baptiste Rozière, David Lopez-Paz, Gabriel Synnaeve. The paper “Better & Faster Large Language Models via Multi-token Prediction” introduces a novel training methodology for large language models (LLMs) by predicting multiple future tokens simultaneously rather than…
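A minimal sketch of the training setup as we understand it (module and loss names are ours): a shared trunk produces one hidden state per position, n independent heads each predict one of the next n tokens, and the per-head cross-entropy losses are summed. At inference the extra heads can be dropped or reused to speed up decoding.

```python
import torch
import torch.nn as nn

class MultiTokenHeads(nn.Module):
    """Sketch of multi-token prediction: n output heads on a shared trunk."""

    def __init__(self, d_model, vocab_size, n_future=4):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Linear(d_model, vocab_size) for _ in range(n_future)]
        )

    def forward(self, hidden):                 # hidden: (batch, seq, d_model)
        # logits_list[i] scores the token at offset i+1 from each position
        return [head(hidden) for head in self.heads]

def multi_token_loss(logits_list, tokens):
    """Cross-entropy summed over the n future-token targets."""
    loss = 0.0
    for i, logits in enumerate(logits_list):
        offset = i + 1
        pred = logits[:, :-offset]             # positions that have a valid target
        target = tokens[:, offset:]
        loss = loss + nn.functional.cross_entropy(
            pred.reshape(-1, pred.size(-1)), target.reshape(-1)
        )
    return loss
```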
-
arxiv preprint – Make Your LLM Fully Utilize the Context
In this episode, we discuss Make Your LLM Fully Utilize the Context by Shengnan An, Zexiong Ma, Zeqi Lin, Nanning Zheng, Jian-Guang Lou. The paper “Make Your LLM Fully Utilize the Context” delves into solving the lost-in-the-middle challenge in large language models (LLMs), where these models fail to fully use the contextual information provided in…
-
arxiv preprint – Davidsonian Scene Graph: Improving Reliability in Fine-grained Evaluation for Text-to-Image Generation
In this episode, we discuss Davidsonian Scene Graph: Improving Reliability in Fine-grained Evaluation for Text-to-Image Generation by Jaemin Cho, Yushi Hu, Roopal Garg, Peter Anderson, Ranjay Krishna, Jason Baldridge, Mohit Bansal, Jordi Pont-Tuset, Su Wang. The abstract discusses the evaluation of text-to-image models, focusing on ensuring alignment between text prompts and generated images through…
-
arxiv preprint – PLLaVA : Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning
In this episode, we discuss PLLaVA : Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning by Lin Xu, Yilin Zhao, Daquan Zhou, Zhijie Lin, See Kiong Ng, Jiashi Feng. The paper introduces PLLaVA, a model that expands image captioning techniques to video dense captioning, effectively describing various elements, including motions and attire.…
-
arxiv preprint – Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare
In this episode, we discuss Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare by Emre Can Acikgoz, Osman Batur İnce, Rayene Bench, Arda Anıl Boz, İlker Kesen, Aykut Erdem, Erkut Erdem. The paper discusses the integration of Large Language Models (LLMs) in healthcare, focusing on their application in diagnostics, research, and patient…
-
arxiv preprint – SnapKV: LLM Knows What You are Looking for Before Generation
In this episode, we discuss SnapKV: LLM Knows What You are Looking for Before Generation by Yuhong Li, Yingbing Huang, Bowen Yang, Bharat Venkitesh, Acyr Locatelli, Hanchen Ye, Tianle Cai, Patrick Lewis, Deming Chen. The paper introduces SnapKV, a method designed to efficiently reduce the size of Key-Value (KV) caches in Large Language Models (LLMs)…
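As a hedged illustration of the cache-compression step (sizes and the aggregation rule below are assumptions on our part): attention scores from an observation window at the end of the prompt vote for which earlier key/value positions survive, and everything else is dropped before generation starts.

```python
import torch

def snapkv_select(keys, values, queries_obs, keep=256, window=32):
    """Sketch of SnapKV-style KV-cache compression.
    keys, values: (seq, d) prompt cache; queries_obs: (window, d) queries
    from the last `window` prompt tokens."""
    prefix_len = keys.shape[0] - window
    scores = queries_obs @ keys[:prefix_len].T          # (window, prefix_len)
    votes = scores.softmax(dim=-1).sum(dim=0)           # aggregate over the window
    # pooling/clustering of neighboring positions is omitted in this sketch
    top = votes.topk(min(keep, prefix_len)).indices.sort().values
    keep_idx = torch.cat([top, torch.arange(prefix_len, keys.shape[0])])
    return keys[keep_idx], values[keep_idx]
```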
-
arxiv preprint – CATS: Contextually-Aware Thresholding for Sparsity in Large Language Models
In this episode, we discuss CATS: Contextually-Aware Thresholding for Sparsity in Large Language Models by Je-Yong Lee, Donghyun Lee, Genghan Zhang, Mo Tiwari, Azalia Mirhoseini. The paper presents “Contextually-Aware Thresholding for Sparsity (CATS),” a method intended to reduce the operational costs of Large Language Models (LLMs) by increasing activation sparsity while maintaining high performance levels.…
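A minimal sketch of the thresholding idea for a gated (LLaMA-style) MLP block, assuming the cutoff is calibrated as a quantile of observed activation magnitudes (our illustration, not the released code): gate activations below the threshold are zeroed, so a sparse kernel can skip the corresponding up/down projection columns.

```python
import torch
import torch.nn.functional as F

def cats_mlp(x, w_gate, w_up, w_down, threshold):
    """Gated MLP with contextual activation sparsity.
    x: (batch, d_model); w_gate, w_up: (d_ff, d_model); w_down: (d_model, d_ff)."""
    gate = F.silu(x @ w_gate.T)
    mask = gate.abs() >= threshold        # contextual sparsity pattern
    gate = gate * mask                    # zeroed entries can be skipped downstream
    return (gate * (x @ w_up.T)) @ w_down.T

def calibrate_threshold(sample_activations, target_sparsity=0.7):
    """Pick the cutoff as a quantile of |activation| on calibration data so
    that roughly `target_sparsity` of entries are zeroed."""
    return sample_activations.abs().flatten().quantile(target_sparsity)
```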
-
arxiv preprint – SpaceByte: Towards Deleting Tokenization from Large Language Modeling
In this episode, we discuss SpaceByte: Towards Deleting Tokenization from Large Language Modeling by Kevin Slagle. Tokenization in large language models, while improving performance, presents challenges such as bias, increased adversarial vulnerability, and complexity. The new byte-level decoder architecture, SpaceByte, significantly mitigates these issues by selectively integrating larger transformer blocks at critical bytes such as spaces,…
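As a rough sketch of the boundary rule such an architecture builds on (the exact criterion here is an assumption), the expensive global blocks would run only at bytes that open a new word-like unit, while cheap byte-level blocks run everywhere:

```python
import torch

def spacelike_positions(byte_ids):
    """Return indices of bytes treated as word boundaries, e.g. bytes that
    are not alphanumeric (space, punctuation, newline). Only these positions
    would be routed through the larger transformer blocks."""
    chars = [chr(b) for b in byte_ids.tolist()]
    is_boundary = torch.tensor([not c.isalnum() for c in chars])
    return is_boundary.nonzero(as_tuple=True)[0]

text = b"Tokenization adds complexity."
positions = spacelike_positions(torch.tensor(list(text)))
print(positions)   # indices where the larger blocks would be applied
```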
-
arxiv preprint – TextSquare: Scaling up Text-Centric Visual Instruction Tuning
In this episode, we discuss TextSquare: Scaling up Text-Centric Visual Instruction Tuning by Jingqun Tang, Chunhui Lin, Zhen Zhao, Shu Wei, Binghong Wu, Qi Liu, Hao Feng, Yang Li, Siqi Wang, Lei Liao, Wei Shi, Yuliang Liu, Hao Liu, Yuan Xie, Xiang Bai, Can Huang. The paper describes advancements in text-centric visual question answering using…
-
arxiv preprint – EdgeFusion: On-Device Text-to-Image Generation
In this episode, we discuss EdgeFusion: On-Device Text-to-Image Generation by Thibault Castells, Hyoung-Kyu Song, Tairen Piao, Shinkook Choi, Bo-Kyeong Kim, Hanyoung Yim, Changgwun Lee, Jae Gon Kim, Tae-Ho Kim. The paper “EdgeFusion: On-Device Text-to-Image Generation” explores the difficulties of using Stable Diffusion models in text-to-image generation due to their intensive computational needs. It proposes a…
-
arxiv preprint – VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time
In this episode, we discuss VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time by Sicheng Xu, Guojun Chen, Yu-Xiao Guo, Jiaolong Yang, Chong Li, Zhenyu Zang, Yizhong Zhang, Xin Tong, Baining Guo. VASA is a new framework designed to create realistic talking faces from a static image and audio clip, featuring lip synchronization, facial…
-
arxiv preprint – Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models
In this episode, we discuss Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models by Yanwei Li, Yuechen Zhang, Chengyao Wang, Zhisheng Zhong, Yixin Chen, Ruihang Chu, Shaoteng Liu, Jiaya Jia. The paper introduces Mini-Gemini, a framework aimed at improving Vision Language Models (VLMs) by addressing the performance gap with advanced models like GPT-4. Mini-Gemini…
-
arxiv preprint – High-Dimension Human Value Representation in Large Language Models
In this episode, we discuss High-Dimension Human Value Representation in Large Language Models by Samuel Cahyawijaya, Delong Chen, Yejin Bang, Leila Khalatbari, Bryan Wilie, Ziwei Ji, Etsuko Ishii, Pascale Fung. The paper addresses the importance of aligning large language models (LLMs) with human values, introducing a new method called UniVaR for representing human value distributions…
-
arxiv preprint – Why do small language models underperform? Studying Language Model Saturation via the Softmax Bottleneck
In this episode, we discuss Why do small language models underperform? Studying Language Model Saturation via the Softmax Bottleneck by Nathan Godey, Éric de la Clergerie, Benoît Sagot. This paper investigates the phenomenon of performance saturation in small language models, attributing the issue to a mismatch between the model’s hidden layer size and the complexity…
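The bottleneck itself is easy to demonstrate numerically: with hidden size d smaller than the vocabulary, the matrix of logits over many contexts has rank at most d, so some next-token distributions are unreachable no matter how the model is trained (the sizes below are toy values for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
d, V, n_contexts = 32, 2000, 512             # hidden size, vocab, contexts
W = rng.normal(size=(V, d))                   # output (unembedding) matrix
H = rng.normal(size=(n_contexts, d))          # hidden states for many contexts
logits = H @ W.T                              # (n_contexts, V)
print(np.linalg.matrix_rank(logits))          # <= d (here 32), far below V
```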
-
arxiv preprint – Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention
In this episode, we discuss Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention by Tsendsuren Munkhdalai, Manaal Faruqui, Siddharth Gopal. The paper presents a novel method for enabling Transformer-based Large Language Models to process extremely long inputs while keeping memory and computational requirements fixed. The technique introduced, called Infini-attention, blends a new form…
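A simplified single-head sketch of the per-segment computation (normalization and gating details are approximations on our part): a linear-attention-style compressive memory of fixed size carries information across segments, and its readout is mixed with ordinary local attention through a learned gate.

```python
import torch
import torch.nn.functional as F

def infini_attention_segment(q, k, v, memory, z, beta):
    """q, k, v: (seg_len, d); memory: (d, d); z: (d,); beta: scalar gate logit."""
    sigma_q, sigma_k = F.elu(q) + 1, F.elu(k) + 1
    # retrieve from memory written by all previous segments
    mem_out = (sigma_q @ memory) / (sigma_q @ z).clamp(min=1e-6).unsqueeze(-1)
    # standard causal attention within the current segment
    scores = (q @ k.T) / q.shape[-1] ** 0.5
    causal = torch.tril(torch.ones(len(q), len(q), dtype=torch.bool))
    local_out = scores.masked_fill(~causal, float("-inf")).softmax(-1) @ v
    # update the compressive memory: fixed footprint regardless of input length
    memory = memory + sigma_k.T @ v
    z = z + sigma_k.sum(dim=0)
    gate = torch.sigmoid(torch.as_tensor(beta, dtype=q.dtype))
    return gate * mem_out + (1 - gate) * local_out, memory, z
```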
-
arxiv preprint – Ferret-UI: Grounded Mobile UI Understanding with Multimodal LLMs
In this episode, we discuss Ferret-UI: Grounded Mobile UI Understanding with Multimodal LLMs by Keen You, Haotian Zhang, Eldon Schoop, Floris Weers, Amanda Swearngin, Jeffrey Nichols, Yinfei Yang, Zhe Gan. The paper presents Ferret-UI, a new multimodal large language model tailored for interpreting and interacting with mobile user interface screens, which overcomes common challenges through…
-
arxiv preprint – Evaluating Text-to-Visual Generation with Image-to-Text Generation
In this episode, we discuss Evaluating Text-to-Visual Generation with Image-to-Text Generation by Zhiqiu Lin, Deepak Pathak, Baiqi Li, Jiayao Li, Xide Xia, Graham Neubig, Pengchuan Zhang, Deva Ramanan. The paper introduces VQAScore, a novel metric for evaluating the alignment of generated images to text prompts, utilizing a visual-question-answering model to score the relevance of images…
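The recipe can be sketched in a few lines; the scoring callable below is a hypothetical stand-in for whatever visual-question-answering model is used, and the exact prompt template is an assumption on our part.

```python
def vqa_score(image, text, yes_prob_fn):
    """Pose alignment as a yes/no visual question and use the model's
    probability of answering "Yes" as the score. `yes_prob_fn(image, question)`
    is a hypothetical interface returning that probability."""
    question = f'Does this figure show "{text}"? Please answer yes or no.'
    return yes_prob_fn(image, question)

# toy usage with a fake scorer (a real one would return P("Yes" | image, question))
def fake_scorer(image, question):
    return 0.5

candidates = ["img_a.png", "img_b.png"]
best = max(candidates,
           key=lambda im: vqa_score(im, "a red cube on a blue sphere", fake_scorer))
```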
-
arxiv preprint – Future Lens: Anticipating Subsequent Tokens from a Single Hidden State
In this episode, we discuss Future Lens: Anticipating Subsequent Tokens from a Single Hidden State by Koyena Pal, Jiuding Sun, Andrew Yuan, Byron C. Wallace, David Bau. The paper investigates whether single hidden-state vectors from an input token in a model like GPT-J-6B can predict multiple future tokens in a sequence. Using linear approximation…
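A hedged sketch of the probing setup (training budget and shapes are illustrative, not the paper's exact protocol): regress from a single hidden state at position t onto the identity of a later token and measure how often the probe's top prediction is correct.

```python
import torch

def fit_future_probe(hidden_states, future_token_ids, vocab_size):
    """hidden_states: (n, d) states at position t; future_token_ids: (n,) long
    tensor of the tokens at position t+k. Returns the probe and its accuracy."""
    probe = torch.nn.Linear(hidden_states.shape[1], vocab_size)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    for _ in range(200):                      # small illustrative budget
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(probe(hidden_states), future_token_ids)
        loss.backward()
        opt.step()
    acc = (probe(hidden_states).argmax(-1) == future_token_ids).float().mean()
    return probe, acc.item()
```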
-
arxiv preprint – Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity
In this episode, we discuss Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity by Soyeong Jeong, Jinheon Baek, Sukmin Cho, Sung Ju Hwang, Jong C. Park. The paper introduces an adaptive QA model that optimizes the balance between efficiency and accuracy by choosing the appropriate response strategy for questions of varying complexity.…
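The routing logic can be sketched as below; the three callables are hypothetical stand-ins for the complexity classifier, the LLM, and the retriever, and the label names and iteration cap are illustrative.

```python
def adaptive_rag_answer(question, classify_complexity, llm, retrieve):
    """Route a question to no retrieval, single-step retrieval, or iterative
    multi-step retrieval based on a predicted complexity label."""
    label = classify_complexity(question)        # e.g. "A", "B", or "C"
    if label == "A":                             # simple: answer directly
        return llm(question)
    if label == "B":                             # moderate: single-step RAG
        docs = retrieve(question)
        return llm(question, context=docs)
    answer, context = None, []                   # complex: multi-step RAG
    for _ in range(3):                           # iteration cap is illustrative
        context += retrieve(question if answer is None else answer)
        answer = llm(question, context=context)
    return answer
```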
-
arxiv preprint – Mixture-of-Depths: Dynamically allocating compute in transformer-based language models
In this episode, we discuss Mixture-of-Depths: Dynamically allocating compute in transformer-based language models by David Raposo, Sam Ritter, Blake Richards, Timothy Lillicrap, Peter Conway Humphreys, Adam Santoro. The study presents a method for transformers that allows for the dynamic allocation of computational resources within sequences by limiting the number of tokens processed at each layer…
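A minimal sketch of one such layer (our illustration; the routing weight and capacity fraction are assumptions): a learned router scores every token, only the top-k tokens pass through the expensive block, and the rest ride the residual stream unchanged. Multiplying the block output by the router score keeps the router differentiable.

```python
import torch

def mixture_of_depths_layer(x, block, router_weight, capacity=0.25):
    """x: (batch, seq, d); block: the expensive attention/MLP sub-layer,
    e.g. torch.nn.Sequential(torch.nn.Linear(d, d), torch.nn.GELU());
    router_weight: (d,) learned routing vector."""
    b, s, d = x.shape
    k = max(1, int(s * capacity))
    scores = x @ router_weight                   # (batch, seq) routing logits
    top = scores.topk(k, dim=1).indices          # tokens selected per sequence
    out = x.clone()
    for i in range(b):                           # gather/scatter kept simple here
        idx = top[i]
        out[i, idx] = x[i, idx] + scores[i, idx].sigmoid().unsqueeze(-1) * block(x[i, idx])
    return out
```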
-
arxiv preprint – WavLLM: Towards Robust and Adaptive Speech Large Language Model
In this episode, we discuss WavLLM: Towards Robust and Adaptive Speech Large Language Model by Shujie Hu, Long Zhou, Shujie Liu, Sanyuan Chen, Hongkun Hao, Jing Pan, Xunying Liu, Jinyu Li, Sunit Sivasankaran, Linquan Liu, Furu Wei. The paper introduces WavLLM, a robust speech large language model with a unique dual-encoder system—one for semantic content…