arXiv Paper – Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Multimodal Models

In this episode, we discuss Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Multimodal Models by Matt Deitke, Christopher Clark, Sangho Lee, Rohun Tripathi, Yue Yang, Jae Sung Park, Mohammadreza Salehi, Niklas Muennighoff, Kyle Lo, Luca Soldaini, Jiasen Lu, Taira Anderson, Erin Bransom, Kiana Ehsani, Huong Ngo, YenSung Chen, Ajay Patel, Mark Yatskar, Chris Callison-Burch, Andrew Head, Rose Hendrix, Favyen Bastani, Eli VanderBilt, Nathan Lambert, Yvonne Chou, Arnavi Chheda, Jenna Sparks, Sam Skjonsberg, Michael Schmitz, Aaron Sarnat, Byron Bischoff, Pete Walsh, Chris Newell, Piper Wolters, Tanmay Gupta, Kuo-Hao Zeng, Jon Borchardt, Dirk Groeneveld, Jen Dumas, Crystal Nam, Sophie Lebrecht, Caitlin Wittlif, Carissa Schoenick, Oscar Michel, Ranjay Krishna, Luca Weihs, Noah A. Smith, Hannaneh Hajishirzi, Ross Girshick, Ali Farhadi, Aniruddha Kembhavi. The paper introduces Molmo, a new family of open vision-language models (VLMs) designed to foster transparency and accessibility. Central to Molmo’s development is a novel image-caption dataset collected from human speech-based descriptions, along with a diverse fine-tuning mixture that incorporates Q&A and 2D pointing data. The 72B Molmo model outperforms other open-weight systems and several proprietary ones, and the authors plan to release all model weights, data, and source code.

