ICLR 2023 – DreamFusion: Text-to-3D using 2D Diffusion


In this episode we discuss DreamFusion: Text-to-3D using 2D Diffusion by Ben Poole, Ajay Jain, Jonathan T. Barron, and Ben Mildenhall. The paper presents DreamFusion, a method that uses a pretrained 2D text-to-image diffusion model to synthesize 3D objects from text. A randomly initialized 3D model (a Neural Radiance Field, or NeRF) is optimized via gradient descent with a loss based on probability density distillation, which the authors call Score Distillation Sampling (SDS), so that its 2D renderings from random viewpoints score well under the diffusion model for the given text prompt. The approach requires neither 3D training data nor any modification to the image diffusion model, demonstrating that pretrained 2D models can serve as effective priors for text-to-3D synthesis.
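To make the optimization loop concrete, here is a minimal sketch of one Score Distillation Sampling update in PyTorch. This is not the authors' code: `render_nerf`, `diffusion_eps`, and the weighting choice `w` are hypothetical placeholders, and it assumes access to a frozen, pretrained noise-prediction diffusion model and a differentiable renderer.

```python
import torch

# Hypothetical stand-ins (not from the paper's codebase):
#   render_nerf(params, camera)        -> rendered image x, shape (1, 3, H, W),
#                                         differentiable w.r.t. params
#   diffusion_eps(z_t, t, text_emb)    -> predicted noise, same shape as z_t
#   alphas_cumprod                     -> 1D tensor of cumulative alpha-bar values

def sds_step(params, optimizer, camera, text_emb, alphas_cumprod,
             render_nerf, diffusion_eps):
    """One score-distillation update (a sketch under the assumptions above)."""
    x = render_nerf(params, camera)                 # differentiable render
    t = torch.randint(20, 980, (1,))                # random diffusion timestep
    alpha_bar = alphas_cumprod[t].view(1, 1, 1, 1)
    eps = torch.randn_like(x)
    z_t = alpha_bar.sqrt() * x + (1 - alpha_bar).sqrt() * eps  # noise the render

    with torch.no_grad():                           # diffusion model stays frozen
        eps_hat = diffusion_eps(z_t, t, text_emb)

    w = 1 - alpha_bar                               # one common weighting choice
    # SDS gradient: w(t) * (eps_hat - eps) * dx/dtheta, injected as a
    # custom gradient on the rendered image rather than a scalar loss.
    grad = w * (eps_hat - eps)
    optimizer.zero_grad()
    x.backward(gradient=grad)
    optimizer.step()
```

Note that gradients never flow through the diffusion model itself: the predicted noise is detached, and the residual `eps_hat - eps` is pushed back only through the rendered image into the 3D model's parameters.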

