CVPR 2023 – Consistent View Synthesis with Pose-Guided Diffusion Models


In this episode we discuss Consistent View Synthesis with Pose-Guided Diffusion Models
by Hung-Yu Tseng, Qinbo Li, Changil Kim, Suhib Alsisan, Jia-Bin Huang, and Johannes Kopf. The paper proposes a new technique for synthesizing novel views from a single image, targeting virtual reality applications. The proposed pose-guided diffusion model generates consistent, high-quality views even under large camera movements. A key component is an epipolar attention layer, which uses epipolar lines as constraints to associate features between the input and target viewpoints. The results demonstrate the effectiveness of the proposed method against state-of-the-art transformer-based and GAN-based models on both synthetic and real-world datasets.
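
To make the epipolar-attention idea concrete: given the relative pose between the two views, each target pixel maps to an epipolar line in the source view, and attention can be restricted to source positions near that line. The snippet below is a minimal sketch of such epipolar-constrained cross-attention, not the authors' implementation; the function name `epipolar_attention`, the fundamental-matrix input, and the pixel-distance threshold are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def epipolar_attention(q_feat, kv_feat, fundamental, hw, thresh=2.0):
    """Cross-attention from target-view features to source-view features,
    masked so each target pixel attends only near its epipolar line.

    q_feat:      (N, C) target-view features, one row per pixel
    kv_feat:     (N, C) source-view features, one row per pixel
    fundamental: (3, 3) fundamental matrix mapping target pixels to
                 epipolar lines in the source view (l = F x)
    hw:          (H, W) spatial size used to recover pixel coordinates
    thresh:      maximum pixel distance from the epipolar line to keep
    """
    H, W = hw
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32),
                            indexing="ij")
    # Homogeneous pixel coordinates, one column per pixel: (3, N)
    pix = torch.stack([xs.reshape(-1), ys.reshape(-1), torch.ones(H * W)], dim=0)

    # Epipolar line in the source view for every target pixel: (3, N)
    lines = fundamental @ pix

    # Distance of every source pixel j to the epipolar line of target pixel i
    num = (lines.t() @ pix).abs()                         # (N, N) |l_i . x'_j|
    denom = lines[:2].norm(dim=0, keepdim=True).t()       # (N, 1) sqrt(a_i^2 + b_i^2)
    dist = num / (denom + 1e-8)

    # Scaled dot-product attention, masked outside the epipolar band
    attn = (q_feat @ kv_feat.t()) / q_feat.shape[-1] ** 0.5
    attn = attn.masked_fill(dist > thresh, float("-inf"))
    weights = F.softmax(attn, dim=-1)
    return weights @ kv_feat                               # (N, C) aggregated source features
```

Note that a target pixel whose epipolar line falls entirely outside the source image would end up with a fully masked attention row, so a practical implementation needs a fallback (for example, unmasked attention) for such degenerate cases.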

