CVPR 2023 Honorable Mention Award winner – DynIBaR: Neural Dynamic Image-Based Rendering


In this episode we discuss DynIBaR: Neural Dynamic Image-Based Rendering
by Zhengqi Li, Qianqian Wang, Forrester Cole, Richard Tucker, Noah Snavely. The paper presents a new approach called “DynIBaR” that can generate novel views from a monocular video of a dynamic scene. Existing methods struggle with complex object motions and uncontrolled camera paths, resulting in blurry or inaccurate renderings. DynIBaR addresses these limitations by using a volumetric image-based rendering framework that combines features from nearby views in a motion-aware manner, enabling the synthesis of photo-realistic views from long videos with complex dynamics and varied camera movements. The approach outperforms existing methods on dynamic scene datasets and is also applied successfully to challenging real-world videos with difficult camera and object motion.

