CVPR 2023 – Learning Neural Duplex Radiance Fields for Real-Time View Synthesis


In this episode we discuss "Learning Neural Duplex Radiance Fields for Real-Time View Synthesis" by Ziyu Wan, Christian Richardt, Aljaž Božič, Chao Li, Vijay Rengarajan, Seonghyeon Nam, Xiaoyu Xiang, Tuotuo Li, Bo Zhu, Rakesh Ranjan, and Jing Liao. The paper proposes a more efficient approach to photorealistic rendering with Neural Radiance Fields (NeRFs). Standard NeRFs require hundreds of deep MLP evaluations per pixel, which is prohibitively expensive for real-time rendering. The proposed method overcomes this by distilling and baking a trained NeRF into a highly efficient mesh-based neural representation that is compatible with the massively parallel graphics rendering pipeline. Instead of per-pixel MLPs, it uses screen-space convolutions to exploit local geometric relationships between nearby pixels, and rendering quality is further boosted by a multi-view distillation optimization strategy. Extensive experiments on standard datasets demonstrate the effectiveness and superiority of the approach.
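To give a flavor of the screen-space idea discussed above: rather than running a deep MLP independently for every pixel, a convolution over a rasterized feature buffer lets each output pixel cheaply aggregate information from its neighbors. The sketch below is a minimal, hypothetical illustration of that principle in NumPy (a naive single-kernel convolution over a toy feature buffer), not the paper's actual network or data layout.

```python
import numpy as np

def screen_space_conv(features, kernel):
    """Naive screen-space convolution: each output pixel aggregates
    features from a small k x k neighborhood, exploiting local coherence
    between nearby pixels. Hypothetical sketch, not the paper's model."""
    h, w, c = features.shape
    k = kernel.shape[0]
    pad = k // 2
    # Edge padding keeps border pixels well-defined without injecting zeros.
    padded = np.pad(features, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros((h, w, c))
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + k, x:x + k, :]  # (k, k, c) neighborhood
            # Weight each neighbor by the kernel and sum over the window.
            out[y, x] = np.tensordot(kernel, patch, axes=([0, 1], [0, 1]))
    return out

# Toy example: an 8x8 buffer of 4-channel rasterized features,
# smoothed by a simple 3x3 box filter.
feats = np.random.rand(8, 8, 4)
kernel = np.ones((3, 3)) / 9.0
out = screen_space_conv(feats, kernel)
print(out.shape)  # (8, 8, 4)
```

The point of the sketch is the cost model: one small shared kernel touches every pixel in a single pass, which maps naturally onto the GPU rasterization pipeline, whereas a per-pixel deep MLP repeats hundreds of weight multiplications at every pixel independently.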

