CoR-GS: Sparse-View 3D Gaussian Splatting via Co-Regularization

1Beihang University, 2SKLCCSE, Institute of Artificial Intelligence, Beihang University, 3School of Computing, Macquarie University,
4RIKEN AIP, 5The University of Tokyo
ECCV 2024

Illustration of how the different behaviors of two 3D Gaussian radiance fields correlate with reconstruction quality. Gaussians that behave differently tend not to fit the ground-truth shape well; inaccurate reconstructions can therefore be identified by measuring the differences between the two fields, without accessing ground-truth information.

Abstract

3D Gaussian Splatting (3DGS) creates a radiance field consisting of 3D Gaussians to represent a scene. With sparse training views, 3DGS easily suffers from overfitting, negatively impacting the reconstruction quality.

This paper introduces a new co-regularization perspective for improving sparse-view 3DGS. When two 3D Gaussian radiance fields are trained on the same sparse views of a scene, we observe that, stemming from the sampling implementation in densification, they exhibit point disagreement and rendering disagreement, which can predict reconstruction quality without supervision. We quantify point disagreement by evaluating the registration between the Gaussians' point representations, and rendering disagreement by computing the differences between their rendered pixels. Our empirical study demonstrates a negative correlation between the two disagreements and reconstruction accuracy, which allows inaccurate reconstruction to be identified without access to ground-truth information.
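To make the two measurements concrete, below is a minimal sketch (not the authors' released code) of how the disagreements could be computed, assuming each radiance field is summarized by its Gaussian centers as an (N, 3) tensor and by an image it renders from a given camera; the function names, the inlier threshold tau, and the L1 pixel difference are illustrative assumptions.

import torch

def point_disagreement(centers_a, centers_b, tau=0.01):
    # Registration-style check: for each center of field A, find its nearest
    # neighbor in field B and count it as an inlier if it lies within tau.
    dists = torch.cdist(centers_a, centers_b)      # (Na, Nb) pairwise distances
    nn_dist, _ = dists.min(dim=1)                  # nearest neighbor in B per A point
    fitness = (nn_dist < tau).float().mean()       # fraction of matched (inlier) points
    return 1.0 - fitness.item()                    # higher value = more point disagreement

def rendering_disagreement(img_a, img_b):
    # Per-pixel color difference between the two fields' renderings of shape (3, H, W).
    return (img_a - img_b).abs().mean(dim=0)       # (H, W) disagreement map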

Based on the study, we propose CoR-GS, which identifies and suppresses inaccurate reconstruction using the two disagreements: (i) Co-pruning treats Gaussians that exhibit high point disagreement as occupying inaccurate positions and prunes them. (ii) Pseudo-view co-regularization treats pixels that exhibit high rendering disagreement as inaccurately rendered and suppresses the disagreement.
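The two operations can be sketched in the same spirit; this is a hedged illustration under the same assumptions as above (Gaussian centers as tensors, both fields rendering a shared pseudo view), not the paper's exact implementation. The distance threshold and the plain L1 penalty are placeholders.

import torch

def co_pruning_mask(centers_self, centers_other, tau=0.01):
    # Keep only Gaussians that have a close counterpart in the other field;
    # those without one are treated as occupying inaccurate positions and pruned.
    nn_dist, _ = torch.cdist(centers_self, centers_other).min(dim=1)
    return nn_dist < tau                           # boolean keep-mask over this field's Gaussians

def pseudo_view_coreg_loss(render_a, render_b):
    # Both fields render the same sampled pseudo view; penalizing their pixel
    # difference suppresses rendering disagreement without any ground truth.
    return (render_a - render_b).abs().mean()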

Results on LLFF, Mip-NeRF360, DTU, and Blender demonstrate that CoR-GS effectively regularizes scene geometry, reconstructs compact representations, and achieves state-of-the-art novel view synthesis quality under sparse training views.

Video


Method

CoR-GS trains two 3D Gaussian radiance fields on the same sparse views and performs co-regularization during training. It improves sparse-view 3DGS by identifying and suppressing inaccurate reconstruction based on point disagreement and rendering disagreement.
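For orientation, one possible per-iteration objective combines standard photometric supervision of both fields on the training views with the pseudo-view co-regularization term. The sketch below assumes the renderings and ground-truth image are already available as tensors; the L1 photometric loss and the weight lam are stand-ins, not the paper's exact formulation.

def cor_gs_loss(render_a_train, render_b_train, gt_image,
                render_a_pseudo, render_b_pseudo, lam=1.0):
    # Photometric supervision of both fields on a shared sparse training view.
    photo = (render_a_train - gt_image).abs().mean() + \
            (render_b_train - gt_image).abs().mean()
    # Co-regularization on an unobserved pseudo view (no ground truth needed).
    coreg = (render_a_pseudo - render_b_pseudo).abs().mean()
    return photo + lam * coreg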

Comparison

Comparison with current SOTA baselines. Zoom in for better visualization.

LLFF

BibTeX

@article{zhang2024cor,
  title={CoR-GS: Sparse-View 3D Gaussian Splatting via Co-Regularization},
  author={Zhang, Jiawei and Li, Jiahe and Yu, Xiaohan and Huang, Lei and Gu, Lin and Zheng, Jin and Bai, Xiao},
  journal={arXiv preprint arXiv:2405.12110},
  year={2024}
}