GS^3: Efficient Relighting with Triple Gaussian Splatting

SIGGRAPH Asia 2024

State Key Lab of CAD and CG, Zhejiang University
*contributed equally

Free-viewpoint relighting results, trained on 500-2,000 photographs per scene captured with a lightstage.

Abstract

We present a spatial- and angular-Gaussian-based representation and a triple splatting process for real-time, high-quality novel lighting-and-view synthesis from multi-view, point-lit input images. To describe complex appearance, we employ a Lambertian term plus a mixture of angular Gaussians as an effective reflectance function for each spatial Gaussian. To generate self-shadows, we splat all spatial Gaussians towards the light source to obtain shadow values, which are further refined by a small multi-layer perceptron. To compensate for other effects such as global illumination, another network is trained to compute and add a per-spatial-Gaussian RGB tuple. The effectiveness of our representation is demonstrated on 30 samples with a wide variation in geometry (from solid to fluffy) and appearance (from translucent to anisotropic), as well as on different forms of input data, including rendered images of synthetic/reconstructed objects, photographs captured with a handheld camera and a flash, and images from a professional lightstage. We achieve a training time of 40-70 minutes and a rendering speed of 90 fps on a single commodity GPU. Our results compare favorably with state-of-the-art techniques in terms of quality and performance.
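As a rough illustration of the appearance model described above, the sketch below evaluates a Lambertian term plus a mixture of K angular Gaussian lobes for a single spatial Gaussian. This is a minimal PyTorch sketch, not the released implementation: the function name shade_gaussian, all parameter names, and the isotropic lobe form exp(lambda * (h . a - 1)) evaluated on the half vector are illustrative assumptions; the paper's actual lobe parameterization and shading pipeline may differ.

import torch
import torch.nn.functional as F

def shade_gaussian(albedo, lobe_axes, lobe_sharpness, lobe_colors, wi, wo):
    # Lambertian term plus a mixture of angular Gaussian lobes for one
    # spatial Gaussian. All names and the isotropic lobe form are
    # illustrative assumptions, not the paper's exact parameterization.
    #   albedo:         (3,)   diffuse RGB
    #   lobe_axes:      (K, 3) unit lobe directions (learned)
    #   lobe_sharpness: (K,)   lobe concentrations (learned)
    #   lobe_colors:    (K, 3) per-lobe RGB weights (learned)
    #   wi, wo:         (3,)   unit light / view directions
    h = F.normalize(wi + wo, dim=-1)                  # half vector
    cos = (lobe_axes * h).sum(-1).clamp(max=1.0)      # (K,) alignment with each lobe axis
    lobes = torch.exp(lobe_sharpness * (cos - 1.0))   # (K,) each lobe peaks at 1 when h == axis
    specular = (lobe_colors * lobes[:, None]).sum(0)  # (3,) weighted lobe sum
    return albedo + specular                          # outgoing RGB before shadowing

# Example: evaluate K = 4 random lobes for random light/view directions.
K = 4
rgb = shade_gaussian(
    albedo=torch.rand(3),
    lobe_axes=F.normalize(torch.randn(K, 3), dim=-1),
    lobe_sharpness=torch.rand(K) * 50.0,
    lobe_colors=torch.rand(K, 3),
    wi=F.normalize(torch.randn(3), dim=-1),
    wo=F.normalize(torch.randn(3), dim=-1),
)

In a full pipeline, these per-Gaussian parameters would be optimized jointly with the spatial Gaussians from the multi-view, point-lit images, and the output would then be modulated by the splatted-and-refined shadow values and the per-Gaussian residual RGB described in the abstract.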

Video

Concurrent Work

BibTeX

Our source code and data are released under the GPLv3 license for academic purposes. For commercial licensing options, please email hwu at acm.org.
@inproceedings{bi2024rgs,
    title      = {GS\textsuperscript{3}: Efficient Relighting with Triple Gaussian Splatting},
    author     = {Zoubin Bi and Yixin Zeng and Chong Zeng and Fan Pei and Xiang Feng and Kun Zhou and Hongzhi Wu},
    booktitle  = {SIGGRAPH Asia 2024 Conference Papers},
    year       = {2024}
}