We present a simple algorithm for differentiable rendering of surfaces represented by signed distance fields (SDFs), which makes it easy to integrate rendering into gradient-based optimization pipelines. To tackle the visibility-related discontinuities that make rendering non-differentiable, existing physically based differentiable rendering methods often rely on elaborate guiding data structures or reparameterization with a global impact on variance. In this article, we investigate an alternative that embraces nonzero bias in exchange for low variance and architectural simplicity. Our method expands the lower-dimensional boundary integral into a thin band that is easy to sample when the underlying surface is represented by an SDF. We demonstrate the performance and robustness of our formulation in end-to-end inverse rendering tasks, where it obtains results that are competitive with or superior to existing work.
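To make the thin-band idea above concrete (a minimal illustrative sketch, not the paper's actual estimator): an SDF reports the signed distance to the surface directly, so deciding whether a sample falls inside a band of half-width eps around the zero level set, and weighting it smoothly within that band, costs no more than a grid lookup. The function names, the trilinear lookup, the bump profile, and the value of eps below are all illustrative assumptions.

    import numpy as np

    def sdf_trilinear(grid, x):
        """Trilinearly interpolate a dense SDF grid at points x in [0, 1]^3.
        grid: (n, n, n) array of signed distances; x: (m, 3) query points."""
        n = grid.shape[0]
        p = np.clip(x, 0.0, 1.0) * (n - 1)
        i0 = np.clip(np.floor(p).astype(int), 0, n - 2)
        t = p - i0
        # Gather the 8 corner values, then blend along the x, y, z axes in turn.
        c = np.empty((x.shape[0], 2, 2, 2))
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    c[:, dx, dy, dz] = grid[i0[:, 0] + dx, i0[:, 1] + dy, i0[:, 2] + dz]
        c = c[:, 0] * (1 - t[:, 0, None, None]) + c[:, 1] * t[:, 0, None, None]
        c = c[:, 0] * (1 - t[:, 1, None]) + c[:, 1] * t[:, 1, None]
        return c[:, 0] * (1 - t[:, 2]) + c[:, 1] * t[:, 2]

    def band_weight(sdf_values, eps=1e-2):
        """Smooth weight that is 1 on the surface and falls to 0 at |f(x)| = eps,
        i.e. a soft indicator of a thin band around the zero level set."""
        u = np.clip(np.abs(sdf_values) / eps, 0.0, 1.0)
        return (1.0 - u) ** 2 * (1.0 + 2.0 * u)

Samples whose weight is zero can be discarded immediately, so only points near the surface ever contribute.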
We compare the final inverse-rendering results of SDF Convolution (8 auxiliary spp), SDF Reparameterization, and our method using the same optimization setup: in total, we run 5000 iterations optimizing 50 views; in each iteration, we optimize a batch of 5 views, rendered at 512 × 512 resolution with 64 spp. In all test cases, our method produces comparable or more accurate reconstructions.
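A hypothetical outline of this schedule is sketched below (not the authors' released code): params, references, and render stand in for the optimized SDF/material parameters, the reference images, and the differentiable SDF renderer, and the trivial renderer and Adam optimizer are only there to keep the snippet self-contained and runnable.

    import torch

    torch.manual_seed(0)
    n_iters, n_views, batch_size = 5000, 50, 5
    res, spp = 512, 64

    params = torch.zeros(64, 64, 64, requires_grad=True)            # placeholder SDF grid
    references = [torch.rand(res, res, 3) for _ in range(n_views)]  # placeholder reference views

    def render(params, view, res, spp):
        # Trivial differentiable stand-in for the actual SDF renderer.
        return params.mean() * torch.ones(res, res, 3)

    opt = torch.optim.Adam([params], lr=1e-2)
    for it in range(n_iters):
        batch = torch.randperm(n_views)[:batch_size]   # optimize a batch of 5 of the 50 views
        loss = sum((render(params, int(v), res, spp) - references[int(v)]).abs().mean()
                   for v in batch)
        opt.zero_grad()
        loss.backward()
        opt.step()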
We quantitatively measure the performance of the different methods on the synthetic Chair, Lego, Hotdog, Ficus, and Drum scenes. For 2D evaluation, we test novel-view rendering and relighting under a high-contrast and a low-contrast environment map. For 3D evaluation, we measure the Chamfer-L1 distance using random sample points on the ground-truth mesh. We use the same optimization setup as above and additionally run SDF Reparameterization using their released hqq setup. Since previous methods require tricubic interpolation of the SDF grid, we further test our method with both trilinear and tricubic interpolation for reference.
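For reference, one common way to evaluate the Chamfer-L1 distance from pre-sampled surface points is sketched below; the exact sampling density and normalization used in the paper may differ, and chamfer_l1 is our own illustrative helper.

    import numpy as np
    from scipy.spatial import cKDTree

    def chamfer_l1(points_pred, points_gt):
        """Symmetric Chamfer distance between two point sets sampled on the
        reconstructed and ground-truth surfaces: the mean nearest-neighbour
        distance, averaged over both directions."""
        d_pred, _ = cKDTree(points_gt).query(points_pred)   # reconstruction -> GT distances
        d_gt, _ = cKDTree(points_pred).query(points_gt)     # GT -> reconstruction distances
        return 0.5 * (d_pred.mean() + d_gt.mean())

    # Example with stand-in point clouds:
    # print(chamfer_l1(np.random.rand(100_000, 3), np.random.rand(100_000, 3)))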
The physically based nature of our differentiable renderer enables joint optimization of geometry and material. Here we visualize the geometry, normals, and albedo of our final inverse rendering. Note how we can largely disentangle shading effects from the albedo (compare with the top-left rendering).
Physically based differentiable renderers also make relighting straightforward. We demonstrate renderings of the reconstructed Chair under various lighting conditions. In all cases, the shadows and highlights in our renderings are consistent with the ground truth.
@inproceedings{zichen2024relaxedboundary,
  author    = {Wang, Zichen and Deng, Xi and Zhang, Ziyi and Jakob, Wenzel and Marschner, Steve},
  title     = {A Simple Approach to Differentiable Rendering of SDFs},
  booktitle = {ACM SIGGRAPH Asia 2024 Conference Proceedings},
  year      = {2024},
}