  • on: June 17, 2020
  • in: NeurIPS

MetaSDF: Meta-learning Signed Distance Functions

  • Vincent Sitzmann
  • Eric Ryan Chan
  • Richard Tucker
  • Noah Snavely
  • Gordon Wetzstein
@inproceedings{sitzmann2020metasdf,
    author = {Sitzmann, Vincent
              and Chan, Eric R.
              and Tucker, Richard
              and Snavely, Noah
              and Wetzstein, Gordon},
    title = {MetaSDF: Meta-Learning Signed
             Distance Functions},
    booktitle = {Proc. NeurIPS},
    year = {2020}
}

Unsupervised learning with generative models has the potential to discover rich representations of 3D scenes. While geometric deep learning has explored 3D-structure-aware representations of scene geometry, these models typically require explicit 3D supervision. Emerging neural scene representations can be trained using only posed 2D images, but existing methods ignore the three-dimensional structure of scenes. We propose Scene Representation Networks (SRNs), a continuous, 3D-structure-aware scene representation that encodes both geometry and appearance. SRNs represent scenes as continuous functions that map world coordinates to a feature representation of local scene properties. By formulating image formation as a differentiable ray-marching algorithm, SRNs can be trained end-to-end from only 2D images and their camera poses, without access to depth or shape. This formulation naturally generalizes across scenes, learning powerful geometry and appearance priors in the process. We demonstrate the potential of SRNs by evaluating them for novel view synthesis, few-shot reconstruction, joint shape and appearance interpolation, and unsupervised discovery of a non-rigid face model.
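
To make the idea in the abstract concrete, the sketch below shows a coordinate-based scene MLP and a simplified differentiable ray marcher in PyTorch. It is a minimal sketch, not the authors' implementation: the names (SceneMLP, march_rays, step_head), the feature dimensions, and the fixed number of marching steps are illustrative assumptions, and the learned LSTM ray marcher and pixel generator of the actual method are omitted.

# Minimal sketch (assumptions noted above), not the authors' code.
import torch
import torch.nn as nn

class SceneMLP(nn.Module):
    """Maps 3D world coordinates to a feature vector describing local scene properties."""
    def __init__(self, feature_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, feature_dim),
        )

    def forward(self, coords):           # coords: (N, 3) world coordinates
        return self.net(coords)          # (N, feature_dim) local scene features

def march_rays(scene, ray_origins, ray_dirs, step_head, num_steps=10):
    """Simplified differentiable ray marching: the feature at the current point
    along each ray predicts how far to advance toward the surface."""
    depth = torch.full(ray_origins.shape[:1], 0.05, device=ray_origins.device)
    for _ in range(num_steps):
        points = ray_origins + depth.unsqueeze(-1) * ray_dirs
        features = scene(points)
        # Keep step lengths non-negative so the marcher only moves forward.
        depth = depth + torch.relu(step_head(features)).squeeze(-1)
    return ray_origins + depth.unsqueeze(-1) * ray_dirs   # estimated surface points

# Usage: intersect a batch of rays with the learned scene; a separate pixel
# generator (omitted here) would decode features at these points into colors.
scene = SceneMLP()
step_head = nn.Linear(256, 1)
origins = torch.zeros(1024, 3)
dirs = nn.functional.normalize(torch.randn(1024, 3), dim=-1)
surface_points = march_rays(scene, origins, dirs, step_head)

Because every operation above is differentiable, a 2D reconstruction loss on the rendered pixels can propagate gradients back into the scene representation, which is what allows training from posed images alone.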