Massachusetts Institute of Technology
  • on: May 8, 2022
  • in: TMLR

Unsupervised Discovery and Composition of Object Light Fields

  • Cameron Smith
  • Hong-Xing Yu
  • Sergey Zakharov
  • Frédo Durand
  • Joshua B. Tenenbaum
  • Jiajun Wu
  • Vincent Sitzmann
@article{smith2022colf,
    title = { Unsupervised Discovery and Composition of Object Light Fields },
    author = { Smith, Cameron and 
               Yu, Hong-Xing and 
               Zakharov, Sergey and 
               Durand, Frédo and 
               Tenenbaum, Joshua B. and 
               Wu, Jiajun and 
               Sitzmann, Vincent },
    year = { 2022 },
    journal = { TMLR },
}

Neural scene representations, both continuous and discrete, have recently emerged as a powerful new paradigm for 3D scene understanding. Recent efforts have tackled unsupervised discovery of object-centric neural scene representations. However, the high cost of ray-marching, exacerbated by the fact that each object representation has to be ray-marched separately, leads to insufficiently sampled radiance fields and thus noisy renderings, low framerates, and high memory and time complexity during training and rendering. Here, we propose to represent objects in an object-centric, compositional scene representation as light fields. We propose a novel light field compositor module that enables reconstructing the global light field from a set of object-centric light fields. Dubbed Compositional Object Light Fields (COLF), our method enables unsupervised learning of object-centric neural scene representations, state-of-the-art reconstruction and novel view synthesis performance on standard datasets, and rendering and training speeds orders of magnitude faster than existing 3D approaches.
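To give a rough intuition for the compositor idea, the sketch below shows one simple way a set of per-object light fields could be combined into a global light field: each object predicts a color and an unnormalized "claim" score per ray, and the compositor normalizes the scores over objects and blends the colors. This is a hypothetical, simplified illustration, not the paper's actual architecture; the function names, the use of a softmax over objects, and the toy shapes are all assumptions for exposition.

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def composite_light_fields(colors, logits):
    """Blend per-object ray colors into a single global ray color.

    colors: (num_objects, num_rays, 3) array -- the color each
        object-centric light field predicts for each ray.
    logits: (num_objects, num_rays) array -- unnormalized per-ray
        scores indicating how strongly each object "claims" the ray
        (hypothetical; the real compositor may differ).
    Returns: (num_rays, 3) array of composited colors.
    """
    weights = softmax(logits, axis=0)            # normalize over objects, per ray
    return (weights[..., None] * colors).sum(axis=0)

# Toy usage: two objects, three rays.
colors = np.array([
    [[1.0, 0.0, 0.0]] * 3,   # object A predicts red for every ray
    [[0.0, 0.0, 1.0]] * 3,   # object B predicts blue for every ray
])
logits = np.array([
    [10.0, -10.0, 0.0],      # A strongly claims ray 0
    [-10.0, 10.0, 0.0],      # B strongly claims ray 1; ray 2 is tied
])
out = composite_light_fields(colors, logits)
```

Because a light field maps a ray directly to a color, compositing needs only one such blend per ray, rather than marching and merging samples along each ray for every object separately, which is the source of the speedups the abstract describes.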