ScanNet++ Novel View Synthesis and 3D Semantic Understanding Challenge

CVPR 2024 Workshop, Seattle, USA



Recent advances in generative modeling and semantic understanding have spurred significant interest in the synthesis and understanding of 3D scenes. Application areas such as augmented and virtual reality, computational photography, interior design, and autonomous mobile robotics all require a deep understanding of 3D scene spaces. We propose the first benchmark challenge for novel view synthesis in large-scale 3D scenes, along with high-fidelity, large-vocabulary 3D semantic scene understanding, where very complete, high-fidelity ground-truth scene data is available. This is enabled by the new ScanNet++ dataset, which offers 1mm-resolution laser scan geometry, high-quality DSLR image capture, and dense semantic annotations over 1000 class categories. Notably, existing view synthesis benchmarks rely on data captured from a single continuous trajectory, making evaluation on novel views outside the original capture trajectory impossible. In contrast, our novel view synthesis challenge uses test images captured intentionally outside the training trajectory, enabling comprehensive evaluation of state-of-the-art methods in new, challenging scenarios.
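Novel view synthesis submissions of this kind are typically scored by comparing each rendered test view against its held-out ground-truth image with image-similarity metrics such as PSNR. Below is a minimal sketch of PSNR-based scoring, assuming images are float arrays in [0, 1]; the variable names and toy data are illustrative, not the challenge's official evaluation script.

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio between a rendered view and its ground truth."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy stand-ins for held-out test views and a method's renders (hypothetical data).
rng = np.random.default_rng(0)
gt_views = [rng.random((4, 4, 3)) for _ in range(3)]
renders = [np.clip(v + rng.normal(0.0, 0.01, v.shape), 0.0, 1.0) for v in gt_views]

# Average the per-view scores to get a single scene-level number.
scores = [psnr(r, g) for r, g in zip(renders, gt_views)]
mean_psnr = float(np.mean(scores))
```

Benchmarks of this kind usually report PSNR alongside perceptual metrics such as SSIM and LPIPS, since PSNR alone can reward blurry renders.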


Welcome and Introduction 8:50am - 9:00am
Invited Talk 1 9:00am - 9:40am
Invited Talk 2 9:40am - 10:20am
Winner Talks 10:20am - 10:40am
Invited Talk 3 10:40am - 11:20am
Invited Talk 4 11:20am - 12:00pm
Panel Discussion and Conclusion 12:00pm - 12:30pm

Invited Speakers


Angela Dai
Technical University of Munich
Yueh-Cheng Liu
Technical University of Munich
Chandan Yeshwanth
Technical University of Munich
Matthias Niessner
Technical University of Munich