Submitted by Maximilian Jaritz.

Submission data

Full name: Multi-view PointNet
Description: We propose a sequential 2D-3D fusion pipeline. First, image features are computed with a 2D CNN and lifted to the 3D point cloud using depth maps, poses, and calibration data, so that each point in the 3D point cloud is augmented with an image feature vector of size 64. Then, the feature-augmented point cloud is fed into PointNet++ for the final 3D segmentation.
Publication title: Multi-view PointNet for 3D Scene Understanding
Publication authors: Maximilian Jaritz, Jiayuan Gu, Hao Su
Publication venue: GMDL Workshop, ICCV 2019
Publication URL: https://arxiv.org/abs/1909.13603
Input data types: Uses Color, Uses Geometry, Uses 2D, Uses 3D
Programming language(s): Python
Hardware: GTX 1080Ti
Source code or download URL: https://github.com/maxjaritz/mvpnet
Submission creation date: 1 Mar, 2019
Last edited: 12 Jan, 2020
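
The 2D-to-3D feature lifting described above can be sketched as follows. This is a minimal single-view illustration, not the authors' implementation (the actual MVPNet aggregates features from multiple views); the function and variable names are hypothetical, and a pinhole camera model with nearest-neighbour feature sampling is assumed:

```python
import numpy as np

def lift_image_features_to_points(points_world, feat_map, K, world_to_cam):
    """Augment 3D points with 2D CNN features by projecting them into the image.

    points_world : (N, 3) point cloud in world coordinates
    feat_map     : (C, H, W) per-pixel feature map from the 2D CNN (C = 64 in the paper)
    K            : (3, 3) camera intrinsics (calibration)
    world_to_cam : (4, 4) extrinsic pose, world -> camera
    Returns an (N, 3 + C) feature-augmented point cloud; points that fall
    behind the camera or outside the image receive zero features.
    """
    C, H, W = feat_map.shape
    N = points_world.shape[0]

    # Transform points into the camera frame using the pose.
    pts_h = np.concatenate([points_world, np.ones((N, 1))], axis=1)  # (N, 4)
    pts_cam = (world_to_cam @ pts_h.T).T[:, :3]                      # (N, 3)

    # Pinhole projection to pixel coordinates.
    z = pts_cam[:, 2]
    uv = (K @ pts_cam.T).T
    u = np.round(uv[:, 0] / np.clip(z, 1e-6, None)).astype(int)
    v = np.round(uv[:, 1] / np.clip(z, 1e-6, None)).astype(int)

    # Keep only points in front of the camera and inside the image bounds.
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)

    # Nearest-neighbour sampling of the feature map at the projected pixels.
    feats = np.zeros((N, C), dtype=feat_map.dtype)
    feats[valid] = feat_map[:, v[valid], u[valid]].T

    return np.concatenate([points_world, feats], axis=1)
```

The augmented (N, 3 + 64) array is what would then be consumed by PointNet++ for segmentation; in practice the depth map is also used to decide which points are actually visible in each view (occlusion checking), which this sketch omits.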

3D semantic label results

Info: permissive

class            IoU
avg iou          0.641
bathtub          0.831
bed              0.715
bookshelf        0.671
cabinet          0.590
chair            0.781
counter          0.394
curtain          0.679
desk             0.642
door             0.553
floor            0.937
otherfurniture   0.462
picture          0.256
refrigerator     0.649
shower curtain   0.406
sink             0.626
sofa             0.691
table            0.666
toilet           0.877
wall             0.792
window           0.608