Full name | Multi-view PointNet (MVPNet) |
Description | We propose a sequential 2D-3D fusion pipeline: first, image features are computed with a 2D CNN and lifted to the 3D point cloud using depth maps, poses, and calibration data, so that each 3D point is augmented with a 64-dimensional image feature vector. The feature-augmented point cloud is then fed into PointNet++ for the final 3D segmentation (see the sketch below the table). |
Publication title | Multi-view PointNet for 3D Scene Understanding |
Publication authors | Maximilian Jaritz, Jiayuan Gu, Hao Su |
Publication venue | GMDL Workshop, ICCV 2019 |
Publication URL | https://arxiv.org/abs/1909.13603 |
Input Data Types | Uses Color, Uses Geometry, Uses 2D, Uses 3D |
Programming language(s) | Python |
Hardware | NVIDIA GTX 1080 Ti |
Source code or download URL | https://github.com/maxjaritz/mvpnet |
Submission creation date | 1 Mar, 2019 |
Last edited | 12 Jan, 2020 |
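
The snippet below is a minimal sketch of the 2D-3D lifting step described above: depth pixels are back-projected to world space with the camera intrinsics and pose, each lifted point carries the 64-dim feature from the 2D CNN feature map, and every point of the scene cloud is then augmented with the feature of its nearest lifted pixel. All function and variable names are illustrative assumptions, not taken from the MVPNet repository, and the simple 1-nearest-neighbour aggregation stands in for the paper's multi-view fusion.

```python
# Minimal sketch (not the authors' code) of lifting 2D CNN features to a 3D point cloud.
# Assumed inputs per view: depth map `depth` (H, W), intrinsics K (3x3),
# camera-to-world pose T (4x4), and a feature map `feat2d` (64, H, W)
# already computed at depth-map resolution.
import numpy as np


def unproject_view(depth, K, T_cam2world, feat2d):
    """Lift one view's pixels to world-space 3D points carrying 64-dim features."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    valid = depth > 0                                   # skip missing depth
    z = depth[valid]
    # Back-project pixels to camera coordinates: x = (u - cx) * z / fx, etc.
    x = (u[valid] - K[0, 2]) * z / K[0, 0]
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)       # (N, 4)
    pts_world = (T_cam2world @ pts_cam.T).T[:, :3]               # (N, 3)
    feats = feat2d[:, valid].T                                   # (N, 64)
    return pts_world, feats


def lift_features_to_cloud(scene_points, views):
    """Augment each scene point with the 64-dim feature of its nearest lifted pixel.

    `views` is an iterable of (depth, K, T_cam2world, feat2d) tuples.
    """
    all_pts, all_feats = [], []
    for depth, K, T, feat2d in views:
        p, f = unproject_view(depth, K, T, feat2d)
        all_pts.append(p)
        all_feats.append(f)
    all_pts = np.concatenate(all_pts, axis=0)
    all_feats = np.concatenate(all_feats, axis=0)
    # Brute-force nearest neighbour for clarity; a KD-tree would be used in practice.
    d2 = ((scene_points[:, None, :] - all_pts[None, :, :]) ** 2).sum(-1)
    nn = d2.argmin(axis=1)
    point_feats = all_feats[nn]                                  # (P, 64)
    # Concatenate xyz with the lifted image features -> input to PointNet++.
    return np.concatenate([scene_points, point_feats], axis=1)   # (P, 3 + 64)
```

The resulting (P, 67) array corresponds to the feature-augmented point cloud that the description says is fed into PointNet++ for the final segmentation.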