3D Semantic Label Benchmark
The 3D semantic labeling task involves predicting a semantic label for every vertex of a 3D scan mesh.
Evaluation and metrics

Our evaluation ranks all methods by the PASCAL VOC intersection-over-union metric: IoU = TP/(TP+FP+FN), where TP, FP, and FN are the numbers of true-positive, false-positive, and false-negative vertices, respectively. Predicted labels are evaluated per vertex over the respective 3D scan mesh; 3D approaches that operate on other representations, such as voxel grids or point clouds, should map their predicted labels onto the mesh vertices (an example of mapping grid predictions to mesh vertices is provided in the evaluation helpers).
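The per-class IoU definition above can be sketched in a few lines of numpy; this is an illustration of the metric only, not the benchmark's official evaluation script, and the function name is our own:

```python
import numpy as np

def per_class_iou(gt, pred, num_classes):
    """PASCAL VOC-style IoU = TP / (TP + FP + FN), computed per class
    over per-vertex ground-truth and predicted label arrays."""
    ious = []
    for c in range(num_classes):
        tp = np.sum((pred == c) & (gt == c))  # true positives for class c
        fp = np.sum((pred == c) & (gt != c))  # false positives
        fn = np.sum((pred != c) & (gt == c))  # false negatives
        denom = tp + fp + fn
        ious.append(tp / denom if denom > 0 else float("nan"))
    return ious

# Toy per-vertex labels for a 6-vertex mesh
gt   = np.array([0, 0, 1, 1, 2, 2])
pred = np.array([0, 1, 1, 1, 2, 0])
ious = per_class_iou(gt, pred, 3)  # class IoUs: [1/3, 2/3, 1/2]
```

The benchmark's "avg IoU" column is the mean of the 20 class IoUs (e.g. `np.nanmean(ious)` here).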
The table below lists the benchmark results for the 3D semantic label scenario. Each cell shows a method's IoU for the given class, followed by the method's rank on that class.
Method | avg IoU | bathtub | bed | bookshelf | cabinet | chair | counter | curtain | desk | door | floor | otherfurniture | picture | refrigerator | shower curtain | sink | sofa | table | toilet | wall | window
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
OccuSeg+Semantic | 0.764 1 | 0.758 23 | 0.796 6 | 0.839 3 | 0.746 2 | 0.907 1 | 0.562 1 | 0.850 4 | 0.680 2 | 0.672 1 | 0.978 1 | 0.610 1 | 0.335 2 | 0.777 1 | 0.819 11 | 0.847 1 | 0.830 1 | 0.691 3 | 0.972 1 | 0.885 1 | 0.727 2
BPNet | 0.749 2 | 0.909 1 | 0.818 3 | 0.811 7 | 0.752 1 | 0.839 5 | 0.485 11 | 0.842 6 | 0.673 3 | 0.644 3 | 0.957 2 | 0.528 5 | 0.305 7 | 0.773 2 | 0.859 3 | 0.788 2 | 0.818 3 | 0.693 2 | 0.916 5 | 0.856 4 | 0.723 4
Virtual MVFusion | 0.746 3 | 0.771 20 | 0.819 2 | 0.848 1 | 0.702 6 | 0.865 3 | 0.397 36 | 0.899 1 | 0.699 1 | 0.664 2 | 0.948 17 | 0.588 2 | 0.330 3 | 0.746 4 | 0.851 6 | 0.764 3 | 0.796 5 | 0.704 1 | 0.935 3 | 0.866 2 | 0.728 1
Abhijit Kundu, Xiaoqi Yin, Alireza Fathi, David Ross, Brian Brewington, Thomas Funkhouser, Caroline Pantofaru: Virtual Multi-view Fusion for 3D Semantic Segmentation. ECCV 2020 |
MinkowskiNet | 0.736 4 | 0.859 5 | 0.818 3 | 0.832 4 | 0.709 5 | 0.840 4 | 0.521 4 | 0.853 3 | 0.660 4 | 0.643 4 | 0.951 8 | 0.544 4 | 0.286 12 | 0.731 5 | 0.893 1 | 0.675 14 | 0.772 9 | 0.683 4 | 0.874 20 | 0.852 5 | 0.727 2
C. Choy, J. Gwak, S. Savarese: 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. CVPR 2019 |
SparseConvNet | 0.725 5 | 0.647 39 | 0.821 1 | 0.846 2 | 0.721 4 | 0.869 2 | 0.533 2 | 0.754 15 | 0.603 13 | 0.614 5 | 0.955 3 | 0.572 3 | 0.325 4 | 0.710 6 | 0.870 2 | 0.724 6 | 0.823 2 | 0.628 8 | 0.934 4 | 0.865 3 | 0.683 7
MatchingNet | 0.724 6 | 0.812 14 | 0.812 5 | 0.810 8 | 0.735 3 | 0.834 6 | 0.495 9 | 0.860 2 | 0.572 19 | 0.602 8 | 0.954 4 | 0.512 7 | 0.280 13 | 0.757 3 | 0.845 9 | 0.725 5 | 0.780 7 | 0.606 15 | 0.937 2 | 0.851 6 | 0.700 6
RFCR | 0.702 7 | 0.889 2 | 0.745 16 | 0.813 6 | 0.672 8 | 0.818 14 | 0.493 10 | 0.815 7 | 0.623 8 | 0.610 6 | 0.947 20 | 0.470 15 | 0.249 21 | 0.594 17 | 0.848 7 | 0.705 9 | 0.779 8 | 0.646 7 | 0.892 12 | 0.823 9 | 0.611 18
JSENet | 0.699 8 | 0.881 3 | 0.762 12 | 0.821 5 | 0.667 9 | 0.800 21 | 0.522 3 | 0.792 9 | 0.613 9 | 0.607 7 | 0.935 36 | 0.492 10 | 0.205 29 | 0.576 23 | 0.853 5 | 0.691 10 | 0.758 12 | 0.652 6 | 0.872 23 | 0.828 8 | 0.649 11
Zeyu Hu, Mingmin Zhen, Xuyang Bai, Hongbo Fu, Chiew-Lan Tai: JSENet: Joint Semantic Segmentation and Edge Detection Network for 3D Point Clouds. ECCV 2020 |
CU-Hybrid Net | 0.693 9 | 0.596 44 | 0.789 7 | 0.803 10 | 0.677 7 | 0.800 21 | 0.469 15 | 0.846 5 | 0.554 27 | 0.591 10 | 0.948 17 | 0.500 8 | 0.316 6 | 0.609 15 | 0.847 8 | 0.732 4 | 0.808 4 | 0.593 17 | 0.894 10 | 0.839 7 | 0.652 10
FusionNet | 0.688 10 | 0.704 32 | 0.741 19 | 0.754 18 | 0.656 10 | 0.829 8 | 0.501 7 | 0.741 18 | 0.609 11 | 0.548 14 | 0.950 12 | 0.522 6 | 0.371 1 | 0.633 12 | 0.756 16 | 0.715 7 | 0.771 10 | 0.623 9 | 0.861 29 | 0.814 11 | 0.658 9
Feihu Zhang, Jin Fang, Benjamin Wah, Philip Torr: Deep FusionNet for Point Cloud Semantic Segmentation. ECCV 2020 |
KP-FCNN | 0.684 11 | 0.847 7 | 0.758 15 | 0.784 12 | 0.647 13 | 0.814 15 | 0.473 13 | 0.772 11 | 0.605 12 | 0.594 9 | 0.935 36 | 0.450 23 | 0.181 37 | 0.587 18 | 0.805 13 | 0.690 11 | 0.785 6 | 0.614 11 | 0.882 15 | 0.819 10 | 0.632 14
H. Thomas, C. Qi, J. Deschaud, B. Marcotegui, F. Goulette, L. Guibas: KPConv: Flexible and Deformable Convolution for Point Clouds. ICCV 2019 |
SALANet | 0.670 12 | 0.816 13 | 0.770 11 | 0.768 15 | 0.652 12 | 0.807 18 | 0.451 19 | 0.747 16 | 0.659 5 | 0.545 15 | 0.924 42 | 0.473 14 | 0.149 47 | 0.571 25 | 0.811 12 | 0.635 25 | 0.746 15 | 0.623 9 | 0.892 12 | 0.794 23 | 0.570 32
PointConv | 0.666 13 | 0.781 16 | 0.759 14 | 0.699 25 | 0.644 14 | 0.822 12 | 0.475 12 | 0.779 10 | 0.564 23 | 0.504 27 | 0.953 5 | 0.428 31 | 0.203 31 | 0.586 20 | 0.754 17 | 0.661 18 | 0.753 13 | 0.588 18 | 0.902 7 | 0.813 13 | 0.642 12
Wenxuan Wu, Zhongang Qi, Li Fuxin: PointConv: Deep Convolutional Networks on 3D Point Clouds. CVPR 2019 |
PointASNL | 0.666 13 | 0.703 33 | 0.781 8 | 0.751 20 | 0.655 11 | 0.830 7 | 0.471 14 | 0.769 12 | 0.474 41 | 0.537 16 | 0.951 8 | 0.475 13 | 0.279 14 | 0.635 10 | 0.698 27 | 0.675 14 | 0.751 14 | 0.553 30 | 0.816 38 | 0.806 14 | 0.703 5
Xu Yan, Chaoda Zheng, Zhen Li, Sheng Wang, Shuguang Cui: PointASNL: Robust Point Clouds Processing using Nonlocal Neural Networks with Adaptive Sampling. CVPR 2020 |
DCM-Net | 0.658 15 | 0.778 17 | 0.702 28 | 0.806 9 | 0.619 16 | 0.813 16 | 0.468 16 | 0.693 28 | 0.494 37 | 0.524 21 | 0.941 30 | 0.449 24 | 0.298 8 | 0.510 32 | 0.821 10 | 0.675 14 | 0.727 19 | 0.568 23 | 0.826 35 | 0.803 16 | 0.637 13
Jonas Schult*, Francis Engelmann*, Theodora Kontogianni, Bastian Leibe: DualConvMesh-Net: Joint Geodesic and Euclidean Convolutions on 3D Meshes. CVPR 2020 [Oral] |
HPGCNN | 0.656 16 | 0.698 34 | 0.743 18 | 0.650 37 | 0.564 32 | 0.820 13 | 0.505 6 | 0.758 14 | 0.631 7 | 0.479 34 | 0.945 24 | 0.480 11 | 0.226 24 | 0.572 24 | 0.774 15 | 0.690 11 | 0.735 18 | 0.614 11 | 0.853 31 | 0.776 34 | 0.597 25
Jisheng Dang, Qingyong Hu, Yulan Guo, Jun Yang: HPGCNN. |
RandLA-Net | 0.645 17 | 0.778 17 | 0.731 21 | 0.699 25 | 0.577 27 | 0.829 8 | 0.446 23 | 0.736 19 | 0.477 40 | 0.523 23 | 0.945 24 | 0.454 21 | 0.269 16 | 0.484 39 | 0.749 18 | 0.618 29 | 0.738 17 | 0.599 16 | 0.827 34 | 0.792 25 | 0.621 16
MVPNet | 0.641 18 | 0.831 9 | 0.715 24 | 0.671 34 | 0.590 23 | 0.781 31 | 0.394 37 | 0.679 31 | 0.642 6 | 0.553 13 | 0.937 34 | 0.462 18 | 0.256 17 | 0.649 7 | 0.406 44 | 0.626 26 | 0.691 30 | 0.666 5 | 0.877 16 | 0.792 25 | 0.608 21
Maximilian Jaritz, Jiayuan Gu, Hao Su: Multi-view PointNet for 3D Scene Understanding. GMDL Workshop, ICCV 2019 |
PointConv-SFPN | 0.641 18 | 0.776 19 | 0.703 27 | 0.721 21 | 0.557 34 | 0.826 10 | 0.451 19 | 0.672 33 | 0.563 24 | 0.483 33 | 0.943 28 | 0.425 33 | 0.162 41 | 0.644 8 | 0.726 19 | 0.659 19 | 0.709 22 | 0.572 20 | 0.875 18 | 0.786 29 | 0.559 37
PointMRNet | 0.640 20 | 0.717 31 | 0.701 29 | 0.692 28 | 0.576 28 | 0.801 19 | 0.467 17 | 0.716 24 | 0.563 24 | 0.459 37 | 0.953 5 | 0.429 30 | 0.169 39 | 0.581 21 | 0.854 4 | 0.605 31 | 0.710 21 | 0.550 31 | 0.894 10 | 0.793 24 | 0.575 30
FPConv | 0.639 21 | 0.785 15 | 0.760 13 | 0.713 24 | 0.603 19 | 0.798 23 | 0.392 38 | 0.534 46 | 0.603 13 | 0.524 21 | 0.948 17 | 0.457 19 | 0.250 20 | 0.538 28 | 0.723 20 | 0.598 35 | 0.696 28 | 0.614 11 | 0.872 23 | 0.799 18 | 0.567 34
Yiqun Lin, Zizheng Yan, Haibin Huang, Dong Du, Ligang Liu, Shuguang Cui, Xiaoguang Han: FPConv: Learning Local Flattening for Point Convolution. CVPR 2020 |
VACNN++ | 0.638 22 | 0.820 12 | 0.701 29 | 0.687 29 | 0.594 21 | 0.791 28 | 0.430 29 | 0.587 41 | 0.569 21 | 0.529 18 | 0.950 12 | 0.467 16 | 0.253 19 | 0.524 30 | 0.722 21 | 0.618 29 | 0.694 29 | 0.570 21 | 0.793 42 | 0.802 17 | 0.659 8
PointSPNet | 0.637 23 | 0.734 26 | 0.692 35 | 0.714 23 | 0.576 28 | 0.797 24 | 0.446 23 | 0.743 17 | 0.598 15 | 0.437 40 | 0.942 29 | 0.403 35 | 0.150 46 | 0.626 13 | 0.800 14 | 0.649 20 | 0.697 27 | 0.557 28 | 0.846 32 | 0.777 33 | 0.563 35
SConv | 0.636 24 | 0.830 10 | 0.697 33 | 0.752 19 | 0.572 31 | 0.780 32 | 0.445 25 | 0.716 24 | 0.529 30 | 0.530 17 | 0.951 8 | 0.446 25 | 0.170 38 | 0.507 34 | 0.666 29 | 0.636 24 | 0.682 32 | 0.541 35 | 0.886 14 | 0.799 18 | 0.594 26
Supervoxel-CNN | 0.635 25 | 0.656 37 | 0.711 25 | 0.719 22 | 0.613 17 | 0.757 37 | 0.444 27 | 0.765 13 | 0.534 29 | 0.566 11 | 0.928 40 | 0.478 12 | 0.272 15 | 0.636 9 | 0.531 35 | 0.664 17 | 0.645 39 | 0.508 40 | 0.864 28 | 0.792 25 | 0.611 18
joint point-based | 0.634 26 | 0.614 42 | 0.778 9 | 0.667 36 | 0.633 15 | 0.825 11 | 0.420 30 | 0.804 8 | 0.467 43 | 0.561 12 | 0.951 8 | 0.494 9 | 0.291 9 | 0.566 26 | 0.458 39 | 0.579 39 | 0.764 11 | 0.559 27 | 0.838 33 | 0.814 11 | 0.598 24
Hung-Yueh Chiang, Yen-Liang Lin, Yueh-Cheng Liu, Winston H. Hsu: A Unified Point-Based Framework for 3D Segmentation. 3DV 2019 |
MCCNN | 0.633 27 | 0.866 4 | 0.731 21 | 0.771 13 | 0.576 28 | 0.809 17 | 0.410 32 | 0.684 29 | 0.497 36 | 0.491 30 | 0.949 14 | 0.466 17 | 0.105 51 | 0.581 21 | 0.646 30 | 0.620 27 | 0.680 33 | 0.542 34 | 0.817 37 | 0.795 21 | 0.618 17
P. Hermosilla, T. Ritschel, P.P. Vazquez, A. Vinacua, T. Ropinski: Monte Carlo Convolution for Learning on Non-Uniformly Sampled Point Clouds. SIGGRAPH Asia 2018 |
PointMTL | 0.632 28 | 0.731 27 | 0.688 37 | 0.675 32 | 0.591 22 | 0.784 30 | 0.444 27 | 0.565 44 | 0.610 10 | 0.492 29 | 0.949 14 | 0.456 20 | 0.254 18 | 0.587 18 | 0.706 24 | 0.599 34 | 0.665 36 | 0.612 14 | 0.868 27 | 0.791 28 | 0.579 29
3DSM_DMMF | 0.631 29 | 0.626 41 | 0.745 16 | 0.801 11 | 0.607 18 | 0.751 38 | 0.506 5 | 0.729 22 | 0.565 22 | 0.491 30 | 0.866 53 | 0.434 27 | 0.197 33 | 0.595 16 | 0.630 31 | 0.709 8 | 0.705 24 | 0.560 26 | 0.875 18 | 0.740 43 | 0.491 45
APCF-Net | 0.631 29 | 0.742 24 | 0.687 39 | 0.672 33 | 0.557 34 | 0.792 26 | 0.408 33 | 0.665 35 | 0.545 28 | 0.508 25 | 0.952 7 | 0.428 31 | 0.186 35 | 0.634 11 | 0.702 25 | 0.620 27 | 0.706 23 | 0.555 29 | 0.873 22 | 0.798 20 | 0.581 28
Haojia Lin: Adaptive Pyramid Context Fusion for Point Cloud Perception. GRSL |
FusionAwareConv | 0.630 31 | 0.604 43 | 0.741 19 | 0.766 16 | 0.590 23 | 0.747 39 | 0.501 7 | 0.734 20 | 0.503 35 | 0.527 19 | 0.919 46 | 0.454 21 | 0.323 5 | 0.550 27 | 0.420 43 | 0.678 13 | 0.688 31 | 0.544 33 | 0.896 9 | 0.795 21 | 0.627 15
Jiazhao Zhang, Chenyang Zhu, Lintao Zheng, Kai Xu: Fusion-Aware Point Convolution for Online Semantic 3D Scene Segmentation. CVPR 2020 |
PointMRNet-lite | 0.625 32 | 0.643 40 | 0.711 25 | 0.697 27 | 0.581 26 | 0.801 19 | 0.408 33 | 0.670 34 | 0.558 26 | 0.497 28 | 0.944 26 | 0.436 26 | 0.152 45 | 0.617 14 | 0.708 23 | 0.603 32 | 0.743 16 | 0.532 38 | 0.870 26 | 0.784 30 | 0.545 38
SIConv | 0.625 32 | 0.830 10 | 0.694 34 | 0.757 17 | 0.563 33 | 0.772 34 | 0.448 21 | 0.647 38 | 0.520 32 | 0.509 24 | 0.949 14 | 0.431 29 | 0.191 34 | 0.496 37 | 0.614 32 | 0.647 22 | 0.672 34 | 0.535 37 | 0.876 17 | 0.783 31 | 0.571 31
HPEIN | 0.618 34 | 0.729 28 | 0.668 40 | 0.647 39 | 0.597 20 | 0.766 35 | 0.414 31 | 0.680 30 | 0.520 32 | 0.525 20 | 0.946 22 | 0.432 28 | 0.215 27 | 0.493 38 | 0.599 33 | 0.638 23 | 0.617 44 | 0.570 21 | 0.897 8 | 0.806 14 | 0.605 22
Li Jiang, Hengshuang Zhao, Shu Liu, Xiaoyong Shen, Chi-Wing Fu, Jiaya Jia: Hierarchical Point-Edge Interaction Network for Point Cloud Semantic Segmentation. ICCV 2019 |
SPH3D-GCN | 0.610 35 | 0.858 6 | 0.772 10 | 0.489 50 | 0.532 36 | 0.792 26 | 0.404 35 | 0.643 39 | 0.570 20 | 0.507 26 | 0.935 36 | 0.414 34 | 0.046 56 | 0.510 32 | 0.702 25 | 0.602 33 | 0.705 24 | 0.549 32 | 0.859 30 | 0.773 35 | 0.534 40
Huan Lei, Naveed Akhtar, and Ajmal Mian: Spherical Kernel for Efficient Graph Convolution on 3D Point Clouds. TPAMI 2020 |
AttAN | 0.609 36 | 0.760 22 | 0.667 41 | 0.649 38 | 0.521 37 | 0.793 25 | 0.457 18 | 0.648 37 | 0.528 31 | 0.434 42 | 0.947 20 | 0.401 36 | 0.153 44 | 0.454 40 | 0.721 22 | 0.648 21 | 0.717 20 | 0.536 36 | 0.904 6 | 0.765 37 | 0.485 46
Gege Zhang, Qinghua Ma, Licheng Jiao, Fang Liu, and Qigong Sun: AttAN: Attention Adversarial Networks for 3D Point Cloud Semantic Segmentation. IJCAI 2020 |
LAP-D | 0.594 37 | 0.720 29 | 0.692 35 | 0.637 41 | 0.456 44 | 0.773 33 | 0.391 40 | 0.730 21 | 0.587 16 | 0.445 39 | 0.940 32 | 0.381 39 | 0.288 10 | 0.434 42 | 0.453 40 | 0.591 37 | 0.649 37 | 0.581 19 | 0.777 43 | 0.749 42 | 0.610 20
DPC | 0.592 38 | 0.720 29 | 0.700 31 | 0.602 44 | 0.480 41 | 0.762 36 | 0.380 42 | 0.713 26 | 0.585 17 | 0.437 40 | 0.940 32 | 0.369 41 | 0.288 10 | 0.434 42 | 0.509 37 | 0.590 38 | 0.639 42 | 0.567 24 | 0.772 44 | 0.755 40 | 0.592 27
Francis Engelmann, Theodora Kontogianni, Bastian Leibe: Dilated Point Convolutions: On the Receptive Field Size of Point Convolutions on 3D Point Clouds. ICRA 2020 |
CCRFNet | 0.589 39 | 0.766 21 | 0.659 44 | 0.683 31 | 0.470 43 | 0.740 41 | 0.387 41 | 0.620 40 | 0.490 38 | 0.476 35 | 0.922 44 | 0.355 44 | 0.245 22 | 0.511 31 | 0.511 36 | 0.571 40 | 0.643 40 | 0.493 43 | 0.872 23 | 0.762 38 | 0.600 23
SegGCN | 0.589 39 | 0.833 8 | 0.731 21 | 0.539 48 | 0.514 38 | 0.789 29 | 0.448 21 | 0.467 47 | 0.573 18 | 0.484 32 | 0.936 35 | 0.396 37 | 0.061 55 | 0.501 35 | 0.507 38 | 0.594 36 | 0.700 26 | 0.563 25 | 0.874 20 | 0.771 36 | 0.493 44
Huan Lei, Naveed Akhtar, and Ajmal Mian: SegGCN: Efficient 3D Point Cloud Segmentation with Fuzzy Spherical Kernel. CVPR 2020 |
TextureNet | 0.566 41 | 0.672 36 | 0.664 42 | 0.671 34 | 0.494 39 | 0.719 42 | 0.445 25 | 0.678 32 | 0.411 48 | 0.396 43 | 0.935 36 | 0.356 43 | 0.225 25 | 0.412 44 | 0.535 34 | 0.565 41 | 0.636 43 | 0.464 45 | 0.794 41 | 0.680 48 | 0.568 33
Jingwei Huang, Haotian Zhang, Li Yi, Thomas Funkhouser, Matthias Niessner, Leonidas Guibas: TextureNet: Consistent Local Parametrizations for Learning from High-Resolution Signals on Meshes. CVPR |
DVVNet | 0.562 42 | 0.648 38 | 0.700 31 | 0.770 14 | 0.586 25 | 0.687 46 | 0.333 44 | 0.650 36 | 0.514 34 | 0.475 36 | 0.906 50 | 0.359 42 | 0.223 26 | 0.340 47 | 0.442 42 | 0.422 50 | 0.668 35 | 0.501 41 | 0.708 48 | 0.779 32 | 0.534 40
PointNet++ & Feature | 0.557 43 | 0.735 25 | 0.661 43 | 0.686 30 | 0.491 40 | 0.744 40 | 0.392 38 | 0.539 45 | 0.451 44 | 0.375 46 | 0.946 22 | 0.376 40 | 0.205 29 | 0.403 45 | 0.356 46 | 0.553 42 | 0.643 40 | 0.497 42 | 0.824 36 | 0.756 39 | 0.515 42
PanopticFusion-label | 0.529 44 | 0.491 51 | 0.688 37 | 0.604 43 | 0.386 47 | 0.632 51 | 0.225 55 | 0.705 27 | 0.434 46 | 0.293 50 | 0.815 54 | 0.348 45 | 0.241 23 | 0.499 36 | 0.669 28 | 0.507 43 | 0.649 37 | 0.442 49 | 0.796 40 | 0.602 54 | 0.561 36
Gaku Narita, Takashi Seno, Tomoya Ishikawa, Yohsuke Kaji: PanopticFusion: Online Volumetric Semantic Mapping at the Level of Stuff and Things. IROS 2019 |
3DMV, FTSDF | 0.501 45 | 0.558 48 | 0.608 50 | 0.424 55 | 0.478 42 | 0.690 45 | 0.246 51 | 0.586 42 | 0.468 42 | 0.450 38 | 0.911 48 | 0.394 38 | 0.160 42 | 0.438 41 | 0.212 52 | 0.432 49 | 0.541 50 | 0.475 44 | 0.742 46 | 0.727 44 | 0.477 47
PCNN | 0.498 46 | 0.559 47 | 0.644 47 | 0.560 47 | 0.420 46 | 0.711 44 | 0.229 53 | 0.414 48 | 0.436 45 | 0.352 47 | 0.941 30 | 0.324 46 | 0.155 43 | 0.238 52 | 0.387 45 | 0.493 44 | 0.529 51 | 0.509 39 | 0.813 39 | 0.751 41 | 0.504 43
3DMV | 0.484 47 | 0.484 52 | 0.538 53 | 0.643 40 | 0.424 45 | 0.606 54 | 0.310 45 | 0.574 43 | 0.433 47 | 0.378 45 | 0.796 55 | 0.301 47 | 0.214 28 | 0.537 29 | 0.208 53 | 0.472 48 | 0.507 54 | 0.413 52 | 0.693 49 | 0.602 54 | 0.539 39
Angela Dai, Matthias Niessner: 3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation. ECCV 2018 |
PointCNN with RGB | 0.458 48 | 0.577 46 | 0.611 49 | 0.356 57 | 0.321 53 | 0.715 43 | 0.299 47 | 0.376 51 | 0.328 54 | 0.319 48 | 0.944 26 | 0.285 49 | 0.164 40 | 0.216 55 | 0.229 51 | 0.484 46 | 0.545 49 | 0.456 47 | 0.755 45 | 0.709 45 | 0.475 48
Yangyan Li, Rui Bu, Mingchao Sun, Baoquan Chen: PointCNN. NeurIPS 2018 |
FCPN | 0.447 49 | 0.679 35 | 0.604 51 | 0.578 46 | 0.380 48 | 0.682 47 | 0.291 48 | 0.106 57 | 0.483 39 | 0.258 55 | 0.920 45 | 0.258 51 | 0.025 57 | 0.231 54 | 0.325 47 | 0.480 47 | 0.560 48 | 0.463 46 | 0.725 47 | 0.666 50 | 0.231 57
Dario Rethage, Johanna Wald, Jürgen Sturm, Nassir Navab, Federico Tombari: Fully-Convolutional Point Networks for Large-Scale Point Clouds. ECCV 2018 |
PNET2 | 0.442 50 | 0.548 49 | 0.548 52 | 0.597 45 | 0.363 50 | 0.628 52 | 0.300 46 | 0.292 52 | 0.374 51 | 0.307 49 | 0.881 52 | 0.268 50 | 0.186 35 | 0.238 52 | 0.204 54 | 0.407 51 | 0.506 55 | 0.449 48 | 0.667 50 | 0.620 52 | 0.462 49
SurfaceConvPF | 0.442 50 | 0.505 50 | 0.622 48 | 0.380 56 | 0.342 52 | 0.654 49 | 0.227 54 | 0.397 50 | 0.367 52 | 0.276 52 | 0.924 42 | 0.240 53 | 0.198 32 | 0.359 46 | 0.262 49 | 0.366 52 | 0.581 46 | 0.435 50 | 0.640 51 | 0.668 49 | 0.398 50
Hao Pan, Shilin Liu, Yang Liu, Xin Tong: Convolutional Neural Networks on 3D Surfaces Using Parallel Frames. |
Tangent Convolutions | 0.438 52 | 0.437 55 | 0.646 46 | 0.474 52 | 0.369 49 | 0.645 50 | 0.353 43 | 0.258 54 | 0.282 56 | 0.279 51 | 0.918 47 | 0.298 48 | 0.147 48 | 0.283 49 | 0.294 48 | 0.487 45 | 0.562 47 | 0.427 51 | 0.619 52 | 0.633 51 | 0.352 52
Maxim Tatarchenko, Jaesik Park, Vladlen Koltun, Qian-Yi Zhou: Tangent Convolutions for Dense Prediction in 3D. CVPR 2018 |
subcloud_weak | 0.411 53 | 0.479 53 | 0.650 45 | 0.475 51 | 0.285 56 | 0.519 57 | 0.087 58 | 0.725 23 | 0.396 50 | 0.386 44 | 0.621 58 | 0.250 52 | 0.117 49 | 0.338 48 | 0.443 41 | 0.188 58 | 0.594 45 | 0.369 55 | 0.377 58 | 0.616 53 | 0.306 53
SPLATNet | 0.393 54 | 0.472 54 | 0.511 54 | 0.606 42 | 0.311 54 | 0.656 48 | 0.245 52 | 0.405 49 | 0.328 54 | 0.197 56 | 0.927 41 | 0.227 55 | 0.000 59 | 0.001 59 | 0.249 50 | 0.271 57 | 0.510 52 | 0.383 54 | 0.593 53 | 0.699 46 | 0.267 55
Hang Su, Varun Jampani, Deqing Sun, Subhransu Maji, Evangelos Kalogerakis, Ming-Hsuan Yang, Jan Kautz: SPLATNet: Sparse Lattice Networks for Point Cloud Processing. CVPR 2018 |
ScanNet+FTSDF | 0.383 55 | 0.297 57 | 0.491 55 | 0.432 54 | 0.358 51 | 0.612 53 | 0.274 49 | 0.116 56 | 0.411 48 | 0.265 53 | 0.904 51 | 0.229 54 | 0.079 53 | 0.250 50 | 0.185 55 | 0.320 55 | 0.510 52 | 0.385 53 | 0.548 54 | 0.597 56 | 0.394 51
PointNet++ | 0.339 56 | 0.584 45 | 0.478 56 | 0.458 53 | 0.256 57 | 0.360 58 | 0.250 50 | 0.247 55 | 0.278 57 | 0.261 54 | 0.677 57 | 0.183 56 | 0.117 49 | 0.212 56 | 0.145 57 | 0.364 53 | 0.346 58 | 0.232 58 | 0.548 54 | 0.523 57 | 0.252 56
Charles R. Qi, Li Yi, Hao Su, Leonidas J. Guibas: PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. |
SSC-UNet | 0.308 57 | 0.353 56 | 0.290 58 | 0.278 58 | 0.166 58 | 0.553 55 | 0.169 57 | 0.286 53 | 0.147 58 | 0.148 58 | 0.908 49 | 0.182 57 | 0.064 54 | 0.023 58 | 0.018 59 | 0.354 54 | 0.363 56 | 0.345 56 | 0.546 56 | 0.685 47 | 0.278 54
ScanNet | 0.306 58 | 0.203 58 | 0.366 57 | 0.501 49 | 0.311 54 | 0.524 56 | 0.211 56 | 0.002 59 | 0.342 53 | 0.189 57 | 0.786 56 | 0.145 58 | 0.102 52 | 0.245 51 | 0.152 56 | 0.318 56 | 0.348 57 | 0.300 57 | 0.460 57 | 0.437 58 | 0.182 58
Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, Matthias Nießner: ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes. CVPR 2017 |
ERROR | 0.054 59 | 0.000 59 | 0.041 59 | 0.172 59 | 0.030 59 | 0.062 59 | 0.001 59 | 0.035 58 | 0.004 59 | 0.051 59 | 0.143 59 | 0.019 59 | 0.003 58 | 0.041 57 | 0.050 58 | 0.003 59 | 0.054 59 | 0.018 59 | 0.005 59 | 0.264 59 | 0.082 59
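As noted in the evaluation description, methods that predict on voxel grids or point clouds must transfer their labels to mesh vertices before submission. The benchmark's evaluation helpers provide the official mapping; purely as an illustration of the idea, a minimal nearest-neighbor sketch (function and variable names are our own, brute-force for clarity):

```python
import numpy as np

def map_labels_to_vertices(source_points, source_labels, vertices):
    """For each mesh vertex, copy the label of the nearest labeled source
    point (e.g. a grid-cell center or a point-cloud point).
    Brute-force O(V*S) squared-distance computation."""
    # (V, S) matrix of squared distances between vertices and source points
    d2 = ((vertices[:, None, :] - source_points[None, :, :]) ** 2).sum(axis=-1)
    return source_labels[d2.argmin(axis=1)]

# Two labeled source points; each vertex picks the closer one's label
src    = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
labels = np.array([3, 9])
verts  = np.array([[0.2, 0.0, 0.0], [0.8, 0.0, 0.0]])
mapped = map_labels_to_vertices(src, labels, verts)  # -> [3, 9]
```

For real scans with millions of vertices, a spatial index (e.g. `scipy.spatial.cKDTree`) should replace the brute-force distance matrix.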