3D Semantic Label Benchmark
The 3D semantic labeling task involves predicting a semantic labeling of a 3D scan mesh.
Evaluation and metrics
Our evaluation ranks all methods according to the PASCAL VOC intersection-over-union metric (IoU): IoU = TP/(TP+FP+FN), where TP, FP, and FN are the numbers of true positive, false positive, and false negative predictions, respectively. Predicted labels are evaluated per vertex over the respective 3D scan mesh; for 3D approaches that operate on other representations, such as grids or points, the predicted labels should be mapped onto the mesh vertices (e.g., a grid-to-mesh-vertex mapping is provided in the evaluation helpers).
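The per-class IoU and the label transfer described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the official evaluation helpers; the function names and the brute-force nearest-neighbor mapping are assumptions made for clarity:

```python
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """IoU = TP / (TP + FP + FN), computed per class over per-vertex labels."""
    ious = np.full(num_classes, np.nan)  # NaN marks classes absent from this scan
    for c in range(num_classes):
        tp = np.sum((pred == c) & (gt == c))
        fp = np.sum((pred == c) & (gt != c))
        fn = np.sum((pred != c) & (gt == c))
        denom = tp + fp + fn
        if denom > 0:
            ious[c] = tp / denom
    return ious

def map_points_to_vertices(point_labels, point_xyz, vertex_xyz):
    """Transfer labels predicted on an arbitrary point set (or grid-cell centers)
    to mesh vertices by nearest neighbor."""
    # Brute-force all-pairs distances; fine for a toy example, but a KD-tree
    # scales much better for full scans.
    d = np.linalg.norm(vertex_xyz[:, None, :] - point_xyz[None, :, :], axis=-1)
    return point_labels[np.argmin(d, axis=1)]
```

For real scans with millions of vertices, the nearest-neighbor step would typically use a spatial index such as `scipy.spatial.cKDTree` instead of the dense distance matrix shown here.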
This table lists the benchmark results for the 3D semantic label scenario. Each cell shows the IoU score followed by the method's rank in that column.
Method | avg iou | bathtub | bed | bookshelf | cabinet | chair | counter | curtain | desk | door | floor | otherfurniture | picture | refrigerator | shower curtain | sink | sofa | table | toilet | wall | window |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
OccuSeg+Semantic | 0.764 2 | 0.758 44 | 0.796 18 | 0.839 12 | 0.746 11 | 0.907 1 | 0.562 5 | 0.850 12 | 0.680 9 | 0.672 5 | 0.978 2 | 0.610 1 | 0.335 8 | 0.777 4 | 0.819 31 | 0.847 1 | 0.830 1 | 0.691 8 | 0.972 1 | 0.885 2 | 0.727 10 | |
SparseConvNet | 0.725 19 | 0.647 72 | 0.821 5 | 0.846 7 | 0.721 19 | 0.869 3 | 0.533 11 | 0.754 40 | 0.603 34 | 0.614 21 | 0.955 17 | 0.572 10 | 0.325 11 | 0.710 20 | 0.870 13 | 0.724 18 | 0.823 2 | 0.628 27 | 0.934 11 | 0.865 11 | 0.683 24 | |
Mix3D | 0.781 1 | 0.964 1 | 0.855 1 | 0.843 10 | 0.781 1 | 0.858 7 | 0.575 3 | 0.831 18 | 0.685 7 | 0.714 1 | 0.979 1 | 0.594 3 | 0.310 16 | 0.801 1 | 0.892 8 | 0.841 2 | 0.819 3 | 0.723 3 | 0.940 7 | 0.887 1 | 0.725 12
Alexey Nekrasov, Jonas Schult, Or Litany, Bastian Leibe, Francis Engelmann: Mix3D: Out-of-Context Data Augmentation for 3D Scenes. 3DV 2021 (Oral) | ||||||||||||||||||||||
BPNet | 0.749 7 | 0.909 4 | 0.818 8 | 0.811 21 | 0.752 8 | 0.839 17 | 0.485 30 | 0.842 15 | 0.673 10 | 0.644 10 | 0.957 12 | 0.528 23 | 0.305 18 | 0.773 6 | 0.859 17 | 0.788 4 | 0.818 4 | 0.693 7 | 0.916 19 | 0.856 17 | 0.723 13
Wenbo Hu, Hengshuang Zhao, Li Jiang, Jiaya Jia, Tien-Tsin Wong: Bidirectional Projection Network for Cross Dimension Scene Understanding. CVPR 2021 (Oral) | ||||||||||||||||||||||
CU-Hybrid Net | 0.764 2 | 0.924 2 | 0.819 6 | 0.840 11 | 0.757 6 | 0.853 9 | 0.580 1 | 0.848 13 | 0.709 2 | 0.643 11 | 0.958 9 | 0.587 7 | 0.295 22 | 0.753 14 | 0.884 12 | 0.758 9 | 0.815 5 | 0.725 2 | 0.927 15 | 0.867 9 | 0.743 5 | |
O-CNN | 0.762 4 | 0.924 2 | 0.823 4 | 0.844 9 | 0.770 2 | 0.852 10 | 0.577 2 | 0.847 14 | 0.711 1 | 0.640 14 | 0.958 9 | 0.592 4 | 0.217 56 | 0.762 10 | 0.888 9 | 0.758 9 | 0.813 6 | 0.726 1 | 0.932 13 | 0.868 8 | 0.744 4
Peng-Shuai Wang, Yang Liu, Yu-Xiao Guo, Chun-Yu Sun, Xin Tong: O-CNN: Octree-based Convolutional Neural Networks for 3D Shape Analysis. SIGGRAPH 2017 | ||||||||||||||||||||||
VMNet | 0.746 11 | 0.870 11 | 0.838 2 | 0.858 4 | 0.729 16 | 0.850 11 | 0.501 22 | 0.874 7 | 0.587 39 | 0.658 7 | 0.956 13 | 0.564 12 | 0.299 20 | 0.765 9 | 0.900 5 | 0.716 23 | 0.812 7 | 0.631 26 | 0.939 8 | 0.858 15 | 0.709 18
Zeyu Hu, Xuyang Bai, Jiaxiang Shang, Runze Zhang, Jiayu Dong, Xin Wang, Guangyuan Sun, Hongbo Fu, Chiew-Lan Tai: VMNet: Voxel-Mesh Network for Geodesic-Aware 3D Semantic Segmentation. ICCV 2021 (Oral) | ||||||||||||||||||||||
One Thing One Click | 0.701 26 | 0.825 23 | 0.796 18 | 0.723 46 | 0.716 20 | 0.832 23 | 0.433 56 | 0.816 22 | 0.634 20 | 0.609 23 | 0.969 6 | 0.418 64 | 0.344 5 | 0.559 51 | 0.833 28 | 0.715 24 | 0.808 8 | 0.560 55 | 0.902 27 | 0.847 24 | 0.680 25 | |
MSP | 0.748 9 | 0.623 75 | 0.804 14 | 0.859 3 | 0.745 12 | 0.824 31 | 0.501 22 | 0.912 2 | 0.690 6 | 0.685 3 | 0.956 13 | 0.567 11 | 0.320 13 | 0.768 7 | 0.918 3 | 0.720 20 | 0.802 9 | 0.676 10 | 0.921 16 | 0.881 4 | 0.779 1 | |
Retro-FPN | 0.744 13 | 0.842 18 | 0.800 16 | 0.767 39 | 0.740 13 | 0.836 21 | 0.541 9 | 0.914 1 | 0.672 11 | 0.626 17 | 0.958 9 | 0.552 17 | 0.272 34 | 0.777 4 | 0.886 11 | 0.696 30 | 0.801 10 | 0.674 12 | 0.941 6 | 0.858 15 | 0.717 15 | |
EQ-Net | 0.743 14 | 0.620 76 | 0.799 17 | 0.849 5 | 0.730 15 | 0.822 33 | 0.493 28 | 0.897 4 | 0.664 12 | 0.681 4 | 0.955 17 | 0.562 13 | 0.378 1 | 0.760 11 | 0.903 4 | 0.738 14 | 0.801 10 | 0.673 13 | 0.907 23 | 0.877 5 | 0.745 3 | |
Zetong Yang*, Li Jiang*, Yanan Sun, Bernt Schiele, Jiaya Jia: A Unified Query-based Paradigm for Point Cloud Understanding. CVPR 2022 | ||||||||||||||||||||||
PicassoNet-II | 0.696 28 | 0.704 62 | 0.790 22 | 0.787 31 | 0.709 21 | 0.837 19 | 0.459 39 | 0.815 24 | 0.543 54 | 0.615 20 | 0.956 13 | 0.529 21 | 0.250 45 | 0.551 56 | 0.790 36 | 0.703 28 | 0.799 12 | 0.619 32 | 0.908 22 | 0.848 23 | 0.700 21
Huan Lei, Naveed Akhtar, Mubarak Shah, and Ajmal Mian: Geometric feature learning for 3D meshes. | ||||||||||||||||||||||
PointTransformerV2 | 0.752 5 | 0.742 51 | 0.809 12 | 0.872 1 | 0.758 5 | 0.860 6 | 0.552 7 | 0.891 5 | 0.610 28 | 0.687 2 | 0.960 7 | 0.559 14 | 0.304 19 | 0.766 8 | 0.926 2 | 0.767 6 | 0.797 13 | 0.644 20 | 0.942 5 | 0.876 7 | 0.722 14 | |
Xiaoyang Wu, Yixing Lao, Li Jiang, Xihui Liu, Hengshuang Zhao: Point Transformer V2: Grouped Vector Attention and Partition-based Pooling. NeurIPS 2022 | ||||||||||||||||||||||
Virtual MVFusion | 0.746 11 | 0.771 38 | 0.819 6 | 0.848 6 | 0.702 24 | 0.865 5 | 0.397 67 | 0.899 3 | 0.699 3 | 0.664 6 | 0.948 38 | 0.588 6 | 0.330 9 | 0.746 17 | 0.851 23 | 0.764 7 | 0.796 14 | 0.704 6 | 0.935 10 | 0.866 10 | 0.728 8 | |
Abhijit Kundu, Xiaoqi Yin, Alireza Fathi, David Ross, Brian Brewington, Thomas Funkhouser, Caroline Pantofaru: Virtual Multi-view Fusion for 3D Semantic Segmentation. ECCV 2020 | ||||||||||||||||||||||
contrastBoundary | 0.705 24 | 0.769 41 | 0.775 29 | 0.809 23 | 0.687 27 | 0.820 36 | 0.439 54 | 0.812 27 | 0.661 14 | 0.591 34 | 0.945 47 | 0.515 26 | 0.171 74 | 0.633 32 | 0.856 19 | 0.720 20 | 0.796 14 | 0.668 14 | 0.889 37 | 0.847 24 | 0.689 23
Liyao Tang, Yibing Zhan, Zhe Chen, Baosheng Yu, Dacheng Tao: Contrastive Boundary Learning for Point Cloud Segmentation. CVPR 2022 | ||||||||||||||||||||||
PointConvFormer | 0.749 7 | 0.793 30 | 0.790 22 | 0.807 24 | 0.750 10 | 0.856 8 | 0.524 15 | 0.881 6 | 0.588 38 | 0.642 13 | 0.977 4 | 0.591 5 | 0.274 32 | 0.781 2 | 0.929 1 | 0.804 3 | 0.796 14 | 0.642 21 | 0.947 3 | 0.885 2 | 0.715 17 | |
Wenxuan Wu, Qi Shan, Li Fuxin: PointConvFormer: Revenge of the Point-based Convolution. | ||||||||||||||||||||||
DMF-Net | 0.752 5 | 0.906 5 | 0.793 21 | 0.802 27 | 0.689 26 | 0.825 29 | 0.556 6 | 0.867 8 | 0.681 8 | 0.602 28 | 0.960 7 | 0.555 16 | 0.365 3 | 0.779 3 | 0.859 17 | 0.747 12 | 0.795 17 | 0.717 4 | 0.917 18 | 0.856 17 | 0.764 2 | |
PointContrast_LA_SEM | 0.683 35 | 0.757 45 | 0.784 25 | 0.786 32 | 0.639 40 | 0.824 31 | 0.408 61 | 0.775 34 | 0.604 33 | 0.541 43 | 0.934 70 | 0.532 20 | 0.269 38 | 0.552 54 | 0.777 37 | 0.645 52 | 0.793 18 | 0.640 22 | 0.913 21 | 0.824 33 | 0.671 27 | |
INS-Conv-semantic | 0.717 22 | 0.751 47 | 0.759 36 | 0.812 20 | 0.704 23 | 0.868 4 | 0.537 10 | 0.842 15 | 0.609 30 | 0.608 24 | 0.953 24 | 0.534 19 | 0.293 23 | 0.616 37 | 0.864 15 | 0.719 22 | 0.793 18 | 0.640 22 | 0.933 12 | 0.845 26 | 0.663 29 | |
One-Thing-One-Click | 0.693 29 | 0.743 50 | 0.794 20 | 0.655 69 | 0.684 28 | 0.822 33 | 0.497 26 | 0.719 50 | 0.622 23 | 0.617 19 | 0.977 4 | 0.447 51 | 0.339 6 | 0.750 16 | 0.664 59 | 0.703 28 | 0.790 20 | 0.596 41 | 0.946 4 | 0.855 19 | 0.647 34 | |
Zhengzhe Liu, Xiaojuan Qi, Chi-Wing Fu: One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation. CVPR 2021 | ||||||||||||||||||||||
SimConv | 0.410 91 | 0.000 96 | 0.782 26 | 0.772 36 | 0.722 18 | 0.838 18 | 0.407 63 | 0.000 97 | 0.000 97 | 0.595 32 | 0.947 40 | 0.000 97 | 0.270 37 | 0.000 97 | 0.000 97 | 0.000 97 | 0.786 21 | 0.621 31 | 0.000 97 | 0.841 28 | 0.621 42 | |
KP-FCNN | 0.684 33 | 0.847 17 | 0.758 38 | 0.784 33 | 0.647 36 | 0.814 43 | 0.473 32 | 0.772 35 | 0.605 32 | 0.594 33 | 0.935 66 | 0.450 49 | 0.181 72 | 0.587 41 | 0.805 34 | 0.690 33 | 0.785 22 | 0.614 33 | 0.882 41 | 0.819 38 | 0.632 39 | |
H. Thomas, C. Qi, J. Deschaud, B. Marcotegui, F. Goulette, L. Guibas: KPConv: Flexible and Deformable Convolution for Point Clouds. ICCV 2019 | ||||||||||||||||||||||
MatchingNet | 0.724 21 | 0.812 27 | 0.812 10 | 0.810 22 | 0.735 14 | 0.834 22 | 0.495 27 | 0.860 10 | 0.572 45 | 0.602 28 | 0.954 21 | 0.512 27 | 0.280 29 | 0.757 12 | 0.845 25 | 0.725 17 | 0.780 23 | 0.606 37 | 0.937 9 | 0.851 22 | 0.700 21 | |
RFCR | 0.702 25 | 0.889 8 | 0.745 45 | 0.813 19 | 0.672 30 | 0.818 40 | 0.493 28 | 0.815 24 | 0.623 22 | 0.610 22 | 0.947 40 | 0.470 39 | 0.249 47 | 0.594 40 | 0.848 24 | 0.705 27 | 0.779 24 | 0.646 19 | 0.892 35 | 0.823 34 | 0.611 44 | |
Jingyu Gong, Jiachen Xu, Xin Tan, Haichuan Song, Yanyun Qu, Yuan Xie, Lizhuang Ma: Omni-Supervised Point Cloud Segmentation via Gradual Receptive Field Component Reasoning. CVPR 2021 | ||||||||||||||||||||||
IPCA | 0.731 18 | 0.890 7 | 0.837 3 | 0.864 2 | 0.726 17 | 0.873 2 | 0.530 14 | 0.824 21 | 0.489 69 | 0.647 8 | 0.978 2 | 0.609 2 | 0.336 7 | 0.624 35 | 0.733 44 | 0.758 9 | 0.776 25 | 0.570 51 | 0.949 2 | 0.877 5 | 0.728 8 | |
ROSMRF3D | 0.673 38 | 0.789 31 | 0.748 42 | 0.763 41 | 0.635 42 | 0.814 43 | 0.407 63 | 0.747 42 | 0.581 43 | 0.573 36 | 0.950 33 | 0.484 33 | 0.271 36 | 0.607 38 | 0.754 40 | 0.649 47 | 0.774 26 | 0.596 41 | 0.883 40 | 0.823 34 | 0.606 48 | |
LRPNet | 0.742 15 | 0.816 25 | 0.806 13 | 0.807 24 | 0.752 8 | 0.828 27 | 0.575 3 | 0.839 17 | 0.699 3 | 0.637 15 | 0.954 21 | 0.520 25 | 0.320 13 | 0.755 13 | 0.834 27 | 0.760 8 | 0.772 27 | 0.676 10 | 0.915 20 | 0.862 12 | 0.717 15 | |
MinkowskiNet | 0.736 17 | 0.859 14 | 0.818 8 | 0.832 13 | 0.709 21 | 0.840 16 | 0.521 17 | 0.853 11 | 0.660 15 | 0.643 11 | 0.951 29 | 0.544 18 | 0.286 27 | 0.731 19 | 0.893 7 | 0.675 37 | 0.772 27 | 0.683 9 | 0.874 48 | 0.852 21 | 0.727 10
C. Choy, J. Gwak, S. Savarese: 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. CVPR 2019 | ||||||||||||||||||||||
FusionNet | 0.688 31 | 0.704 62 | 0.741 49 | 0.754 43 | 0.656 32 | 0.829 25 | 0.501 22 | 0.741 45 | 0.609 30 | 0.548 41 | 0.950 33 | 0.522 24 | 0.371 2 | 0.633 32 | 0.756 39 | 0.715 24 | 0.771 29 | 0.623 29 | 0.861 58 | 0.814 39 | 0.658 30 | |
Feihu Zhang, Jin Fang, Benjamin Wah, Philip Torr: Deep FusionNet for Point Cloud Semantic Segmentation. ECCV 2020 | ||||||||||||||||||||||
StratifiedFormer | 0.747 10 | 0.901 6 | 0.803 15 | 0.845 8 | 0.757 6 | 0.846 13 | 0.512 18 | 0.825 20 | 0.696 5 | 0.645 9 | 0.956 13 | 0.576 9 | 0.262 42 | 0.744 18 | 0.861 16 | 0.742 13 | 0.770 30 | 0.705 5 | 0.899 30 | 0.860 14 | 0.734 6
Xin Lai*, Jianhui Liu*, Li Jiang, Liwei Wang, Hengshuang Zhao, Shu Liu, Xiaojuan Qi, Jiaya Jia: Stratified Transformer for 3D Point Cloud Segmentation. CVPR 2022 | ||||||||||||||||||||||
Feature_GeometricNet | 0.690 30 | 0.884 9 | 0.754 40 | 0.795 30 | 0.647 36 | 0.818 40 | 0.422 58 | 0.802 30 | 0.612 27 | 0.604 26 | 0.945 47 | 0.462 42 | 0.189 69 | 0.563 50 | 0.853 21 | 0.726 16 | 0.765 31 | 0.632 25 | 0.904 25 | 0.821 37 | 0.606 48
Kangcheng Liu, Ben M. Chen: https://arxiv.org/abs/2012.09439. arXiv Preprint | ||||||||||||||||||||||
PointMetaBase | 0.714 23 | 0.835 19 | 0.785 24 | 0.821 14 | 0.684 28 | 0.846 13 | 0.531 13 | 0.865 9 | 0.614 25 | 0.596 31 | 0.953 24 | 0.500 30 | 0.246 48 | 0.674 21 | 0.888 9 | 0.692 31 | 0.764 32 | 0.624 28 | 0.849 62 | 0.844 27 | 0.675 26 | |
joint point-based | 0.634 55 | 0.614 77 | 0.778 28 | 0.667 66 | 0.633 43 | 0.825 29 | 0.420 59 | 0.804 28 | 0.467 74 | 0.561 38 | 0.951 29 | 0.494 31 | 0.291 24 | 0.566 48 | 0.458 74 | 0.579 71 | 0.764 32 | 0.559 57 | 0.838 64 | 0.814 39 | 0.598 53
Hung-Yueh Chiang, Yen-Liang Lin, Yueh-Cheng Liu, Winston H. Hsu: A Unified Point-Based Framework for 3D Segmentation. 3DV 2019 | ||||||||||||||||||||||
Superpoint Network | 0.683 35 | 0.851 16 | 0.728 53 | 0.800 29 | 0.653 34 | 0.806 48 | 0.468 34 | 0.804 28 | 0.572 45 | 0.602 28 | 0.946 44 | 0.453 48 | 0.239 51 | 0.519 62 | 0.822 29 | 0.689 35 | 0.762 34 | 0.595 43 | 0.895 33 | 0.827 32 | 0.630 40 | |
SAT | 0.742 15 | 0.860 13 | 0.765 33 | 0.819 16 | 0.769 3 | 0.848 12 | 0.533 11 | 0.829 19 | 0.663 13 | 0.631 16 | 0.955 17 | 0.586 8 | 0.274 32 | 0.753 14 | 0.896 6 | 0.729 15 | 0.760 35 | 0.666 15 | 0.921 16 | 0.855 19 | 0.733 7 | |
DenSeR | 0.628 61 | 0.800 28 | 0.625 82 | 0.719 49 | 0.545 67 | 0.806 48 | 0.445 48 | 0.597 71 | 0.448 78 | 0.519 54 | 0.938 62 | 0.481 34 | 0.328 10 | 0.489 70 | 0.499 73 | 0.657 45 | 0.759 36 | 0.592 44 | 0.881 42 | 0.797 49 | 0.634 38 | |
JSENet | 0.699 27 | 0.881 10 | 0.762 34 | 0.821 14 | 0.667 31 | 0.800 52 | 0.522 16 | 0.792 32 | 0.613 26 | 0.607 25 | 0.935 66 | 0.492 32 | 0.205 61 | 0.576 45 | 0.853 21 | 0.691 32 | 0.758 37 | 0.652 17 | 0.872 51 | 0.828 31 | 0.649 33
Zeyu Hu, Mingmin Zhen, Xuyang Bai, Hongbo Fu, Chiew-Lan Tai: JSENet: Joint Semantic Segmentation and Edge Detection Network for 3D Point Clouds. ECCV 2020 | ||||||||||||||||||||||
PointTransformer++ | 0.725 19 | 0.727 58 | 0.811 11 | 0.819 16 | 0.765 4 | 0.841 15 | 0.502 21 | 0.814 26 | 0.621 24 | 0.623 18 | 0.955 17 | 0.556 15 | 0.284 28 | 0.620 36 | 0.866 14 | 0.781 5 | 0.757 38 | 0.648 18 | 0.932 13 | 0.862 12 | 0.709 18 | |
PointConv | 0.666 40 | 0.781 33 | 0.759 36 | 0.699 54 | 0.644 39 | 0.822 33 | 0.475 31 | 0.779 33 | 0.564 50 | 0.504 59 | 0.953 24 | 0.428 58 | 0.203 63 | 0.586 43 | 0.754 40 | 0.661 43 | 0.753 39 | 0.588 47 | 0.902 27 | 0.813 41 | 0.642 35
Wenxuan Wu, Zhongang Qi, Li Fuxin: PointConv: Deep Convolutional Networks on 3D Point Clouds. CVPR 2019 | ||||||||||||||||||||||
PointASNL | 0.666 40 | 0.703 64 | 0.781 27 | 0.751 45 | 0.655 33 | 0.830 24 | 0.471 33 | 0.769 36 | 0.474 72 | 0.537 45 | 0.951 29 | 0.475 37 | 0.279 30 | 0.635 30 | 0.698 53 | 0.675 37 | 0.751 40 | 0.553 60 | 0.816 69 | 0.806 43 | 0.703 20
Xu Yan, Chaoda Zheng, Zhen Li, Sheng Wang, Shuguang Cui: PointASNL: Robust Point Clouds Processing using Nonlocal Neural Networks with Adaptive Sampling. CVPR 2020 | ||||||||||||||||||||||
VACNN++ | 0.684 33 | 0.728 57 | 0.757 39 | 0.776 35 | 0.690 25 | 0.804 50 | 0.464 37 | 0.816 22 | 0.577 44 | 0.587 35 | 0.945 47 | 0.508 29 | 0.276 31 | 0.671 22 | 0.710 49 | 0.663 42 | 0.750 41 | 0.589 46 | 0.881 42 | 0.832 30 | 0.653 32 | |
SALANet | 0.670 39 | 0.816 25 | 0.770 31 | 0.768 38 | 0.652 35 | 0.807 47 | 0.451 41 | 0.747 42 | 0.659 16 | 0.545 42 | 0.924 76 | 0.473 38 | 0.149 84 | 0.571 47 | 0.811 33 | 0.635 55 | 0.746 42 | 0.623 29 | 0.892 35 | 0.794 51 | 0.570 61 | |
RandLA-Net | 0.645 46 | 0.778 34 | 0.731 52 | 0.699 54 | 0.577 58 | 0.829 25 | 0.446 46 | 0.736 46 | 0.477 71 | 0.523 53 | 0.945 47 | 0.454 46 | 0.269 38 | 0.484 71 | 0.749 43 | 0.618 58 | 0.738 43 | 0.599 40 | 0.827 66 | 0.792 56 | 0.621 42
PointNet2-SFPN | 0.631 57 | 0.771 38 | 0.692 67 | 0.672 62 | 0.524 69 | 0.837 19 | 0.440 53 | 0.706 56 | 0.538 55 | 0.446 70 | 0.944 53 | 0.421 63 | 0.219 55 | 0.552 54 | 0.751 42 | 0.591 67 | 0.737 44 | 0.543 65 | 0.901 29 | 0.768 67 | 0.557 67 | |
HPGCNN | 0.656 44 | 0.698 65 | 0.743 47 | 0.650 70 | 0.564 62 | 0.820 36 | 0.505 20 | 0.758 38 | 0.631 21 | 0.479 63 | 0.945 47 | 0.480 35 | 0.226 52 | 0.572 46 | 0.774 38 | 0.690 33 | 0.735 45 | 0.614 33 | 0.853 61 | 0.776 65 | 0.597 54 | |
Jisheng Dang, Qingyong Hu, Yulan Guo, Jun Yang: HPGCNN. | ||||||||||||||||||||||
wsss-transformer | 0.600 67 | 0.634 73 | 0.743 47 | 0.697 56 | 0.601 50 | 0.781 61 | 0.437 55 | 0.585 74 | 0.493 66 | 0.446 70 | 0.933 71 | 0.394 69 | 0.011 95 | 0.654 25 | 0.661 60 | 0.603 62 | 0.733 46 | 0.526 70 | 0.832 65 | 0.761 70 | 0.480 81 | |
PD-Net | 0.638 51 | 0.797 29 | 0.769 32 | 0.641 75 | 0.590 53 | 0.820 36 | 0.461 38 | 0.537 80 | 0.637 19 | 0.536 46 | 0.947 40 | 0.388 71 | 0.206 60 | 0.656 24 | 0.668 57 | 0.647 50 | 0.732 47 | 0.585 48 | 0.868 55 | 0.793 53 | 0.473 84 | |
Feature-Geometry Net | 0.685 32 | 0.866 12 | 0.748 42 | 0.819 16 | 0.645 38 | 0.794 55 | 0.450 44 | 0.802 30 | 0.587 39 | 0.604 26 | 0.945 47 | 0.464 41 | 0.201 64 | 0.554 53 | 0.840 26 | 0.723 19 | 0.732 47 | 0.602 39 | 0.907 23 | 0.822 36 | 0.603 51
DCM-Net | 0.658 43 | 0.778 34 | 0.702 59 | 0.806 26 | 0.619 45 | 0.813 46 | 0.468 34 | 0.693 58 | 0.494 65 | 0.524 51 | 0.941 58 | 0.449 50 | 0.298 21 | 0.510 64 | 0.821 30 | 0.675 37 | 0.727 49 | 0.568 53 | 0.826 67 | 0.803 45 | 0.637 37 | |
Jonas Schult*, Francis Engelmann*, Theodora Kontogianni, Bastian Leibe: DualConvMesh-Net: Joint Geodesic and Euclidean Convolutions on 3D Meshes. CVPR 2020 (Oral) | ||||||||||||||||||||||
VI-PointConv | 0.676 37 | 0.770 40 | 0.754 40 | 0.783 34 | 0.621 44 | 0.814 43 | 0.552 7 | 0.758 38 | 0.571 47 | 0.557 39 | 0.954 21 | 0.529 21 | 0.268 40 | 0.530 60 | 0.682 54 | 0.675 37 | 0.719 50 | 0.603 38 | 0.888 38 | 0.833 29 | 0.665 28 | |
Xingyi Li, Wenxuan Wu, Xiaoli Z. Fern, Li Fuxin: The Devils in the Point Clouds: Studying the Robustness of Point Cloud Convolutions. | ||||||||||||||||||||||
AttAN | 0.609 66 | 0.760 43 | 0.667 74 | 0.649 71 | 0.521 70 | 0.793 56 | 0.457 40 | 0.648 65 | 0.528 58 | 0.434 75 | 0.947 40 | 0.401 68 | 0.153 82 | 0.454 73 | 0.721 48 | 0.648 49 | 0.717 51 | 0.536 68 | 0.904 25 | 0.765 68 | 0.485 80 | |
Gege Zhang, Qinghua Ma, Licheng Jiao, Fang Liu and Qigong Sun: AttAN: Attention Adversarial Networks for 3D Point Cloud Semantic Segmentation. IJCAI 2020 | ||||||||||||||||||||||
GMLPs | 0.538 77 | 0.495 87 | 0.693 66 | 0.647 72 | 0.471 77 | 0.793 56 | 0.300 83 | 0.477 83 | 0.505 63 | 0.358 82 | 0.903 86 | 0.327 80 | 0.081 90 | 0.472 72 | 0.529 69 | 0.448 84 | 0.710 52 | 0.509 71 | 0.746 76 | 0.737 76 | 0.554 69 | |
PointMRNet | 0.640 49 | 0.717 61 | 0.701 60 | 0.692 57 | 0.576 59 | 0.801 51 | 0.467 36 | 0.716 51 | 0.563 51 | 0.459 68 | 0.953 24 | 0.429 57 | 0.169 76 | 0.581 44 | 0.854 20 | 0.605 61 | 0.710 52 | 0.550 61 | 0.894 34 | 0.793 53 | 0.575 59 | |
PointConv-SFPN | 0.641 47 | 0.776 36 | 0.703 58 | 0.721 48 | 0.557 65 | 0.826 28 | 0.451 41 | 0.672 62 | 0.563 51 | 0.483 62 | 0.943 55 | 0.425 61 | 0.162 79 | 0.644 28 | 0.726 45 | 0.659 44 | 0.709 54 | 0.572 50 | 0.875 46 | 0.786 60 | 0.559 66 | |
SAFNet-seg | 0.654 45 | 0.752 46 | 0.734 51 | 0.664 67 | 0.583 57 | 0.815 42 | 0.399 66 | 0.754 40 | 0.639 18 | 0.535 47 | 0.942 56 | 0.470 39 | 0.309 17 | 0.665 23 | 0.539 66 | 0.650 46 | 0.708 55 | 0.635 24 | 0.857 60 | 0.793 53 | 0.642 35
Linqing Zhao, Jiwen Lu, Jie Zhou: Similarity-Aware Fusion Network for 3D Semantic Segmentation. IROS 2021 | ||||||||||||||||||||||
APCF-Net | 0.631 57 | 0.742 51 | 0.687 72 | 0.672 62 | 0.557 65 | 0.792 58 | 0.408 61 | 0.665 63 | 0.545 53 | 0.508 56 | 0.952 28 | 0.428 58 | 0.186 70 | 0.634 31 | 0.702 51 | 0.620 57 | 0.706 56 | 0.555 59 | 0.873 49 | 0.798 48 | 0.581 57 | |
Haojia Lin: Adaptive Pyramid Context Fusion for Point Cloud Perception. GRSL | ||||||||||||||||||||||
SPH3D-GCN | 0.610 65 | 0.858 15 | 0.772 30 | 0.489 87 | 0.532 68 | 0.792 58 | 0.404 65 | 0.643 67 | 0.570 48 | 0.507 58 | 0.935 66 | 0.414 65 | 0.046 93 | 0.510 64 | 0.702 51 | 0.602 63 | 0.705 57 | 0.549 62 | 0.859 59 | 0.773 66 | 0.534 73
Huan Lei, Naveed Akhtar, and Ajmal Mian: Spherical Kernel for Efficient Graph Convolution on 3D Point Clouds. TPAMI 2020 | ||||||||||||||||||||||
3DSM_DMMF | 0.631 57 | 0.626 74 | 0.745 45 | 0.801 28 | 0.607 47 | 0.751 73 | 0.506 19 | 0.729 49 | 0.565 49 | 0.491 61 | 0.866 90 | 0.434 53 | 0.197 67 | 0.595 39 | 0.630 61 | 0.709 26 | 0.705 57 | 0.560 55 | 0.875 46 | 0.740 75 | 0.491 79 | |
PointSPNet | 0.637 52 | 0.734 54 | 0.692 67 | 0.714 51 | 0.576 59 | 0.797 54 | 0.446 46 | 0.743 44 | 0.598 37 | 0.437 73 | 0.942 56 | 0.403 67 | 0.150 83 | 0.626 34 | 0.800 35 | 0.649 47 | 0.697 59 | 0.557 58 | 0.846 63 | 0.777 64 | 0.563 64 | |
FPConv | 0.639 50 | 0.785 32 | 0.760 35 | 0.713 52 | 0.603 48 | 0.798 53 | 0.392 69 | 0.534 81 | 0.603 34 | 0.524 51 | 0.948 38 | 0.457 44 | 0.250 45 | 0.538 58 | 0.723 47 | 0.598 65 | 0.696 60 | 0.614 33 | 0.872 51 | 0.799 46 | 0.567 63
Yiqun Lin, Zizheng Yan, Haibin Huang, Dong Du, Ligang Liu, Shuguang Cui, Xiaoguang Han: FPConv: Learning Local Flattening for Point Convolution. CVPR 2020 | ||||||||||||||||||||||
SegGroup_sem | 0.627 62 | 0.818 24 | 0.747 44 | 0.701 53 | 0.602 49 | 0.764 69 | 0.385 73 | 0.629 68 | 0.490 67 | 0.508 56 | 0.931 73 | 0.409 66 | 0.201 64 | 0.564 49 | 0.725 46 | 0.618 58 | 0.692 61 | 0.539 67 | 0.873 49 | 0.794 51 | 0.548 70
An Tao, Yueqi Duan, Yi Wei, Jiwen Lu, Jie Zhou: SegGroup: Seg-Level Supervision for 3D Instance and Semantic Segmentation. TIP 2022 | ||||||||||||||||||||||
MVPNet | 0.641 47 | 0.831 20 | 0.715 54 | 0.671 64 | 0.590 53 | 0.781 61 | 0.394 68 | 0.679 60 | 0.642 17 | 0.553 40 | 0.937 63 | 0.462 42 | 0.256 43 | 0.649 26 | 0.406 79 | 0.626 56 | 0.691 62 | 0.666 15 | 0.877 44 | 0.792 56 | 0.608 47
Maximilian Jaritz, Jiayuan Gu, Hao Su: Multi-view PointNet for 3D Scene Understanding. GMDL Workshop, ICCV 2019 | ||||||||||||||||||||||
FusionAwareConv | 0.630 60 | 0.604 79 | 0.741 49 | 0.766 40 | 0.590 53 | 0.747 74 | 0.501 22 | 0.734 47 | 0.503 64 | 0.527 49 | 0.919 80 | 0.454 46 | 0.323 12 | 0.550 57 | 0.420 78 | 0.678 36 | 0.688 63 | 0.544 63 | 0.896 32 | 0.795 50 | 0.627 41 | |
Jiazhao Zhang, Chenyang Zhu, Lintao Zheng, Kai Xu: Fusion-Aware Point Convolution for Online Semantic 3D Scene Segmentation. CVPR 2020 | ||||||||||||||||||||||
SD-DETR | 0.576 72 | 0.746 48 | 0.609 86 | 0.445 91 | 0.517 71 | 0.643 87 | 0.366 75 | 0.714 53 | 0.456 76 | 0.468 66 | 0.870 89 | 0.432 54 | 0.264 41 | 0.558 52 | 0.674 55 | 0.586 70 | 0.688 63 | 0.482 79 | 0.739 78 | 0.733 77 | 0.537 72 | |
PPCNN++ | 0.663 42 | 0.746 48 | 0.708 56 | 0.722 47 | 0.638 41 | 0.820 36 | 0.451 41 | 0.566 76 | 0.599 36 | 0.541 43 | 0.950 33 | 0.510 28 | 0.313 15 | 0.648 27 | 0.819 31 | 0.616 60 | 0.682 65 | 0.590 45 | 0.869 54 | 0.810 42 | 0.656 31
Pyunghwan Ahn, Juyoung Yang, Eojindl Yi, Chanho Lee, Junmo Kim: Projection-based Point Convolution for Efficient Point Cloud Segmentation. IEEE Access | ||||||||||||||||||||||
SConv | 0.636 53 | 0.830 21 | 0.697 63 | 0.752 44 | 0.572 61 | 0.780 63 | 0.445 48 | 0.716 51 | 0.529 57 | 0.530 48 | 0.951 29 | 0.446 52 | 0.170 75 | 0.507 66 | 0.666 58 | 0.636 54 | 0.682 65 | 0.541 66 | 0.886 39 | 0.799 46 | 0.594 55 | |
subcloud_weak | 0.516 79 | 0.676 67 | 0.591 89 | 0.609 77 | 0.442 80 | 0.774 65 | 0.335 79 | 0.597 71 | 0.422 83 | 0.357 83 | 0.932 72 | 0.341 79 | 0.094 89 | 0.298 84 | 0.528 70 | 0.473 82 | 0.676 67 | 0.495 76 | 0.602 89 | 0.721 80 | 0.349 91 | |
ROSMRF | 0.580 71 | 0.772 37 | 0.707 57 | 0.681 60 | 0.563 63 | 0.764 69 | 0.362 76 | 0.515 82 | 0.465 75 | 0.465 67 | 0.936 65 | 0.427 60 | 0.207 59 | 0.438 74 | 0.577 64 | 0.536 75 | 0.675 68 | 0.486 78 | 0.723 80 | 0.779 62 | 0.524 75 | |
SIConv | 0.625 63 | 0.830 21 | 0.694 65 | 0.757 42 | 0.563 63 | 0.772 67 | 0.448 45 | 0.647 66 | 0.520 59 | 0.509 55 | 0.949 36 | 0.431 56 | 0.191 68 | 0.496 68 | 0.614 62 | 0.647 50 | 0.672 69 | 0.535 69 | 0.876 45 | 0.783 61 | 0.571 60 | |
DVVNet | 0.562 75 | 0.648 71 | 0.700 61 | 0.770 37 | 0.586 56 | 0.687 81 | 0.333 80 | 0.650 64 | 0.514 62 | 0.475 65 | 0.906 84 | 0.359 75 | 0.223 54 | 0.340 82 | 0.442 77 | 0.422 86 | 0.668 70 | 0.501 74 | 0.708 81 | 0.779 62 | 0.534 73 | |
PointMTL | 0.632 56 | 0.731 55 | 0.688 70 | 0.675 61 | 0.591 52 | 0.784 60 | 0.444 51 | 0.565 77 | 0.610 28 | 0.492 60 | 0.949 36 | 0.456 45 | 0.254 44 | 0.587 41 | 0.706 50 | 0.599 64 | 0.665 71 | 0.612 36 | 0.868 55 | 0.791 59 | 0.579 58 | |
SQN_0.1% | 0.569 73 | 0.676 67 | 0.696 64 | 0.657 68 | 0.497 72 | 0.779 64 | 0.424 57 | 0.548 78 | 0.515 61 | 0.376 80 | 0.902 87 | 0.422 62 | 0.357 4 | 0.379 80 | 0.456 75 | 0.596 66 | 0.659 72 | 0.544 63 | 0.685 83 | 0.665 88 | 0.556 68 | |
PanopticFusion-label | 0.529 78 | 0.491 88 | 0.688 70 | 0.604 79 | 0.386 84 | 0.632 88 | 0.225 93 | 0.705 57 | 0.434 81 | 0.293 88 | 0.815 91 | 0.348 78 | 0.241 50 | 0.499 67 | 0.669 56 | 0.507 77 | 0.649 73 | 0.442 87 | 0.796 71 | 0.602 91 | 0.561 65 | |
Gaku Narita, Takashi Seno, Tomoya Ishikawa, Yohsuke Kaji: PanopticFusion: Online Volumetric Semantic Mapping at the Level of Stuff and Things. IROS 2019 (to appear) | ||||||||||||||||||||||
LAP-D | 0.594 68 | 0.720 59 | 0.692 67 | 0.637 76 | 0.456 79 | 0.773 66 | 0.391 71 | 0.730 48 | 0.587 39 | 0.445 72 | 0.940 60 | 0.381 72 | 0.288 25 | 0.434 76 | 0.453 76 | 0.591 67 | 0.649 73 | 0.581 49 | 0.777 73 | 0.749 74 | 0.610 46 | |
Supervoxel-CNN | 0.635 54 | 0.656 70 | 0.711 55 | 0.719 49 | 0.613 46 | 0.757 72 | 0.444 51 | 0.765 37 | 0.534 56 | 0.566 37 | 0.928 74 | 0.478 36 | 0.272 34 | 0.636 29 | 0.531 68 | 0.664 41 | 0.645 75 | 0.508 73 | 0.864 57 | 0.792 56 | 0.611 44 | |
CCRFNet | 0.589 70 | 0.766 42 | 0.659 77 | 0.683 59 | 0.470 78 | 0.740 76 | 0.387 72 | 0.620 70 | 0.490 67 | 0.476 64 | 0.922 78 | 0.355 77 | 0.245 49 | 0.511 63 | 0.511 71 | 0.571 72 | 0.643 76 | 0.493 77 | 0.872 51 | 0.762 69 | 0.600 52 | |
Pointnet++ & Feature | 0.557 76 | 0.735 53 | 0.661 76 | 0.686 58 | 0.491 74 | 0.744 75 | 0.392 69 | 0.539 79 | 0.451 77 | 0.375 81 | 0.946 44 | 0.376 73 | 0.205 61 | 0.403 79 | 0.356 82 | 0.553 74 | 0.643 76 | 0.497 75 | 0.824 68 | 0.756 71 | 0.515 76
DPC | 0.592 69 | 0.720 59 | 0.700 61 | 0.602 80 | 0.480 75 | 0.762 71 | 0.380 74 | 0.713 54 | 0.585 42 | 0.437 73 | 0.940 60 | 0.369 74 | 0.288 25 | 0.434 76 | 0.509 72 | 0.590 69 | 0.639 78 | 0.567 54 | 0.772 74 | 0.755 72 | 0.592 56 | |
Francis Engelmann, Theodora Kontogianni, Bastian Leibe: Dilated Point Convolutions: On the Receptive Field Size of Point Convolutions on 3D Point Clouds. ICRA 2020 | ||||||||||||||||||||||
TextureNet | 0.566 74 | 0.672 69 | 0.664 75 | 0.671 64 | 0.494 73 | 0.719 77 | 0.445 48 | 0.678 61 | 0.411 84 | 0.396 78 | 0.935 66 | 0.356 76 | 0.225 53 | 0.412 78 | 0.535 67 | 0.565 73 | 0.636 79 | 0.464 81 | 0.794 72 | 0.680 85 | 0.568 62
Jingwei Huang, Haotian Zhang, Li Yi, Thomas Funkhouser, Matthias Niessner, Leonidas Guibas: TextureNet: Consistent Local Parametrizations for Learning from High-Resolution Signals on Meshes. CVPR | ||||||||||||||||||||||
HPEIN | 0.618 64 | 0.729 56 | 0.668 73 | 0.647 72 | 0.597 51 | 0.766 68 | 0.414 60 | 0.680 59 | 0.520 59 | 0.525 50 | 0.946 44 | 0.432 54 | 0.215 57 | 0.493 69 | 0.599 63 | 0.638 53 | 0.617 80 | 0.570 51 | 0.897 31 | 0.806 43 | 0.605 50 | |
Li Jiang, Hengshuang Zhao, Shu Liu, Xiaoyong Shen, Chi-Wing Fu, Jiaya Jia: Hierarchical Point-Edge Interaction Network for Point Cloud Semantic Segmentation. ICCV 2019 | ||||||||||||||||||||||
SurfaceConvPF | 0.442 87 | 0.505 86 | 0.622 84 | 0.380 94 | 0.342 90 | 0.654 84 | 0.227 92 | 0.397 86 | 0.367 87 | 0.276 90 | 0.924 76 | 0.240 90 | 0.198 66 | 0.359 81 | 0.262 85 | 0.366 88 | 0.581 81 | 0.435 88 | 0.640 86 | 0.668 86 | 0.398 86 | |
Hao Pan, Shilin Liu, Yang Liu, Xin Tong: Convolutional Neural Networks on 3D Surfaces Using Parallel Frames. | ||||||||||||||||||||||
DGCNN_reproduce | 0.446 86 | 0.474 90 | 0.623 83 | 0.463 89 | 0.366 87 | 0.651 85 | 0.310 81 | 0.389 87 | 0.349 89 | 0.330 85 | 0.937 63 | 0.271 87 | 0.126 86 | 0.285 85 | 0.224 88 | 0.350 91 | 0.577 82 | 0.445 86 | 0.625 87 | 0.723 79 | 0.394 87
Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma, Michael M. Bronstein, Justin M. Solomon: Dynamic Graph CNN for Learning on Point Clouds. TOG 2019 | ||||||||||||||||||||||
Online SegFusion | 0.515 80 | 0.607 78 | 0.644 80 | 0.579 82 | 0.434 81 | 0.630 89 | 0.353 77 | 0.628 69 | 0.440 79 | 0.410 76 | 0.762 94 | 0.307 82 | 0.167 77 | 0.520 61 | 0.403 80 | 0.516 76 | 0.565 83 | 0.447 85 | 0.678 84 | 0.701 82 | 0.514 77 | |
Davide Menini, Suryansh Kumar, Martin R. Oswald, Erik Sandstroem, Cristian Sminchisescu, Luc van Gool: A Real-Time Learning Framework for Joint 3D Reconstruction and Semantic Segmentation. Robotics and Automation Letters Submission | ||||||||||||||||||||||
Tangent Convolutions | 0.438 89 | 0.437 92 | 0.646 79 | 0.474 88 | 0.369 86 | 0.645 86 | 0.353 77 | 0.258 91 | 0.282 93 | 0.279 89 | 0.918 81 | 0.298 84 | 0.147 85 | 0.283 86 | 0.294 84 | 0.487 79 | 0.562 84 | 0.427 89 | 0.619 88 | 0.633 89 | 0.352 90
Maxim Tatarchenko, Jaesik Park, Vladlen Koltun, Qian-Yi Zhou: Tangent convolutions for dense prediction in 3d. CVPR 2018 | ||||||||||||||||||||||
FCPN | 0.447 85 | 0.679 66 | 0.604 88 | 0.578 83 | 0.380 85 | 0.682 82 | 0.291 86 | 0.106 94 | 0.483 70 | 0.258 93 | 0.920 79 | 0.258 89 | 0.025 94 | 0.231 91 | 0.325 83 | 0.480 81 | 0.560 85 | 0.463 82 | 0.725 79 | 0.666 87 | 0.231 95
Dario Rethage, Johanna Wald, Jürgen Sturm, Nassir Navab, Federico Tombari: Fully-Convolutional Point Networks for Large-Scale Point Clouds. ECCV 2018 | ||||||||||||||||||||||
3DWSSS | 0.425 90 | 0.525 85 | 0.647 78 | 0.522 85 | 0.324 91 | 0.488 95 | 0.077 96 | 0.712 55 | 0.353 88 | 0.401 77 | 0.636 96 | 0.281 86 | 0.176 73 | 0.340 82 | 0.565 65 | 0.175 95 | 0.551 86 | 0.398 91 | 0.370 95 | 0.602 91 | 0.361 89 | |
PointCNN with RGB | 0.458 84 | 0.577 81 | 0.611 85 | 0.356 95 | 0.321 92 | 0.715 78 | 0.299 85 | 0.376 88 | 0.328 91 | 0.319 86 | 0.944 53 | 0.285 85 | 0.164 78 | 0.216 92 | 0.229 87 | 0.484 80 | 0.545 87 | 0.456 83 | 0.755 75 | 0.709 81 | 0.475 83
Yangyan Li, Rui Bu, Mingchao Sun, Baoquan Chen: PointCNN. NeurIPS 2018 | ||||||||||||||||||||||
3DMV, FTSDF | 0.501 81 | 0.558 83 | 0.608 87 | 0.424 93 | 0.478 76 | 0.690 80 | 0.246 89 | 0.586 73 | 0.468 73 | 0.450 69 | 0.911 82 | 0.394 69 | 0.160 80 | 0.438 74 | 0.212 89 | 0.432 85 | 0.541 88 | 0.475 80 | 0.742 77 | 0.727 78 | 0.477 82 | |
PCNN | 0.498 82 | 0.559 82 | 0.644 80 | 0.560 84 | 0.420 83 | 0.711 79 | 0.229 91 | 0.414 84 | 0.436 80 | 0.352 84 | 0.941 58 | 0.324 81 | 0.155 81 | 0.238 89 | 0.387 81 | 0.493 78 | 0.529 89 | 0.509 71 | 0.813 70 | 0.751 73 | 0.504 78 | |
SPLAT Net | 0.393 92 | 0.472 91 | 0.511 92 | 0.606 78 | 0.311 93 | 0.656 83 | 0.245 90 | 0.405 85 | 0.328 91 | 0.197 94 | 0.927 75 | 0.227 92 | 0.000 97 | 0.001 96 | 0.249 86 | 0.271 94 | 0.510 90 | 0.383 93 | 0.593 90 | 0.699 83 | 0.267 93
Hang Su, Varun Jampani, Deqing Sun, Subhransu Maji, Evangelos Kalogerakis, Ming-Hsuan Yang, Jan Kautz: SPLATNet: Sparse Lattice Networks for Point Cloud Processing. CVPR 2018 | ||||||||||||||||||||||
ScanNet+FTSDF | 0.383 93 | 0.297 94 | 0.491 93 | 0.432 92 | 0.358 89 | 0.612 91 | 0.274 87 | 0.116 93 | 0.411 84 | 0.265 91 | 0.904 85 | 0.229 91 | 0.079 91 | 0.250 87 | 0.185 92 | 0.320 92 | 0.510 90 | 0.385 92 | 0.548 91 | 0.597 94 | 0.394 87 | |
3DMV | 0.484 83 | 0.484 89 | 0.538 91 | 0.643 74 | 0.424 82 | 0.606 92 | 0.310 81 | 0.574 75 | 0.433 82 | 0.378 79 | 0.796 92 | 0.301 83 | 0.214 58 | 0.537 59 | 0.208 90 | 0.472 83 | 0.507 92 | 0.413 90 | 0.693 82 | 0.602 91 | 0.539 71 | |
Angela Dai, Matthias Niessner: 3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation. ECCV'18 | ||||||||||||||||||||||
PNET2 | 0.442 87 | 0.548 84 | 0.548 90 | 0.597 81 | 0.363 88 | 0.628 90 | 0.300 83 | 0.292 89 | 0.374 86 | 0.307 87 | 0.881 88 | 0.268 88 | 0.186 70 | 0.238 89 | 0.204 91 | 0.407 87 | 0.506 93 | 0.449 84 | 0.667 85 | 0.620 90 | 0.462 85 | |
SSC-UNet | 0.308 95 | 0.353 93 | 0.290 96 | 0.278 96 | 0.166 96 | 0.553 93 | 0.169 95 | 0.286 90 | 0.147 95 | 0.148 96 | 0.908 83 | 0.182 94 | 0.064 92 | 0.023 95 | 0.018 96 | 0.354 90 | 0.363 94 | 0.345 94 | 0.546 93 | 0.685 84 | 0.278 92
ScanNet | 0.306 96 | 0.203 95 | 0.366 95 | 0.501 86 | 0.311 93 | 0.524 94 | 0.211 94 | 0.002 96 | 0.342 90 | 0.189 95 | 0.786 93 | 0.145 95 | 0.102 88 | 0.245 88 | 0.152 93 | 0.318 93 | 0.348 95 | 0.300 95 | 0.460 94 | 0.437 96 | 0.182 96
Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, Matthias Nießner: ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes. CVPR'17 | ||||||||||||||||||||||
PointNet++ | 0.339 94 | 0.584 80 | 0.478 94 | 0.458 90 | 0.256 95 | 0.360 96 | 0.250 88 | 0.247 92 | 0.278 94 | 0.261 92 | 0.677 95 | 0.183 93 | 0.117 87 | 0.212 93 | 0.145 94 | 0.364 89 | 0.346 96 | 0.232 96 | 0.548 91 | 0.523 95 | 0.252 94
Charles R. Qi, Li Yi, Hao Su, Leonidas J. Guibas: PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. NeurIPS 2017 | ||||||||||||||||||||||
ERROR | 0.054 97 | 0.000 96 | 0.041 97 | 0.172 97 | 0.030 97 | 0.062 97 | 0.001 97 | 0.035 95 | 0.004 96 | 0.051 97 | 0.143 97 | 0.019 96 | 0.003 96 | 0.041 94 | 0.050 95 | 0.003 96 | 0.054 97 | 0.018 97 | 0.005 96 | 0.264 97 | 0.082 97 | |