3D Semantic Label Benchmark
The 3D semantic labeling task involves predicting a semantic labeling of a 3D scan mesh.
Evaluation and metrics
Our evaluation ranks all methods according to the PASCAL VOC intersection-over-union metric (IoU): IoU = TP/(TP+FP+FN), where TP, FP, and FN are the numbers of true positive, false positive, and false negative predictions, respectively. Predicted labels are evaluated per-vertex over the respective 3D scan mesh; for 3D approaches that operate on other representations such as grids or points, the predicted labels should be mapped onto the mesh vertices (e.g., an example mapping from grid to mesh vertices is provided in the evaluation helpers).
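As a concrete illustration of the metric (a minimal sketch, not the official evaluation script; the function and variable names are our own), per-class IoU and its mean can be computed from per-vertex label arrays, with unannotated vertices excluded via an ignore label, and labels from another representation can be mapped onto mesh vertices with a simple nearest-neighbor lookup:

```python
import numpy as np

def per_class_iou(pred, gt, num_classes, ignore_label=-1):
    """IoU = TP / (TP + FP + FN) per class; vertices labeled
    `ignore_label` in the ground truth are excluded."""
    valid = gt != ignore_label
    pred, gt = pred[valid], gt[valid]
    ious = np.full(num_classes, np.nan)  # NaN for classes absent from the scene
    for c in range(num_classes):
        tp = np.sum((pred == c) & (gt == c))
        fp = np.sum((pred == c) & (gt != c))
        fn = np.sum((pred != c) & (gt == c))
        if tp + fp + fn > 0:
            ious[c] = tp / (tp + fp + fn)
    return ious

def transfer_labels(src_points, src_labels, mesh_vertices):
    """Map predictions from another representation (grid-cell centers,
    point samples) onto mesh vertices by brute-force nearest neighbor."""
    d = np.linalg.norm(mesh_vertices[:, None, :] - src_points[None, :, :], axis=-1)
    return src_labels[np.argmin(d, axis=1)]

pred = np.array([0, 0, 1, 1, 2, 2])
gt   = np.array([0, 1, 1, 1, 2, -1])   # last vertex unannotated
ious = per_class_iou(pred, gt, num_classes=3)
mean_iou = np.nanmean(ious)            # averaged over classes, as in "avg iou"
```

The official helpers operate on full scan meshes; this toy example only shows the arithmetic behind the per-class and average IoU columns below.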
This table lists the benchmark results for the 3D semantic label scenario. Each cell reports the per-class IoU followed by the method's rank for that class; the avg iou column is the mean IoU over the 20 classes and determines the overall ranking.
Method | Info | avg iou | bathtub | bed | bookshelf | cabinet | chair | counter | curtain | desk | door | floor | otherfurniture | picture | refrigerator | shower curtain | sink | sofa | table | toilet | wall | window |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
PTv3-PPT-ALC | ![]() | 0.798 1 | 0.911 11 | 0.812 22 | 0.854 8 | 0.770 12 | 0.856 15 | 0.555 17 | 0.943 1 | 0.660 26 | 0.735 2 | 0.979 1 | 0.606 7 | 0.492 1 | 0.792 4 | 0.934 4 | 0.841 2 | 0.819 6 | 0.716 9 | 0.947 10 | 0.906 1 | 0.822 1 |
Guangda Ji, Silvan Weder, Francis Engelmann, Marc Pollefeys, Hermann Blum: ARKit LabelMaker: A New Scale for Indoor 3D Scene Understanding. CVPR 2025
PonderV2 | 0.785 4 | 0.978 1 | 0.800 30 | 0.833 29 | 0.788 4 | 0.853 20 | 0.545 21 | 0.910 9 | 0.713 3 | 0.705 6 | 0.979 1 | 0.596 9 | 0.390 2 | 0.769 15 | 0.832 45 | 0.821 5 | 0.792 35 | 0.730 2 | 0.975 2 | 0.897 6 | 0.785 7 | |
Haoyi Zhu, Honghui Yang, Xiaoyang Wu, Di Huang, Sha Zhang, Xianglong He, Tong He, Hengshuang Zhao, Chunhua Shen, Yu Qiao, Wanli Ouyang: PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm.
PTv3 ScanNet | 0.794 3 | 0.941 3 | 0.813 21 | 0.851 11 | 0.782 7 | 0.890 2 | 0.597 1 | 0.916 6 | 0.696 11 | 0.713 5 | 0.979 1 | 0.635 1 | 0.384 3 | 0.793 3 | 0.907 10 | 0.821 5 | 0.790 36 | 0.696 14 | 0.967 4 | 0.903 3 | 0.805 2 | |
Xiaoyang Wu, Li Jiang, Peng-Shuai Wang, Zhijian Liu, Xihui Liu, Yu Qiao, Wanli Ouyang, Tong He, Hengshuang Zhao: Point Transformer V3: Simpler, Faster, Stronger. CVPR 2024 (Oral)
EQ-Net | 0.743 31 | 0.620 101 | 0.799 33 | 0.849 13 | 0.730 35 | 0.822 56 | 0.493 50 | 0.897 14 | 0.664 23 | 0.681 12 | 0.955 34 | 0.562 29 | 0.378 4 | 0.760 21 | 0.903 12 | 0.738 30 | 0.801 24 | 0.673 30 | 0.907 43 | 0.877 16 | 0.745 17 | |
Zetong Yang*, Li Jiang*, Yanan Sun, Bernt Schiele, Jiaya Jia: A Unified Query-based Paradigm for Point Cloud Understanding. CVPR 2022
FusionNet | 0.688 52 | 0.704 85 | 0.741 74 | 0.754 65 | 0.656 54 | 0.829 48 | 0.501 42 | 0.741 69 | 0.609 48 | 0.548 64 | 0.950 55 | 0.522 45 | 0.371 5 | 0.633 53 | 0.756 59 | 0.715 43 | 0.771 47 | 0.623 48 | 0.861 84 | 0.814 61 | 0.658 52 | |
Feihu Zhang, Jin Fang, Benjamin Wah, Philip Torr: Deep FusionNet for Point Cloud Semantic Segmentation. ECCV 2020
O3DSeg | 0.668 62 | 0.822 36 | 0.771 52 | 0.496 112 | 0.651 58 | 0.833 44 | 0.541 23 | 0.761 61 | 0.555 75 | 0.611 43 | 0.966 15 | 0.489 55 | 0.370 6 | 0.388 105 | 0.580 88 | 0.776 17 | 0.751 62 | 0.570 71 | 0.956 7 | 0.817 60 | 0.646 57 | |
PNE | 0.755 17 | 0.786 45 | 0.835 5 | 0.834 28 | 0.758 19 | 0.849 25 | 0.570 10 | 0.836 38 | 0.648 32 | 0.668 19 | 0.978 6 | 0.581 20 | 0.367 7 | 0.683 40 | 0.856 33 | 0.804 8 | 0.801 24 | 0.678 22 | 0.961 6 | 0.889 7 | 0.716 35 | |
P. Hermosilla: Point Neighborhood Embeddings.
DMF-Net | 0.752 20 | 0.906 14 | 0.793 38 | 0.802 47 | 0.689 46 | 0.825 52 | 0.556 16 | 0.867 23 | 0.681 18 | 0.602 50 | 0.960 19 | 0.555 32 | 0.365 8 | 0.779 8 | 0.859 30 | 0.747 27 | 0.795 32 | 0.717 8 | 0.917 38 | 0.856 35 | 0.764 12 | |
C. Yang, Y. Yan, W. Zhao, J. Ye, X. Yang, A. Hussain, B. Dong, K. Huang: Towards Deeper and Better Multi-view Feature Fusion for 3D Semantic Segmentation. ICONIP 2023
DITR ScanNet | 0.797 2 | 0.727 76 | 0.869 1 | 0.882 1 | 0.785 6 | 0.868 7 | 0.578 5 | 0.943 1 | 0.744 1 | 0.727 3 | 0.979 1 | 0.627 2 | 0.364 9 | 0.824 1 | 0.949 2 | 0.779 15 | 0.844 1 | 0.757 1 | 0.982 1 | 0.905 2 | 0.802 3 | |
Karim Abou Zeid, Kadir Yilmaz, Daan de Geus, Alexander Hermans, David Adrian, Timm Linder, Bastian Leibe: DINO in the Room: Leveraging 2D Foundation Models for 3D Segmentation.
PPT-SpUNet-Joint | 0.766 9 | 0.932 5 | 0.794 36 | 0.829 31 | 0.751 26 | 0.854 18 | 0.540 25 | 0.903 11 | 0.630 39 | 0.672 17 | 0.963 16 | 0.565 26 | 0.357 10 | 0.788 5 | 0.900 14 | 0.737 31 | 0.802 20 | 0.685 20 | 0.950 8 | 0.887 8 | 0.780 8 | |
Xiaoyang Wu, Zhuotao Tian, Xin Wen, Bohao Peng, Xihui Liu, Kaicheng Yu, Hengshuang Zhao: Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training. CVPR 2024
SQN_0.1% | 0.569 99 | 0.676 90 | 0.696 89 | 0.657 90 | 0.497 97 | 0.779 88 | 0.424 82 | 0.548 104 | 0.515 84 | 0.376 105 | 0.902 111 | 0.422 87 | 0.357 10 | 0.379 106 | 0.456 101 | 0.596 92 | 0.659 96 | 0.544 87 | 0.685 109 | 0.665 113 | 0.556 91 | |
Swin3D | ![]() | 0.779 6 | 0.861 23 | 0.818 16 | 0.836 26 | 0.790 3 | 0.875 4 | 0.576 7 | 0.905 10 | 0.704 7 | 0.739 1 | 0.969 12 | 0.611 3 | 0.349 12 | 0.756 25 | 0.958 1 | 0.702 51 | 0.805 19 | 0.708 10 | 0.916 39 | 0.898 5 | 0.801 4 |
ResLFE_HDS | 0.772 8 | 0.939 4 | 0.824 7 | 0.854 8 | 0.771 11 | 0.840 35 | 0.564 13 | 0.900 12 | 0.686 16 | 0.677 14 | 0.961 18 | 0.537 36 | 0.348 13 | 0.769 15 | 0.903 12 | 0.785 13 | 0.815 9 | 0.676 26 | 0.939 16 | 0.880 13 | 0.772 11 | |
One Thing One Click | 0.701 47 | 0.825 35 | 0.796 34 | 0.723 68 | 0.716 39 | 0.832 45 | 0.433 81 | 0.816 45 | 0.634 37 | 0.609 45 | 0.969 12 | 0.418 89 | 0.344 14 | 0.559 75 | 0.833 44 | 0.715 43 | 0.808 18 | 0.560 77 | 0.902 48 | 0.847 44 | 0.680 46 | |
ODIN | ![]() | 0.744 29 | 0.658 93 | 0.752 64 | 0.870 3 | 0.714 40 | 0.843 33 | 0.569 11 | 0.919 5 | 0.703 8 | 0.622 40 | 0.949 59 | 0.591 12 | 0.343 15 | 0.736 34 | 0.784 56 | 0.816 7 | 0.838 2 | 0.672 31 | 0.918 37 | 0.854 39 | 0.725 28 |
Ayush Jain, Pushkal Katara, Nikolaos Gkanatsios, Adam W. Harley, Gabriel Sarch, Kriti Aggarwal, Vishrav Chaudhary, Katerina Fragkiadaki: ODIN: A Single Model for 2D and 3D Segmentation. CVPR 2024
RPN | 0.736 35 | 0.776 51 | 0.790 39 | 0.851 11 | 0.754 23 | 0.854 18 | 0.491 52 | 0.866 25 | 0.596 56 | 0.686 9 | 0.955 34 | 0.536 37 | 0.342 16 | 0.624 56 | 0.869 26 | 0.787 11 | 0.802 20 | 0.628 45 | 0.927 27 | 0.875 20 | 0.704 39 | |
One-Thing-One-Click | 0.693 49 | 0.743 67 | 0.794 36 | 0.655 91 | 0.684 48 | 0.822 56 | 0.497 47 | 0.719 74 | 0.622 41 | 0.617 41 | 0.977 10 | 0.447 76 | 0.339 17 | 0.750 30 | 0.664 81 | 0.703 50 | 0.790 36 | 0.596 60 | 0.946 12 | 0.855 37 | 0.647 56 | |
Zhengzhe Liu, Xiaojuan Qi, Chi-Wing Fu: One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation. CVPR 2021
DiffSegNet | 0.758 14 | 0.725 78 | 0.789 41 | 0.843 20 | 0.762 17 | 0.856 15 | 0.562 14 | 0.920 4 | 0.657 29 | 0.658 21 | 0.958 23 | 0.589 14 | 0.337 18 | 0.782 6 | 0.879 24 | 0.787 11 | 0.779 41 | 0.678 22 | 0.926 29 | 0.880 13 | 0.799 5 | |
TTT-KD | 0.773 7 | 0.646 97 | 0.818 16 | 0.809 41 | 0.774 10 | 0.878 3 | 0.581 3 | 0.943 1 | 0.687 15 | 0.704 7 | 0.978 6 | 0.607 6 | 0.336 19 | 0.775 11 | 0.912 8 | 0.838 4 | 0.823 4 | 0.694 15 | 0.967 4 | 0.899 4 | 0.794 6 | |
Lisa Weijler, Muhammad Jehanzeb Mirza, Leon Sick, Can Ekkazan, Pedro Hermosilla: TTT-KD: Test-Time Training for 3D Semantic Segmentation through Knowledge Distillation from Foundation Models.
IPCA | 0.731 37 | 0.890 17 | 0.837 4 | 0.864 4 | 0.726 37 | 0.873 5 | 0.530 30 | 0.824 43 | 0.489 93 | 0.647 24 | 0.978 6 | 0.609 5 | 0.336 19 | 0.624 56 | 0.733 64 | 0.758 23 | 0.776 43 | 0.570 71 | 0.949 9 | 0.877 16 | 0.728 24 | |
OccuSeg+Semantic | 0.764 11 | 0.758 61 | 0.796 34 | 0.839 24 | 0.746 30 | 0.907 1 | 0.562 14 | 0.850 30 | 0.680 19 | 0.672 17 | 0.978 6 | 0.610 4 | 0.335 21 | 0.777 9 | 0.819 49 | 0.847 1 | 0.830 3 | 0.691 17 | 0.972 3 | 0.885 10 | 0.727 26 | |
OA-CNN-L_ScanNet20 | 0.756 16 | 0.783 47 | 0.826 6 | 0.858 6 | 0.776 9 | 0.837 39 | 0.548 20 | 0.896 15 | 0.649 31 | 0.675 15 | 0.962 17 | 0.586 17 | 0.335 21 | 0.771 14 | 0.802 54 | 0.770 19 | 0.787 38 | 0.691 17 | 0.936 20 | 0.880 13 | 0.761 13 | |
Virtual MVFusion | 0.746 26 | 0.771 55 | 0.819 14 | 0.848 15 | 0.702 43 | 0.865 10 | 0.397 91 | 0.899 13 | 0.699 9 | 0.664 20 | 0.948 62 | 0.588 15 | 0.330 23 | 0.746 32 | 0.851 39 | 0.764 21 | 0.796 29 | 0.704 12 | 0.935 21 | 0.866 28 | 0.728 24 | |
Abhijit Kundu, Xiaoqi Yin, Alireza Fathi, David Ross, Brian Brewington, Thomas Funkhouser, Caroline Pantofaru: Virtual Multi-view Fusion for 3D Semantic Segmentation. ECCV 2020
DenSeR | 0.628 85 | 0.800 41 | 0.625 107 | 0.719 71 | 0.545 91 | 0.806 72 | 0.445 73 | 0.597 97 | 0.448 103 | 0.519 78 | 0.938 86 | 0.481 58 | 0.328 24 | 0.489 94 | 0.499 99 | 0.657 69 | 0.759 57 | 0.592 63 | 0.881 64 | 0.797 73 | 0.634 61 | |
SparseConvNet | 0.725 39 | 0.647 96 | 0.821 11 | 0.846 17 | 0.721 38 | 0.869 6 | 0.533 27 | 0.754 64 | 0.603 52 | 0.614 42 | 0.955 34 | 0.572 24 | 0.325 25 | 0.710 39 | 0.870 25 | 0.724 37 | 0.823 4 | 0.628 45 | 0.934 22 | 0.865 29 | 0.683 45 | |
FusionAwareConv | 0.630 84 | 0.604 104 | 0.741 74 | 0.766 62 | 0.590 76 | 0.747 98 | 0.501 42 | 0.734 71 | 0.503 88 | 0.527 72 | 0.919 104 | 0.454 70 | 0.323 26 | 0.550 80 | 0.420 104 | 0.678 60 | 0.688 87 | 0.544 87 | 0.896 53 | 0.795 74 | 0.627 64 | |
Jiazhao Zhang, Chenyang Zhu, Lintao Zheng, Kai Xu: Fusion-Aware Point Convolution for Online Semantic 3D Scene Segmentation. CVPR 2020
LRPNet | 0.742 32 | 0.816 38 | 0.806 27 | 0.807 43 | 0.752 24 | 0.828 50 | 0.575 8 | 0.839 37 | 0.699 9 | 0.637 34 | 0.954 40 | 0.520 46 | 0.320 27 | 0.755 26 | 0.834 43 | 0.760 22 | 0.772 45 | 0.676 26 | 0.915 41 | 0.862 30 | 0.717 33 | |
MSP | 0.748 24 | 0.623 100 | 0.804 28 | 0.859 5 | 0.745 31 | 0.824 54 | 0.501 42 | 0.912 8 | 0.690 13 | 0.685 10 | 0.956 30 | 0.567 25 | 0.320 27 | 0.768 17 | 0.918 7 | 0.720 39 | 0.802 20 | 0.676 26 | 0.921 33 | 0.881 12 | 0.779 9 | |
PPCNN++ | ![]() | 0.663 65 | 0.746 65 | 0.708 81 | 0.722 69 | 0.638 64 | 0.820 59 | 0.451 66 | 0.566 102 | 0.599 54 | 0.541 66 | 0.950 55 | 0.510 49 | 0.313 29 | 0.648 47 | 0.819 49 | 0.616 86 | 0.682 89 | 0.590 64 | 0.869 80 | 0.810 64 | 0.656 53 |
Pyunghwan Ahn, Juyoung Yang, Eojindl Yi, Chanho Lee, Junmo Kim: Projection-based Point Convolution for Efficient Point Cloud Segmentation. IEEE Access
Mix3D | ![]() | 0.781 5 | 0.964 2 | 0.855 2 | 0.843 20 | 0.781 8 | 0.858 13 | 0.575 8 | 0.831 39 | 0.685 17 | 0.714 4 | 0.979 1 | 0.594 10 | 0.310 30 | 0.801 2 | 0.892 19 | 0.841 2 | 0.819 6 | 0.723 6 | 0.940 15 | 0.887 8 | 0.725 28 |
Alexey Nekrasov, Jonas Schult, Or Litany, Bastian Leibe, Francis Engelmann: Mix3D: Out-of-Context Data Augmentation for 3D Scenes. 3DV 2021 (Oral)
SAFNet-seg | ![]() | 0.654 69 | 0.752 63 | 0.734 76 | 0.664 89 | 0.583 80 | 0.815 65 | 0.399 90 | 0.754 64 | 0.639 35 | 0.535 70 | 0.942 80 | 0.470 63 | 0.309 31 | 0.665 43 | 0.539 92 | 0.650 70 | 0.708 79 | 0.635 42 | 0.857 86 | 0.793 77 | 0.642 58 |
Linqing Zhao, Jiwen Lu, Jie Zhou: Similarity-Aware Fusion Network for 3D Semantic Segmentation. IROS 2021
BPNet | ![]() | 0.749 22 | 0.909 12 | 0.818 16 | 0.811 39 | 0.752 24 | 0.839 37 | 0.485 53 | 0.842 35 | 0.673 21 | 0.644 26 | 0.957 28 | 0.528 42 | 0.305 32 | 0.773 12 | 0.859 30 | 0.788 10 | 0.818 8 | 0.693 16 | 0.916 39 | 0.856 35 | 0.723 30 |
Wenbo Hu, Hengshuang Zhao, Li Jiang, Jiaya Jia, Tien-Tsin Wong: Bidirectional Projection Network for Cross Dimension Scene Understanding. CVPR 2021 (Oral)
PointTransformerV2 | 0.752 20 | 0.742 68 | 0.809 25 | 0.872 2 | 0.758 19 | 0.860 12 | 0.552 18 | 0.891 17 | 0.610 46 | 0.687 8 | 0.960 19 | 0.559 30 | 0.304 33 | 0.766 18 | 0.926 6 | 0.767 20 | 0.797 28 | 0.644 38 | 0.942 13 | 0.876 19 | 0.722 31 | |
Xiaoyang Wu, Yixing Lao, Li Jiang, Xihui Liu, Hengshuang Zhao: Point Transformer V2: Grouped Vector Attention and Partition-based Pooling. NeurIPS 2022
DiffSeg3D2 | 0.745 28 | 0.725 78 | 0.814 20 | 0.837 25 | 0.751 26 | 0.831 46 | 0.514 36 | 0.896 15 | 0.674 20 | 0.684 11 | 0.960 19 | 0.564 27 | 0.303 34 | 0.773 12 | 0.820 48 | 0.713 45 | 0.798 27 | 0.690 19 | 0.923 31 | 0.875 20 | 0.757 14 | |
VMNet | ![]() | 0.746 26 | 0.870 21 | 0.838 3 | 0.858 6 | 0.729 36 | 0.850 24 | 0.501 42 | 0.874 20 | 0.587 59 | 0.658 21 | 0.956 30 | 0.564 27 | 0.299 35 | 0.765 19 | 0.900 14 | 0.716 42 | 0.812 15 | 0.631 44 | 0.939 16 | 0.858 33 | 0.709 37 |
Zeyu Hu, Xuyang Bai, Jiaxiang Shang, Runze Zhang, Jiayu Dong, Xin Wang, Guangyuan Sun, Hongbo Fu, Chiew-Lan Tai: VMNet: Voxel-Mesh Network for Geodesic-Aware 3D Semantic Segmentation. ICCV 2021 (Oral)
DCM-Net | 0.658 66 | 0.778 49 | 0.702 84 | 0.806 45 | 0.619 68 | 0.813 69 | 0.468 59 | 0.693 82 | 0.494 89 | 0.524 74 | 0.941 82 | 0.449 75 | 0.298 36 | 0.510 88 | 0.821 47 | 0.675 61 | 0.727 73 | 0.568 74 | 0.826 93 | 0.803 68 | 0.637 60 | |
Jonas Schult*, Francis Engelmann*, Theodora Kontogianni, Bastian Leibe: DualConvMesh-Net: Joint Geodesic and Euclidean Convolutions on 3D Meshes. CVPR 2020 (Oral)
online3d | 0.727 38 | 0.715 83 | 0.777 48 | 0.854 8 | 0.748 29 | 0.858 13 | 0.497 47 | 0.872 21 | 0.572 66 | 0.639 32 | 0.957 28 | 0.523 43 | 0.297 37 | 0.750 30 | 0.803 53 | 0.744 28 | 0.810 16 | 0.587 67 | 0.938 18 | 0.871 25 | 0.719 32 | |
CU-Hybrid Net | 0.764 11 | 0.924 8 | 0.819 14 | 0.840 23 | 0.757 21 | 0.853 20 | 0.580 4 | 0.848 31 | 0.709 5 | 0.643 27 | 0.958 23 | 0.587 16 | 0.295 38 | 0.753 27 | 0.884 23 | 0.758 23 | 0.815 9 | 0.725 5 | 0.927 27 | 0.867 27 | 0.743 19 | |
INS-Conv-semantic | 0.717 42 | 0.751 64 | 0.759 58 | 0.812 38 | 0.704 42 | 0.868 7 | 0.537 26 | 0.842 35 | 0.609 48 | 0.608 46 | 0.953 44 | 0.534 39 | 0.293 39 | 0.616 59 | 0.864 28 | 0.719 41 | 0.793 33 | 0.640 40 | 0.933 23 | 0.845 47 | 0.663 51 | |
MVF-GNN | 0.658 66 | 0.558 108 | 0.751 65 | 0.655 91 | 0.690 44 | 0.722 101 | 0.453 65 | 0.867 23 | 0.579 64 | 0.576 58 | 0.893 112 | 0.523 43 | 0.293 39 | 0.733 35 | 0.571 90 | 0.692 53 | 0.659 96 | 0.606 55 | 0.875 70 | 0.804 67 | 0.668 49 | |
joint point-based | ![]() | 0.634 79 | 0.614 102 | 0.778 47 | 0.667 88 | 0.633 66 | 0.825 52 | 0.420 84 | 0.804 50 | 0.467 98 | 0.561 61 | 0.951 51 | 0.494 52 | 0.291 41 | 0.566 72 | 0.458 100 | 0.579 97 | 0.764 52 | 0.559 79 | 0.838 90 | 0.814 61 | 0.598 75 |
Hung-Yueh Chiang, Yen-Liang Lin, Yueh-Cheng Liu, Winston H. Hsu: A Unified Point-Based Framework for 3D Segmentation. 3DV 2019
LAP-D | 0.594 94 | 0.720 80 | 0.692 92 | 0.637 99 | 0.456 104 | 0.773 90 | 0.391 96 | 0.730 72 | 0.587 59 | 0.445 97 | 0.940 84 | 0.381 97 | 0.288 42 | 0.434 101 | 0.453 102 | 0.591 93 | 0.649 98 | 0.581 69 | 0.777 99 | 0.749 99 | 0.610 68 | |
DPC | 0.592 95 | 0.720 80 | 0.700 86 | 0.602 104 | 0.480 100 | 0.762 95 | 0.380 99 | 0.713 78 | 0.585 62 | 0.437 98 | 0.940 84 | 0.369 99 | 0.288 42 | 0.434 101 | 0.509 98 | 0.590 95 | 0.639 103 | 0.567 75 | 0.772 100 | 0.755 97 | 0.592 78 | |
Francis Engelmann, Theodora Kontogianni, Bastian Leibe: Dilated Point Convolutions: On the Receptive Field Size of Point Convolutions on 3D Point Clouds. ICRA 2020
MinkowskiNet | ![]() | 0.736 35 | 0.859 25 | 0.818 16 | 0.832 30 | 0.709 41 | 0.840 35 | 0.521 33 | 0.853 29 | 0.660 26 | 0.643 27 | 0.951 51 | 0.544 34 | 0.286 44 | 0.731 36 | 0.893 18 | 0.675 61 | 0.772 45 | 0.683 21 | 0.874 73 | 0.852 41 | 0.727 26 |
C. Choy, J. Gwak, S. Savarese: 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. CVPR 2019
PointTransformer++ | 0.725 39 | 0.727 76 | 0.811 24 | 0.819 34 | 0.765 15 | 0.841 34 | 0.502 41 | 0.814 48 | 0.621 42 | 0.623 39 | 0.955 34 | 0.556 31 | 0.284 45 | 0.620 58 | 0.866 27 | 0.781 14 | 0.757 60 | 0.648 36 | 0.932 25 | 0.862 30 | 0.709 37 | |
ConDaFormer | 0.755 17 | 0.927 6 | 0.822 10 | 0.836 26 | 0.801 1 | 0.849 25 | 0.516 35 | 0.864 27 | 0.651 30 | 0.680 13 | 0.958 23 | 0.584 19 | 0.282 46 | 0.759 23 | 0.855 35 | 0.728 34 | 0.802 20 | 0.678 22 | 0.880 66 | 0.873 23 | 0.756 16 | |
Lunhao Duan, Shanshan Zhao, Nan Xue, Mingming Gong, Guisong Xia, Dacheng Tao: ConDaFormer: Disassembled Transformer with Local Structure Enhancement for 3D Point Cloud Understanding. NeurIPS 2023
DGNet | 0.684 54 | 0.712 84 | 0.784 44 | 0.782 57 | 0.658 53 | 0.835 42 | 0.499 46 | 0.823 44 | 0.641 34 | 0.597 53 | 0.950 55 | 0.487 56 | 0.281 47 | 0.575 69 | 0.619 85 | 0.647 74 | 0.764 52 | 0.620 50 | 0.871 79 | 0.846 46 | 0.688 44 | |
MatchingNet | 0.724 41 | 0.812 40 | 0.812 22 | 0.810 40 | 0.735 34 | 0.834 43 | 0.495 49 | 0.860 28 | 0.572 66 | 0.602 50 | 0.954 40 | 0.512 48 | 0.280 48 | 0.757 24 | 0.845 41 | 0.725 36 | 0.780 40 | 0.606 55 | 0.937 19 | 0.851 42 | 0.700 41 | |
DTC | 0.757 15 | 0.843 29 | 0.820 12 | 0.847 16 | 0.791 2 | 0.862 11 | 0.511 38 | 0.870 22 | 0.707 6 | 0.652 23 | 0.954 40 | 0.604 8 | 0.279 49 | 0.760 21 | 0.942 3 | 0.734 32 | 0.766 50 | 0.701 13 | 0.884 61 | 0.874 22 | 0.736 20 | |
PointASNL | ![]() | 0.666 63 | 0.703 86 | 0.781 46 | 0.751 67 | 0.655 55 | 0.830 47 | 0.471 57 | 0.769 59 | 0.474 96 | 0.537 68 | 0.951 51 | 0.475 61 | 0.279 49 | 0.635 51 | 0.698 74 | 0.675 61 | 0.751 62 | 0.553 82 | 0.816 95 | 0.806 65 | 0.703 40 |
Xu Yan, Chaoda Zheng, Zhen Li, Sheng Wang, Shuguang Cui: PointASNL: Robust Point Clouds Processing using Nonlocal Neural Networks with Adaptive Sampling. CVPR 2020
VACNN++ | 0.684 54 | 0.728 75 | 0.757 61 | 0.776 58 | 0.690 44 | 0.804 74 | 0.464 62 | 0.816 45 | 0.577 65 | 0.587 57 | 0.945 70 | 0.508 50 | 0.276 51 | 0.671 42 | 0.710 69 | 0.663 66 | 0.750 64 | 0.589 65 | 0.881 64 | 0.832 51 | 0.653 54 | |
SAT | 0.742 32 | 0.860 24 | 0.765 55 | 0.819 34 | 0.769 14 | 0.848 27 | 0.533 27 | 0.829 40 | 0.663 24 | 0.631 36 | 0.955 34 | 0.586 17 | 0.274 52 | 0.753 27 | 0.896 17 | 0.729 33 | 0.760 56 | 0.666 33 | 0.921 33 | 0.855 37 | 0.733 22 | |
PointConvFormer | 0.749 22 | 0.793 43 | 0.790 39 | 0.807 43 | 0.750 28 | 0.856 15 | 0.524 31 | 0.881 18 | 0.588 58 | 0.642 30 | 0.977 10 | 0.591 12 | 0.274 52 | 0.781 7 | 0.929 5 | 0.804 8 | 0.796 29 | 0.642 39 | 0.947 10 | 0.885 10 | 0.715 36 | |
Wenxuan Wu, Qi Shan, Li Fuxin: PointConvFormer: Revenge of the Point-based Convolution.
Supervoxel-CNN | 0.635 78 | 0.656 94 | 0.711 80 | 0.719 71 | 0.613 69 | 0.757 96 | 0.444 76 | 0.765 60 | 0.534 78 | 0.566 60 | 0.928 98 | 0.478 60 | 0.272 54 | 0.636 50 | 0.531 94 | 0.664 65 | 0.645 100 | 0.508 98 | 0.864 83 | 0.792 80 | 0.611 66 | |
Retro-FPN | 0.744 29 | 0.842 30 | 0.800 30 | 0.767 61 | 0.740 32 | 0.836 41 | 0.541 23 | 0.914 7 | 0.672 22 | 0.626 37 | 0.958 23 | 0.552 33 | 0.272 54 | 0.777 9 | 0.886 22 | 0.696 52 | 0.801 24 | 0.674 29 | 0.941 14 | 0.858 33 | 0.717 33 | |
Peng Xiang*, Xin Wen*, Yu-Shen Liu, Hui Zhang, Yi Fang, Zhizhong Han: Retrospective Feature Pyramid Network for Point Cloud Semantic Segmentation. ICCV 2023
ClickSeg_Semantic | 0.703 45 | 0.774 53 | 0.800 30 | 0.793 52 | 0.760 18 | 0.847 29 | 0.471 57 | 0.802 52 | 0.463 100 | 0.634 35 | 0.968 14 | 0.491 54 | 0.271 56 | 0.726 37 | 0.910 9 | 0.706 47 | 0.815 9 | 0.551 83 | 0.878 67 | 0.833 49 | 0.570 83 | |
ROSMRF3D | 0.673 60 | 0.789 44 | 0.748 67 | 0.763 63 | 0.635 65 | 0.814 66 | 0.407 88 | 0.747 66 | 0.581 63 | 0.573 59 | 0.950 55 | 0.484 57 | 0.271 56 | 0.607 60 | 0.754 60 | 0.649 71 | 0.774 44 | 0.596 60 | 0.883 62 | 0.823 55 | 0.606 70 | |
PointContrast_LA_SEM | 0.683 57 | 0.757 62 | 0.784 44 | 0.786 53 | 0.639 63 | 0.824 54 | 0.408 86 | 0.775 57 | 0.604 51 | 0.541 66 | 0.934 94 | 0.532 40 | 0.269 58 | 0.552 78 | 0.777 57 | 0.645 77 | 0.793 33 | 0.640 40 | 0.913 42 | 0.824 54 | 0.671 48 | |
RandLA-Net | ![]() | 0.645 70 | 0.778 49 | 0.731 77 | 0.699 76 | 0.577 81 | 0.829 48 | 0.446 71 | 0.736 70 | 0.477 95 | 0.523 76 | 0.945 70 | 0.454 70 | 0.269 58 | 0.484 95 | 0.749 63 | 0.618 84 | 0.738 67 | 0.599 59 | 0.827 92 | 0.792 80 | 0.621 65 |
VI-PointConv | 0.676 59 | 0.770 57 | 0.754 62 | 0.783 56 | 0.621 67 | 0.814 66 | 0.552 18 | 0.758 62 | 0.571 69 | 0.557 62 | 0.954 40 | 0.529 41 | 0.268 60 | 0.530 84 | 0.682 75 | 0.675 61 | 0.719 74 | 0.603 57 | 0.888 59 | 0.833 49 | 0.665 50 | |
Xingyi Li, Wenxuan Wu, Xiaoli Z. Fern, Li Fuxin: The Devils in the Point Clouds: Studying the Robustness of Point Cloud Convolutions.
SD-DETR | 0.576 98 | 0.746 65 | 0.609 111 | 0.445 117 | 0.517 96 | 0.643 112 | 0.366 100 | 0.714 77 | 0.456 101 | 0.468 91 | 0.870 114 | 0.432 79 | 0.264 61 | 0.558 76 | 0.674 76 | 0.586 96 | 0.688 87 | 0.482 104 | 0.739 104 | 0.733 102 | 0.537 95 | |
LargeKernel3D | 0.739 34 | 0.909 12 | 0.820 12 | 0.806 45 | 0.740 32 | 0.852 22 | 0.545 21 | 0.826 41 | 0.594 57 | 0.643 27 | 0.955 34 | 0.541 35 | 0.263 62 | 0.723 38 | 0.858 32 | 0.775 18 | 0.767 49 | 0.678 22 | 0.933 23 | 0.848 43 | 0.694 42 | |
Yukang Chen*, Jianhui Liu*, Xiangyu Zhang, Xiaojuan Qi, Jiaya Jia: LargeKernel3D: Scaling up Kernels in 3D Sparse CNNs. CVPR 2023
StratifiedFormer | ![]() | 0.747 25 | 0.901 15 | 0.803 29 | 0.845 18 | 0.757 21 | 0.846 30 | 0.512 37 | 0.825 42 | 0.696 11 | 0.645 25 | 0.956 30 | 0.576 22 | 0.262 63 | 0.744 33 | 0.861 29 | 0.742 29 | 0.770 48 | 0.705 11 | 0.899 51 | 0.860 32 | 0.734 21 |
Xin Lai*, Jianhui Liu*, Li Jiang, Liwei Wang, Hengshuang Zhao, Shu Liu, Xiaojuan Qi, Jiaya Jia: Stratified Transformer for 3D Point Cloud Segmentation. CVPR 2022
MVPNet | ![]() | 0.641 71 | 0.831 32 | 0.715 79 | 0.671 86 | 0.590 76 | 0.781 85 | 0.394 92 | 0.679 84 | 0.642 33 | 0.553 63 | 0.937 87 | 0.462 66 | 0.256 64 | 0.649 46 | 0.406 105 | 0.626 82 | 0.691 86 | 0.666 33 | 0.877 68 | 0.792 80 | 0.608 69 |
Maximilian Jaritz, Jiayuan Gu, Hao Su: Multi-view PointNet for 3D Scene Understanding. GMDL Workshop, ICCV 2019
PointMTL | 0.632 80 | 0.731 73 | 0.688 95 | 0.675 83 | 0.591 75 | 0.784 84 | 0.444 76 | 0.565 103 | 0.610 46 | 0.492 84 | 0.949 59 | 0.456 69 | 0.254 65 | 0.587 64 | 0.706 70 | 0.599 90 | 0.665 95 | 0.612 54 | 0.868 81 | 0.791 83 | 0.579 80 | |
FPConv | ![]() | 0.639 74 | 0.785 46 | 0.760 57 | 0.713 74 | 0.603 71 | 0.798 77 | 0.392 94 | 0.534 107 | 0.603 52 | 0.524 74 | 0.948 62 | 0.457 68 | 0.250 66 | 0.538 82 | 0.723 67 | 0.598 91 | 0.696 84 | 0.614 51 | 0.872 76 | 0.799 70 | 0.567 86 |
Yiqun Lin, Zizheng Yan, Haibin Huang, Dong Du, Ligang Liu, Shuguang Cui, Xiaoguang Han: FPConv: Learning Local Flattening for Point Convolution. CVPR 2020
RFCR | 0.702 46 | 0.889 18 | 0.745 70 | 0.813 37 | 0.672 51 | 0.818 63 | 0.493 50 | 0.815 47 | 0.623 40 | 0.610 44 | 0.947 64 | 0.470 63 | 0.249 67 | 0.594 63 | 0.848 40 | 0.705 48 | 0.779 41 | 0.646 37 | 0.892 56 | 0.823 55 | 0.611 66 | |
Jingyu Gong, Jiachen Xu, Xin Tan, Haichuan Song, Yanyun Qu, Yuan Xie, Lizhuang Ma: Omni-Supervised Point Cloud Segmentation via Gradual Receptive Field Component Reasoning. CVPR 2021
PointMetaBase | 0.714 43 | 0.835 31 | 0.785 43 | 0.821 32 | 0.684 48 | 0.846 30 | 0.531 29 | 0.865 26 | 0.614 43 | 0.596 54 | 0.953 44 | 0.500 51 | 0.246 68 | 0.674 41 | 0.888 20 | 0.692 53 | 0.764 52 | 0.624 47 | 0.849 88 | 0.844 48 | 0.675 47 | |
CCRFNet | 0.589 96 | 0.766 59 | 0.659 102 | 0.683 81 | 0.470 103 | 0.740 100 | 0.387 97 | 0.620 96 | 0.490 91 | 0.476 88 | 0.922 102 | 0.355 102 | 0.245 69 | 0.511 87 | 0.511 97 | 0.571 98 | 0.643 101 | 0.493 102 | 0.872 76 | 0.762 94 | 0.600 74 | |
PanopticFusion-label | 0.529 104 | 0.491 114 | 0.688 95 | 0.604 103 | 0.386 109 | 0.632 113 | 0.225 119 | 0.705 81 | 0.434 106 | 0.293 113 | 0.815 117 | 0.348 103 | 0.241 70 | 0.499 91 | 0.669 78 | 0.507 103 | 0.649 98 | 0.442 112 | 0.796 97 | 0.602 117 | 0.561 88 | |
Gaku Narita, Takashi Seno, Tomoya Ishikawa, Yohsuke Kaji: PanopticFusion: Online Volumetric Semantic Mapping at the Level of Stuff and Things. IROS 2019
Superpoint Network | 0.683 57 | 0.851 27 | 0.728 78 | 0.800 49 | 0.653 56 | 0.806 72 | 0.468 59 | 0.804 50 | 0.572 66 | 0.602 50 | 0.946 67 | 0.453 73 | 0.239 71 | 0.519 86 | 0.822 46 | 0.689 58 | 0.762 55 | 0.595 62 | 0.895 54 | 0.827 53 | 0.630 63 | |
LSK3DNet | ![]() | 0.755 17 | 0.899 16 | 0.823 8 | 0.843 20 | 0.764 16 | 0.838 38 | 0.584 2 | 0.845 34 | 0.717 2 | 0.638 33 | 0.956 30 | 0.580 21 | 0.229 72 | 0.640 49 | 0.900 14 | 0.750 26 | 0.813 13 | 0.729 3 | 0.920 35 | 0.872 24 | 0.757 14 |
Tuo Feng, Wenguan Wang, Fan Ma, Yi Yang: LSK3DNet: Towards Effective and Efficient 3D Perception with Large Sparse Kernels. CVPR 2024
HPGCNN | 0.656 68 | 0.698 88 | 0.743 72 | 0.650 93 | 0.564 85 | 0.820 59 | 0.505 40 | 0.758 62 | 0.631 38 | 0.479 87 | 0.945 70 | 0.480 59 | 0.226 73 | 0.572 70 | 0.774 58 | 0.690 56 | 0.735 69 | 0.614 51 | 0.853 87 | 0.776 90 | 0.597 76 | |
Jisheng Dang, Qingyong Hu, Yulan Guo, Jun Yang: HPGCNN.
OctFormer | ![]() | 0.766 9 | 0.925 7 | 0.808 26 | 0.849 13 | 0.786 5 | 0.846 30 | 0.566 12 | 0.876 19 | 0.690 13 | 0.674 16 | 0.960 19 | 0.576 22 | 0.226 73 | 0.753 27 | 0.904 11 | 0.777 16 | 0.815 9 | 0.722 7 | 0.923 31 | 0.877 16 | 0.776 10 |
Peng-Shuai Wang: OctFormer: Octree-based Transformers for 3D Point Clouds. SIGGRAPH 2023
TextureNet | ![]() | 0.566 100 | 0.672 92 | 0.664 100 | 0.671 86 | 0.494 98 | 0.719 102 | 0.445 73 | 0.678 85 | 0.411 109 | 0.396 103 | 0.935 90 | 0.356 101 | 0.225 75 | 0.412 103 | 0.535 93 | 0.565 99 | 0.636 104 | 0.464 106 | 0.794 98 | 0.680 110 | 0.568 85 |
Jingwei Huang, Haotian Zhang, Li Yi, Thomas Funkhouser, Matthias Niessner, Leonidas Guibas: TextureNet: Consistent Local Parametrizations for Learning from High-Resolution Signals on Meshes. CVPR
PicassoNet-II | ![]() | 0.692 50 | 0.732 72 | 0.772 50 | 0.786 53 | 0.677 50 | 0.866 9 | 0.517 34 | 0.848 31 | 0.509 86 | 0.626 37 | 0.952 49 | 0.536 37 | 0.225 75 | 0.545 81 | 0.704 71 | 0.689 58 | 0.810 16 | 0.564 76 | 0.903 47 | 0.854 39 | 0.729 23 |
Huan Lei, Naveed Akhtar, Mubarak Shah, and Ajmal Mian: Geometric feature learning for 3D meshes.
DVVNet | 0.562 101 | 0.648 95 | 0.700 86 | 0.770 59 | 0.586 79 | 0.687 106 | 0.333 105 | 0.650 90 | 0.514 85 | 0.475 89 | 0.906 108 | 0.359 100 | 0.223 77 | 0.340 108 | 0.442 103 | 0.422 112 | 0.668 94 | 0.501 99 | 0.708 107 | 0.779 87 | 0.534 96 | |
PointNet2-SFPN | 0.631 81 | 0.771 55 | 0.692 92 | 0.672 84 | 0.524 94 | 0.837 39 | 0.440 78 | 0.706 80 | 0.538 77 | 0.446 95 | 0.944 76 | 0.421 88 | 0.219 78 | 0.552 78 | 0.751 62 | 0.591 93 | 0.737 68 | 0.543 89 | 0.901 50 | 0.768 92 | 0.557 90 | |
O-CNN | ![]() | 0.762 13 | 0.924 8 | 0.823 8 | 0.844 19 | 0.770 12 | 0.852 22 | 0.577 6 | 0.847 33 | 0.711 4 | 0.640 31 | 0.958 23 | 0.592 11 | 0.217 79 | 0.762 20 | 0.888 20 | 0.758 23 | 0.813 13 | 0.726 4 | 0.932 25 | 0.868 26 | 0.744 18 |
Peng-Shuai Wang, Yang Liu, Yu-Xiao Guo, Chun-Yu Sun, Xin Tong: O-CNN: Octree-based Convolutional Neural Networks for 3D Shape Analysis. SIGGRAPH 2017
HPEIN | 0.618 90 | 0.729 74 | 0.668 98 | 0.647 95 | 0.597 74 | 0.766 92 | 0.414 85 | 0.680 83 | 0.520 82 | 0.525 73 | 0.946 67 | 0.432 79 | 0.215 80 | 0.493 93 | 0.599 87 | 0.638 79 | 0.617 105 | 0.570 71 | 0.897 52 | 0.806 65 | 0.605 72 | |
Li Jiang, Hengshuang Zhao, Shu Liu, Xiaoyong Shen, Chi-Wing Fu, Jiaya Jia: Hierarchical Point-Edge Interaction Network for Point Cloud Semantic Segmentation. ICCV 2019
3DMV | 0.484 109 | 0.484 115 | 0.538 117 | 0.643 97 | 0.424 107 | 0.606 117 | 0.310 106 | 0.574 101 | 0.433 107 | 0.378 104 | 0.796 118 | 0.301 108 | 0.214 81 | 0.537 83 | 0.208 116 | 0.472 109 | 0.507 117 | 0.413 115 | 0.693 108 | 0.602 117 | 0.539 94 | |
Angela Dai, Matthias Niessner: 3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation. ECCV'18
Weakly-Openseg v3 | 0.625 87 | 0.924 8 | 0.787 42 | 0.620 100 | 0.555 90 | 0.811 70 | 0.393 93 | 0.666 88 | 0.382 111 | 0.520 77 | 0.953 44 | 0.250 115 | 0.208 82 | 0.604 61 | 0.670 77 | 0.644 78 | 0.742 66 | 0.538 92 | 0.919 36 | 0.803 68 | 0.513 101 | |
ROSMRF | 0.580 97 | 0.772 54 | 0.707 82 | 0.681 82 | 0.563 86 | 0.764 93 | 0.362 101 | 0.515 108 | 0.465 99 | 0.465 92 | 0.936 89 | 0.427 85 | 0.207 83 | 0.438 99 | 0.577 89 | 0.536 101 | 0.675 92 | 0.486 103 | 0.723 106 | 0.779 87 | 0.524 98 | |
PD-Net | 0.638 75 | 0.797 42 | 0.769 54 | 0.641 98 | 0.590 76 | 0.820 59 | 0.461 63 | 0.537 106 | 0.637 36 | 0.536 69 | 0.947 64 | 0.388 96 | 0.206 84 | 0.656 44 | 0.668 79 | 0.647 74 | 0.732 71 | 0.585 68 | 0.868 81 | 0.793 77 | 0.473 109 | |
JSENet | ![]() | 0.699 48 | 0.881 20 | 0.762 56 | 0.821 32 | 0.667 52 | 0.800 76 | 0.522 32 | 0.792 55 | 0.613 44 | 0.607 47 | 0.935 90 | 0.492 53 | 0.205 85 | 0.576 68 | 0.853 37 | 0.691 55 | 0.758 58 | 0.652 35 | 0.872 76 | 0.828 52 | 0.649 55 |
Zeyu Hu, Mingmin Zhen, Xuyang Bai, Hongbo Fu, Chiew-Lan Tai: JSENet: Joint Semantic Segmentation and Edge Detection Network for 3D Point Clouds. ECCV 2020
Pointnet++ & Feature | ![]() | 0.557 102 | 0.735 70 | 0.661 101 | 0.686 80 | 0.491 99 | 0.744 99 | 0.392 94 | 0.539 105 | 0.451 102 | 0.375 106 | 0.946 67 | 0.376 98 | 0.205 85 | 0.403 104 | 0.356 108 | 0.553 100 | 0.643 101 | 0.497 100 | 0.824 94 | 0.756 96 | 0.515 99 |
PointConv | ![]() | 0.666 63 | 0.781 48 | 0.759 58 | 0.699 76 | 0.644 62 | 0.822 56 | 0.475 55 | 0.779 56 | 0.564 72 | 0.504 83 | 0.953 44 | 0.428 83 | 0.203 87 | 0.586 66 | 0.754 60 | 0.661 67 | 0.753 61 | 0.588 66 | 0.902 48 | 0.813 63 | 0.642 58 |
Wenxuan Wu, Zhongang Qi, Li Fuxin: PointConv: Deep Convolutional Networks on 3D Point Clouds. CVPR 2019
SegGroup_sem | ![]() | 0.627 86 | 0.818 37 | 0.747 69 | 0.701 75 | 0.602 72 | 0.764 93 | 0.385 98 | 0.629 94 | 0.490 91 | 0.508 80 | 0.931 97 | 0.409 91 | 0.201 88 | 0.564 73 | 0.725 66 | 0.618 84 | 0.692 85 | 0.539 91 | 0.873 74 | 0.794 75 | 0.548 93 |
An Tao, Yueqi Duan, Yi Wei, Jiwen Lu, Jie Zhou: SegGroup: Seg-Level Supervision for 3D Instance and Semantic Segmentation. TIP 2022
Feature-Geometry Net | ![]() | 0.685 53 | 0.866 22 | 0.748 67 | 0.819 34 | 0.645 61 | 0.794 79 | 0.450 69 | 0.802 52 | 0.587 59 | 0.604 48 | 0.945 70 | 0.464 65 | 0.201 88 | 0.554 77 | 0.840 42 | 0.723 38 | 0.732 71 | 0.602 58 | 0.907 43 | 0.822 57 | 0.603 73 |
SurfaceConvPF | 0.442 113 | 0.505 112 | 0.622 109 | 0.380 120 | 0.342 115 | 0.654 109 | 0.227 118 | 0.397 112 | 0.367 113 | 0.276 115 | 0.924 100 | 0.240 116 | 0.198 90 | 0.359 107 | 0.262 111 | 0.366 114 | 0.581 106 | 0.435 113 | 0.640 112 | 0.668 111 | 0.398 112 | |
Hao Pan, Shilin Liu, Yang Liu, Xin Tong: Convolutional Neural Networks on 3D Surfaces Using Parallel Frames.
3DSM_DMMF | 0.631 81 | 0.626 99 | 0.745 70 | 0.801 48 | 0.607 70 | 0.751 97 | 0.506 39 | 0.729 73 | 0.565 71 | 0.491 85 | 0.866 115 | 0.434 78 | 0.197 91 | 0.595 62 | 0.630 84 | 0.709 46 | 0.705 81 | 0.560 77 | 0.875 70 | 0.740 100 | 0.491 104 | |
SIConv | 0.625 87 | 0.830 33 | 0.694 90 | 0.757 64 | 0.563 86 | 0.772 91 | 0.448 70 | 0.647 92 | 0.520 82 | 0.509 79 | 0.949 59 | 0.431 81 | 0.191 92 | 0.496 92 | 0.614 86 | 0.647 74 | 0.672 93 | 0.535 94 | 0.876 69 | 0.783 86 | 0.571 82 | |
Feature_GeometricNet | ![]() | 0.690 51 | 0.884 19 | 0.754 62 | 0.795 50 | 0.647 59 | 0.818 63 | 0.422 83 | 0.802 52 | 0.612 45 | 0.604 48 | 0.945 70 | 0.462 66 | 0.189 93 | 0.563 74 | 0.853 37 | 0.726 35 | 0.765 51 | 0.632 43 | 0.904 45 | 0.821 58 | 0.606 70 |
Kangcheng Liu, Ben M. Chen: https://arxiv.org/abs/2012.09439. arXiv Preprint
PNET2 | 0.442 113 | 0.548 110 | 0.548 116 | 0.597 105 | 0.363 113 | 0.628 115 | 0.300 108 | 0.292 116 | 0.374 112 | 0.307 112 | 0.881 113 | 0.268 113 | 0.186 94 | 0.238 115 | 0.204 117 | 0.407 113 | 0.506 118 | 0.449 109 | 0.667 111 | 0.620 116 | 0.462 111 | |
APCF-Net | 0.631 81 | 0.742 68 | 0.687 97 | 0.672 84 | 0.557 88 | 0.792 82 | 0.408 86 | 0.665 89 | 0.545 76 | 0.508 80 | 0.952 49 | 0.428 83 | 0.186 94 | 0.634 52 | 0.702 72 | 0.620 83 | 0.706 80 | 0.555 81 | 0.873 74 | 0.798 72 | 0.581 79 | |
Haojia Lin: Adaptive Pyramid Context Fusion for Point Cloud Perception. GRSL
KP-FCNN | 0.684 54 | 0.847 28 | 0.758 60 | 0.784 55 | 0.647 59 | 0.814 66 | 0.473 56 | 0.772 58 | 0.605 50 | 0.594 55 | 0.935 90 | 0.450 74 | 0.181 96 | 0.587 64 | 0.805 52 | 0.690 56 | 0.785 39 | 0.614 51 | 0.882 63 | 0.819 59 | 0.632 62 | |
H. Thomas, C. Qi, J. Deschaud, B. Marcotegui, F. Goulette, L. Guibas: KPConv: Flexible and Deformable Convolution for Point Clouds. ICCV 2019
3DWSSS | 0.425 116 | 0.525 111 | 0.647 103 | 0.522 109 | 0.324 116 | 0.488 121 | 0.077 122 | 0.712 79 | 0.353 114 | 0.401 102 | 0.636 122 | 0.281 111 | 0.176 97 | 0.340 108 | 0.565 91 | 0.175 121 | 0.551 111 | 0.398 116 | 0.370 122 | 0.602 117 | 0.361 115 | |
contrastBoundary | ![]() | 0.705 44 | 0.769 58 | 0.775 49 | 0.809 41 | 0.687 47 | 0.820 59 | 0.439 79 | 0.812 49 | 0.661 25 | 0.591 56 | 0.945 70 | 0.515 47 | 0.171 98 | 0.633 53 | 0.856 33 | 0.720 39 | 0.796 29 | 0.668 32 | 0.889 58 | 0.847 44 | 0.689 43 |
Liyao Tang, Yibing Zhan, Zhe Chen, Baosheng Yu, Dacheng Tao: Contrastive Boundary Learning for Point Cloud Segmentation. CVPR 2022
SConv | 0.636 77 | 0.830 33 | 0.697 88 | 0.752 66 | 0.572 84 | 0.780 87 | 0.445 73 | 0.716 75 | 0.529 79 | 0.530 71 | 0.951 51 | 0.446 77 | 0.170 99 | 0.507 90 | 0.666 80 | 0.636 80 | 0.682 89 | 0.541 90 | 0.886 60 | 0.799 70 | 0.594 77 | |
PointMRNet | 0.640 73 | 0.717 82 | 0.701 85 | 0.692 79 | 0.576 82 | 0.801 75 | 0.467 61 | 0.716 75 | 0.563 73 | 0.459 93 | 0.953 44 | 0.429 82 | 0.169 100 | 0.581 67 | 0.854 36 | 0.605 87 | 0.710 76 | 0.550 84 | 0.894 55 | 0.793 77 | 0.575 81 | |
Online SegFusion | 0.515 106 | 0.607 103 | 0.644 105 | 0.579 106 | 0.434 106 | 0.630 114 | 0.353 102 | 0.628 95 | 0.440 104 | 0.410 101 | 0.762 120 | 0.307 107 | 0.167 101 | 0.520 85 | 0.403 106 | 0.516 102 | 0.565 108 | 0.447 110 | 0.678 110 | 0.701 107 | 0.514 100 | |
Davide Menini, Suryansh Kumar, Martin R. Oswald, Erik Sandstroem, Cristian Sminchisescu, Luc van Gool: A Real-Time Learning Framework for Joint 3D Reconstruction and Semantic Segmentation. Robotics and Automation Letters Submission
PointCNN with RGB | ![]() | 0.458 110 | 0.577 106 | 0.611 110 | 0.356 121 | 0.321 117 | 0.715 103 | 0.299 110 | 0.376 114 | 0.328 117 | 0.319 111 | 0.944 76 | 0.285 110 | 0.164 102 | 0.216 118 | 0.229 113 | 0.484 106 | 0.545 112 | 0.456 108 | 0.755 101 | 0.709 106 | 0.475 108 |
Yangyan Li, Rui Bu, Mingchao Sun, Baoquan Chen: PointCNN. NeurIPS 2018
PointConv-SFPN | 0.641 71 | 0.776 51 | 0.703 83 | 0.721 70 | 0.557 88 | 0.826 51 | 0.451 66 | 0.672 87 | 0.563 73 | 0.483 86 | 0.943 79 | 0.425 86 | 0.162 103 | 0.644 48 | 0.726 65 | 0.659 68 | 0.709 78 | 0.572 70 | 0.875 70 | 0.786 85 | 0.559 89 | |
3DMV, FTSDF | 0.501 107 | 0.558 108 | 0.608 112 | 0.424 119 | 0.478 101 | 0.690 105 | 0.246 115 | 0.586 99 | 0.468 97 | 0.450 94 | 0.911 106 | 0.394 94 | 0.160 104 | 0.438 99 | 0.212 115 | 0.432 111 | 0.541 113 | 0.475 105 | 0.742 103 | 0.727 103 | 0.477 107 | |
PCNN | 0.498 108 | 0.559 107 | 0.644 105 | 0.560 108 | 0.420 108 | 0.711 104 | 0.229 117 | 0.414 110 | 0.436 105 | 0.352 109 | 0.941 82 | 0.324 106 | 0.155 105 | 0.238 115 | 0.387 107 | 0.493 104 | 0.529 114 | 0.509 96 | 0.813 96 | 0.751 98 | 0.504 102 | |
AttAN | 0.609 92 | 0.760 60 | 0.667 99 | 0.649 94 | 0.521 95 | 0.793 80 | 0.457 64 | 0.648 91 | 0.528 80 | 0.434 100 | 0.947 64 | 0.401 93 | 0.153 106 | 0.454 98 | 0.721 68 | 0.648 73 | 0.717 75 | 0.536 93 | 0.904 45 | 0.765 93 | 0.485 105 | |
Gege Zhang, Qinghua Ma, Licheng Jiao, Fang Liu, Qigong Sun: AttAN: Attention Adversarial Networks for 3D Point Cloud Semantic Segmentation. IJCAI 2020
PointSPNet | 0.637 76 | 0.734 71 | 0.692 92 | 0.714 73 | 0.576 82 | 0.797 78 | 0.446 71 | 0.743 68 | 0.598 55 | 0.437 98 | 0.942 80 | 0.403 92 | 0.150 107 | 0.626 55 | 0.800 55 | 0.649 71 | 0.697 83 | 0.557 80 | 0.846 89 | 0.777 89 | 0.563 87 | |
SALANet | 0.670 61 | 0.816 38 | 0.770 53 | 0.768 60 | 0.652 57 | 0.807 71 | 0.451 66 | 0.747 66 | 0.659 28 | 0.545 65 | 0.924 100 | 0.473 62 | 0.149 108 | 0.571 71 | 0.811 51 | 0.635 81 | 0.746 65 | 0.623 48 | 0.892 56 | 0.794 75 | 0.570 83 | |
Tangent Convolutions | ![]() | 0.438 115 | 0.437 118 | 0.646 104 | 0.474 114 | 0.369 111 | 0.645 111 | 0.353 102 | 0.258 118 | 0.282 120 | 0.279 114 | 0.918 105 | 0.298 109 | 0.147 109 | 0.283 112 | 0.294 110 | 0.487 105 | 0.562 109 | 0.427 114 | 0.619 114 | 0.633 115 | 0.352 116 |
Maxim Tatarchenko, Jaesik Park, Vladlen Koltun, Qian-Yi Zhou: Tangent Convolutions for Dense Prediction in 3D. CVPR 2018
DGCNN_reproduce | ![]() | 0.446 112 | 0.474 116 | 0.623 108 | 0.463 115 | 0.366 112 | 0.651 110 | 0.310 106 | 0.389 113 | 0.349 115 | 0.330 110 | 0.937 87 | 0.271 112 | 0.126 110 | 0.285 111 | 0.224 114 | 0.350 117 | 0.577 107 | 0.445 111 | 0.625 113 | 0.723 104 | 0.394 113 |
Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma, Michael M. Bronstein, Justin M. Solomon: Dynamic Graph CNN for Learning on Point Clouds. TOG 2019
PointNet++ | ![]() | 0.339 119 | 0.584 105 | 0.478 120 | 0.458 116 | 0.256 120 | 0.360 122 | 0.250 114 | 0.247 119 | 0.278 121 | 0.261 118 | 0.677 121 | 0.183 119 | 0.117 111 | 0.212 119 | 0.145 120 | 0.364 115 | 0.346 122 | 0.232 122 | 0.548 117 | 0.523 121 | 0.252 120 |
Charles R. Qi, Li Yi, Hao Su, Leonidas J. Guibas: PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. NeurIPS 2017
ScanNet | ![]() | 0.306 122 | 0.203 121 | 0.366 121 | 0.501 110 | 0.311 118 | 0.524 120 | 0.211 120 | 0.002 123 | 0.342 116 | 0.189 121 | 0.786 119 | 0.145 121 | 0.102 112 | 0.245 114 | 0.152 119 | 0.318 119 | 0.348 121 | 0.300 121 | 0.460 120 | 0.437 122 | 0.182 122 |
Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, Matthias Nießner: ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes. CVPR 2017
subcloud_weak | 0.516 105 | 0.676 90 | 0.591 114 | 0.609 101 | 0.442 105 | 0.774 89 | 0.335 104 | 0.597 97 | 0.422 108 | 0.357 108 | 0.932 96 | 0.341 104 | 0.094 113 | 0.298 110 | 0.528 96 | 0.473 108 | 0.676 91 | 0.495 101 | 0.602 115 | 0.721 105 | 0.349 117 | |
GMLPs | 0.538 103 | 0.495 113 | 0.693 91 | 0.647 95 | 0.471 102 | 0.793 80 | 0.300 108 | 0.477 109 | 0.505 87 | 0.358 107 | 0.903 110 | 0.327 105 | 0.081 114 | 0.472 96 | 0.529 95 | 0.448 110 | 0.710 76 | 0.509 96 | 0.746 102 | 0.737 101 | 0.554 92 | |
ScanNet+FTSDF | 0.383 118 | 0.297 120 | 0.491 119 | 0.432 118 | 0.358 114 | 0.612 116 | 0.274 113 | 0.116 120 | 0.411 109 | 0.265 116 | 0.904 109 | 0.229 117 | 0.079 115 | 0.250 113 | 0.185 118 | 0.320 118 | 0.510 115 | 0.385 117 | 0.548 117 | 0.597 120 | 0.394 113 | |
SSC-UNet | ![]() | 0.308 121 | 0.353 119 | 0.290 122 | 0.278 122 | 0.166 121 | 0.553 119 | 0.169 121 | 0.286 117 | 0.147 122 | 0.148 122 | 0.908 107 | 0.182 120 | 0.064 116 | 0.023 122 | 0.018 122 | 0.354 116 | 0.363 120 | 0.345 120 | 0.546 119 | 0.685 109 | 0.278 118 |
SPH3D-GCN | ![]() | 0.610 91 | 0.858 26 | 0.772 50 | 0.489 113 | 0.532 93 | 0.792 82 | 0.404 89 | 0.643 93 | 0.570 70 | 0.507 82 | 0.935 90 | 0.414 90 | 0.046 117 | 0.510 88 | 0.702 72 | 0.602 89 | 0.705 81 | 0.549 85 | 0.859 85 | 0.773 91 | 0.534 96 |
Huan Lei, Naveed Akhtar, Ajmal Mian: Spherical Kernel for Efficient Graph Convolution on 3D Point Clouds. TPAMI 2020
FCPN | ![]() | 0.447 111 | 0.679 89 | 0.604 113 | 0.578 107 | 0.380 110 | 0.682 107 | 0.291 111 | 0.106 121 | 0.483 94 | 0.258 119 | 0.920 103 | 0.258 114 | 0.025 118 | 0.231 117 | 0.325 109 | 0.480 107 | 0.560 110 | 0.463 107 | 0.725 105 | 0.666 112 | 0.231 121 |
Dario Rethage, Johanna Wald, Jürgen Sturm, Nassir Navab, Federico Tombari: Fully-Convolutional Point Networks for Large-Scale Point Clouds. ECCV 2018
wsss-transformer | 0.600 93 | 0.634 98 | 0.743 72 | 0.697 78 | 0.601 73 | 0.781 85 | 0.437 80 | 0.585 100 | 0.493 90 | 0.446 95 | 0.933 95 | 0.394 94 | 0.011 119 | 0.654 45 | 0.661 82 | 0.603 88 | 0.733 70 | 0.526 95 | 0.832 91 | 0.761 95 | 0.480 106 | |
dtc_net | 0.625 87 | 0.703 86 | 0.751 65 | 0.794 51 | 0.535 92 | 0.848 27 | 0.480 54 | 0.676 86 | 0.528 80 | 0.469 90 | 0.944 76 | 0.454 70 | 0.004 120 | 0.464 97 | 0.636 83 | 0.704 49 | 0.758 58 | 0.548 86 | 0.924 30 | 0.787 84 | 0.492 103 | |
ERROR | 0.054 123 | 0.000 123 | 0.041 123 | 0.172 123 | 0.030 123 | 0.062 123 | 0.001 123 | 0.035 122 | 0.004 123 | 0.051 123 | 0.143 123 | 0.019 123 | 0.003 121 | 0.041 120 | 0.050 121 | 0.003 122 | 0.054 123 | 0.018 123 | 0.005 123 | 0.264 123 | 0.082 123 | |
GrowSP++ | 0.323 120 | 0.114 122 | 0.589 115 | 0.499 111 | 0.147 122 | 0.555 118 | 0.290 112 | 0.336 115 | 0.290 119 | 0.262 117 | 0.865 116 | 0.102 122 | 0.000 122 | 0.037 121 | 0.000 123 | 0.000 123 | 0.462 119 | 0.381 119 | 0.389 121 | 0.664 114 | 0.473 109 | |
SPLAT Net | ![]() | 0.393 117 | 0.472 117 | 0.511 118 | 0.606 102 | 0.311 118 | 0.656 108 | 0.245 116 | 0.405 111 | 0.328 117 | 0.197 120 | 0.927 99 | 0.227 118 | 0.000 122 | 0.001 123 | 0.249 112 | 0.271 120 | 0.510 115 | 0.383 118 | 0.593 116 | 0.699 108 | 0.267 119 |
Hang Su, Varun Jampani, Deqing Sun, Subhransu Maji, Evangelos Kalogerakis, Ming-Hsuan Yang, Jan Kautz: SPLATNet: Sparse Lattice Networks for Point Cloud Processing. CVPR 2018
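The scores above are per-class IoU = TP/(TP+FP+FN), evaluated per vertex on the scan mesh. A minimal sketch of that computation, assuming ground-truth and predicted labels are already mapped to integer arrays with one entry per mesh vertex (this is an illustration, not the official evaluation script):

```python
import numpy as np

def per_class_iou(gt, pred, num_classes):
    """IoU = TP / (TP + FP + FN) for each class over mesh vertices.

    gt, pred: equal-length integer label arrays (one label per vertex).
    Returns an array of length num_classes; classes absent from both
    gt and pred get NaN so they can be excluded from the average.
    """
    gt = np.asarray(gt)
    pred = np.asarray(pred)
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        tp = np.sum((gt == c) & (pred == c))  # true positives
        fp = np.sum((gt != c) & (pred == c))  # false positives
        fn = np.sum((gt == c) & (pred != c))  # false negatives
        denom = tp + fp + fn
        if denom > 0:
            ious[c] = tp / denom
    return ious

def mean_iou(gt, pred, num_classes):
    """Average IoU over the classes that occur (cf. the 'avg iou' column)."""
    return np.nanmean(per_class_iou(gt, pred, num_classes))
```

For predictions made on grids or point clouds, labels must first be transferred to the mesh vertices (e.g., by nearest-neighbor lookup), as described in the evaluation helpers mentioned above.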