3D Semantic Label Benchmark
The 3D semantic labeling task involves predicting a semantic label for every vertex of a 3D scan mesh.
Evaluation and metrics

Our evaluation ranks all methods according to the PASCAL VOC intersection-over-union metric (IoU): IoU = TP/(TP+FP+FN), where TP, FP, and FN are the numbers of true positive, false positive, and false negative vertices, respectively. Predicted labels are evaluated per vertex over the respective 3D scan mesh; for 3D approaches that operate on other representations, such as voxel grids or point clouds, the predicted labels should be mapped onto the mesh vertices (an example of mapping grid labels to mesh vertices is provided in the evaluation helpers).
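For concreteness, the metric and the grid/point-to-mesh label transfer described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the benchmark's actual evaluation helpers; the function names and the brute-force nearest-neighbour transfer are assumptions made for the example.

```python
import numpy as np

def transfer_labels(points, point_labels, vertices):
    """Map per-point predictions onto mesh vertices via nearest neighbour.

    Brute-force O(V*P) distance search; a KD-tree would be advisable
    for full-size scans.
    """
    d2 = ((vertices[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return point_labels[d2.argmin(axis=1)]

def per_class_iou(pred, gt, num_classes):
    """IoU = TP / (TP + FP + FN), computed per class over mesh vertices."""
    ious = np.full(num_classes, np.nan)  # NaN marks classes absent from pred and gt
    for c in range(num_classes):
        tp = np.sum((pred == c) & (gt == c))
        fp = np.sum((pred == c) & (gt != c))
        fn = np.sum((pred != c) & (gt == c))
        if tp + fp + fn > 0:
            ious[c] = tp / (tp + fp + fn)
    return ious

# Toy example with 3 classes over 5 vertices:
pred = np.array([0, 0, 1, 1, 2])
gt = np.array([0, 1, 1, 1, 2])
ious = per_class_iou(pred, gt, 3)  # per-class IoUs: 0.5, 2/3, 1.0
```

Averaging the per-class IoUs (ignoring NaN entries) gives the "avg iou" used to rank methods in the table below.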
This table lists the benchmark results for the 3D semantic label scenario. Each cell shows a method's IoU score followed by its rank for that column.
Method | Info | avg iou | bathtub | bed | bookshelf | cabinet | chair | counter | curtain | desk | door | floor | otherfurniture | picture | refrigerator | shower curtain | sink | sofa | table | toilet | wall | window |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
DITR ScanNet | 0.797 2 | 0.727 76 | 0.869 1 | 0.882 1 | 0.785 6 | 0.868 7 | 0.578 5 | 0.943 1 | 0.744 1 | 0.727 3 | 0.979 1 | 0.627 2 | 0.364 9 | 0.824 1 | 0.949 2 | 0.779 15 | 0.844 1 | 0.757 1 | 0.982 1 | 0.905 2 | 0.802 3 | |
Karim Abou Zeid, Kadir Yilmaz, Daan de Geus, Alexander Hermans, David Adrian, Timm Linder, Bastian Leibe: DINO in the Room: Leveraging 2D Foundation Models for 3D Segmentation. | ||||||||||||||||||||||
ODIN | | 0.744 29 | 0.658 93 | 0.752 64 | 0.870 3 | 0.714 40 | 0.843 33 | 0.569 11 | 0.919 5 | 0.703 8 | 0.622 40 | 0.949 59 | 0.591 12 | 0.343 15 | 0.736 34 | 0.784 56 | 0.816 7 | 0.838 2 | 0.672 31 | 0.918 37 | 0.854 39 | 0.725 28
Ayush Jain, Pushkal Katara, Nikolaos Gkanatsios, Adam W. Harley, Gabriel Sarch, Kriti Aggarwal, Vishrav Chaudhary, Katerina Fragkiadaki: ODIN: A Single Model for 2D and 3D Segmentation. CVPR 2024 | ||||||||||||||||||||||
OccuSeg+Semantic | 0.764 11 | 0.758 61 | 0.796 34 | 0.839 24 | 0.746 30 | 0.907 1 | 0.562 14 | 0.850 29 | 0.680 19 | 0.672 17 | 0.978 6 | 0.610 4 | 0.335 21 | 0.777 9 | 0.819 49 | 0.847 1 | 0.830 3 | 0.691 17 | 0.972 3 | 0.885 10 | 0.727 26 | |
SparseConvNet | 0.725 39 | 0.647 96 | 0.821 11 | 0.846 17 | 0.721 38 | 0.869 6 | 0.533 27 | 0.754 63 | 0.603 52 | 0.614 42 | 0.955 34 | 0.572 24 | 0.325 25 | 0.710 38 | 0.870 25 | 0.724 37 | 0.823 4 | 0.628 45 | 0.934 22 | 0.865 29 | 0.683 45 | |
TTT-KD | 0.773 7 | 0.646 97 | 0.818 16 | 0.809 41 | 0.774 10 | 0.878 3 | 0.581 3 | 0.943 1 | 0.687 15 | 0.704 7 | 0.978 6 | 0.607 6 | 0.336 19 | 0.775 11 | 0.912 8 | 0.838 4 | 0.823 4 | 0.694 15 | 0.967 4 | 0.899 4 | 0.794 6 | |
Lisa Weijler, Muhammad Jehanzeb Mirza, Leon Sick, Can Ekkazan, Pedro Hermosilla: TTT-KD: Test-Time Training for 3D Semantic Segmentation through Knowledge Distillation from Foundation Models. | ||||||||||||||||||||||
Mix3D | | 0.781 5 | 0.964 2 | 0.855 2 | 0.843 20 | 0.781 8 | 0.858 13 | 0.575 8 | 0.831 38 | 0.685 17 | 0.714 4 | 0.979 1 | 0.594 10 | 0.310 30 | 0.801 2 | 0.892 19 | 0.841 2 | 0.819 6 | 0.723 6 | 0.940 15 | 0.887 8 | 0.725 28
Alexey Nekrasov, Jonas Schult, Or Litany, Bastian Leibe, Francis Engelmann: Mix3D: Out-of-Context Data Augmentation for 3D Scenes. 3DV 2021 (Oral) | ||||||||||||||||||||||
PTv3-PPT-ALC | | 0.798 1 | 0.911 11 | 0.812 22 | 0.854 8 | 0.770 12 | 0.856 15 | 0.555 17 | 0.943 1 | 0.660 26 | 0.735 2 | 0.979 1 | 0.606 7 | 0.492 1 | 0.792 4 | 0.934 4 | 0.841 2 | 0.819 6 | 0.716 9 | 0.947 10 | 0.906 1 | 0.822 1
Guangda Ji, Silvan Weder, Francis Engelmann, Marc Pollefeys, Hermann Blum: ARKit LabelMaker: A New Scale for Indoor 3D Scene Understanding. arXiv | ||||||||||||||||||||||
BPNet | | 0.749 22 | 0.909 12 | 0.818 16 | 0.811 39 | 0.752 24 | 0.839 37 | 0.485 53 | 0.842 34 | 0.673 21 | 0.644 26 | 0.957 28 | 0.528 42 | 0.305 32 | 0.773 12 | 0.859 30 | 0.788 10 | 0.818 8 | 0.693 16 | 0.916 39 | 0.856 35 | 0.723 30
Wenbo Hu, Hengshuang Zhao, Li Jiang, Jiaya Jia, Tien-Tsin Wong: Bidirectional Projection Network for Cross Dimension Scene Understanding. CVPR 2021 (Oral) | ||||||||||||||||||||||
ClickSeg_Semantic | 0.703 45 | 0.774 53 | 0.800 30 | 0.793 52 | 0.760 18 | 0.847 29 | 0.471 57 | 0.802 51 | 0.463 99 | 0.634 35 | 0.968 14 | 0.491 53 | 0.271 55 | 0.726 36 | 0.910 9 | 0.706 47 | 0.815 9 | 0.551 82 | 0.878 67 | 0.833 49 | 0.570 82 | |
OctFormer | | 0.766 9 | 0.925 7 | 0.808 26 | 0.849 13 | 0.786 5 | 0.846 30 | 0.566 12 | 0.876 19 | 0.690 13 | 0.674 16 | 0.960 19 | 0.576 22 | 0.226 72 | 0.753 27 | 0.904 11 | 0.777 16 | 0.815 9 | 0.722 7 | 0.923 31 | 0.877 16 | 0.776 10
Peng-Shuai Wang: OctFormer: Octree-based Transformers for 3D Point Clouds. SIGGRAPH 2023 | ||||||||||||||||||||||
ResLFE_HDS | 0.772 8 | 0.939 4 | 0.824 7 | 0.854 8 | 0.771 11 | 0.840 35 | 0.564 13 | 0.900 12 | 0.686 16 | 0.677 14 | 0.961 18 | 0.537 36 | 0.348 13 | 0.769 15 | 0.903 12 | 0.785 13 | 0.815 9 | 0.676 26 | 0.939 16 | 0.880 13 | 0.772 11 | |
CU-Hybrid Net | 0.764 11 | 0.924 8 | 0.819 14 | 0.840 23 | 0.757 21 | 0.853 20 | 0.580 4 | 0.848 30 | 0.709 5 | 0.643 27 | 0.958 23 | 0.587 16 | 0.295 38 | 0.753 27 | 0.884 23 | 0.758 23 | 0.815 9 | 0.725 5 | 0.927 27 | 0.867 27 | 0.743 19 | |
O-CNN | | 0.762 13 | 0.924 8 | 0.823 8 | 0.844 19 | 0.770 12 | 0.852 22 | 0.577 6 | 0.847 32 | 0.711 4 | 0.640 31 | 0.958 23 | 0.592 11 | 0.217 78 | 0.762 20 | 0.888 20 | 0.758 23 | 0.813 13 | 0.726 4 | 0.932 25 | 0.868 26 | 0.744 18
Peng-Shuai Wang, Yang Liu, Yu-Xiao Guo, Chun-Yu Sun, Xin Tong: O-CNN: Octree-based Convolutional Neural Networks for 3D Shape Analysis. SIGGRAPH 2017 | ||||||||||||||||||||||
LSK3DNet | | 0.755 17 | 0.899 16 | 0.823 8 | 0.843 20 | 0.764 16 | 0.838 38 | 0.584 2 | 0.845 33 | 0.717 2 | 0.638 33 | 0.956 30 | 0.580 21 | 0.229 71 | 0.640 48 | 0.900 14 | 0.750 26 | 0.813 13 | 0.729 3 | 0.920 35 | 0.872 24 | 0.757 14
Tuo Feng, Wenguan Wang, Fan Ma, Yi Yang: LSK3DNet: Towards Effective and Efficient 3D Perception with Large Sparse Kernels. CVPR 2024 | ||||||||||||||||||||||
VMNet | | 0.746 26 | 0.870 21 | 0.838 3 | 0.858 6 | 0.729 36 | 0.850 24 | 0.501 42 | 0.874 20 | 0.587 59 | 0.658 21 | 0.956 30 | 0.564 27 | 0.299 35 | 0.765 19 | 0.900 14 | 0.716 42 | 0.812 15 | 0.631 44 | 0.939 16 | 0.858 33 | 0.709 37
Zeyu Hu, Xuyang Bai, Jiaxiang Shang, Runze Zhang, Jiayu Dong, Xin Wang, Guangyuan Sun, Hongbo Fu, Chiew-Lan Tai: VMNet: Voxel-Mesh Network for Geodesic-Aware 3D Semantic Segmentation. ICCV 2021 (Oral) | ||||||||||||||||||||||
PicassoNet-II | | 0.692 50 | 0.732 72 | 0.772 50 | 0.786 53 | 0.677 49 | 0.866 9 | 0.517 34 | 0.848 30 | 0.509 85 | 0.626 37 | 0.952 49 | 0.536 37 | 0.225 74 | 0.545 80 | 0.704 71 | 0.689 57 | 0.810 16 | 0.564 75 | 0.903 47 | 0.854 39 | 0.729 23
Huan Lei, Naveed Akhtar, Mubarak Shah, and Ajmal Mian: Geometric feature learning for 3D meshes. | ||||||||||||||||||||||
online3d | 0.727 38 | 0.715 83 | 0.777 48 | 0.854 8 | 0.748 29 | 0.858 13 | 0.497 47 | 0.872 21 | 0.572 65 | 0.639 32 | 0.957 28 | 0.523 43 | 0.297 37 | 0.750 30 | 0.803 53 | 0.744 28 | 0.810 16 | 0.587 66 | 0.938 18 | 0.871 25 | 0.719 32 | |
One Thing One Click | 0.701 47 | 0.825 35 | 0.796 34 | 0.723 68 | 0.716 39 | 0.832 45 | 0.433 80 | 0.816 44 | 0.634 37 | 0.609 45 | 0.969 12 | 0.418 88 | 0.344 14 | 0.559 74 | 0.833 44 | 0.715 43 | 0.808 18 | 0.560 76 | 0.902 48 | 0.847 44 | 0.680 46 | |
Swin3D | | 0.779 6 | 0.861 23 | 0.818 16 | 0.836 26 | 0.790 3 | 0.875 4 | 0.576 7 | 0.905 10 | 0.704 7 | 0.739 1 | 0.969 12 | 0.611 3 | 0.349 12 | 0.756 25 | 0.958 1 | 0.702 51 | 0.805 19 | 0.708 10 | 0.916 39 | 0.898 5 | 0.801 4
RPN | 0.736 35 | 0.776 51 | 0.790 39 | 0.851 11 | 0.754 23 | 0.854 18 | 0.491 52 | 0.866 24 | 0.596 56 | 0.686 9 | 0.955 34 | 0.536 37 | 0.342 16 | 0.624 55 | 0.869 26 | 0.787 11 | 0.802 20 | 0.628 45 | 0.927 27 | 0.875 20 | 0.704 39 | |
PPT-SpUNet-Joint | 0.766 9 | 0.932 5 | 0.794 36 | 0.829 31 | 0.751 26 | 0.854 18 | 0.540 25 | 0.903 11 | 0.630 39 | 0.672 17 | 0.963 16 | 0.565 26 | 0.357 10 | 0.788 5 | 0.900 14 | 0.737 31 | 0.802 20 | 0.685 20 | 0.950 8 | 0.887 8 | 0.780 8 | |
Xiaoyang Wu, Zhuotao Tian, Xin Wen, Bohao Peng, Xihui Liu, Kaicheng Yu, Hengshuang Zhao: Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training. CVPR 2024 | ||||||||||||||||||||||
MSP | 0.748 24 | 0.623 100 | 0.804 28 | 0.859 5 | 0.745 31 | 0.824 54 | 0.501 42 | 0.912 8 | 0.690 13 | 0.685 10 | 0.956 30 | 0.567 25 | 0.320 27 | 0.768 17 | 0.918 7 | 0.720 39 | 0.802 20 | 0.676 26 | 0.921 33 | 0.881 12 | 0.779 9 | |
ConDaFormer | 0.755 17 | 0.927 6 | 0.822 10 | 0.836 26 | 0.801 1 | 0.849 25 | 0.516 35 | 0.864 26 | 0.651 30 | 0.680 13 | 0.958 23 | 0.584 19 | 0.282 45 | 0.759 23 | 0.855 35 | 0.728 34 | 0.802 20 | 0.678 22 | 0.880 66 | 0.873 23 | 0.756 16 | |
Lunhao Duan, Shanshan Zhao, Nan Xue, Mingming Gong, Guisong Xia, Dacheng Tao: ConDaFormer: Disassembled Transformer with Local Structure Enhancement for 3D Point Cloud Understanding. NeurIPS 2023 | ||||||||||||||||||||||
PNE | 0.755 17 | 0.786 45 | 0.835 5 | 0.834 28 | 0.758 19 | 0.849 25 | 0.570 10 | 0.836 37 | 0.648 32 | 0.668 19 | 0.978 6 | 0.581 20 | 0.367 7 | 0.683 39 | 0.856 33 | 0.804 8 | 0.801 24 | 0.678 22 | 0.961 6 | 0.889 7 | 0.716 35 | |
P. Hermosilla: Point Neighborhood Embeddings. | ||||||||||||||||||||||
EQ-Net | 0.743 31 | 0.620 101 | 0.799 33 | 0.849 13 | 0.730 35 | 0.822 56 | 0.493 50 | 0.897 14 | 0.664 23 | 0.681 12 | 0.955 34 | 0.562 29 | 0.378 4 | 0.760 21 | 0.903 12 | 0.738 30 | 0.801 24 | 0.673 30 | 0.907 43 | 0.877 16 | 0.745 17 | |
Zetong Yang*, Li Jiang*, Yanan Sun, Bernt Schiele, Jiaya Jia: A Unified Query-based Paradigm for Point Cloud Understanding. CVPR 2022 | ||||||||||||||||||||||
Retro-FPN | 0.744 29 | 0.842 30 | 0.800 30 | 0.767 61 | 0.740 32 | 0.836 41 | 0.541 23 | 0.914 7 | 0.672 22 | 0.626 37 | 0.958 23 | 0.552 33 | 0.272 53 | 0.777 9 | 0.886 22 | 0.696 52 | 0.801 24 | 0.674 29 | 0.941 14 | 0.858 33 | 0.717 33 | |
Peng Xiang*, Xin Wen*, Yu-Shen Liu, Hui Zhang, Yi Fang, Zhizhong Han: Retrospective Feature Pyramid Network for Point Cloud Semantic Segmentation. ICCV 2023 | ||||||||||||||||||||||
DiffSeg3D2 | 0.745 28 | 0.725 78 | 0.814 20 | 0.837 25 | 0.751 26 | 0.831 46 | 0.514 36 | 0.896 15 | 0.674 20 | 0.684 11 | 0.960 19 | 0.564 27 | 0.303 34 | 0.773 12 | 0.820 48 | 0.713 45 | 0.798 27 | 0.690 19 | 0.923 31 | 0.875 20 | 0.757 14 | |
PointTransformerV2 | 0.752 20 | 0.742 68 | 0.809 25 | 0.872 2 | 0.758 19 | 0.860 12 | 0.552 18 | 0.891 17 | 0.610 46 | 0.687 8 | 0.960 19 | 0.559 30 | 0.304 33 | 0.766 18 | 0.926 6 | 0.767 20 | 0.797 28 | 0.644 38 | 0.942 13 | 0.876 19 | 0.722 31 | |
Xiaoyang Wu, Yixing Lao, Li Jiang, Xihui Liu, Hengshuang Zhao: Point Transformer V2: Grouped Vector Attention and Partition-based Pooling. NeurIPS 2022 | ||||||||||||||||||||||
contrastBoundary | | 0.705 44 | 0.769 58 | 0.775 49 | 0.809 41 | 0.687 46 | 0.820 59 | 0.439 78 | 0.812 48 | 0.661 25 | 0.591 56 | 0.945 70 | 0.515 46 | 0.171 97 | 0.633 52 | 0.856 33 | 0.720 39 | 0.796 29 | 0.668 32 | 0.889 58 | 0.847 44 | 0.689 43
Liyao Tang, Yibing Zhan, Zhe Chen, Baosheng Yu, Dacheng Tao: Contrastive Boundary Learning for Point Cloud Segmentation. CVPR 2022 | ||||||||||||||||||||||
Virtual MVFusion | 0.746 26 | 0.771 55 | 0.819 14 | 0.848 15 | 0.702 43 | 0.865 10 | 0.397 90 | 0.899 13 | 0.699 9 | 0.664 20 | 0.948 62 | 0.588 15 | 0.330 23 | 0.746 32 | 0.851 39 | 0.764 21 | 0.796 29 | 0.704 12 | 0.935 21 | 0.866 28 | 0.728 24 | |
Abhijit Kundu, Xiaoqi Yin, Alireza Fathi, David Ross, Brian Brewington, Thomas Funkhouser, Caroline Pantofaru: Virtual Multi-view Fusion for 3D Semantic Segmentation. ECCV 2020 | ||||||||||||||||||||||
PointConvFormer | 0.749 22 | 0.793 43 | 0.790 39 | 0.807 43 | 0.750 28 | 0.856 15 | 0.524 31 | 0.881 18 | 0.588 58 | 0.642 30 | 0.977 10 | 0.591 12 | 0.274 51 | 0.781 7 | 0.929 5 | 0.804 8 | 0.796 29 | 0.642 39 | 0.947 10 | 0.885 10 | 0.715 36 | |
Wenxuan Wu, Qi Shan, Li Fuxin: PointConvFormer: Revenge of the Point-based Convolution. | ||||||||||||||||||||||
DMF-Net | 0.752 20 | 0.906 14 | 0.793 38 | 0.802 47 | 0.689 45 | 0.825 52 | 0.556 16 | 0.867 23 | 0.681 18 | 0.602 50 | 0.960 19 | 0.555 32 | 0.365 8 | 0.779 8 | 0.859 30 | 0.747 27 | 0.795 32 | 0.717 8 | 0.917 38 | 0.856 35 | 0.764 12 | |
C. Yang, Y. Yan, W. Zhao, J. Ye, X. Yang, A. Hussain, B. Dong, K. Huang: Towards Deeper and Better Multi-view Feature Fusion for 3D Semantic Segmentation. ICONIP 2023 | ||||||||||||||||||||||
INS-Conv-semantic | 0.717 42 | 0.751 64 | 0.759 58 | 0.812 38 | 0.704 42 | 0.868 7 | 0.537 26 | 0.842 34 | 0.609 48 | 0.608 46 | 0.953 44 | 0.534 39 | 0.293 39 | 0.616 58 | 0.864 28 | 0.719 41 | 0.793 33 | 0.640 40 | 0.933 23 | 0.845 47 | 0.663 50 | |
PointContrast_LA_SEM | 0.683 57 | 0.757 62 | 0.784 44 | 0.786 53 | 0.639 62 | 0.824 54 | 0.408 85 | 0.775 56 | 0.604 51 | 0.541 65 | 0.934 94 | 0.532 40 | 0.269 57 | 0.552 77 | 0.777 57 | 0.645 76 | 0.793 33 | 0.640 40 | 0.913 42 | 0.824 54 | 0.671 48 | |
PonderV2 | 0.785 4 | 0.978 1 | 0.800 30 | 0.833 29 | 0.788 4 | 0.853 20 | 0.545 21 | 0.910 9 | 0.713 3 | 0.705 6 | 0.979 1 | 0.596 9 | 0.390 2 | 0.769 15 | 0.832 45 | 0.821 5 | 0.792 35 | 0.730 2 | 0.975 2 | 0.897 6 | 0.785 7 | |
Haoyi Zhu, Honghui Yang, Xiaoyang Wu, Di Huang, Sha Zhang, Xianglong He, Tong He, Hengshuang Zhao, Chunhua Shen, Yu Qiao, Wanli Ouyang: PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm. | ||||||||||||||||||||||
PTv3 ScanNet | 0.794 3 | 0.941 3 | 0.813 21 | 0.851 11 | 0.782 7 | 0.890 2 | 0.597 1 | 0.916 6 | 0.696 11 | 0.713 5 | 0.979 1 | 0.635 1 | 0.384 3 | 0.793 3 | 0.907 10 | 0.821 5 | 0.790 36 | 0.696 14 | 0.967 4 | 0.903 3 | 0.805 2 | |
Xiaoyang Wu, Li Jiang, Peng-Shuai Wang, Zhijian Liu, Xihui Liu, Yu Qiao, Wanli Ouyang, Tong He, Hengshuang Zhao: Point Transformer V3: Simpler, Faster, Stronger. CVPR 2024 (Oral) | ||||||||||||||||||||||
One-Thing-One-Click | 0.693 49 | 0.743 67 | 0.794 36 | 0.655 91 | 0.684 47 | 0.822 56 | 0.497 47 | 0.719 73 | 0.622 41 | 0.617 41 | 0.977 10 | 0.447 75 | 0.339 17 | 0.750 30 | 0.664 81 | 0.703 50 | 0.790 36 | 0.596 59 | 0.946 12 | 0.855 37 | 0.647 55 | |
Zhengzhe Liu, Xiaojuan Qi, Chi-Wing Fu: One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation. CVPR 2021 | ||||||||||||||||||||||
OA-CNN-L_ScanNet20 | 0.756 16 | 0.783 47 | 0.826 6 | 0.858 6 | 0.776 9 | 0.837 39 | 0.548 20 | 0.896 15 | 0.649 31 | 0.675 15 | 0.962 17 | 0.586 17 | 0.335 21 | 0.771 14 | 0.802 54 | 0.770 19 | 0.787 38 | 0.691 17 | 0.936 20 | 0.880 13 | 0.761 13 | |
KP-FCNN | 0.684 54 | 0.847 28 | 0.758 60 | 0.784 55 | 0.647 58 | 0.814 66 | 0.473 56 | 0.772 57 | 0.605 50 | 0.594 55 | 0.935 90 | 0.450 73 | 0.181 95 | 0.587 63 | 0.805 52 | 0.690 55 | 0.785 39 | 0.614 51 | 0.882 63 | 0.819 59 | 0.632 61 | |
H. Thomas, C. Qi, J. Deschaud, B. Marcotegui, F. Goulette, L. Guibas: KPConv: Flexible and Deformable Convolution for Point Clouds. ICCV 2019 | ||||||||||||||||||||||
MatchingNet | 0.724 41 | 0.812 40 | 0.812 22 | 0.810 40 | 0.735 34 | 0.834 43 | 0.495 49 | 0.860 27 | 0.572 65 | 0.602 50 | 0.954 40 | 0.512 47 | 0.280 47 | 0.757 24 | 0.845 41 | 0.725 36 | 0.780 40 | 0.606 55 | 0.937 19 | 0.851 42 | 0.700 41 | |
DiffSegNet | 0.758 14 | 0.725 78 | 0.789 41 | 0.843 20 | 0.762 17 | 0.856 15 | 0.562 14 | 0.920 4 | 0.657 29 | 0.658 21 | 0.958 23 | 0.589 14 | 0.337 18 | 0.782 6 | 0.879 24 | 0.787 11 | 0.779 41 | 0.678 22 | 0.926 29 | 0.880 13 | 0.799 5 | |
RFCR | 0.702 46 | 0.889 18 | 0.745 69 | 0.813 37 | 0.672 50 | 0.818 63 | 0.493 50 | 0.815 46 | 0.623 40 | 0.610 44 | 0.947 64 | 0.470 62 | 0.249 66 | 0.594 62 | 0.848 40 | 0.705 48 | 0.779 41 | 0.646 37 | 0.892 56 | 0.823 55 | 0.611 65 | |
Jingyu Gong, Jiachen Xu, Xin Tan, Haichuan Song, Yanyun Qu, Yuan Xie, Lizhuang Ma: Omni-Supervised Point Cloud Segmentation via Gradual Receptive Field Component Reasoning. CVPR 2021 | ||||||||||||||||||||||
IPCA | 0.731 37 | 0.890 17 | 0.837 4 | 0.864 4 | 0.726 37 | 0.873 5 | 0.530 30 | 0.824 42 | 0.489 92 | 0.647 24 | 0.978 6 | 0.609 5 | 0.336 19 | 0.624 55 | 0.733 64 | 0.758 23 | 0.776 43 | 0.570 70 | 0.949 9 | 0.877 16 | 0.728 24 | |
ROSMRF3D | 0.673 60 | 0.789 44 | 0.748 66 | 0.763 63 | 0.635 64 | 0.814 66 | 0.407 87 | 0.747 65 | 0.581 63 | 0.573 58 | 0.950 55 | 0.484 56 | 0.271 55 | 0.607 59 | 0.754 60 | 0.649 70 | 0.774 44 | 0.596 59 | 0.883 62 | 0.823 55 | 0.606 69 | |
LRPNet | 0.742 32 | 0.816 38 | 0.806 27 | 0.807 43 | 0.752 24 | 0.828 50 | 0.575 8 | 0.839 36 | 0.699 9 | 0.637 34 | 0.954 40 | 0.520 45 | 0.320 27 | 0.755 26 | 0.834 43 | 0.760 22 | 0.772 45 | 0.676 26 | 0.915 41 | 0.862 30 | 0.717 33 | |
MinkowskiNet | | 0.736 35 | 0.859 25 | 0.818 16 | 0.832 30 | 0.709 41 | 0.840 35 | 0.521 33 | 0.853 28 | 0.660 26 | 0.643 27 | 0.951 51 | 0.544 34 | 0.286 43 | 0.731 35 | 0.893 18 | 0.675 60 | 0.772 45 | 0.683 21 | 0.874 72 | 0.852 41 | 0.727 26
C. Choy, J. Gwak, S. Savarese: 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. CVPR 2019 | ||||||||||||||||||||||
FusionNet | 0.688 52 | 0.704 85 | 0.741 73 | 0.754 65 | 0.656 53 | 0.829 48 | 0.501 42 | 0.741 68 | 0.609 48 | 0.548 63 | 0.950 55 | 0.522 44 | 0.371 5 | 0.633 52 | 0.756 59 | 0.715 43 | 0.771 47 | 0.623 48 | 0.861 83 | 0.814 61 | 0.658 51 | |
Feihu Zhang, Jin Fang, Benjamin Wah, Philip Torr: Deep FusionNet for Point Cloud Semantic Segmentation. ECCV 2020 | ||||||||||||||||||||||
StratifiedFormer | | 0.747 25 | 0.901 15 | 0.803 29 | 0.845 18 | 0.757 21 | 0.846 30 | 0.512 37 | 0.825 41 | 0.696 11 | 0.645 25 | 0.956 30 | 0.576 22 | 0.262 62 | 0.744 33 | 0.861 29 | 0.742 29 | 0.770 48 | 0.705 11 | 0.899 51 | 0.860 32 | 0.734 21
Xin Lai*, Jianhui Liu*, Li Jiang, Liwei Wang, Hengshuang Zhao, Shu Liu, Xiaojuan Qi, Jiaya Jia: Stratified Transformer for 3D Point Cloud Segmentation. CVPR 2022 | ||||||||||||||||||||||
LargeKernel3D | 0.739 34 | 0.909 12 | 0.820 12 | 0.806 45 | 0.740 32 | 0.852 22 | 0.545 21 | 0.826 40 | 0.594 57 | 0.643 27 | 0.955 34 | 0.541 35 | 0.263 61 | 0.723 37 | 0.858 32 | 0.775 18 | 0.767 49 | 0.678 22 | 0.933 23 | 0.848 43 | 0.694 42 | |
Yukang Chen*, Jianhui Liu*, Xiangyu Zhang, Xiaojuan Qi, Jiaya Jia: LargeKernel3D: Scaling up Kernels in 3D Sparse CNNs. CVPR 2023 | ||||||||||||||||||||||
DTC | 0.757 15 | 0.843 29 | 0.820 12 | 0.847 16 | 0.791 2 | 0.862 11 | 0.511 38 | 0.870 22 | 0.707 6 | 0.652 23 | 0.954 40 | 0.604 8 | 0.279 48 | 0.760 21 | 0.942 3 | 0.734 32 | 0.766 50 | 0.701 13 | 0.884 61 | 0.874 22 | 0.736 20 | |
Feature_GeometricNet | | 0.690 51 | 0.884 19 | 0.754 62 | 0.795 50 | 0.647 58 | 0.818 63 | 0.422 82 | 0.802 51 | 0.612 45 | 0.604 48 | 0.945 70 | 0.462 65 | 0.189 92 | 0.563 73 | 0.853 37 | 0.726 35 | 0.765 51 | 0.632 43 | 0.904 45 | 0.821 58 | 0.606 69
Kangcheng Liu, Ben M. Chen: https://arxiv.org/abs/2012.09439. arXiv Preprint | ||||||||||||||||||||||
PointMetaBase | 0.714 43 | 0.835 31 | 0.785 43 | 0.821 32 | 0.684 47 | 0.846 30 | 0.531 29 | 0.865 25 | 0.614 43 | 0.596 54 | 0.953 44 | 0.500 50 | 0.246 67 | 0.674 40 | 0.888 20 | 0.692 53 | 0.764 52 | 0.624 47 | 0.849 87 | 0.844 48 | 0.675 47 | |
joint point-based | | 0.634 78 | 0.614 102 | 0.778 47 | 0.667 88 | 0.633 65 | 0.825 52 | 0.420 83 | 0.804 49 | 0.467 97 | 0.561 60 | 0.951 51 | 0.494 51 | 0.291 40 | 0.566 71 | 0.458 99 | 0.579 96 | 0.764 52 | 0.559 78 | 0.838 89 | 0.814 61 | 0.598 74
Hung-Yueh Chiang, Yen-Liang Lin, Yueh-Cheng Liu, Winston H. Hsu: A Unified Point-Based Framework for 3D Segmentation. 3DV 2019 | ||||||||||||||||||||||
DGNet | 0.684 54 | 0.712 84 | 0.784 44 | 0.782 57 | 0.658 52 | 0.835 42 | 0.499 46 | 0.823 43 | 0.641 34 | 0.597 53 | 0.950 55 | 0.487 55 | 0.281 46 | 0.575 68 | 0.619 85 | 0.647 73 | 0.764 52 | 0.620 50 | 0.871 78 | 0.846 46 | 0.688 44 | |
Superpoint Network | 0.683 57 | 0.851 27 | 0.728 77 | 0.800 49 | 0.653 55 | 0.806 72 | 0.468 59 | 0.804 49 | 0.572 65 | 0.602 50 | 0.946 67 | 0.453 72 | 0.239 70 | 0.519 85 | 0.822 46 | 0.689 57 | 0.762 55 | 0.595 61 | 0.895 54 | 0.827 53 | 0.630 62 | |
SAT | 0.742 32 | 0.860 24 | 0.765 55 | 0.819 34 | 0.769 14 | 0.848 27 | 0.533 27 | 0.829 39 | 0.663 24 | 0.631 36 | 0.955 34 | 0.586 17 | 0.274 51 | 0.753 27 | 0.896 17 | 0.729 33 | 0.760 56 | 0.666 33 | 0.921 33 | 0.855 37 | 0.733 22 | |
DenSeR | 0.628 84 | 0.800 41 | 0.625 106 | 0.719 71 | 0.545 90 | 0.806 72 | 0.445 72 | 0.597 96 | 0.448 102 | 0.519 77 | 0.938 86 | 0.481 57 | 0.328 24 | 0.489 93 | 0.499 98 | 0.657 68 | 0.759 57 | 0.592 62 | 0.881 64 | 0.797 72 | 0.634 60 | |
JSENet | | 0.699 48 | 0.881 20 | 0.762 56 | 0.821 32 | 0.667 51 | 0.800 76 | 0.522 32 | 0.792 54 | 0.613 44 | 0.607 47 | 0.935 90 | 0.492 52 | 0.205 84 | 0.576 67 | 0.853 37 | 0.691 54 | 0.758 58 | 0.652 35 | 0.872 75 | 0.828 52 | 0.649 54
Zeyu Hu, Mingmin Zhen, Xuyang Bai, Hongbo Fu, Chiew-Lan Tai: JSENet: Joint Semantic Segmentation and Edge Detection Network for 3D Point Clouds. ECCV 2020 | ||||||||||||||||||||||
dtc_net | 0.625 86 | 0.703 86 | 0.751 65 | 0.794 51 | 0.535 91 | 0.848 27 | 0.480 54 | 0.676 85 | 0.528 79 | 0.469 89 | 0.944 76 | 0.454 69 | 0.004 119 | 0.464 96 | 0.636 83 | 0.704 49 | 0.758 58 | 0.548 85 | 0.924 30 | 0.787 83 | 0.492 102 | |
PointTransformer++ | 0.725 39 | 0.727 76 | 0.811 24 | 0.819 34 | 0.765 15 | 0.841 34 | 0.502 41 | 0.814 47 | 0.621 42 | 0.623 39 | 0.955 34 | 0.556 31 | 0.284 44 | 0.620 57 | 0.866 27 | 0.781 14 | 0.757 60 | 0.648 36 | 0.932 25 | 0.862 30 | 0.709 37 | |
PointConv | | 0.666 63 | 0.781 48 | 0.759 58 | 0.699 76 | 0.644 61 | 0.822 56 | 0.475 55 | 0.779 55 | 0.564 71 | 0.504 82 | 0.953 44 | 0.428 82 | 0.203 86 | 0.586 65 | 0.754 60 | 0.661 66 | 0.753 61 | 0.588 65 | 0.902 48 | 0.813 63 | 0.642 57
Wenxuan Wu, Zhongang Qi, Li Fuxin: PointConv: Deep Convolutional Networks on 3D Point Clouds. CVPR 2019 | ||||||||||||||||||||||
PointASNL | | 0.666 63 | 0.703 86 | 0.781 46 | 0.751 67 | 0.655 54 | 0.830 47 | 0.471 57 | 0.769 58 | 0.474 95 | 0.537 67 | 0.951 51 | 0.475 60 | 0.279 48 | 0.635 50 | 0.698 74 | 0.675 60 | 0.751 62 | 0.553 81 | 0.816 94 | 0.806 65 | 0.703 40
Xu Yan, Chaoda Zheng, Zhen Li, Sheng Wang, Shuguang Cui: PointASNL: Robust Point Clouds Processing using Nonlocal Neural Networks with Adaptive Sampling. CVPR 2020 | ||||||||||||||||||||||
O3DSeg | 0.668 62 | 0.822 36 | 0.771 52 | 0.496 111 | 0.651 57 | 0.833 44 | 0.541 23 | 0.761 60 | 0.555 74 | 0.611 43 | 0.966 15 | 0.489 54 | 0.370 6 | 0.388 104 | 0.580 88 | 0.776 17 | 0.751 62 | 0.570 70 | 0.956 7 | 0.817 60 | 0.646 56 | |
VACNN++ | 0.684 54 | 0.728 75 | 0.757 61 | 0.776 58 | 0.690 44 | 0.804 74 | 0.464 62 | 0.816 44 | 0.577 64 | 0.587 57 | 0.945 70 | 0.508 49 | 0.276 50 | 0.671 41 | 0.710 69 | 0.663 65 | 0.750 64 | 0.589 64 | 0.881 64 | 0.832 51 | 0.653 53 | |
SALANet | 0.670 61 | 0.816 38 | 0.770 53 | 0.768 60 | 0.652 56 | 0.807 71 | 0.451 65 | 0.747 65 | 0.659 28 | 0.545 64 | 0.924 100 | 0.473 61 | 0.149 107 | 0.571 70 | 0.811 51 | 0.635 80 | 0.746 65 | 0.623 48 | 0.892 56 | 0.794 74 | 0.570 82 | |
Weakly-Openseg v3 | 0.625 86 | 0.924 8 | 0.787 42 | 0.620 99 | 0.555 89 | 0.811 70 | 0.393 92 | 0.666 87 | 0.382 110 | 0.520 76 | 0.953 44 | 0.250 114 | 0.208 81 | 0.604 60 | 0.670 77 | 0.644 77 | 0.742 66 | 0.538 91 | 0.919 36 | 0.803 67 | 0.513 100 | |
RandLA-Net | | 0.645 69 | 0.778 49 | 0.731 76 | 0.699 76 | 0.577 80 | 0.829 48 | 0.446 70 | 0.736 69 | 0.477 94 | 0.523 75 | 0.945 70 | 0.454 69 | 0.269 57 | 0.484 94 | 0.749 63 | 0.618 83 | 0.738 67 | 0.599 58 | 0.827 91 | 0.792 79 | 0.621 64
PointNet2-SFPN | 0.631 80 | 0.771 55 | 0.692 91 | 0.672 84 | 0.524 93 | 0.837 39 | 0.440 77 | 0.706 79 | 0.538 76 | 0.446 94 | 0.944 76 | 0.421 87 | 0.219 77 | 0.552 77 | 0.751 62 | 0.591 92 | 0.737 68 | 0.543 88 | 0.901 50 | 0.768 91 | 0.557 89 | |
HPGCNN | 0.656 67 | 0.698 88 | 0.743 71 | 0.650 92 | 0.564 84 | 0.820 59 | 0.505 40 | 0.758 61 | 0.631 38 | 0.479 86 | 0.945 70 | 0.480 58 | 0.226 72 | 0.572 69 | 0.774 58 | 0.690 55 | 0.735 69 | 0.614 51 | 0.853 86 | 0.776 89 | 0.597 75 | |
Jisheng Dang, Qingyong Hu, Yulan Guo, Jun Yang: HPGCNN. | ||||||||||||||||||||||
wsss-transformer | 0.600 92 | 0.634 98 | 0.743 71 | 0.697 78 | 0.601 72 | 0.781 85 | 0.437 79 | 0.585 99 | 0.493 89 | 0.446 94 | 0.933 95 | 0.394 93 | 0.011 118 | 0.654 44 | 0.661 82 | 0.603 87 | 0.733 70 | 0.526 94 | 0.832 90 | 0.761 94 | 0.480 105 | |
Feature-Geometry Net | | 0.685 53 | 0.866 22 | 0.748 66 | 0.819 34 | 0.645 60 | 0.794 79 | 0.450 68 | 0.802 51 | 0.587 59 | 0.604 48 | 0.945 70 | 0.464 64 | 0.201 87 | 0.554 76 | 0.840 42 | 0.723 38 | 0.732 71 | 0.602 57 | 0.907 43 | 0.822 57 | 0.603 72
PD-Net | 0.638 74 | 0.797 42 | 0.769 54 | 0.641 97 | 0.590 75 | 0.820 59 | 0.461 63 | 0.537 105 | 0.637 36 | 0.536 68 | 0.947 64 | 0.388 95 | 0.206 83 | 0.656 43 | 0.668 79 | 0.647 73 | 0.732 71 | 0.585 67 | 0.868 80 | 0.793 76 | 0.473 108 | |
DCM-Net | 0.658 66 | 0.778 49 | 0.702 83 | 0.806 45 | 0.619 67 | 0.813 69 | 0.468 59 | 0.693 81 | 0.494 88 | 0.524 73 | 0.941 82 | 0.449 74 | 0.298 36 | 0.510 87 | 0.821 47 | 0.675 60 | 0.727 73 | 0.568 73 | 0.826 92 | 0.803 67 | 0.637 59 | |
Jonas Schult*, Francis Engelmann*, Theodora Kontogianni, Bastian Leibe: DualConvMesh-Net: Joint Geodesic and Euclidean Convolutions on 3D Meshes. CVPR 2020 (Oral) | ||||||||||||||||||||||
VI-PointConv | 0.676 59 | 0.770 57 | 0.754 62 | 0.783 56 | 0.621 66 | 0.814 66 | 0.552 18 | 0.758 61 | 0.571 68 | 0.557 61 | 0.954 40 | 0.529 41 | 0.268 59 | 0.530 83 | 0.682 75 | 0.675 60 | 0.719 74 | 0.603 56 | 0.888 59 | 0.833 49 | 0.665 49 | |
Xingyi Li, Wenxuan Wu, Xiaoli Z. Fern, Li Fuxin: The Devils in the Point Clouds: Studying the Robustness of Point Cloud Convolutions. | ||||||||||||||||||||||
AttAN | 0.609 91 | 0.760 60 | 0.667 98 | 0.649 93 | 0.521 94 | 0.793 80 | 0.457 64 | 0.648 90 | 0.528 79 | 0.434 99 | 0.947 64 | 0.401 92 | 0.153 105 | 0.454 97 | 0.721 68 | 0.648 72 | 0.717 75 | 0.536 92 | 0.904 45 | 0.765 92 | 0.485 104 | |
Gege Zhang, Qinghua Ma, Licheng Jiao, Fang Liu and Qigong Sun: AttAN: Attention Adversarial Networks for 3D Point Cloud Semantic Segmentation. IJCAI 2020 | ||||||||||||||||||||||
GMLPs | 0.538 102 | 0.495 112 | 0.693 90 | 0.647 94 | 0.471 101 | 0.793 80 | 0.300 107 | 0.477 108 | 0.505 86 | 0.358 106 | 0.903 110 | 0.327 104 | 0.081 113 | 0.472 95 | 0.529 94 | 0.448 109 | 0.710 76 | 0.509 95 | 0.746 101 | 0.737 100 | 0.554 91 | |
PointMRNet | 0.640 72 | 0.717 82 | 0.701 84 | 0.692 79 | 0.576 81 | 0.801 75 | 0.467 61 | 0.716 74 | 0.563 72 | 0.459 92 | 0.953 44 | 0.429 81 | 0.169 99 | 0.581 66 | 0.854 36 | 0.605 86 | 0.710 76 | 0.550 83 | 0.894 55 | 0.793 76 | 0.575 80 | |
PointConv-SFPN | 0.641 70 | 0.776 51 | 0.703 82 | 0.721 70 | 0.557 87 | 0.826 51 | 0.451 65 | 0.672 86 | 0.563 72 | 0.483 85 | 0.943 79 | 0.425 85 | 0.162 102 | 0.644 47 | 0.726 65 | 0.659 67 | 0.709 78 | 0.572 69 | 0.875 70 | 0.786 84 | 0.559 88 | |
SAFNet-seg | | 0.654 68 | 0.752 63 | 0.734 75 | 0.664 89 | 0.583 79 | 0.815 65 | 0.399 89 | 0.754 63 | 0.639 35 | 0.535 69 | 0.942 80 | 0.470 62 | 0.309 31 | 0.665 42 | 0.539 91 | 0.650 69 | 0.708 79 | 0.635 42 | 0.857 85 | 0.793 76 | 0.642 57
Linqing Zhao, Jiwen Lu, Jie Zhou: Similarity-Aware Fusion Network for 3D Semantic Segmentation. IROS 2021 | ||||||||||||||||||||||
APCF-Net | 0.631 80 | 0.742 68 | 0.687 96 | 0.672 84 | 0.557 87 | 0.792 82 | 0.408 85 | 0.665 88 | 0.545 75 | 0.508 79 | 0.952 49 | 0.428 82 | 0.186 93 | 0.634 51 | 0.702 72 | 0.620 82 | 0.706 80 | 0.555 80 | 0.873 73 | 0.798 71 | 0.581 78 | |
Haojia Lin: Adaptive Pyramid Context Fusion for Point Cloud Perception. GRSL | ||||||||||||||||||||||
SPH3D-GCN | | 0.610 90 | 0.858 26 | 0.772 50 | 0.489 112 | 0.532 92 | 0.792 82 | 0.404 88 | 0.643 92 | 0.570 69 | 0.507 81 | 0.935 90 | 0.414 89 | 0.046 116 | 0.510 87 | 0.702 72 | 0.602 88 | 0.705 81 | 0.549 84 | 0.859 84 | 0.773 90 | 0.534 95
Huan Lei, Naveed Akhtar, and Ajmal Mian: Spherical Kernel for Efficient Graph Convolution on 3D Point Clouds. TPAMI 2020 | ||||||||||||||||||||||
3DSM_DMMF | 0.631 80 | 0.626 99 | 0.745 69 | 0.801 48 | 0.607 69 | 0.751 97 | 0.506 39 | 0.729 72 | 0.565 70 | 0.491 84 | 0.866 114 | 0.434 77 | 0.197 90 | 0.595 61 | 0.630 84 | 0.709 46 | 0.705 81 | 0.560 76 | 0.875 70 | 0.740 99 | 0.491 103 | |
PointSPNet | 0.637 75 | 0.734 71 | 0.692 91 | 0.714 73 | 0.576 81 | 0.797 78 | 0.446 70 | 0.743 67 | 0.598 55 | 0.437 97 | 0.942 80 | 0.403 91 | 0.150 106 | 0.626 54 | 0.800 55 | 0.649 70 | 0.697 83 | 0.557 79 | 0.846 88 | 0.777 88 | 0.563 86 | |
FPConv | | 0.639 73 | 0.785 46 | 0.760 57 | 0.713 74 | 0.603 70 | 0.798 77 | 0.392 93 | 0.534 106 | 0.603 52 | 0.524 73 | 0.948 62 | 0.457 67 | 0.250 65 | 0.538 81 | 0.723 67 | 0.598 90 | 0.696 84 | 0.614 51 | 0.872 75 | 0.799 69 | 0.567 85
Yiqun Lin, Zizheng Yan, Haibin Huang, Dong Du, Ligang Liu, Shuguang Cui, Xiaoguang Han: FPConv: Learning Local Flattening for Point Convolution. CVPR 2020 | ||||||||||||||||||||||
SegGroup_sem | | 0.627 85 | 0.818 37 | 0.747 68 | 0.701 75 | 0.602 71 | 0.764 93 | 0.385 97 | 0.629 93 | 0.490 90 | 0.508 79 | 0.931 97 | 0.409 90 | 0.201 87 | 0.564 72 | 0.725 66 | 0.618 83 | 0.692 85 | 0.539 90 | 0.873 73 | 0.794 74 | 0.548 92
An Tao, Yueqi Duan, Yi Wei, Jiwen Lu, Jie Zhou: SegGroup: Seg-Level Supervision for 3D Instance and Semantic Segmentation. TIP 2022 | ||||||||||||||||||||||
MVPNet | | 0.641 70 | 0.831 32 | 0.715 78 | 0.671 86 | 0.590 75 | 0.781 85 | 0.394 91 | 0.679 83 | 0.642 33 | 0.553 62 | 0.937 87 | 0.462 65 | 0.256 63 | 0.649 45 | 0.406 104 | 0.626 81 | 0.691 86 | 0.666 33 | 0.877 68 | 0.792 79 | 0.608 68
Maximilian Jaritz, Jiayuan Gu, Hao Su: Multi-view PointNet for 3D Scene Understanding. GMDL Workshop, ICCV 2019 | ||||||||||||||||||||||
FusionAwareConv | 0.630 83 | 0.604 104 | 0.741 73 | 0.766 62 | 0.590 75 | 0.747 98 | 0.501 42 | 0.734 70 | 0.503 87 | 0.527 71 | 0.919 104 | 0.454 69 | 0.323 26 | 0.550 79 | 0.420 103 | 0.678 59 | 0.688 87 | 0.544 86 | 0.896 53 | 0.795 73 | 0.627 63 | |
Jiazhao Zhang, Chenyang Zhu, Lintao Zheng, Kai Xu: Fusion-Aware Point Convolution for Online Semantic 3D Scene Segmentation. CVPR 2020 | ||||||||||||||||||||||
SD-DETR | 0.576 97 | 0.746 65 | 0.609 110 | 0.445 116 | 0.517 95 | 0.643 111 | 0.366 99 | 0.714 76 | 0.456 100 | 0.468 90 | 0.870 113 | 0.432 78 | 0.264 60 | 0.558 75 | 0.674 76 | 0.586 95 | 0.688 87 | 0.482 103 | 0.739 103 | 0.733 101 | 0.537 94 | |
SConv | 0.636 76 | 0.830 33 | 0.697 87 | 0.752 66 | 0.572 83 | 0.780 87 | 0.445 72 | 0.716 74 | 0.529 78 | 0.530 70 | 0.951 51 | 0.446 76 | 0.170 98 | 0.507 89 | 0.666 80 | 0.636 79 | 0.682 89 | 0.541 89 | 0.886 60 | 0.799 69 | 0.594 76 | |
PPCNN++ | | 0.663 65 | 0.746 65 | 0.708 80 | 0.722 69 | 0.638 63 | 0.820 59 | 0.451 65 | 0.566 101 | 0.599 54 | 0.541 65 | 0.950 55 | 0.510 48 | 0.313 29 | 0.648 46 | 0.819 49 | 0.616 85 | 0.682 89 | 0.590 63 | 0.869 79 | 0.810 64 | 0.656 52
Pyunghwan Ahn, Juyoung Yang, Eojindl Yi, Chanho Lee, Junmo Kim: Projection-based Point Convolution for Efficient Point Cloud Segmentation. IEEE Access | ||||||||||||||||||||||
subcloud_weak | 0.516 104 | 0.676 90 | 0.591 113 | 0.609 100 | 0.442 104 | 0.774 89 | 0.335 103 | 0.597 96 | 0.422 107 | 0.357 107 | 0.932 96 | 0.341 103 | 0.094 112 | 0.298 109 | 0.528 95 | 0.473 107 | 0.676 91 | 0.495 100 | 0.602 114 | 0.721 104 | 0.349 116 | |
ROSMRF | 0.580 96 | 0.772 54 | 0.707 81 | 0.681 82 | 0.563 85 | 0.764 93 | 0.362 100 | 0.515 107 | 0.465 98 | 0.465 91 | 0.936 89 | 0.427 84 | 0.207 82 | 0.438 98 | 0.577 89 | 0.536 100 | 0.675 92 | 0.486 102 | 0.723 105 | 0.779 86 | 0.524 97 | |
SIConv | 0.625 86 | 0.830 33 | 0.694 89 | 0.757 64 | 0.563 85 | 0.772 91 | 0.448 69 | 0.647 91 | 0.520 81 | 0.509 78 | 0.949 59 | 0.431 80 | 0.191 91 | 0.496 91 | 0.614 86 | 0.647 73 | 0.672 93 | 0.535 93 | 0.876 69 | 0.783 85 | 0.571 81 | |
DVVNet | 0.562 100 | 0.648 95 | 0.700 85 | 0.770 59 | 0.586 78 | 0.687 105 | 0.333 104 | 0.650 89 | 0.514 84 | 0.475 88 | 0.906 108 | 0.359 99 | 0.223 76 | 0.340 107 | 0.442 102 | 0.422 111 | 0.668 94 | 0.501 98 | 0.708 106 | 0.779 86 | 0.534 95 | |
PointMTL | 0.632 79 | 0.731 73 | 0.688 94 | 0.675 83 | 0.591 74 | 0.784 84 | 0.444 75 | 0.565 102 | 0.610 46 | 0.492 83 | 0.949 59 | 0.456 68 | 0.254 64 | 0.587 63 | 0.706 70 | 0.599 89 | 0.665 95 | 0.612 54 | 0.868 80 | 0.791 82 | 0.579 79 | |
SQN_0.1% | 0.569 98 | 0.676 90 | 0.696 88 | 0.657 90 | 0.497 96 | 0.779 88 | 0.424 81 | 0.548 103 | 0.515 83 | 0.376 104 | 0.902 111 | 0.422 86 | 0.357 10 | 0.379 105 | 0.456 100 | 0.596 91 | 0.659 96 | 0.544 86 | 0.685 108 | 0.665 112 | 0.556 90 | |
PanopticFusion-label | 0.529 103 | 0.491 113 | 0.688 94 | 0.604 102 | 0.386 108 | 0.632 112 | 0.225 118 | 0.705 80 | 0.434 105 | 0.293 112 | 0.815 116 | 0.348 102 | 0.241 69 | 0.499 90 | 0.669 78 | 0.507 102 | 0.649 97 | 0.442 111 | 0.796 96 | 0.602 116 | 0.561 87 | |
Gaku Narita, Takashi Seno, Tomoya Ishikawa, Yohsuke Kaji: PanopticFusion: Online Volumetric Semantic Mapping at the Level of Stuff and Things. IROS 2019 | ||||||||||||||||||||||
LAP-D | 0.594 93 | 0.720 80 | 0.692 91 | 0.637 98 | 0.456 103 | 0.773 90 | 0.391 95 | 0.730 71 | 0.587 59 | 0.445 96 | 0.940 84 | 0.381 96 | 0.288 41 | 0.434 100 | 0.453 101 | 0.591 92 | 0.649 97 | 0.581 68 | 0.777 98 | 0.749 98 | 0.610 67 | |
Supervoxel-CNN | 0.635 77 | 0.656 94 | 0.711 79 | 0.719 71 | 0.613 68 | 0.757 96 | 0.444 75 | 0.765 59 | 0.534 77 | 0.566 59 | 0.928 98 | 0.478 59 | 0.272 53 | 0.636 49 | 0.531 93 | 0.664 64 | 0.645 99 | 0.508 97 | 0.864 82 | 0.792 79 | 0.611 65 | |
CCRFNet | 0.589 95 | 0.766 59 | 0.659 101 | 0.683 81 | 0.470 102 | 0.740 100 | 0.387 96 | 0.620 95 | 0.490 90 | 0.476 87 | 0.922 102 | 0.355 101 | 0.245 68 | 0.511 86 | 0.511 96 | 0.571 97 | 0.643 100 | 0.493 101 | 0.872 75 | 0.762 93 | 0.600 73 | |
Pointnet++ & Feature | ![]() | 0.557 101 | 0.735 70 | 0.661 100 | 0.686 80 | 0.491 98 | 0.744 99 | 0.392 93 | 0.539 104 | 0.451 101 | 0.375 105 | 0.946 67 | 0.376 97 | 0.205 84 | 0.403 103 | 0.356 107 | 0.553 99 | 0.643 100 | 0.497 99 | 0.824 93 | 0.756 95 | 0.515 98 |
DPC | 0.592 94 | 0.720 80 | 0.700 85 | 0.602 103 | 0.480 99 | 0.762 95 | 0.380 98 | 0.713 77 | 0.585 62 | 0.437 97 | 0.940 84 | 0.369 98 | 0.288 41 | 0.434 100 | 0.509 97 | 0.590 94 | 0.639 102 | 0.567 74 | 0.772 99 | 0.755 96 | 0.592 77 | |
Francis Engelmann, Theodora Kontogianni, Bastian Leibe: Dilated Point Convolutions: On the Receptive Field Size of Point Convolutions on 3D Point Clouds. ICRA 2020 | ||||||||||||||||||||||
TextureNet | ![]() | 0.566 99 | 0.672 92 | 0.664 99 | 0.671 86 | 0.494 97 | 0.719 101 | 0.445 72 | 0.678 84 | 0.411 108 | 0.396 102 | 0.935 90 | 0.356 100 | 0.225 74 | 0.412 102 | 0.535 92 | 0.565 98 | 0.636 103 | 0.464 105 | 0.794 97 | 0.680 109 | 0.568 84 |
Jingwei Huang, Haotian Zhang, Li Yi, Thomas Funkhouser, Matthias Niessner, Leonidas Guibas: TextureNet: Consistent Local Parametrizations for Learning from High-Resolution Signals on Meshes. CVPR 2019 | ||||||||||||||||||||||
HPEIN | 0.618 89 | 0.729 74 | 0.668 97 | 0.647 94 | 0.597 73 | 0.766 92 | 0.414 84 | 0.680 82 | 0.520 81 | 0.525 72 | 0.946 67 | 0.432 78 | 0.215 79 | 0.493 92 | 0.599 87 | 0.638 78 | 0.617 104 | 0.570 70 | 0.897 52 | 0.806 65 | 0.605 71 | |
Li Jiang, Hengshuang Zhao, Shu Liu, Xiaoyong Shen, Chi-Wing Fu, Jiaya Jia: Hierarchical Point-Edge Interaction Network for Point Cloud Semantic Segmentation. ICCV 2019 | ||||||||||||||||||||||
SurfaceConvPF | 0.442 112 | 0.505 111 | 0.622 108 | 0.380 119 | 0.342 114 | 0.654 108 | 0.227 117 | 0.397 111 | 0.367 112 | 0.276 114 | 0.924 100 | 0.240 115 | 0.198 89 | 0.359 106 | 0.262 110 | 0.366 113 | 0.581 105 | 0.435 112 | 0.640 111 | 0.668 110 | 0.398 111 | |
Hao Pan, Shilin Liu, Yang Liu, Xin Tong: Convolutional Neural Networks on 3D Surfaces Using Parallel Frames. | ||||||||||||||||||||||
DGCNN_reproduce | ![]() | 0.446 111 | 0.474 115 | 0.623 107 | 0.463 114 | 0.366 111 | 0.651 109 | 0.310 105 | 0.389 112 | 0.349 114 | 0.330 109 | 0.937 87 | 0.271 111 | 0.126 109 | 0.285 110 | 0.224 113 | 0.350 116 | 0.577 106 | 0.445 110 | 0.625 112 | 0.723 103 | 0.394 112 |
Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma, Michael M. Bronstein, Justin M. Solomon: Dynamic Graph CNN for Learning on Point Clouds. TOG 2019 | ||||||||||||||||||||||
Online SegFusion | 0.515 105 | 0.607 103 | 0.644 104 | 0.579 105 | 0.434 105 | 0.630 113 | 0.353 101 | 0.628 94 | 0.440 103 | 0.410 100 | 0.762 119 | 0.307 106 | 0.167 100 | 0.520 84 | 0.403 105 | 0.516 101 | 0.565 107 | 0.447 109 | 0.678 109 | 0.701 106 | 0.514 99 | |
Davide Menini, Suryansh Kumar, Martin R. Oswald, Erik Sandstroem, Cristian Sminchisescu, Luc van Gool: A Real-Time Learning Framework for Joint 3D Reconstruction and Semantic Segmentation. Robotics and Automation Letters Submission | ||||||||||||||||||||||
Tangent Convolutions | ![]() | 0.438 114 | 0.437 117 | 0.646 103 | 0.474 113 | 0.369 110 | 0.645 110 | 0.353 101 | 0.258 117 | 0.282 119 | 0.279 113 | 0.918 105 | 0.298 108 | 0.147 108 | 0.283 111 | 0.294 109 | 0.487 104 | 0.562 108 | 0.427 113 | 0.619 113 | 0.633 114 | 0.352 115 |
Maxim Tatarchenko, Jaesik Park, Vladlen Koltun, Qian-Yi Zhou: Tangent Convolutions for Dense Prediction in 3D. CVPR 2018 | ||||||||||||||||||||||
FCPN | ![]() | 0.447 110 | 0.679 89 | 0.604 112 | 0.578 106 | 0.380 109 | 0.682 106 | 0.291 110 | 0.106 120 | 0.483 93 | 0.258 118 | 0.920 103 | 0.258 113 | 0.025 117 | 0.231 116 | 0.325 108 | 0.480 106 | 0.560 109 | 0.463 106 | 0.725 104 | 0.666 111 | 0.231 120 |
Dario Rethage, Johanna Wald, Jürgen Sturm, Nassir Navab, Federico Tombari: Fully-Convolutional Point Networks for Large-Scale Point Clouds. ECCV 2018 | ||||||||||||||||||||||
3DWSSS | 0.425 115 | 0.525 110 | 0.647 102 | 0.522 108 | 0.324 115 | 0.488 120 | 0.077 121 | 0.712 78 | 0.353 113 | 0.401 101 | 0.636 121 | 0.281 110 | 0.176 96 | 0.340 107 | 0.565 90 | 0.175 120 | 0.551 110 | 0.398 115 | 0.370 121 | 0.602 116 | 0.361 114 | |
PointCNN with RGB | ![]() | 0.458 109 | 0.577 106 | 0.611 109 | 0.356 120 | 0.321 116 | 0.715 102 | 0.299 109 | 0.376 113 | 0.328 116 | 0.319 110 | 0.944 76 | 0.285 109 | 0.164 101 | 0.216 117 | 0.229 112 | 0.484 105 | 0.545 111 | 0.456 107 | 0.755 100 | 0.709 105 | 0.475 107 |
Yangyan Li, Rui Bu, Mingchao Sun, Baoquan Chen: PointCNN. NeurIPS 2018 | ||||||||||||||||||||||
3DMV, FTSDF | 0.501 106 | 0.558 108 | 0.608 111 | 0.424 118 | 0.478 100 | 0.690 104 | 0.246 114 | 0.586 98 | 0.468 96 | 0.450 93 | 0.911 106 | 0.394 93 | 0.160 103 | 0.438 98 | 0.212 114 | 0.432 110 | 0.541 112 | 0.475 104 | 0.742 102 | 0.727 102 | 0.477 106 | |
PCNN | 0.498 107 | 0.559 107 | 0.644 104 | 0.560 107 | 0.420 107 | 0.711 103 | 0.229 116 | 0.414 109 | 0.436 104 | 0.352 108 | 0.941 82 | 0.324 105 | 0.155 104 | 0.238 114 | 0.387 106 | 0.493 103 | 0.529 113 | 0.509 95 | 0.813 95 | 0.751 97 | 0.504 101 | |
SPLAT Net | ![]() | 0.393 116 | 0.472 116 | 0.511 117 | 0.606 101 | 0.311 117 | 0.656 107 | 0.245 115 | 0.405 110 | 0.328 116 | 0.197 119 | 0.927 99 | 0.227 117 | 0.000 121 | 0.001 122 | 0.249 111 | 0.271 119 | 0.510 114 | 0.383 117 | 0.593 115 | 0.699 107 | 0.267 118 |
Hang Su, Varun Jampani, Deqing Sun, Subhransu Maji, Evangelos Kalogerakis, Ming-Hsuan Yang, Jan Kautz: SPLATNet: Sparse Lattice Networks for Point Cloud Processing. CVPR 2018 | ||||||||||||||||||||||
ScanNet+FTSDF | 0.383 117 | 0.297 119 | 0.491 118 | 0.432 117 | 0.358 113 | 0.612 115 | 0.274 112 | 0.116 119 | 0.411 108 | 0.265 115 | 0.904 109 | 0.229 116 | 0.079 114 | 0.250 112 | 0.185 117 | 0.320 117 | 0.510 114 | 0.385 116 | 0.548 116 | 0.597 119 | 0.394 112 | |
3DMV | 0.484 108 | 0.484 114 | 0.538 116 | 0.643 96 | 0.424 106 | 0.606 116 | 0.310 105 | 0.574 100 | 0.433 106 | 0.378 103 | 0.796 117 | 0.301 107 | 0.214 80 | 0.537 82 | 0.208 115 | 0.472 108 | 0.507 116 | 0.413 114 | 0.693 107 | 0.602 116 | 0.539 93 | |
Angela Dai, Matthias Niessner: 3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation. ECCV'18 | ||||||||||||||||||||||
PNET2 | 0.442 112 | 0.548 109 | 0.548 115 | 0.597 104 | 0.363 112 | 0.628 114 | 0.300 107 | 0.292 115 | 0.374 111 | 0.307 111 | 0.881 112 | 0.268 112 | 0.186 93 | 0.238 114 | 0.204 116 | 0.407 112 | 0.506 117 | 0.449 108 | 0.667 110 | 0.620 115 | 0.462 110 | |
GrowSP++ | 0.323 119 | 0.114 121 | 0.589 114 | 0.499 110 | 0.147 121 | 0.555 117 | 0.290 111 | 0.336 114 | 0.290 118 | 0.262 116 | 0.865 115 | 0.102 121 | 0.000 121 | 0.037 120 | 0.000 122 | 0.000 122 | 0.462 118 | 0.381 118 | 0.389 120 | 0.664 113 | 0.473 108 | |
SSC-UNet | ![]() | 0.308 120 | 0.353 118 | 0.290 121 | 0.278 121 | 0.166 120 | 0.553 118 | 0.169 120 | 0.286 116 | 0.147 121 | 0.148 121 | 0.908 107 | 0.182 119 | 0.064 115 | 0.023 121 | 0.018 121 | 0.354 115 | 0.363 119 | 0.345 119 | 0.546 118 | 0.685 108 | 0.278 117 |
ScanNet | ![]() | 0.306 121 | 0.203 120 | 0.366 120 | 0.501 109 | 0.311 117 | 0.524 119 | 0.211 119 | 0.002 122 | 0.342 115 | 0.189 120 | 0.786 118 | 0.145 120 | 0.102 111 | 0.245 113 | 0.152 118 | 0.318 118 | 0.348 120 | 0.300 120 | 0.460 119 | 0.437 121 | 0.182 121 |
Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, Matthias Nießner: ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes. CVPR'17 | ||||||||||||||||||||||
PointNet++ | ![]() | 0.339 118 | 0.584 105 | 0.478 119 | 0.458 115 | 0.256 119 | 0.360 121 | 0.250 113 | 0.247 118 | 0.278 120 | 0.261 117 | 0.677 120 | 0.183 118 | 0.117 110 | 0.212 118 | 0.145 119 | 0.364 114 | 0.346 121 | 0.232 121 | 0.548 116 | 0.523 120 | 0.252 119 |
Charles R. Qi, Li Yi, Hao Su, Leonidas J. Guibas: PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. NeurIPS 2017 | ||||||||||||||||||||||
ERROR | 0.054 122 | 0.000 122 | 0.041 122 | 0.172 122 | 0.030 122 | 0.062 123 | 0.001 122 | 0.035 121 | 0.004 122 | 0.051 122 | 0.143 122 | 0.019 122 | 0.003 120 | 0.041 119 | 0.050 120 | 0.003 121 | 0.054 122 | 0.018 122 | 0.005 123 | 0.264 122 | 0.082 122 | |
MVF-GNN | 0.014 123 | 0.000 122 | 0.000 123 | 0.000 123 | 0.007 123 | 0.086 122 | 0.000 123 | 0.000 123 | 0.001 123 | 0.000 123 | 0.029 123 | 0.001 123 | 0.000 121 | 0.000 123 | 0.000 122 | 0.000 122 | 0.000 123 | 0.018 122 | 0.015 122 | 0.115 123 | 0.000 123 | |