3D Semantic Label Benchmark
The 3D semantic labeling task involves predicting semantic labels for the vertices of a 3D scan mesh.
Evaluation and metrics

Our evaluation ranks all methods according to the PASCAL VOC intersection-over-union metric (IoU): IoU = TP/(TP+FP+FN), where TP, FP, and FN are the numbers of true positive, false positive, and false negative predictions, respectively. Predicted labels are evaluated per vertex over the respective 3D scan mesh; 3D approaches that operate on other representations, such as grids or points, should map their predicted labels onto the mesh vertices (an example of mapping grid labels to mesh vertices is provided in the evaluation helpers).
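The evaluation pipeline above can be sketched in a few lines of NumPy. This is a minimal illustration, not the benchmark's own helper code: `transfer_to_vertices` stands in for the provided grid-to-vertex mapping (using a brute-force nearest-neighbor lookup), and `per_class_iou` applies the TP/(TP+FP+FN) formula per class over vertices.

```python
import numpy as np

def transfer_to_vertices(vertex_xyz, point_xyz, point_labels):
    """Map labels predicted on an arbitrary point/voxel set onto mesh
    vertices via nearest-neighbor lookup (brute force here; a KD-tree
    is advisable for full scans)."""
    # (V, P) matrix of squared distances between vertices and points
    d2 = ((vertex_xyz[:, None, :] - point_xyz[None, :, :]) ** 2).sum(axis=-1)
    return point_labels[d2.argmin(axis=1)]

def per_class_iou(pred, gt, class_ids):
    """IoU = TP / (TP + FP + FN), computed per class over mesh vertices."""
    ious = {}
    for c in class_ids:
        tp = np.sum((pred == c) & (gt == c))
        fp = np.sum((pred == c) & (gt != c))
        fn = np.sum((pred != c) & (gt == c))
        denom = tp + fp + fn
        # Classes absent from both prediction and ground truth are undefined
        ious[c] = tp / denom if denom > 0 else float("nan")
    return ious

def mean_iou(ious):
    """Average IoU over the classes for which IoU is defined."""
    vals = [v for v in ious.values() if not np.isnan(v)]
    return sum(vals) / len(vals)
```

The "avg iou" column in the table below corresponds to this mean over the 20 benchmark classes.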
The table below lists the benchmark results for the 3D semantic label scenario. Each cell shows the IoU score followed by the method's rank for that column.
Method | Info | avg iou | bathtub | bed | bookshelf | cabinet | chair | counter | curtain | desk | door | floor | otherfurniture | picture | refrigerator | shower curtain | sink | sofa | table | toilet | wall | window |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Mix3D | 0.781 3 | 0.964 2 | 0.855 1 | 0.843 15 | 0.781 6 | 0.858 11 | 0.575 6 | 0.831 31 | 0.685 13 | 0.714 2 | 0.979 1 | 0.594 7 | 0.310 26 | 0.801 1 | 0.892 15 | 0.841 2 | 0.819 4 | 0.723 4 | 0.940 13 | 0.887 6 | 0.725 22 | |
Alexey Nekrasov, Jonas Schult, Or Litany, Bastian Leibe, Francis Engelmann: Mix3D: Out-of-Context Data Augmentation for 3D Scenes. 3DV 2021 (Oral) | ||||||||||||||||||||||
PTv3 ScanNet | 0.794 1 | 0.941 3 | 0.813 17 | 0.851 7 | 0.782 5 | 0.890 2 | 0.597 1 | 0.916 2 | 0.696 7 | 0.713 3 | 0.979 1 | 0.635 1 | 0.384 2 | 0.793 2 | 0.907 7 | 0.821 4 | 0.790 30 | 0.696 10 | 0.967 3 | 0.903 1 | 0.805 1 | |
Xiaoyang Wu, Li Jiang, Peng-Shuai Wang, Zhijian Liu, Xihui Liu, Yu Qiao, Wanli Ouyang, Tong He, Hengshuang Zhao: Point Transformer V3: Simpler, Faster, Stronger. CVPR 2024 | ||||||||||||||||||||||
PPT-SpUNet-Joint | 0.766 7 | 0.932 5 | 0.794 31 | 0.829 23 | 0.751 21 | 0.854 13 | 0.540 20 | 0.903 7 | 0.630 32 | 0.672 14 | 0.963 14 | 0.565 20 | 0.357 8 | 0.788 3 | 0.900 11 | 0.737 25 | 0.802 15 | 0.685 15 | 0.950 7 | 0.887 6 | 0.780 5 | |
Xiaoyang Wu, Zhuotao Tian, Xin Wen, Bohao Peng, Xihui Liu, Kaicheng Yu, Hengshuang Zhao: Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training. CVPR 2024 | ||||||||||||||||||||||
PointConvFormer | 0.749 17 | 0.793 39 | 0.790 34 | 0.807 35 | 0.750 22 | 0.856 12 | 0.524 26 | 0.881 13 | 0.588 51 | 0.642 25 | 0.977 8 | 0.591 9 | 0.274 45 | 0.781 4 | 0.929 2 | 0.804 6 | 0.796 23 | 0.642 32 | 0.947 9 | 0.885 8 | 0.715 28 | |
Wenxuan Wu, Qi Shan, Li Fuxin: PointConvFormer: Revenge of the Point-based Convolution. | ||||||||||||||||||||||
DMF-Net | 0.752 15 | 0.906 12 | 0.793 33 | 0.802 39 | 0.689 38 | 0.825 44 | 0.556 12 | 0.867 16 | 0.681 14 | 0.602 42 | 0.960 17 | 0.555 25 | 0.365 7 | 0.779 5 | 0.859 25 | 0.747 22 | 0.795 26 | 0.717 6 | 0.917 30 | 0.856 28 | 0.764 9 | |
C. Yang, Y. Yan, W. Zhao, J. Ye, X. Yang, A. Hussain, B. Dong, K. Huang: Towards Deeper and Better Multi-view Feature Fusion for 3D Semantic Segmentation. ICONIP 2023 | ||||||||||||||||||||||
Retro-FPN | 0.744 23 | 0.842 26 | 0.800 25 | 0.767 53 | 0.740 25 | 0.836 34 | 0.541 18 | 0.914 3 | 0.672 17 | 0.626 30 | 0.958 20 | 0.552 26 | 0.272 47 | 0.777 6 | 0.886 18 | 0.696 44 | 0.801 19 | 0.674 23 | 0.941 12 | 0.858 26 | 0.717 25 | |
Peng Xiang*, Xin Wen*, Yu-Shen Liu, Hui Zhang, Yi Fang, Zhizhong Han: Retrospective Feature Pyramid Network for Point Cloud Semantic Segmentation. ICCV 2023 | ||||||||||||||||||||||
OccuSeg+Semantic | 0.764 9 | 0.758 57 | 0.796 29 | 0.839 17 | 0.746 23 | 0.907 1 | 0.562 11 | 0.850 23 | 0.680 15 | 0.672 14 | 0.978 4 | 0.610 3 | 0.335 17 | 0.777 6 | 0.819 43 | 0.847 1 | 0.830 1 | 0.691 13 | 0.972 2 | 0.885 8 | 0.727 20 | |
TTT-KD | 0.773 5 | 0.646 89 | 0.818 13 | 0.809 33 | 0.774 8 | 0.878 3 | 0.581 2 | 0.943 1 | 0.687 11 | 0.704 5 | 0.978 4 | 0.607 5 | 0.336 15 | 0.775 8 | 0.912 5 | 0.838 3 | 0.823 2 | 0.694 11 | 0.967 3 | 0.899 2 | 0.794 3 | |
Lisa Weijler, Muhammad Jehanzeb Mirza, Leon Sick, Can Ekkazan, Pedro Hermosilla: TTT-KD: Test-Time Training for 3D Semantic Segmentation through Knowledge Distillation from Foundation Models. | ||||||||||||||||||||||
BPNet | 0.749 17 | 0.909 10 | 0.818 13 | 0.811 31 | 0.752 19 | 0.839 31 | 0.485 45 | 0.842 27 | 0.673 16 | 0.644 21 | 0.957 24 | 0.528 35 | 0.305 28 | 0.773 9 | 0.859 25 | 0.788 8 | 0.818 5 | 0.693 12 | 0.916 31 | 0.856 28 | 0.723 23 | |
Wenbo Hu, Hengshuang Zhao, Li Jiang, Jiaya Jia, Tien-Tsin Wong: Bidirectional Projection Network for Cross Dimension Scene Understanding. CVPR 2021 (Oral) | ||||||||||||||||||||||
OA-CNN-L_ScanNet20 | 0.756 12 | 0.783 43 | 0.826 5 | 0.858 4 | 0.776 7 | 0.837 32 | 0.548 15 | 0.896 11 | 0.649 24 | 0.675 12 | 0.962 15 | 0.586 12 | 0.335 17 | 0.771 10 | 0.802 47 | 0.770 15 | 0.787 32 | 0.691 13 | 0.936 17 | 0.880 11 | 0.761 10 | |
ResLFE_HDS | 0.772 6 | 0.939 4 | 0.824 6 | 0.854 6 | 0.771 9 | 0.840 29 | 0.564 10 | 0.900 8 | 0.686 12 | 0.677 11 | 0.961 16 | 0.537 29 | 0.348 11 | 0.769 11 | 0.903 9 | 0.785 10 | 0.815 6 | 0.676 20 | 0.939 14 | 0.880 11 | 0.772 8 | |
PonderV2 | 0.785 2 | 0.978 1 | 0.800 25 | 0.833 21 | 0.788 3 | 0.853 15 | 0.545 16 | 0.910 5 | 0.713 1 | 0.705 4 | 0.979 1 | 0.596 6 | 0.390 1 | 0.769 11 | 0.832 40 | 0.821 4 | 0.792 29 | 0.730 1 | 0.975 1 | 0.897 4 | 0.785 4 | |
Haoyi Zhu, Honghui Yang, Xiaoyang Wu, Di Huang, Sha Zhang, Xianglong He, Tong He, Hengshuang Zhao, Chunhua Shen, Yu Qiao, Wanli Ouyang: PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm. | ||||||||||||||||||||||
MSP | 0.748 19 | 0.623 92 | 0.804 23 | 0.859 3 | 0.745 24 | 0.824 46 | 0.501 35 | 0.912 4 | 0.690 9 | 0.685 8 | 0.956 25 | 0.567 19 | 0.320 23 | 0.768 13 | 0.918 4 | 0.720 32 | 0.802 15 | 0.676 20 | 0.921 28 | 0.881 10 | 0.779 6 | |
PointTransformerV2 | 0.752 15 | 0.742 65 | 0.809 20 | 0.872 1 | 0.758 14 | 0.860 10 | 0.552 13 | 0.891 12 | 0.610 39 | 0.687 6 | 0.960 17 | 0.559 23 | 0.304 29 | 0.766 14 | 0.926 3 | 0.767 16 | 0.797 22 | 0.644 31 | 0.942 11 | 0.876 16 | 0.722 24 | |
Xiaoyang Wu, Yixing Lao, Li Jiang, Xihui Liu, Hengshuang Zhao: Point Transformer V2: Grouped Vector Attention and Partition-based Pooling. NeurIPS 2022 | ||||||||||||||||||||||
VMNet | 0.746 21 | 0.870 18 | 0.838 2 | 0.858 4 | 0.729 29 | 0.850 19 | 0.501 35 | 0.874 15 | 0.587 52 | 0.658 18 | 0.956 25 | 0.564 21 | 0.299 30 | 0.765 15 | 0.900 11 | 0.716 35 | 0.812 11 | 0.631 37 | 0.939 14 | 0.858 26 | 0.709 29 | |
Zeyu Hu, Xuyang Bai, Jiaxiang Shang, Runze Zhang, Jiayu Dong, Xin Wang, Guangyuan Sun, Hongbo Fu, Chiew-Lan Tai: VMNet: Voxel-Mesh Network for Geodesic-Aware 3D Semantic Segmentation. ICCV 2021 (Oral) | ||||||||||||||||||||||
O-CNN | 0.762 11 | 0.924 8 | 0.823 7 | 0.844 14 | 0.770 10 | 0.852 17 | 0.577 4 | 0.847 26 | 0.711 2 | 0.640 26 | 0.958 20 | 0.592 8 | 0.217 71 | 0.762 16 | 0.888 16 | 0.758 19 | 0.813 10 | 0.726 2 | 0.932 22 | 0.868 19 | 0.744 13 | |
Peng-Shuai Wang, Yang Liu, Yu-Xiao Guo, Chun-Yu Sun, Xin Tong: O-CNN: Octree-based Convolutional Neural Networks for 3D Shape Analysis. SIGGRAPH 2017 | ||||||||||||||||||||||
EQ-Net | 0.743 24 | 0.620 93 | 0.799 28 | 0.849 9 | 0.730 28 | 0.822 48 | 0.493 42 | 0.897 10 | 0.664 18 | 0.681 9 | 0.955 28 | 0.562 22 | 0.378 3 | 0.760 17 | 0.903 9 | 0.738 24 | 0.801 19 | 0.673 24 | 0.907 35 | 0.877 13 | 0.745 12 | |
Zetong Yang*, Li Jiang*, Yanan Sun, Bernt Schiele, Jiaya Jia: A Unified Query-based Paradigm for Point Cloud Understanding. CVPR 2022 | ||||||||||||||||||||||
ConDaFormer | 0.755 13 | 0.927 6 | 0.822 8 | 0.836 18 | 0.801 1 | 0.849 20 | 0.516 30 | 0.864 20 | 0.651 23 | 0.680 10 | 0.958 20 | 0.584 14 | 0.282 40 | 0.759 18 | 0.855 30 | 0.728 27 | 0.802 15 | 0.678 17 | 0.880 57 | 0.873 18 | 0.756 11 | |
Lunhao Duan, Shanshan Zhao, Nan Xue, Mingming Gong, Guisong Xia, Dacheng Tao: ConDaFormer: Disassembled Transformer with Local Structure Enhancement for 3D Point Cloud Understanding. NeurIPS 2023 | ||||||||||||||||||||||
MatchingNet | 0.724 33 | 0.812 36 | 0.812 18 | 0.810 32 | 0.735 27 | 0.834 36 | 0.495 41 | 0.860 21 | 0.572 59 | 0.602 42 | 0.954 34 | 0.512 40 | 0.280 42 | 0.757 19 | 0.845 36 | 0.725 29 | 0.780 34 | 0.606 48 | 0.937 16 | 0.851 34 | 0.700 33 | |
Swin3D | 0.779 4 | 0.861 20 | 0.818 13 | 0.836 18 | 0.790 2 | 0.875 4 | 0.576 5 | 0.905 6 | 0.704 4 | 0.739 1 | 0.969 10 | 0.611 2 | 0.349 10 | 0.756 20 | 0.958 1 | 0.702 43 | 0.805 14 | 0.708 7 | 0.916 31 | 0.898 3 | 0.801 2 | |
LRPNet | 0.742 25 | 0.816 34 | 0.806 22 | 0.807 35 | 0.752 19 | 0.828 42 | 0.575 6 | 0.839 29 | 0.699 5 | 0.637 27 | 0.954 34 | 0.520 38 | 0.320 23 | 0.755 21 | 0.834 38 | 0.760 18 | 0.772 38 | 0.676 20 | 0.915 33 | 0.862 23 | 0.717 25 | |
SAT | 0.742 25 | 0.860 21 | 0.765 47 | 0.819 26 | 0.769 11 | 0.848 22 | 0.533 22 | 0.829 32 | 0.663 19 | 0.631 29 | 0.955 28 | 0.586 12 | 0.274 45 | 0.753 22 | 0.896 13 | 0.729 26 | 0.760 48 | 0.666 26 | 0.921 28 | 0.855 30 | 0.733 16 | |
OctFormer | 0.766 7 | 0.925 7 | 0.808 21 | 0.849 9 | 0.786 4 | 0.846 25 | 0.566 9 | 0.876 14 | 0.690 9 | 0.674 13 | 0.960 17 | 0.576 16 | 0.226 65 | 0.753 22 | 0.904 8 | 0.777 12 | 0.815 6 | 0.722 5 | 0.923 27 | 0.877 13 | 0.776 7 | |
Peng-Shuai Wang: OctFormer: Octree-based Transformers for 3D Point Clouds. SIGGRAPH 2023 | ||||||||||||||||||||||
CU-Hybrid Net | 0.764 9 | 0.924 8 | 0.819 11 | 0.840 16 | 0.757 16 | 0.853 15 | 0.580 3 | 0.848 24 | 0.709 3 | 0.643 22 | 0.958 20 | 0.587 11 | 0.295 32 | 0.753 22 | 0.884 19 | 0.758 19 | 0.815 6 | 0.725 3 | 0.927 24 | 0.867 20 | 0.743 14 | |
One-Thing-One-Click | 0.693 41 | 0.743 64 | 0.794 31 | 0.655 83 | 0.684 40 | 0.822 48 | 0.497 40 | 0.719 66 | 0.622 34 | 0.617 33 | 0.977 8 | 0.447 68 | 0.339 14 | 0.750 25 | 0.664 72 | 0.703 42 | 0.790 30 | 0.596 53 | 0.946 10 | 0.855 30 | 0.647 48 | |
Zhengzhe Liu, Xiaojuan Qi, Chi-Wing Fu: One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation. CVPR 2021 | ||||||||||||||||||||||
Virtual MVFusion | 0.746 21 | 0.771 51 | 0.819 11 | 0.848 11 | 0.702 35 | 0.865 9 | 0.397 83 | 0.899 9 | 0.699 5 | 0.664 17 | 0.948 53 | 0.588 10 | 0.330 19 | 0.746 26 | 0.851 34 | 0.764 17 | 0.796 23 | 0.704 9 | 0.935 18 | 0.866 21 | 0.728 18 | |
Abhijit Kundu, Xiaoqi Yin, Alireza Fathi, David Ross, Brian Brewington, Thomas Funkhouser, Caroline Pantofaru: Virtual Multi-view Fusion for 3D Semantic Segmentation. ECCV 2020 | ||||||||||||||||||||||
StratifiedFormer | 0.747 20 | 0.901 13 | 0.803 24 | 0.845 13 | 0.757 16 | 0.846 25 | 0.512 31 | 0.825 34 | 0.696 7 | 0.645 20 | 0.956 25 | 0.576 16 | 0.262 56 | 0.744 27 | 0.861 24 | 0.742 23 | 0.770 41 | 0.705 8 | 0.899 43 | 0.860 25 | 0.734 15 | |
Xin Lai*, Jianhui Liu*, Li Jiang, Liwei Wang, Hengshuang Zhao, Shu Liu, Xiaojuan Qi, Jiaya Jia: Stratified Transformer for 3D Point Cloud Segmentation. CVPR 2022 | ||||||||||||||||||||||
MVF-GNN | 0.658 58 | 0.558 100 | 0.751 56 | 0.655 83 | 0.690 36 | 0.722 92 | 0.453 57 | 0.867 16 | 0.579 57 | 0.576 50 | 0.893 103 | 0.523 36 | 0.293 33 | 0.733 28 | 0.571 81 | 0.692 45 | 0.659 87 | 0.606 48 | 0.875 61 | 0.804 59 | 0.668 41 | |
MinkowskiNet | 0.736 28 | 0.859 22 | 0.818 13 | 0.832 22 | 0.709 33 | 0.840 29 | 0.521 28 | 0.853 22 | 0.660 21 | 0.643 22 | 0.951 43 | 0.544 27 | 0.286 38 | 0.731 29 | 0.893 14 | 0.675 53 | 0.772 38 | 0.683 16 | 0.874 64 | 0.852 33 | 0.727 20 | |
C. Choy, J. Gwak, S. Savarese: 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. CVPR 2019 | ||||||||||||||||||||||
ClickSeg_Semantic | 0.703 37 | 0.774 49 | 0.800 25 | 0.793 44 | 0.760 13 | 0.847 24 | 0.471 49 | 0.802 44 | 0.463 92 | 0.634 28 | 0.968 12 | 0.491 46 | 0.271 49 | 0.726 30 | 0.910 6 | 0.706 39 | 0.815 6 | 0.551 75 | 0.878 58 | 0.833 41 | 0.570 75 | |
LargeKernel3D | 0.739 27 | 0.909 10 | 0.820 10 | 0.806 37 | 0.740 25 | 0.852 17 | 0.545 16 | 0.826 33 | 0.594 50 | 0.643 22 | 0.955 28 | 0.541 28 | 0.263 55 | 0.723 31 | 0.858 27 | 0.775 14 | 0.767 42 | 0.678 17 | 0.933 20 | 0.848 35 | 0.694 34 | |
Yukang Chen*, Jianhui Liu*, Xiangyu Zhang, Xiaojuan Qi, Jiaya Jia: LargeKernel3D: Scaling up Kernels in 3D Sparse CNNs. CVPR 2023 | ||||||||||||||||||||||
SparseConvNet | 0.725 31 | 0.647 88 | 0.821 9 | 0.846 12 | 0.721 31 | 0.869 6 | 0.533 22 | 0.754 56 | 0.603 45 | 0.614 34 | 0.955 28 | 0.572 18 | 0.325 21 | 0.710 32 | 0.870 20 | 0.724 30 | 0.823 2 | 0.628 38 | 0.934 19 | 0.865 22 | 0.683 37 | |
PNE | 0.755 13 | 0.786 41 | 0.835 4 | 0.834 20 | 0.758 14 | 0.849 20 | 0.570 8 | 0.836 30 | 0.648 25 | 0.668 16 | 0.978 4 | 0.581 15 | 0.367 6 | 0.683 33 | 0.856 28 | 0.804 6 | 0.801 19 | 0.678 17 | 0.961 5 | 0.889 5 | 0.716 27 | |
P. Hermosilla: Point Neighborhood Embeddings. | ||||||||||||||||||||||
PointMetaBase | 0.714 35 | 0.835 27 | 0.785 36 | 0.821 24 | 0.684 40 | 0.846 25 | 0.531 24 | 0.865 19 | 0.614 36 | 0.596 46 | 0.953 37 | 0.500 43 | 0.246 61 | 0.674 34 | 0.888 16 | 0.692 45 | 0.764 44 | 0.624 40 | 0.849 79 | 0.844 40 | 0.675 39 | |
VACNN++ | 0.684 46 | 0.728 72 | 0.757 53 | 0.776 50 | 0.690 36 | 0.804 65 | 0.464 54 | 0.816 37 | 0.577 58 | 0.587 49 | 0.945 61 | 0.508 42 | 0.276 44 | 0.671 35 | 0.710 61 | 0.663 58 | 0.750 56 | 0.589 58 | 0.881 55 | 0.832 43 | 0.653 46 | |
SAFNet-seg | 0.654 61 | 0.752 59 | 0.734 67 | 0.664 81 | 0.583 72 | 0.815 57 | 0.399 82 | 0.754 56 | 0.639 28 | 0.535 62 | 0.942 71 | 0.470 55 | 0.309 27 | 0.665 36 | 0.539 83 | 0.650 62 | 0.708 70 | 0.635 35 | 0.857 77 | 0.793 68 | 0.642 50 | |
Linqing Zhao, Jiwen Lu, Jie Zhou: Similarity-Aware Fusion Network for 3D Semantic Segmentation. IROS 2021 | ||||||||||||||||||||||
PD-Net | 0.638 67 | 0.797 38 | 0.769 46 | 0.641 91 | 0.590 68 | 0.820 51 | 0.461 55 | 0.537 98 | 0.637 29 | 0.536 61 | 0.947 55 | 0.388 88 | 0.206 75 | 0.656 37 | 0.668 70 | 0.647 66 | 0.732 62 | 0.585 60 | 0.868 72 | 0.793 68 | 0.473 101 | |
wsss-transformer | 0.600 84 | 0.634 90 | 0.743 63 | 0.697 70 | 0.601 65 | 0.781 76 | 0.437 72 | 0.585 91 | 0.493 82 | 0.446 86 | 0.933 86 | 0.394 86 | 0.011 111 | 0.654 38 | 0.661 73 | 0.603 79 | 0.733 61 | 0.526 86 | 0.832 82 | 0.761 86 | 0.480 98 | |
MVPNet | 0.641 63 | 0.831 28 | 0.715 70 | 0.671 78 | 0.590 68 | 0.781 76 | 0.394 84 | 0.679 76 | 0.642 26 | 0.553 55 | 0.937 78 | 0.462 58 | 0.256 57 | 0.649 39 | 0.406 97 | 0.626 73 | 0.691 77 | 0.666 26 | 0.877 59 | 0.792 71 | 0.608 61 | |
Maximilian Jaritz, Jiayuan Gu, Hao Su: Multi-view PointNet for 3D Scene Understanding. GMDL Workshop, ICCV 2019 | ||||||||||||||||||||||
PPCNN++ | 0.663 57 | 0.746 62 | 0.708 72 | 0.722 61 | 0.638 56 | 0.820 51 | 0.451 58 | 0.566 94 | 0.599 47 | 0.541 58 | 0.950 47 | 0.510 41 | 0.313 25 | 0.648 40 | 0.819 43 | 0.616 77 | 0.682 80 | 0.590 57 | 0.869 71 | 0.810 56 | 0.656 45 | |
Pyunghwan Ahn, Juyoung Yang, Eojindl Yi, Chanho Lee, Junmo Kim: Projection-based Point Convolution for Efficient Point Cloud Segmentation. IEEE Access | ||||||||||||||||||||||
PointConv-SFPN | 0.641 63 | 0.776 47 | 0.703 74 | 0.721 62 | 0.557 80 | 0.826 43 | 0.451 58 | 0.672 79 | 0.563 65 | 0.483 77 | 0.943 70 | 0.425 78 | 0.162 94 | 0.644 41 | 0.726 57 | 0.659 60 | 0.709 69 | 0.572 62 | 0.875 61 | 0.786 76 | 0.559 81 | |
Supervoxel-CNN | 0.635 70 | 0.656 86 | 0.711 71 | 0.719 63 | 0.613 61 | 0.757 87 | 0.444 68 | 0.765 52 | 0.534 70 | 0.566 52 | 0.928 89 | 0.478 52 | 0.272 47 | 0.636 42 | 0.531 85 | 0.664 57 | 0.645 91 | 0.508 89 | 0.864 74 | 0.792 71 | 0.611 58 | |
PointASNL | 0.666 55 | 0.703 79 | 0.781 39 | 0.751 59 | 0.655 47 | 0.830 39 | 0.471 49 | 0.769 51 | 0.474 88 | 0.537 60 | 0.951 43 | 0.475 53 | 0.279 43 | 0.635 43 | 0.698 66 | 0.675 53 | 0.751 54 | 0.553 74 | 0.816 86 | 0.806 57 | 0.703 32 | |
Xu Yan, Chaoda Zheng, Zhen Li, Sheng Wang, Shuguang Cui: PointASNL: Robust Point Clouds Processing using Nonlocal Neural Networks with Adaptive Sampling. CVPR 2020 | ||||||||||||||||||||||
APCF-Net | 0.631 73 | 0.742 65 | 0.687 88 | 0.672 76 | 0.557 80 | 0.792 73 | 0.408 78 | 0.665 80 | 0.545 68 | 0.508 71 | 0.952 41 | 0.428 75 | 0.186 85 | 0.634 44 | 0.702 64 | 0.620 74 | 0.706 71 | 0.555 73 | 0.873 65 | 0.798 63 | 0.581 71 | |
Haojia Lin: Adaptive Pyramid Context Fusion for Point Cloud Perception. GRSL | ||||||||||||||||||||||
FusionNet | 0.688 44 | 0.704 78 | 0.741 65 | 0.754 57 | 0.656 46 | 0.829 40 | 0.501 35 | 0.741 61 | 0.609 41 | 0.548 56 | 0.950 47 | 0.522 37 | 0.371 4 | 0.633 45 | 0.756 51 | 0.715 36 | 0.771 40 | 0.623 41 | 0.861 75 | 0.814 53 | 0.658 44 | |
Feihu Zhang, Jin Fang, Benjamin Wah, Philip Torr: Deep FusionNet for Point Cloud Semantic Segmentation. ECCV 2020 | ||||||||||||||||||||||
contrastBoundary | 0.705 36 | 0.769 54 | 0.775 41 | 0.809 33 | 0.687 39 | 0.820 51 | 0.439 71 | 0.812 41 | 0.661 20 | 0.591 48 | 0.945 61 | 0.515 39 | 0.171 89 | 0.633 45 | 0.856 28 | 0.720 32 | 0.796 23 | 0.668 25 | 0.889 50 | 0.847 36 | 0.689 35 | |
Liyao Tang, Yibing Zhan, Zhe Chen, Baosheng Yu, Dacheng Tao: Contrastive Boundary Learning for Point Cloud Segmentation. CVPR 2022 | ||||||||||||||||||||||
PointSPNet | 0.637 68 | 0.734 68 | 0.692 83 | 0.714 65 | 0.576 74 | 0.797 69 | 0.446 63 | 0.743 60 | 0.598 48 | 0.437 89 | 0.942 71 | 0.403 84 | 0.150 98 | 0.626 47 | 0.800 48 | 0.649 63 | 0.697 74 | 0.557 72 | 0.846 80 | 0.777 80 | 0.563 79 | |
IPCA | 0.731 30 | 0.890 14 | 0.837 3 | 0.864 2 | 0.726 30 | 0.873 5 | 0.530 25 | 0.824 35 | 0.489 85 | 0.647 19 | 0.978 4 | 0.609 4 | 0.336 15 | 0.624 48 | 0.733 56 | 0.758 19 | 0.776 36 | 0.570 63 | 0.949 8 | 0.877 13 | 0.728 18 | |
RPN | 0.736 28 | 0.776 47 | 0.790 34 | 0.851 7 | 0.754 18 | 0.854 13 | 0.491 44 | 0.866 18 | 0.596 49 | 0.686 7 | 0.955 28 | 0.536 30 | 0.342 13 | 0.624 48 | 0.869 21 | 0.787 9 | 0.802 15 | 0.628 38 | 0.927 24 | 0.875 17 | 0.704 31 | |
PointTransformer++ | 0.725 31 | 0.727 73 | 0.811 19 | 0.819 26 | 0.765 12 | 0.841 28 | 0.502 34 | 0.814 40 | 0.621 35 | 0.623 32 | 0.955 28 | 0.556 24 | 0.284 39 | 0.620 50 | 0.866 22 | 0.781 11 | 0.757 52 | 0.648 29 | 0.932 22 | 0.862 23 | 0.709 29 | |
INS-Conv-semantic | 0.717 34 | 0.751 60 | 0.759 50 | 0.812 30 | 0.704 34 | 0.868 7 | 0.537 21 | 0.842 27 | 0.609 41 | 0.608 38 | 0.953 37 | 0.534 32 | 0.293 33 | 0.616 51 | 0.864 23 | 0.719 34 | 0.793 27 | 0.640 33 | 0.933 20 | 0.845 39 | 0.663 43 | |
ROSMRF3D | 0.673 52 | 0.789 40 | 0.748 58 | 0.763 55 | 0.635 57 | 0.814 58 | 0.407 80 | 0.747 58 | 0.581 56 | 0.573 51 | 0.950 47 | 0.484 49 | 0.271 49 | 0.607 52 | 0.754 52 | 0.649 63 | 0.774 37 | 0.596 53 | 0.883 53 | 0.823 47 | 0.606 62 | |
3DSM_DMMF | 0.631 73 | 0.626 91 | 0.745 61 | 0.801 40 | 0.607 62 | 0.751 88 | 0.506 32 | 0.729 65 | 0.565 63 | 0.491 76 | 0.866 106 | 0.434 70 | 0.197 82 | 0.595 53 | 0.630 75 | 0.709 38 | 0.705 72 | 0.560 69 | 0.875 61 | 0.740 91 | 0.491 96 | |
RFCR | 0.702 38 | 0.889 15 | 0.745 61 | 0.813 29 | 0.672 43 | 0.818 55 | 0.493 42 | 0.815 39 | 0.623 33 | 0.610 36 | 0.947 55 | 0.470 55 | 0.249 60 | 0.594 54 | 0.848 35 | 0.705 40 | 0.779 35 | 0.646 30 | 0.892 48 | 0.823 47 | 0.611 58 | |
Jingyu Gong, Jiachen Xu, Xin Tan, Haichuan Song, Yanyun Qu, Yuan Xie, Lizhuang Ma: Omni-Supervised Point Cloud Segmentation via Gradual Receptive Field Component Reasoning. CVPR 2021 | ||||||||||||||||||||||
PointMTL | 0.632 72 | 0.731 70 | 0.688 86 | 0.675 75 | 0.591 67 | 0.784 75 | 0.444 68 | 0.565 95 | 0.610 39 | 0.492 75 | 0.949 51 | 0.456 61 | 0.254 58 | 0.587 55 | 0.706 62 | 0.599 81 | 0.665 86 | 0.612 47 | 0.868 72 | 0.791 74 | 0.579 72 | |
KP-FCNN | 0.684 46 | 0.847 25 | 0.758 52 | 0.784 47 | 0.647 51 | 0.814 58 | 0.473 48 | 0.772 50 | 0.605 43 | 0.594 47 | 0.935 81 | 0.450 66 | 0.181 87 | 0.587 55 | 0.805 46 | 0.690 48 | 0.785 33 | 0.614 44 | 0.882 54 | 0.819 51 | 0.632 54 | |
H. Thomas, C. Qi, J. Deschaud, B. Marcotegui, F. Goulette, L. Guibas: KPConv: Flexible and Deformable Convolution for Point Clouds. ICCV 2019 | ||||||||||||||||||||||
PointConv | 0.666 55 | 0.781 44 | 0.759 50 | 0.699 68 | 0.644 54 | 0.822 48 | 0.475 47 | 0.779 48 | 0.564 64 | 0.504 74 | 0.953 37 | 0.428 75 | 0.203 78 | 0.586 57 | 0.754 52 | 0.661 59 | 0.753 53 | 0.588 59 | 0.902 40 | 0.813 55 | 0.642 50 | |
Wenxuan Wu, Zhongang Qi, Li Fuxin: PointConv: Deep Convolutional Networks on 3D Point Clouds. CVPR 2019 | ||||||||||||||||||||||
PointMRNet | 0.640 65 | 0.717 76 | 0.701 76 | 0.692 71 | 0.576 74 | 0.801 66 | 0.467 53 | 0.716 67 | 0.563 65 | 0.459 84 | 0.953 37 | 0.429 74 | 0.169 91 | 0.581 58 | 0.854 31 | 0.605 78 | 0.710 67 | 0.550 76 | 0.894 47 | 0.793 68 | 0.575 73 | |
JSENet | 0.699 40 | 0.881 17 | 0.762 48 | 0.821 24 | 0.667 44 | 0.800 67 | 0.522 27 | 0.792 47 | 0.613 37 | 0.607 39 | 0.935 81 | 0.492 45 | 0.205 76 | 0.576 59 | 0.853 32 | 0.691 47 | 0.758 50 | 0.652 28 | 0.872 67 | 0.828 44 | 0.649 47 | |
Zeyu Hu, Mingmin Zhen, Xuyang Bai, Hongbo Fu, Chiew-Lan Tai: JSENet: Joint Semantic Segmentation and Edge Detection Network for 3D Point Clouds. ECCV 2020 | ||||||||||||||||||||||
DGNet | 0.684 46 | 0.712 77 | 0.784 37 | 0.782 49 | 0.658 45 | 0.835 35 | 0.499 39 | 0.823 36 | 0.641 27 | 0.597 45 | 0.950 47 | 0.487 48 | 0.281 41 | 0.575 60 | 0.619 76 | 0.647 66 | 0.764 44 | 0.620 43 | 0.871 70 | 0.846 38 | 0.688 36 | |
HPGCNN | 0.656 60 | 0.698 81 | 0.743 63 | 0.650 85 | 0.564 77 | 0.820 51 | 0.505 33 | 0.758 54 | 0.631 31 | 0.479 78 | 0.945 61 | 0.480 51 | 0.226 65 | 0.572 61 | 0.774 50 | 0.690 48 | 0.735 60 | 0.614 44 | 0.853 78 | 0.776 81 | 0.597 68 | |
Jisheng Dang, Qingyong Hu, Yulan Guo, Jun Yang: HPGCNN. | ||||||||||||||||||||||
SALANet | 0.670 53 | 0.816 34 | 0.770 45 | 0.768 52 | 0.652 49 | 0.807 62 | 0.451 58 | 0.747 58 | 0.659 22 | 0.545 57 | 0.924 91 | 0.473 54 | 0.149 99 | 0.571 62 | 0.811 45 | 0.635 72 | 0.746 57 | 0.623 41 | 0.892 48 | 0.794 66 | 0.570 75 | |
joint point-based | 0.634 71 | 0.614 94 | 0.778 40 | 0.667 80 | 0.633 58 | 0.825 44 | 0.420 76 | 0.804 42 | 0.467 90 | 0.561 53 | 0.951 43 | 0.494 44 | 0.291 35 | 0.566 63 | 0.458 92 | 0.579 88 | 0.764 44 | 0.559 71 | 0.838 81 | 0.814 53 | 0.598 67 | |
Hung-Yueh Chiang, Yen-Liang Lin, Yueh-Cheng Liu, Winston H. Hsu: A Unified Point-Based Framework for 3D Segmentation. 3DV 2019 | ||||||||||||||||||||||
SegGroup_sem | 0.627 78 | 0.818 33 | 0.747 60 | 0.701 67 | 0.602 64 | 0.764 84 | 0.385 89 | 0.629 85 | 0.490 83 | 0.508 71 | 0.931 88 | 0.409 83 | 0.201 79 | 0.564 64 | 0.725 58 | 0.618 75 | 0.692 76 | 0.539 83 | 0.873 65 | 0.794 66 | 0.548 85 | |
An Tao, Yueqi Duan, Yi Wei, Jiwen Lu, Jie Zhou: SegGroup: Seg-Level Supervision for 3D Instance and Semantic Segmentation. TIP 2022 | ||||||||||||||||||||||
Feature_GeometricNet | 0.690 43 | 0.884 16 | 0.754 54 | 0.795 42 | 0.647 51 | 0.818 55 | 0.422 75 | 0.802 44 | 0.612 38 | 0.604 40 | 0.945 61 | 0.462 58 | 0.189 84 | 0.563 65 | 0.853 32 | 0.726 28 | 0.765 43 | 0.632 36 | 0.904 37 | 0.821 50 | 0.606 62 | |
Kangcheng Liu, Ben M. Chen: https://arxiv.org/abs/2012.09439. arXiv Preprint | ||||||||||||||||||||||
One Thing One Click | 0.701 39 | 0.825 31 | 0.796 29 | 0.723 60 | 0.716 32 | 0.832 38 | 0.433 73 | 0.816 37 | 0.634 30 | 0.609 37 | 0.969 10 | 0.418 81 | 0.344 12 | 0.559 66 | 0.833 39 | 0.715 36 | 0.808 13 | 0.560 69 | 0.902 40 | 0.847 36 | 0.680 38 | |
SD-DETR | 0.576 89 | 0.746 62 | 0.609 103 | 0.445 108 | 0.517 87 | 0.643 103 | 0.366 91 | 0.714 69 | 0.456 93 | 0.468 82 | 0.870 105 | 0.432 71 | 0.264 54 | 0.558 67 | 0.674 68 | 0.586 87 | 0.688 78 | 0.482 95 | 0.739 96 | 0.733 93 | 0.537 87 | |
Feature-Geometry Net | 0.685 45 | 0.866 19 | 0.748 58 | 0.819 26 | 0.645 53 | 0.794 70 | 0.450 61 | 0.802 44 | 0.587 52 | 0.604 40 | 0.945 61 | 0.464 57 | 0.201 79 | 0.554 68 | 0.840 37 | 0.723 31 | 0.732 62 | 0.602 51 | 0.907 35 | 0.822 49 | 0.603 65 | |
PointNet2-SFPN | 0.631 73 | 0.771 51 | 0.692 83 | 0.672 76 | 0.524 85 | 0.837 32 | 0.440 70 | 0.706 72 | 0.538 69 | 0.446 86 | 0.944 67 | 0.421 80 | 0.219 70 | 0.552 69 | 0.751 54 | 0.591 84 | 0.737 59 | 0.543 81 | 0.901 42 | 0.768 83 | 0.557 82 | |
PointContrast_LA_SEM | 0.683 49 | 0.757 58 | 0.784 37 | 0.786 45 | 0.639 55 | 0.824 46 | 0.408 78 | 0.775 49 | 0.604 44 | 0.541 58 | 0.934 85 | 0.532 33 | 0.269 51 | 0.552 69 | 0.777 49 | 0.645 69 | 0.793 27 | 0.640 33 | 0.913 34 | 0.824 46 | 0.671 40 | |
FusionAwareConv | 0.630 76 | 0.604 96 | 0.741 65 | 0.766 54 | 0.590 68 | 0.747 89 | 0.501 35 | 0.734 63 | 0.503 80 | 0.527 64 | 0.919 95 | 0.454 62 | 0.323 22 | 0.550 71 | 0.420 96 | 0.678 52 | 0.688 78 | 0.544 79 | 0.896 45 | 0.795 65 | 0.627 56 | |
Jiazhao Zhang, Chenyang Zhu, Lintao Zheng, Kai Xu: Fusion-Aware Point Convolution for Online Semantic 3D Scene Segmentation. CVPR 2020 | ||||||||||||||||||||||
PicassoNet-II | 0.692 42 | 0.732 69 | 0.772 42 | 0.786 45 | 0.677 42 | 0.866 8 | 0.517 29 | 0.848 24 | 0.509 78 | 0.626 30 | 0.952 41 | 0.536 30 | 0.225 67 | 0.545 72 | 0.704 63 | 0.689 50 | 0.810 12 | 0.564 68 | 0.903 39 | 0.854 32 | 0.729 17 | |
Huan Lei, Naveed Akhtar, Mubarak Shah, and Ajmal Mian: Geometric Feature Learning for 3D Meshes. | ||||||||||||||||||||||
FPConv | 0.639 66 | 0.785 42 | 0.760 49 | 0.713 66 | 0.603 63 | 0.798 68 | 0.392 85 | 0.534 99 | 0.603 45 | 0.524 66 | 0.948 53 | 0.457 60 | 0.250 59 | 0.538 73 | 0.723 59 | 0.598 82 | 0.696 75 | 0.614 44 | 0.872 67 | 0.799 61 | 0.567 78 | |
Yiqun Lin, Zizheng Yan, Haibin Huang, Dong Du, Ligang Liu, Shuguang Cui, Xiaoguang Han: FPConv: Learning Local Flattening for Point Convolution. CVPR 2020 | ||||||||||||||||||||||
3DMV | 0.484 101 | 0.484 107 | 0.538 108 | 0.643 90 | 0.424 99 | 0.606 108 | 0.310 97 | 0.574 93 | 0.433 99 | 0.378 95 | 0.796 109 | 0.301 100 | 0.214 73 | 0.537 74 | 0.208 108 | 0.472 100 | 0.507 109 | 0.413 106 | 0.693 100 | 0.602 108 | 0.539 86 | |
Angela Dai, Matthias Niessner: 3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation. ECCV 2018 | ||||||||||||||||||||||
VI-PointConv | 0.676 51 | 0.770 53 | 0.754 54 | 0.783 48 | 0.621 59 | 0.814 58 | 0.552 13 | 0.758 54 | 0.571 61 | 0.557 54 | 0.954 34 | 0.529 34 | 0.268 53 | 0.530 75 | 0.682 67 | 0.675 53 | 0.719 65 | 0.603 50 | 0.888 51 | 0.833 41 | 0.665 42 | |
Xingyi Li, Wenxuan Wu, Xiaoli Z. Fern, Li Fuxin: The Devils in the Point Clouds: Studying the Robustness of Point Cloud Convolutions. | ||||||||||||||||||||||
Online SegFusion | 0.515 97 | 0.607 95 | 0.644 97 | 0.579 98 | 0.434 98 | 0.630 105 | 0.353 93 | 0.628 86 | 0.440 96 | 0.410 92 | 0.762 111 | 0.307 99 | 0.167 92 | 0.520 76 | 0.403 98 | 0.516 93 | 0.565 99 | 0.447 101 | 0.678 102 | 0.701 98 | 0.514 93 | |
Davide Menini, Suryansh Kumar, Martin R. Oswald, Erik Sandstroem, Cristian Sminchisescu, Luc van Gool: A Real-Time Learning Framework for Joint 3D Reconstruction and Semantic Segmentation. Robotics and Automation Letters Submission | ||||||||||||||||||||||
Superpoint Network | 0.683 49 | 0.851 24 | 0.728 69 | 0.800 41 | 0.653 48 | 0.806 63 | 0.468 51 | 0.804 42 | 0.572 59 | 0.602 42 | 0.946 58 | 0.453 65 | 0.239 64 | 0.519 77 | 0.822 41 | 0.689 50 | 0.762 47 | 0.595 55 | 0.895 46 | 0.827 45 | 0.630 55 | |
CCRFNet | 0.589 87 | 0.766 55 | 0.659 94 | 0.683 73 | 0.470 95 | 0.740 91 | 0.387 88 | 0.620 87 | 0.490 83 | 0.476 79 | 0.922 93 | 0.355 94 | 0.245 62 | 0.511 78 | 0.511 88 | 0.571 89 | 0.643 92 | 0.493 93 | 0.872 67 | 0.762 85 | 0.600 66 | |
DCM-Net | 0.658 58 | 0.778 45 | 0.702 75 | 0.806 37 | 0.619 60 | 0.813 61 | 0.468 51 | 0.693 74 | 0.494 81 | 0.524 66 | 0.941 73 | 0.449 67 | 0.298 31 | 0.510 79 | 0.821 42 | 0.675 53 | 0.727 64 | 0.568 66 | 0.826 84 | 0.803 60 | 0.637 52 | |
Jonas Schult*, Francis Engelmann*, Theodora Kontogianni, Bastian Leibe: DualConvMesh-Net: Joint Geodesic and Euclidean Convolutions on 3D Meshes. CVPR 2020 [Oral] | ||||||||||||||||||||||
Weakly-Openseg v3 | 0.489 100 | 0.749 61 | 0.664 91 | 0.646 89 | 0.496 89 | 0.559 109 | 0.122 112 | 0.577 92 | 0.257 112 | 0.364 98 | 0.805 108 | 0.198 110 | 0.096 104 | 0.510 79 | 0.496 91 | 0.361 107 | 0.563 100 | 0.359 110 | 0.777 90 | 0.644 105 | 0.532 90 | |
SPH3D-GCN | 0.610 82 | 0.858 23 | 0.772 42 | 0.489 104 | 0.532 84 | 0.792 73 | 0.404 81 | 0.643 84 | 0.570 62 | 0.507 73 | 0.935 81 | 0.414 82 | 0.046 109 | 0.510 79 | 0.702 64 | 0.602 80 | 0.705 72 | 0.549 77 | 0.859 76 | 0.773 82 | 0.534 88 | |
Huan Lei, Naveed Akhtar, and Ajmal Mian: Spherical Kernel for Efficient Graph Convolution on 3D Point Clouds. TPAMI 2020 | ||||||||||||||||||||||
SConv | 0.636 69 | 0.830 29 | 0.697 79 | 0.752 58 | 0.572 76 | 0.780 78 | 0.445 65 | 0.716 67 | 0.529 71 | 0.530 63 | 0.951 43 | 0.446 69 | 0.170 90 | 0.507 82 | 0.666 71 | 0.636 71 | 0.682 80 | 0.541 82 | 0.886 52 | 0.799 61 | 0.594 69 | |
PanopticFusion-label | 0.529 95 | 0.491 106 | 0.688 86 | 0.604 95 | 0.386 101 | 0.632 104 | 0.225 109 | 0.705 73 | 0.434 98 | 0.293 105 | 0.815 107 | 0.348 95 | 0.241 63 | 0.499 83 | 0.669 69 | 0.507 94 | 0.649 89 | 0.442 103 | 0.796 88 | 0.602 108 | 0.561 80 | |
Gaku Narita, Takashi Seno, Tomoya Ishikawa, Yohsuke Kaji: PanopticFusion: Online Volumetric Semantic Mapping at the Level of Stuff and Things. IROS 2019 (to appear) | ||||||||||||||||||||||
SIConv | 0.625 79 | 0.830 29 | 0.694 81 | 0.757 56 | 0.563 78 | 0.772 82 | 0.448 62 | 0.647 83 | 0.520 74 | 0.509 70 | 0.949 51 | 0.431 73 | 0.191 83 | 0.496 84 | 0.614 77 | 0.647 66 | 0.672 84 | 0.535 85 | 0.876 60 | 0.783 77 | 0.571 74 | |
HPEIN | 0.618 81 | 0.729 71 | 0.668 89 | 0.647 87 | 0.597 66 | 0.766 83 | 0.414 77 | 0.680 75 | 0.520 74 | 0.525 65 | 0.946 58 | 0.432 71 | 0.215 72 | 0.493 85 | 0.599 78 | 0.638 70 | 0.617 96 | 0.570 63 | 0.897 44 | 0.806 57 | 0.605 64 | |
Li Jiang, Hengshuang Zhao, Shu Liu, Xiaoyong Shen, Chi-Wing Fu, Jiaya Jia: Hierarchical Point-Edge Interaction Network for Point Cloud Semantic Segmentation. ICCV 2019 | ||||||||||||||||||||||
DenSeR | 0.628 77 | 0.800 37 | 0.625 99 | 0.719 63 | 0.545 82 | 0.806 63 | 0.445 65 | 0.597 88 | 0.448 95 | 0.519 69 | 0.938 77 | 0.481 50 | 0.328 20 | 0.489 86 | 0.499 90 | 0.657 61 | 0.759 49 | 0.592 56 | 0.881 55 | 0.797 64 | 0.634 53 | |
RandLA-Net | 0.645 62 | 0.778 45 | 0.731 68 | 0.699 68 | 0.577 73 | 0.829 40 | 0.446 63 | 0.736 62 | 0.477 87 | 0.523 68 | 0.945 61 | 0.454 62 | 0.269 51 | 0.484 87 | 0.749 55 | 0.618 75 | 0.738 58 | 0.599 52 | 0.827 83 | 0.792 71 | 0.621 57 | |
GMLPs | 0.538 94 | 0.495 105 | 0.693 82 | 0.647 87 | 0.471 94 | 0.793 71 | 0.300 99 | 0.477 101 | 0.505 79 | 0.358 99 | 0.903 101 | 0.327 97 | 0.081 106 | 0.472 88 | 0.529 86 | 0.448 101 | 0.710 67 | 0.509 87 | 0.746 94 | 0.737 92 | 0.554 84 | |
dtc_net | 0.625 79 | 0.703 79 | 0.751 56 | 0.794 43 | 0.535 83 | 0.848 22 | 0.480 46 | 0.676 78 | 0.528 72 | 0.469 81 | 0.944 67 | 0.454 62 | 0.004 112 | 0.464 89 | 0.636 74 | 0.704 41 | 0.758 50 | 0.548 78 | 0.924 26 | 0.787 75 | 0.492 95 | |
AttAN | 0.609 83 | 0.760 56 | 0.667 90 | 0.649 86 | 0.521 86 | 0.793 71 | 0.457 56 | 0.648 82 | 0.528 72 | 0.434 91 | 0.947 55 | 0.401 85 | 0.153 97 | 0.454 90 | 0.721 60 | 0.648 65 | 0.717 66 | 0.536 84 | 0.904 37 | 0.765 84 | 0.485 97 | |
Gege Zhang, Qinghua Ma, Licheng Jiao, Fang Liu and Qigong Sun: AttAN: Attention Adversarial Networks for 3D Point Cloud Semantic Segmentation. IJCAI 2020 | ||||||||||||||||||||||
ROSMRF | 0.580 88 | 0.772 50 | 0.707 73 | 0.681 74 | 0.563 78 | 0.764 84 | 0.362 92 | 0.515 100 | 0.465 91 | 0.465 83 | 0.936 80 | 0.427 77 | 0.207 74 | 0.438 91 | 0.577 80 | 0.536 92 | 0.675 83 | 0.486 94 | 0.723 98 | 0.779 78 | 0.524 91 | |
3DMV, FTSDF | 0.501 98 | 0.558 100 | 0.608 104 | 0.424 110 | 0.478 93 | 0.690 96 | 0.246 105 | 0.586 90 | 0.468 89 | 0.450 85 | 0.911 97 | 0.394 86 | 0.160 95 | 0.438 91 | 0.212 107 | 0.432 102 | 0.541 105 | 0.475 96 | 0.742 95 | 0.727 94 | 0.477 99 | |
LAP-D | 0.594 85 | 0.720 74 | 0.692 83 | 0.637 92 | 0.456 96 | 0.773 81 | 0.391 87 | 0.730 64 | 0.587 52 | 0.445 88 | 0.940 75 | 0.381 89 | 0.288 36 | 0.434 93 | 0.453 94 | 0.591 84 | 0.649 89 | 0.581 61 | 0.777 90 | 0.749 90 | 0.610 60 | |
DPC | 0.592 86 | 0.720 74 | 0.700 77 | 0.602 96 | 0.480 92 | 0.762 86 | 0.380 90 | 0.713 70 | 0.585 55 | 0.437 89 | 0.940 75 | 0.369 91 | 0.288 36 | 0.434 93 | 0.509 89 | 0.590 86 | 0.639 94 | 0.567 67 | 0.772 92 | 0.755 88 | 0.592 70 | |
Francis Engelmann, Theodora Kontogianni, Bastian Leibe: Dilated Point Convolutions: On the Receptive Field Size of Point Convolutions on 3D Point Clouds. ICRA 2020 | ||||||||||||||||||||||
TextureNet | 0.566 91 | 0.672 85 | 0.664 91 | 0.671 78 | 0.494 90 | 0.719 93 | 0.445 65 | 0.678 77 | 0.411 101 | 0.396 94 | 0.935 81 | 0.356 93 | 0.225 67 | 0.412 95 | 0.535 84 | 0.565 90 | 0.636 95 | 0.464 97 | 0.794 89 | 0.680 101 | 0.568 77 | |
Jingwei Huang, Haotian Zhang, Li Yi, Thomas Funkhouser, Matthias Nießner, Leonidas Guibas: TextureNet: Consistent Local Parametrizations for Learning from High-Resolution Signals on Meshes. CVPR | ||||||||||||||||||||||
Pointnet++ & Feature | 0.557 93 | 0.735 67 | 0.661 93 | 0.686 72 | 0.491 91 | 0.744 90 | 0.392 85 | 0.539 97 | 0.451 94 | 0.375 97 | 0.946 58 | 0.376 90 | 0.205 76 | 0.403 96 | 0.356 100 | 0.553 91 | 0.643 92 | 0.497 91 | 0.824 85 | 0.756 87 | 0.515 92 | |
O3DSeg | 0.668 54 | 0.822 32 | 0.771 44 | 0.496 103 | 0.651 50 | 0.833 37 | 0.541 18 | 0.761 53 | 0.555 67 | 0.611 35 | 0.966 13 | 0.489 47 | 0.370 5 | 0.388 97 | 0.580 79 | 0.776 13 | 0.751 54 | 0.570 63 | 0.956 6 | 0.817 52 | 0.646 49 | |
SQN_0.1% | 0.569 90 | 0.676 83 | 0.696 80 | 0.657 82 | 0.497 88 | 0.779 79 | 0.424 74 | 0.548 96 | 0.515 76 | 0.376 96 | 0.902 102 | 0.422 79 | 0.357 8 | 0.379 98 | 0.456 93 | 0.596 83 | 0.659 87 | 0.544 79 | 0.685 101 | 0.665 104 | 0.556 83 | |
SurfaceConvPF | 0.442 105 | 0.505 104 | 0.622 101 | 0.380 111 | 0.342 107 | 0.654 100 | 0.227 108 | 0.397 104 | 0.367 104 | 0.276 107 | 0.924 91 | 0.240 107 | 0.198 81 | 0.359 99 | 0.262 103 | 0.366 105 | 0.581 97 | 0.435 104 | 0.640 104 | 0.668 102 | 0.398 103 | |
Hao Pan, Shilin Liu, Yang Liu, Xin Tong: Convolutional Neural Networks on 3D Surfaces Using Parallel Frames. | ||||||||||||||||||||||
3DWSSS | 0.425 108 | 0.525 103 | 0.647 95 | 0.522 101 | 0.324 108 | 0.488 112 | 0.077 113 | 0.712 71 | 0.353 105 | 0.401 93 | 0.636 113 | 0.281 103 | 0.176 88 | 0.340 100 | 0.565 82 | 0.175 113 | 0.551 103 | 0.398 107 | 0.370 113 | 0.602 108 | 0.361 106 | |
DVVNet | 0.562 92 | 0.648 87 | 0.700 77 | 0.770 51 | 0.586 71 | 0.687 97 | 0.333 96 | 0.650 81 | 0.514 77 | 0.475 80 | 0.906 99 | 0.359 92 | 0.223 69 | 0.340 100 | 0.442 95 | 0.422 103 | 0.668 85 | 0.501 90 | 0.708 99 | 0.779 78 | 0.534 88 | |
subcloud_weak | 0.516 96 | 0.676 83 | 0.591 106 | 0.609 93 | 0.442 97 | 0.774 80 | 0.335 95 | 0.597 88 | 0.422 100 | 0.357 100 | 0.932 87 | 0.341 96 | 0.094 105 | 0.298 102 | 0.528 87 | 0.473 99 | 0.676 82 | 0.495 92 | 0.602 107 | 0.721 96 | 0.349 108 | |
DGCNN_reproduce | 0.446 104 | 0.474 108 | 0.623 100 | 0.463 106 | 0.366 104 | 0.651 101 | 0.310 97 | 0.389 105 | 0.349 106 | 0.330 102 | 0.937 78 | 0.271 104 | 0.126 101 | 0.285 103 | 0.224 106 | 0.350 109 | 0.577 98 | 0.445 102 | 0.625 105 | 0.723 95 | 0.394 104 | |
Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma, Michael M. Bronstein, Justin M. Solomon: Dynamic Graph CNN for Learning on Point Clouds. TOG 2019 | ||||||||||||||||||||||
Tangent Convolutions | 0.438 107 | 0.437 110 | 0.646 96 | 0.474 105 | 0.369 103 | 0.645 102 | 0.353 93 | 0.258 109 | 0.282 110 | 0.279 106 | 0.918 96 | 0.298 101 | 0.147 100 | 0.283 104 | 0.294 102 | 0.487 96 | 0.562 101 | 0.427 105 | 0.619 106 | 0.633 106 | 0.352 107 | |
Maxim Tatarchenko, Jaesik Park, Vladlen Koltun, Qian-Yi Zhou: Tangent Convolutions for Dense Prediction in 3D. CVPR 2018 | ||||||||||||||||||||||
ScanNet+FTSDF | 0.383 110 | 0.297 112 | 0.491 110 | 0.432 109 | 0.358 106 | 0.612 107 | 0.274 103 | 0.116 111 | 0.411 101 | 0.265 108 | 0.904 100 | 0.229 108 | 0.079 107 | 0.250 105 | 0.185 110 | 0.320 110 | 0.510 107 | 0.385 108 | 0.548 109 | 0.597 111 | 0.394 104 | |
ScanNet | 0.306 113 | 0.203 113 | 0.366 112 | 0.501 102 | 0.311 110 | 0.524 111 | 0.211 110 | 0.002 114 | 0.342 107 | 0.189 112 | 0.786 110 | 0.145 113 | 0.102 103 | 0.245 106 | 0.152 111 | 0.318 111 | 0.348 112 | 0.300 112 | 0.460 112 | 0.437 113 | 0.182 113 | |
Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, Matthias Nießner: ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes. CVPR'17 | ||||||||||||||||||||||
PCNN | 0.498 99 | 0.559 99 | 0.644 97 | 0.560 100 | 0.420 100 | 0.711 95 | 0.229 107 | 0.414 102 | 0.436 97 | 0.352 101 | 0.941 73 | 0.324 98 | 0.155 96 | 0.238 107 | 0.387 99 | 0.493 95 | 0.529 106 | 0.509 87 | 0.813 87 | 0.751 89 | 0.504 94 | |
PNET2 | 0.442 105 | 0.548 102 | 0.548 107 | 0.597 97 | 0.363 105 | 0.628 106 | 0.300 99 | 0.292 107 | 0.374 103 | 0.307 104 | 0.881 104 | 0.268 105 | 0.186 85 | 0.238 107 | 0.204 109 | 0.407 104 | 0.506 110 | 0.449 100 | 0.667 103 | 0.620 107 | 0.462 102 | |
FCPN | 0.447 103 | 0.679 82 | 0.604 105 | 0.578 99 | 0.380 102 | 0.682 98 | 0.291 102 | 0.106 112 | 0.483 86 | 0.258 110 | 0.920 94 | 0.258 106 | 0.025 110 | 0.231 109 | 0.325 101 | 0.480 98 | 0.560 102 | 0.463 98 | 0.725 97 | 0.666 103 | 0.231 112 | |
Dario Rethage, Johanna Wald, Jürgen Sturm, Nassir Navab, Federico Tombari: Fully-Convolutional Point Networks for Large-Scale Point Clouds. ECCV 2018 | ||||||||||||||||||||||
PointCNN with RGB | 0.458 102 | 0.577 98 | 0.611 102 | 0.356 112 | 0.321 109 | 0.715 94 | 0.299 101 | 0.376 106 | 0.328 108 | 0.319 103 | 0.944 67 | 0.285 102 | 0.164 93 | 0.216 110 | 0.229 105 | 0.484 97 | 0.545 104 | 0.456 99 | 0.755 93 | 0.709 97 | 0.475 100 | |
Yangyan Li, Rui Bu, Mingchao Sun, Baoquan Chen: PointCNN. NeurIPS 2018 | ||||||||||||||||||||||
PointNet++ | 0.339 111 | 0.584 97 | 0.478 111 | 0.458 107 | 0.256 112 | 0.360 113 | 0.250 104 | 0.247 110 | 0.278 111 | 0.261 109 | 0.677 112 | 0.183 111 | 0.117 102 | 0.212 111 | 0.145 112 | 0.364 106 | 0.346 113 | 0.232 113 | 0.548 109 | 0.523 112 | 0.252 111 | |
Charles R. Qi, Li Yi, Hao Su, Leonidas J. Guibas: PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. | ||||||||||||||||||||||
ERROR | 0.054 114 | 0.000 114 | 0.041 114 | 0.172 114 | 0.030 114 | 0.062 114 | 0.001 114 | 0.035 113 | 0.004 114 | 0.051 114 | 0.143 114 | 0.019 114 | 0.003 113 | 0.041 112 | 0.050 113 | 0.003 114 | 0.054 114 | 0.018 114 | 0.005 114 | 0.264 114 | 0.082 114 | |
SSC-UNet | 0.308 112 | 0.353 111 | 0.290 113 | 0.278 113 | 0.166 113 | 0.553 110 | 0.169 111 | 0.286 108 | 0.147 113 | 0.148 113 | 0.908 98 | 0.182 112 | 0.064 108 | 0.023 113 | 0.018 114 | 0.354 108 | 0.363 111 | 0.345 111 | 0.546 111 | 0.685 100 | 0.278 109 | |
SPLATNet | 0.393 109 | 0.472 109 | 0.511 109 | 0.606 94 | 0.311 110 | 0.656 99 | 0.245 106 | 0.405 103 | 0.328 108 | 0.197 111 | 0.927 90 | 0.227 109 | 0.000 114 | 0.001 114 | 0.249 104 | 0.271 112 | 0.510 107 | 0.383 109 | 0.593 108 | 0.699 99 | 0.267 110 | |
Hang Su, Varun Jampani, Deqing Sun, Subhransu Maji, Evangelos Kalogerakis, Ming-Hsuan Yang, Jan Kautz: SPLATNet: Sparse Lattice Networks for Point Cloud Processing. CVPR 2018 |
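All methods above are ranked by the PASCAL VOC intersection-over-union metric, IoU = TP/(TP+FP+FN), computed per class over the mesh vertices and then averaged. A minimal NumPy sketch of this per-class computation is shown below; the function name `per_class_iou` and its signature are illustrative, not the benchmark's official evaluation script.

```python
import numpy as np

def per_class_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> np.ndarray:
    """IoU = TP / (TP + FP + FN) for each class over per-vertex label arrays.

    Classes absent from both prediction and ground truth get NaN so they
    can be excluded from the average with np.nanmean.
    """
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        tp = np.sum((pred == c) & (gt == c))  # true positives for class c
        fp = np.sum((pred == c) & (gt != c))  # false positives
        fn = np.sum((pred != c) & (gt == c))  # false negatives
        denom = tp + fp + fn
        if denom > 0:
            ious[c] = tp / denom
    return ious

# Toy example with 4 vertices and labels in {0, 1}:
pred = np.array([0, 0, 1, 1])
gt = np.array([0, 1, 1, 1])
ious = per_class_iou(pred, gt, num_classes=2)
mean_iou = np.nanmean(ious)  # the "avg iou" column averages these per-class values
```

Note that for 3D approaches operating on grids or points, predictions must first be mapped onto mesh vertices (as described above) before this per-vertex comparison applies.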