3D Semantic Label Benchmark
The 3D semantic labeling task involves predicting a semantic label for each vertex of a 3D scan mesh.
Evaluation and metrics

Our evaluation ranks all methods according to the PASCAL VOC intersection-over-union metric (IoU): IoU = TP/(TP+FP+FN), where TP, FP, and FN are the numbers of true positive, false positive, and false negative vertices, respectively. Predicted labels are evaluated per-vertex over the respective 3D scan mesh; for 3D approaches that operate on other representations, such as grids or points, the predicted labels should be mapped onto the mesh vertices (an example of mapping grid labels to mesh vertices is provided in the evaluation helpers).
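The metric above can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the official evaluation script; the function names (`per_class_iou`, `mean_iou`, `map_points_to_vertices`) and the brute-force nearest-neighbor label transfer are assumptions made for the example.

```python
import numpy as np

def per_class_iou(pred, gt, num_classes, ignore_label=-1):
    """PASCAL-VOC-style IoU = TP / (TP + FP + FN), computed per class.

    pred, gt: integer label arrays with one entry per mesh vertex.
    Ground-truth vertices labeled `ignore_label` are excluded.
    """
    valid = gt != ignore_label
    pred, gt = pred[valid], gt[valid]
    ious = []
    for c in range(num_classes):
        tp = np.sum((pred == c) & (gt == c))   # true positives
        fp = np.sum((pred == c) & (gt != c))   # false positives
        fn = np.sum((pred != c) & (gt == c))   # false negatives
        denom = tp + fp + fn
        ious.append(tp / denom if denom > 0 else float("nan"))
    return ious

def mean_iou(pred, gt, num_classes, ignore_label=-1):
    """Average of the per-class IoUs, skipping classes absent from
    both prediction and ground truth."""
    return np.nanmean(per_class_iou(pred, gt, num_classes, ignore_label))

def map_points_to_vertices(point_labels, point_xyz, vertex_xyz):
    """Transfer labels predicted on points (or grid-cell centers) to mesh
    vertices via nearest neighbor. Brute force for clarity; a k-d tree is
    advisable for full scans."""
    dists = np.linalg.norm(vertex_xyz[:, None, :] - point_xyz[None, :, :], axis=2)
    return point_labels[np.argmin(dists, axis=1)]
```

For instance, a prediction `[0, 0, 1, 1]` against ground truth `[0, 1, 1, 1]` gives class 0 TP=1, FP=1, FN=0 (IoU 0.5) and class 1 TP=2, FP=0, FN=1 (IoU 2/3).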
This table lists the benchmark results for the 3D semantic label scenario. Each cell shows the IoU score followed by the method's rank for that column; ties share a rank.
Method | Info | avg IoU | bathtub | bed | bookshelf | cabinet | chair | counter | curtain | desk | door | floor | otherfurniture | picture | refrigerator | shower curtain | sink | sofa | table | toilet | wall | window |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
PTv3-PPT-ALC | | 0.798 1 | 0.911 11 | 0.812 22 | 0.854 7 | 0.770 12 | 0.856 15 | 0.555 16 | 0.943 1 | 0.660 25 | 0.735 2 | 0.979 1 | 0.606 7 | 0.492 1 | 0.792 4 | 0.934 4 | 0.841 2 | 0.819 5 | 0.716 9 | 0.947 10 | 0.906 1 | 0.822 1 |
DITR ScanNet | 0.797 2 | 0.727 76 | 0.869 1 | 0.882 1 | 0.785 6 | 0.868 7 | 0.578 5 | 0.943 1 | 0.744 1 | 0.727 3 | 0.979 1 | 0.627 2 | 0.364 9 | 0.824 1 | 0.949 2 | 0.779 14 | 0.844 1 | 0.757 1 | 0.982 1 | 0.905 2 | 0.802 3 | |
PTv3 ScanNet | 0.794 3 | 0.941 3 | 0.813 21 | 0.851 10 | 0.782 7 | 0.890 2 | 0.597 1 | 0.916 5 | 0.696 10 | 0.713 5 | 0.979 1 | 0.635 1 | 0.384 3 | 0.793 3 | 0.907 10 | 0.821 5 | 0.790 35 | 0.696 14 | 0.967 4 | 0.903 3 | 0.805 2 | |
Xiaoyang Wu, Li Jiang, Peng-Shuai Wang, Zhijian Liu, Xihui Liu, Yu Qiao, Wanli Ouyang, Tong He, Hengshuang Zhao: Point Transformer V3: Simpler, Faster, Stronger. CVPR 2024 (Oral) | ||||||||||||||||||||||
PonderV2 | 0.785 4 | 0.978 1 | 0.800 30 | 0.833 28 | 0.788 4 | 0.853 20 | 0.545 20 | 0.910 8 | 0.713 3 | 0.705 6 | 0.979 1 | 0.596 9 | 0.390 2 | 0.769 15 | 0.832 45 | 0.821 5 | 0.792 34 | 0.730 2 | 0.975 2 | 0.897 6 | 0.785 7 | |
Haoyi Zhu, Honghui Yang, Xiaoyang Wu, Di Huang, Sha Zhang, Xianglong He, Tong He, Hengshuang Zhao, Chunhua Shen, Yu Qiao, Wanli Ouyang: PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm. | ||||||||||||||||||||||
Mix3D | | 0.781 5 | 0.964 2 | 0.855 2 | 0.843 19 | 0.781 8 | 0.858 13 | 0.575 8 | 0.831 37 | 0.685 16 | 0.714 4 | 0.979 1 | 0.594 10 | 0.310 29 | 0.801 2 | 0.892 19 | 0.841 2 | 0.819 5 | 0.723 6 | 0.940 15 | 0.887 8 | 0.725 28
Alexey Nekrasov, Jonas Schult, Or Litany, Bastian Leibe, Francis Engelmann: Mix3D: Out-of-Context Data Augmentation for 3D Scenes. 3DV 2021 (Oral) | ||||||||||||||||||||||
Swin3D | | 0.779 6 | 0.861 23 | 0.818 16 | 0.836 25 | 0.790 3 | 0.875 4 | 0.576 7 | 0.905 9 | 0.704 7 | 0.739 1 | 0.969 12 | 0.611 3 | 0.349 12 | 0.756 25 | 0.958 1 | 0.702 50 | 0.805 18 | 0.708 10 | 0.916 38 | 0.898 5 | 0.801 4
TTT-KD | 0.773 7 | 0.646 96 | 0.818 16 | 0.809 40 | 0.774 10 | 0.878 3 | 0.581 3 | 0.943 1 | 0.687 14 | 0.704 7 | 0.978 6 | 0.607 6 | 0.336 18 | 0.775 11 | 0.912 8 | 0.838 4 | 0.823 3 | 0.694 15 | 0.967 4 | 0.899 4 | 0.794 6 | |
Lisa Weijler, Muhammad Jehanzeb Mirza, Leon Sick, Can Ekkazan, Pedro Hermosilla: TTT-KD: Test-Time Training for 3D Semantic Segmentation through Knowledge Distillation from Foundation Models. | ||||||||||||||||||||||
ResLFE_HDS | 0.772 8 | 0.939 4 | 0.824 7 | 0.854 7 | 0.771 11 | 0.840 34 | 0.564 12 | 0.900 11 | 0.686 15 | 0.677 14 | 0.961 18 | 0.537 35 | 0.348 13 | 0.769 15 | 0.903 12 | 0.785 12 | 0.815 8 | 0.676 26 | 0.939 16 | 0.880 13 | 0.772 11 | |
OctFormer | | 0.766 9 | 0.925 7 | 0.808 26 | 0.849 12 | 0.786 5 | 0.846 30 | 0.566 11 | 0.876 18 | 0.690 12 | 0.674 16 | 0.960 19 | 0.576 21 | 0.226 71 | 0.753 27 | 0.904 11 | 0.777 15 | 0.815 8 | 0.722 7 | 0.923 31 | 0.877 16 | 0.776 10
Peng-Shuai Wang: OctFormer: Octree-based Transformers for 3D Point Clouds. SIGGRAPH 2023 | ||||||||||||||||||||||
PPT-SpUNet-Joint | 0.766 9 | 0.932 5 | 0.794 36 | 0.829 30 | 0.751 26 | 0.854 18 | 0.540 24 | 0.903 10 | 0.630 38 | 0.672 17 | 0.963 16 | 0.565 25 | 0.357 10 | 0.788 5 | 0.900 14 | 0.737 30 | 0.802 19 | 0.685 20 | 0.950 8 | 0.887 8 | 0.780 8 | |
Xiaoyang Wu, Zhuotao Tian, Xin Wen, Bohao Peng, Xihui Liu, Kaicheng Yu, Hengshuang Zhao: Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training. CVPR 2024 | ||||||||||||||||||||||
OccuSeg+Semantic | 0.764 11 | 0.758 61 | 0.796 34 | 0.839 23 | 0.746 30 | 0.907 1 | 0.562 13 | 0.850 28 | 0.680 18 | 0.672 17 | 0.978 6 | 0.610 4 | 0.335 20 | 0.777 9 | 0.819 49 | 0.847 1 | 0.830 2 | 0.691 17 | 0.972 3 | 0.885 10 | 0.727 26 | |
CU-Hybrid Net | 0.764 11 | 0.924 8 | 0.819 14 | 0.840 22 | 0.757 21 | 0.853 20 | 0.580 4 | 0.848 29 | 0.709 5 | 0.643 27 | 0.958 23 | 0.587 15 | 0.295 37 | 0.753 27 | 0.884 23 | 0.758 22 | 0.815 8 | 0.725 5 | 0.927 27 | 0.867 27 | 0.743 19 | |
O-CNN | | 0.762 13 | 0.924 8 | 0.823 8 | 0.844 18 | 0.770 12 | 0.852 22 | 0.577 6 | 0.847 31 | 0.711 4 | 0.640 31 | 0.958 23 | 0.592 11 | 0.217 77 | 0.762 20 | 0.888 20 | 0.758 22 | 0.813 12 | 0.726 4 | 0.932 25 | 0.868 26 | 0.744 18
Peng-Shuai Wang, Yang Liu, Yu-Xiao Guo, Chun-Yu Sun, Xin Tong: O-CNN: Octree-based Convolutional Neural Networks for 3D Shape Analysis. SIGGRAPH 2017 | ||||||||||||||||||||||
DiffSegNet | 0.758 14 | 0.725 78 | 0.789 41 | 0.843 19 | 0.762 17 | 0.856 15 | 0.562 13 | 0.920 4 | 0.657 28 | 0.658 21 | 0.958 23 | 0.589 13 | 0.337 17 | 0.782 6 | 0.879 24 | 0.787 10 | 0.779 40 | 0.678 22 | 0.926 29 | 0.880 13 | 0.799 5 | |
DTC | 0.757 15 | 0.843 29 | 0.820 12 | 0.847 15 | 0.791 2 | 0.862 11 | 0.511 37 | 0.870 21 | 0.707 6 | 0.652 23 | 0.954 40 | 0.604 8 | 0.279 47 | 0.760 21 | 0.942 3 | 0.734 31 | 0.766 49 | 0.701 13 | 0.884 60 | 0.874 22 | 0.736 20 | |
OA-CNN-L_ScanNet20 | 0.756 16 | 0.783 47 | 0.826 6 | 0.858 5 | 0.776 9 | 0.837 38 | 0.548 19 | 0.896 14 | 0.649 30 | 0.675 15 | 0.962 17 | 0.586 16 | 0.335 20 | 0.771 14 | 0.802 54 | 0.770 18 | 0.787 37 | 0.691 17 | 0.936 20 | 0.880 13 | 0.761 13 | |
PNE | 0.755 17 | 0.786 45 | 0.835 5 | 0.834 27 | 0.758 19 | 0.849 25 | 0.570 10 | 0.836 36 | 0.648 31 | 0.668 19 | 0.978 6 | 0.581 19 | 0.367 7 | 0.683 38 | 0.856 33 | 0.804 7 | 0.801 23 | 0.678 22 | 0.961 6 | 0.889 7 | 0.716 34 | |
P. Hermosilla: Point Neighborhood Embeddings. | ||||||||||||||||||||||
LSK3DNet | | 0.755 17 | 0.899 16 | 0.823 8 | 0.843 19 | 0.764 16 | 0.838 37 | 0.584 2 | 0.845 32 | 0.717 2 | 0.638 33 | 0.956 30 | 0.580 20 | 0.229 70 | 0.640 47 | 0.900 14 | 0.750 25 | 0.813 12 | 0.729 3 | 0.920 35 | 0.872 24 | 0.757 14
Tuo Feng, Wenguan Wang, Fan Ma, Yi Yang: LSK3DNet: Towards Effective and Efficient 3D Perception with Large Sparse Kernels. CVPR 2024 | ||||||||||||||||||||||
ConDaFormer | 0.755 17 | 0.927 6 | 0.822 10 | 0.836 25 | 0.801 1 | 0.849 25 | 0.516 34 | 0.864 25 | 0.651 29 | 0.680 13 | 0.958 23 | 0.584 18 | 0.282 44 | 0.759 23 | 0.855 35 | 0.728 33 | 0.802 19 | 0.678 22 | 0.880 65 | 0.873 23 | 0.756 16 | |
Lunhao Duan, Shanshan Zhao, Nan Xue, Mingming Gong, Guisong Xia, Dacheng Tao: ConDaFormer: Disassembled Transformer with Local Structure Enhancement for 3D Point Cloud Understanding. NeurIPS 2023 | ||||||||||||||||||||||
PointTransformerV2 | 0.752 20 | 0.742 68 | 0.809 25 | 0.872 2 | 0.758 19 | 0.860 12 | 0.552 17 | 0.891 16 | 0.610 45 | 0.687 8 | 0.960 19 | 0.559 29 | 0.304 32 | 0.766 18 | 0.926 6 | 0.767 19 | 0.797 27 | 0.644 37 | 0.942 13 | 0.876 19 | 0.722 30 | |
Xiaoyang Wu, Yixing Lao, Li Jiang, Xihui Liu, Hengshuang Zhao: Point Transformer V2: Grouped Vector Attention and Partition-based Pooling. NeurIPS 2022 | ||||||||||||||||||||||
DMF-Net | 0.752 20 | 0.906 14 | 0.793 38 | 0.802 46 | 0.689 44 | 0.825 51 | 0.556 15 | 0.867 22 | 0.681 17 | 0.602 49 | 0.960 19 | 0.555 31 | 0.365 8 | 0.779 8 | 0.859 30 | 0.747 26 | 0.795 31 | 0.717 8 | 0.917 37 | 0.856 35 | 0.764 12 | |
C. Yang, Y. Yan, W. Zhao, J. Ye, X. Yang, A. Hussain, B. Dong, K. Huang: Towards Deeper and Better Multi-view Feature Fusion for 3D Semantic Segmentation. ICONIP 2023 | ||||||||||||||||||||||
BPNet | | 0.749 22 | 0.909 12 | 0.818 16 | 0.811 38 | 0.752 24 | 0.839 36 | 0.485 52 | 0.842 33 | 0.673 20 | 0.644 26 | 0.957 28 | 0.528 41 | 0.305 31 | 0.773 12 | 0.859 30 | 0.788 9 | 0.818 7 | 0.693 16 | 0.916 38 | 0.856 35 | 0.723 29
Wenbo Hu, Hengshuang Zhao, Li Jiang, Jiaya Jia, Tien-Tsin Wong: Bidirectional Projection Network for Cross Dimension Scene Understanding. CVPR 2021 (Oral) | ||||||||||||||||||||||
PointConvFormer | 0.749 22 | 0.793 43 | 0.790 39 | 0.807 42 | 0.750 28 | 0.856 15 | 0.524 30 | 0.881 17 | 0.588 57 | 0.642 30 | 0.977 10 | 0.591 12 | 0.274 50 | 0.781 7 | 0.929 5 | 0.804 7 | 0.796 28 | 0.642 38 | 0.947 10 | 0.885 10 | 0.715 35 | |
Wenxuan Wu, Qi Shan, Li Fuxin: PointConvFormer: Revenge of the Point-based Convolution. | ||||||||||||||||||||||
MSP | 0.748 24 | 0.623 99 | 0.804 28 | 0.859 4 | 0.745 31 | 0.824 53 | 0.501 41 | 0.912 7 | 0.690 12 | 0.685 10 | 0.956 30 | 0.567 24 | 0.320 26 | 0.768 17 | 0.918 7 | 0.720 38 | 0.802 19 | 0.676 26 | 0.921 33 | 0.881 12 | 0.779 9 | |
StratifiedFormer | | 0.747 25 | 0.901 15 | 0.803 29 | 0.845 17 | 0.757 21 | 0.846 30 | 0.512 36 | 0.825 40 | 0.696 10 | 0.645 25 | 0.956 30 | 0.576 21 | 0.262 61 | 0.744 33 | 0.861 29 | 0.742 28 | 0.770 47 | 0.705 11 | 0.899 50 | 0.860 32 | 0.734 21
Xin Lai*, Jianhui Liu*, Li Jiang, Liwei Wang, Hengshuang Zhao, Shu Liu, Xiaojuan Qi, Jiaya Jia: Stratified Transformer for 3D Point Cloud Segmentation. CVPR 2022 | ||||||||||||||||||||||
VMNet | | 0.746 26 | 0.870 21 | 0.838 3 | 0.858 5 | 0.729 36 | 0.850 24 | 0.501 41 | 0.874 19 | 0.587 58 | 0.658 21 | 0.956 30 | 0.564 26 | 0.299 34 | 0.765 19 | 0.900 14 | 0.716 41 | 0.812 14 | 0.631 43 | 0.939 16 | 0.858 33 | 0.709 36
Zeyu Hu, Xuyang Bai, Jiaxiang Shang, Runze Zhang, Jiayu Dong, Xin Wang, Guangyuan Sun, Hongbo Fu, Chiew-Lan Tai: VMNet: Voxel-Mesh Network for Geodesic-Aware 3D Semantic Segmentation. ICCV 2021 (Oral) | ||||||||||||||||||||||
Virtual MVFusion | 0.746 26 | 0.771 55 | 0.819 14 | 0.848 14 | 0.702 42 | 0.865 10 | 0.397 89 | 0.899 12 | 0.699 8 | 0.664 20 | 0.948 61 | 0.588 14 | 0.330 22 | 0.746 32 | 0.851 39 | 0.764 20 | 0.796 28 | 0.704 12 | 0.935 21 | 0.866 28 | 0.728 24 | |
Abhijit Kundu, Xiaoqi Yin, Alireza Fathi, David Ross, Brian Brewington, Thomas Funkhouser, Caroline Pantofaru: Virtual Multi-view Fusion for 3D Semantic Segmentation. ECCV 2020 | ||||||||||||||||||||||
DiffSeg3D2 | 0.745 28 | 0.725 78 | 0.814 20 | 0.837 24 | 0.751 26 | 0.831 45 | 0.514 35 | 0.896 14 | 0.674 19 | 0.684 11 | 0.960 19 | 0.564 26 | 0.303 33 | 0.773 12 | 0.820 48 | 0.713 44 | 0.798 26 | 0.690 19 | 0.923 31 | 0.875 20 | 0.757 14 | |
Retro-FPN | 0.744 29 | 0.842 30 | 0.800 30 | 0.767 60 | 0.740 32 | 0.836 40 | 0.541 22 | 0.914 6 | 0.672 21 | 0.626 37 | 0.958 23 | 0.552 32 | 0.272 52 | 0.777 9 | 0.886 22 | 0.696 51 | 0.801 23 | 0.674 29 | 0.941 14 | 0.858 33 | 0.717 32 | |
Peng Xiang*, Xin Wen*, Yu-Shen Liu, Hui Zhang, Yi Fang, Zhizhong Han: Retrospective Feature Pyramid Network for Point Cloud Semantic Segmentation. ICCV 2023 | ||||||||||||||||||||||
EQ-Net | 0.743 30 | 0.620 100 | 0.799 33 | 0.849 12 | 0.730 35 | 0.822 55 | 0.493 49 | 0.897 13 | 0.664 22 | 0.681 12 | 0.955 34 | 0.562 28 | 0.378 4 | 0.760 21 | 0.903 12 | 0.738 29 | 0.801 23 | 0.673 30 | 0.907 42 | 0.877 16 | 0.745 17 | |
Zetong Yang*, Li Jiang*, Yanan Sun, Bernt Schiele, Jiaya Jia: A Unified Query-based Paradigm for Point Cloud Understanding. CVPR 2022 | ||||||||||||||||||||||
SAT | 0.742 31 | 0.860 24 | 0.765 55 | 0.819 33 | 0.769 14 | 0.848 27 | 0.533 26 | 0.829 38 | 0.663 23 | 0.631 36 | 0.955 34 | 0.586 16 | 0.274 50 | 0.753 27 | 0.896 17 | 0.729 32 | 0.760 55 | 0.666 32 | 0.921 33 | 0.855 37 | 0.733 22 | |
LRPNet | 0.742 31 | 0.816 38 | 0.806 27 | 0.807 42 | 0.752 24 | 0.828 49 | 0.575 8 | 0.839 35 | 0.699 8 | 0.637 34 | 0.954 40 | 0.520 44 | 0.320 26 | 0.755 26 | 0.834 43 | 0.760 21 | 0.772 44 | 0.676 26 | 0.915 40 | 0.862 30 | 0.717 32 | |
LargeKernel3D | 0.739 33 | 0.909 12 | 0.820 12 | 0.806 44 | 0.740 32 | 0.852 22 | 0.545 20 | 0.826 39 | 0.594 56 | 0.643 27 | 0.955 34 | 0.541 34 | 0.263 60 | 0.723 36 | 0.858 32 | 0.775 17 | 0.767 48 | 0.678 22 | 0.933 23 | 0.848 42 | 0.694 41 | |
Yukang Chen*, Jianhui Liu*, Xiangyu Zhang, Xiaojuan Qi, Jiaya Jia: LargeKernel3D: Scaling up Kernels in 3D Sparse CNNs. CVPR 2023 | ||||||||||||||||||||||
RPN | 0.736 34 | 0.776 51 | 0.790 39 | 0.851 10 | 0.754 23 | 0.854 18 | 0.491 51 | 0.866 23 | 0.596 55 | 0.686 9 | 0.955 34 | 0.536 36 | 0.342 15 | 0.624 54 | 0.869 26 | 0.787 10 | 0.802 19 | 0.628 44 | 0.927 27 | 0.875 20 | 0.704 38 | |
MinkowskiNet | | 0.736 34 | 0.859 25 | 0.818 16 | 0.832 29 | 0.709 40 | 0.840 34 | 0.521 32 | 0.853 27 | 0.660 25 | 0.643 27 | 0.951 51 | 0.544 33 | 0.286 42 | 0.731 34 | 0.893 18 | 0.675 59 | 0.772 44 | 0.683 21 | 0.874 71 | 0.852 40 | 0.727 26
C. Choy, J. Gwak, S. Savarese: 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. CVPR 2019 | ||||||||||||||||||||||
IPCA | 0.731 36 | 0.890 17 | 0.837 4 | 0.864 3 | 0.726 37 | 0.873 5 | 0.530 29 | 0.824 41 | 0.489 91 | 0.647 24 | 0.978 6 | 0.609 5 | 0.336 18 | 0.624 54 | 0.733 63 | 0.758 22 | 0.776 42 | 0.570 69 | 0.949 9 | 0.877 16 | 0.728 24 | |
online3d | 0.727 37 | 0.715 83 | 0.777 48 | 0.854 7 | 0.748 29 | 0.858 13 | 0.497 46 | 0.872 20 | 0.572 64 | 0.639 32 | 0.957 28 | 0.523 42 | 0.297 36 | 0.750 30 | 0.803 53 | 0.744 27 | 0.810 15 | 0.587 65 | 0.938 18 | 0.871 25 | 0.719 31 | |
SparseConvNet | 0.725 38 | 0.647 95 | 0.821 11 | 0.846 16 | 0.721 38 | 0.869 6 | 0.533 26 | 0.754 62 | 0.603 51 | 0.614 41 | 0.955 34 | 0.572 23 | 0.325 24 | 0.710 37 | 0.870 25 | 0.724 36 | 0.823 3 | 0.628 44 | 0.934 22 | 0.865 29 | 0.683 44 | |
PointTransformer++ | 0.725 38 | 0.727 76 | 0.811 24 | 0.819 33 | 0.765 15 | 0.841 33 | 0.502 40 | 0.814 46 | 0.621 41 | 0.623 39 | 0.955 34 | 0.556 30 | 0.284 43 | 0.620 56 | 0.866 27 | 0.781 13 | 0.757 59 | 0.648 35 | 0.932 25 | 0.862 30 | 0.709 36 | |
MatchingNet | 0.724 40 | 0.812 40 | 0.812 22 | 0.810 39 | 0.735 34 | 0.834 42 | 0.495 48 | 0.860 26 | 0.572 64 | 0.602 49 | 0.954 40 | 0.512 46 | 0.280 46 | 0.757 24 | 0.845 41 | 0.725 35 | 0.780 39 | 0.606 54 | 0.937 19 | 0.851 41 | 0.700 40 | |
INS-Conv-semantic | 0.717 41 | 0.751 64 | 0.759 58 | 0.812 37 | 0.704 41 | 0.868 7 | 0.537 25 | 0.842 33 | 0.609 47 | 0.608 45 | 0.953 44 | 0.534 38 | 0.293 38 | 0.616 57 | 0.864 28 | 0.719 40 | 0.793 32 | 0.640 39 | 0.933 23 | 0.845 46 | 0.663 49 | |
PointMetaBase | 0.714 42 | 0.835 31 | 0.785 43 | 0.821 31 | 0.684 46 | 0.846 30 | 0.531 28 | 0.865 24 | 0.614 42 | 0.596 53 | 0.953 44 | 0.500 49 | 0.246 66 | 0.674 39 | 0.888 20 | 0.692 52 | 0.764 51 | 0.624 46 | 0.849 86 | 0.844 47 | 0.675 46 | |
contrastBoundary | | 0.705 43 | 0.769 58 | 0.775 49 | 0.809 40 | 0.687 45 | 0.820 58 | 0.439 77 | 0.812 47 | 0.661 24 | 0.591 55 | 0.945 69 | 0.515 45 | 0.171 96 | 0.633 51 | 0.856 33 | 0.720 38 | 0.796 28 | 0.668 31 | 0.889 57 | 0.847 43 | 0.689 42
Liyao Tang, Yibing Zhan, Zhe Chen, Baosheng Yu, Dacheng Tao: Contrastive Boundary Learning for Point Cloud Segmentation. CVPR 2022 | ||||||||||||||||||||||
ClickSeg_Semantic | 0.703 44 | 0.774 53 | 0.800 30 | 0.793 51 | 0.760 18 | 0.847 29 | 0.471 56 | 0.802 50 | 0.463 98 | 0.634 35 | 0.968 14 | 0.491 52 | 0.271 54 | 0.726 35 | 0.910 9 | 0.706 46 | 0.815 8 | 0.551 81 | 0.878 66 | 0.833 48 | 0.570 81 | |
RFCR | 0.702 45 | 0.889 18 | 0.745 68 | 0.813 36 | 0.672 49 | 0.818 62 | 0.493 49 | 0.815 45 | 0.623 39 | 0.610 43 | 0.947 63 | 0.470 61 | 0.249 65 | 0.594 61 | 0.848 40 | 0.705 47 | 0.779 40 | 0.646 36 | 0.892 55 | 0.823 54 | 0.611 64 | |
Jingyu Gong, Jiachen Xu, Xin Tan, Haichuan Song, Yanyun Qu, Yuan Xie, Lizhuang Ma: Omni-Supervised Point Cloud Segmentation via Gradual Receptive Field Component Reasoning. CVPR 2021 | ||||||||||||||||||||||
One Thing One Click | 0.701 46 | 0.825 35 | 0.796 34 | 0.723 67 | 0.716 39 | 0.832 44 | 0.433 79 | 0.816 43 | 0.634 36 | 0.609 44 | 0.969 12 | 0.418 87 | 0.344 14 | 0.559 73 | 0.833 44 | 0.715 42 | 0.808 17 | 0.560 75 | 0.902 47 | 0.847 43 | 0.680 45 | |
JSENet | | 0.699 47 | 0.881 20 | 0.762 56 | 0.821 31 | 0.667 50 | 0.800 75 | 0.522 31 | 0.792 53 | 0.613 43 | 0.607 46 | 0.935 89 | 0.492 51 | 0.205 83 | 0.576 66 | 0.853 37 | 0.691 53 | 0.758 57 | 0.652 34 | 0.872 74 | 0.828 51 | 0.649 53
Zeyu Hu, Mingmin Zhen, Xuyang Bai, Hongbo Fu, Chiew-Lan Tai: JSENet: Joint Semantic Segmentation and Edge Detection Network for 3D Point Clouds. ECCV 2020 | ||||||||||||||||||||||
One-Thing-One-Click | 0.693 48 | 0.743 67 | 0.794 36 | 0.655 90 | 0.684 46 | 0.822 55 | 0.497 46 | 0.719 72 | 0.622 40 | 0.617 40 | 0.977 10 | 0.447 74 | 0.339 16 | 0.750 30 | 0.664 80 | 0.703 49 | 0.790 35 | 0.596 58 | 0.946 12 | 0.855 37 | 0.647 54 | |
Zhengzhe Liu, Xiaojuan Qi, Chi-Wing Fu: One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation. CVPR 2021 | ||||||||||||||||||||||
PicassoNet-II | | 0.692 49 | 0.732 72 | 0.772 50 | 0.786 52 | 0.677 48 | 0.866 9 | 0.517 33 | 0.848 29 | 0.509 84 | 0.626 37 | 0.952 49 | 0.536 36 | 0.225 73 | 0.545 79 | 0.704 70 | 0.689 56 | 0.810 15 | 0.564 74 | 0.903 46 | 0.854 39 | 0.729 23
Huan Lei, Naveed Akhtar, Mubarak Shah, and Ajmal Mian: Geometric feature learning for 3D meshes. | ||||||||||||||||||||||
Feature_GeometricNet | | 0.690 50 | 0.884 19 | 0.754 62 | 0.795 49 | 0.647 57 | 0.818 62 | 0.422 81 | 0.802 50 | 0.612 44 | 0.604 47 | 0.945 69 | 0.462 64 | 0.189 91 | 0.563 72 | 0.853 37 | 0.726 34 | 0.765 50 | 0.632 42 | 0.904 44 | 0.821 57 | 0.606 68
Kangcheng Liu, Ben M. Chen: https://arxiv.org/abs/2012.09439. arXiv Preprint | ||||||||||||||||||||||
FusionNet | 0.688 51 | 0.704 85 | 0.741 72 | 0.754 64 | 0.656 52 | 0.829 47 | 0.501 41 | 0.741 67 | 0.609 47 | 0.548 62 | 0.950 55 | 0.522 43 | 0.371 5 | 0.633 51 | 0.756 58 | 0.715 42 | 0.771 46 | 0.623 47 | 0.861 82 | 0.814 60 | 0.658 50 | |
Feihu Zhang, Jin Fang, Benjamin Wah, Philip Torr: Deep FusionNet for Point Cloud Semantic Segmentation. ECCV 2020 | ||||||||||||||||||||||
Feature-Geometry Net | | 0.685 52 | 0.866 22 | 0.748 65 | 0.819 33 | 0.645 59 | 0.794 78 | 0.450 67 | 0.802 50 | 0.587 58 | 0.604 47 | 0.945 69 | 0.464 63 | 0.201 86 | 0.554 75 | 0.840 42 | 0.723 37 | 0.732 70 | 0.602 56 | 0.907 42 | 0.822 56 | 0.603 71
VACNN++ | 0.684 53 | 0.728 75 | 0.757 61 | 0.776 57 | 0.690 43 | 0.804 73 | 0.464 61 | 0.816 43 | 0.577 63 | 0.587 56 | 0.945 69 | 0.508 48 | 0.276 49 | 0.671 40 | 0.710 68 | 0.663 64 | 0.750 63 | 0.589 63 | 0.881 63 | 0.832 50 | 0.653 52 | |
KP-FCNN | 0.684 53 | 0.847 28 | 0.758 60 | 0.784 54 | 0.647 57 | 0.814 65 | 0.473 55 | 0.772 56 | 0.605 49 | 0.594 54 | 0.935 89 | 0.450 72 | 0.181 94 | 0.587 62 | 0.805 52 | 0.690 54 | 0.785 38 | 0.614 50 | 0.882 62 | 0.819 58 | 0.632 60 | |
H. Thomas, C. Qi, J. Deschaud, B. Marcotegui, F. Goulette, L. Guibas: KPConv: Flexible and Deformable Convolution for Point Clouds. ICCV 2019 | ||||||||||||||||||||||
DGNet | 0.684 53 | 0.712 84 | 0.784 44 | 0.782 56 | 0.658 51 | 0.835 41 | 0.499 45 | 0.823 42 | 0.641 33 | 0.597 52 | 0.950 55 | 0.487 54 | 0.281 45 | 0.575 67 | 0.619 84 | 0.647 72 | 0.764 51 | 0.620 49 | 0.871 77 | 0.846 45 | 0.688 43 | |
PointContrast_LA_SEM | 0.683 56 | 0.757 62 | 0.784 44 | 0.786 52 | 0.639 61 | 0.824 53 | 0.408 84 | 0.775 55 | 0.604 50 | 0.541 64 | 0.934 93 | 0.532 39 | 0.269 56 | 0.552 76 | 0.777 56 | 0.645 75 | 0.793 32 | 0.640 39 | 0.913 41 | 0.824 53 | 0.671 47 | |
Superpoint Network | 0.683 56 | 0.851 27 | 0.728 76 | 0.800 48 | 0.653 54 | 0.806 71 | 0.468 58 | 0.804 48 | 0.572 64 | 0.602 49 | 0.946 66 | 0.453 71 | 0.239 69 | 0.519 84 | 0.822 46 | 0.689 56 | 0.762 54 | 0.595 60 | 0.895 53 | 0.827 52 | 0.630 61 | |
VI-PointConv | 0.676 58 | 0.770 57 | 0.754 62 | 0.783 55 | 0.621 65 | 0.814 65 | 0.552 17 | 0.758 60 | 0.571 67 | 0.557 60 | 0.954 40 | 0.529 40 | 0.268 58 | 0.530 82 | 0.682 74 | 0.675 59 | 0.719 73 | 0.603 55 | 0.888 58 | 0.833 48 | 0.665 48 | |
Xingyi Li, Wenxuan Wu, Xiaoli Z. Fern, Li Fuxin: The Devils in the Point Clouds: Studying the Robustness of Point Cloud Convolutions. | ||||||||||||||||||||||
ROSMRF3D | 0.673 59 | 0.789 44 | 0.748 65 | 0.763 62 | 0.635 63 | 0.814 65 | 0.407 86 | 0.747 64 | 0.581 62 | 0.573 57 | 0.950 55 | 0.484 55 | 0.271 54 | 0.607 58 | 0.754 59 | 0.649 69 | 0.774 43 | 0.596 58 | 0.883 61 | 0.823 54 | 0.606 68 | |
SALANet | 0.670 60 | 0.816 38 | 0.770 53 | 0.768 59 | 0.652 55 | 0.807 70 | 0.451 64 | 0.747 64 | 0.659 27 | 0.545 63 | 0.924 99 | 0.473 60 | 0.149 106 | 0.571 69 | 0.811 51 | 0.635 79 | 0.746 64 | 0.623 47 | 0.892 55 | 0.794 73 | 0.570 81 | |
O3DSeg | 0.668 61 | 0.822 36 | 0.771 52 | 0.496 110 | 0.651 56 | 0.833 43 | 0.541 22 | 0.761 59 | 0.555 73 | 0.611 42 | 0.966 15 | 0.489 53 | 0.370 6 | 0.388 103 | 0.580 87 | 0.776 16 | 0.751 61 | 0.570 69 | 0.956 7 | 0.817 59 | 0.646 55 | |
PointConv | | 0.666 62 | 0.781 48 | 0.759 58 | 0.699 75 | 0.644 60 | 0.822 55 | 0.475 54 | 0.779 54 | 0.564 70 | 0.504 81 | 0.953 44 | 0.428 81 | 0.203 85 | 0.586 64 | 0.754 59 | 0.661 65 | 0.753 60 | 0.588 64 | 0.902 47 | 0.813 62 | 0.642 56
Wenxuan Wu, Zhongang Qi, Li Fuxin: PointConv: Deep Convolutional Networks on 3D Point Clouds. CVPR 2019 | ||||||||||||||||||||||
PointASNL | | 0.666 62 | 0.703 86 | 0.781 46 | 0.751 66 | 0.655 53 | 0.830 46 | 0.471 56 | 0.769 57 | 0.474 94 | 0.537 66 | 0.951 51 | 0.475 59 | 0.279 47 | 0.635 49 | 0.698 73 | 0.675 59 | 0.751 61 | 0.553 80 | 0.816 93 | 0.806 64 | 0.703 39
Xu Yan, Chaoda Zheng, Zhen Li, Sheng Wang, Shuguang Cui: PointASNL: Robust Point Clouds Processing using Nonlocal Neural Networks with Adaptive Sampling. CVPR 2020 | ||||||||||||||||||||||
PPCNN++ | | 0.663 64 | 0.746 65 | 0.708 79 | 0.722 68 | 0.638 62 | 0.820 58 | 0.451 64 | 0.566 100 | 0.599 53 | 0.541 64 | 0.950 55 | 0.510 47 | 0.313 28 | 0.648 45 | 0.819 49 | 0.616 84 | 0.682 88 | 0.590 62 | 0.869 78 | 0.810 63 | 0.656 51
Pyunghwan Ahn, Juyoung Yang, Eojindl Yi, Chanho Lee, Junmo Kim: Projection-based Point Convolution for Efficient Point Cloud Segmentation. IEEE Access | ||||||||||||||||||||||
DCM-Net | 0.658 65 | 0.778 49 | 0.702 82 | 0.806 44 | 0.619 66 | 0.813 68 | 0.468 58 | 0.693 80 | 0.494 87 | 0.524 72 | 0.941 81 | 0.449 73 | 0.298 35 | 0.510 86 | 0.821 47 | 0.675 59 | 0.727 72 | 0.568 72 | 0.826 91 | 0.803 66 | 0.637 58 | |
Jonas Schult*, Francis Engelmann*, Theodora Kontogianni, Bastian Leibe: DualConvMesh-Net: Joint Geodesic and Euclidean Convolutions on 3D Meshes. CVPR 2020 (Oral) | ||||||||||||||||||||||
HPGCNN | 0.656 66 | 0.698 88 | 0.743 70 | 0.650 91 | 0.564 83 | 0.820 58 | 0.505 39 | 0.758 60 | 0.631 37 | 0.479 85 | 0.945 69 | 0.480 57 | 0.226 71 | 0.572 68 | 0.774 57 | 0.690 54 | 0.735 68 | 0.614 50 | 0.853 85 | 0.776 88 | 0.597 74 | |
Jisheng Dang, Qingyong Hu, Yulan Guo, Jun Yang: HPGCNN. | ||||||||||||||||||||||
SAFNet-seg | | 0.654 67 | 0.752 63 | 0.734 74 | 0.664 88 | 0.583 78 | 0.815 64 | 0.399 88 | 0.754 62 | 0.639 34 | 0.535 68 | 0.942 79 | 0.470 61 | 0.309 30 | 0.665 41 | 0.539 90 | 0.650 68 | 0.708 78 | 0.635 41 | 0.857 84 | 0.793 75 | 0.642 56
Linqing Zhao, Jiwen Lu, Jie Zhou: Similarity-Aware Fusion Network for 3D Semantic Segmentation. IROS 2021 | ||||||||||||||||||||||
RandLA-Net | | 0.645 68 | 0.778 49 | 0.731 75 | 0.699 75 | 0.577 79 | 0.829 47 | 0.446 69 | 0.736 68 | 0.477 93 | 0.523 74 | 0.945 69 | 0.454 68 | 0.269 56 | 0.484 93 | 0.749 62 | 0.618 82 | 0.738 66 | 0.599 57 | 0.827 90 | 0.792 78 | 0.621 63
MVPNet | | 0.641 69 | 0.831 32 | 0.715 77 | 0.671 85 | 0.590 74 | 0.781 84 | 0.394 90 | 0.679 82 | 0.642 32 | 0.553 61 | 0.937 86 | 0.462 64 | 0.256 62 | 0.649 44 | 0.406 103 | 0.626 80 | 0.691 85 | 0.666 32 | 0.877 67 | 0.792 78 | 0.608 67
Maximilian Jaritz, Jiayuan Gu, Hao Su: Multi-view PointNet for 3D Scene Understanding. GMDL Workshop, ICCV 2019 | ||||||||||||||||||||||
PointConv-SFPN | 0.641 69 | 0.776 51 | 0.703 81 | 0.721 69 | 0.557 86 | 0.826 50 | 0.451 64 | 0.672 85 | 0.563 71 | 0.483 84 | 0.943 78 | 0.425 84 | 0.162 101 | 0.644 46 | 0.726 64 | 0.659 66 | 0.709 77 | 0.572 68 | 0.875 69 | 0.786 83 | 0.559 87 | |
PointMRNet | 0.640 71 | 0.717 82 | 0.701 83 | 0.692 78 | 0.576 80 | 0.801 74 | 0.467 60 | 0.716 73 | 0.563 71 | 0.459 91 | 0.953 44 | 0.429 80 | 0.169 98 | 0.581 65 | 0.854 36 | 0.605 85 | 0.710 75 | 0.550 82 | 0.894 54 | 0.793 75 | 0.575 79 | |
FPConv | | 0.639 72 | 0.785 46 | 0.760 57 | 0.713 73 | 0.603 69 | 0.798 76 | 0.392 92 | 0.534 105 | 0.603 51 | 0.524 72 | 0.948 61 | 0.457 66 | 0.250 64 | 0.538 80 | 0.723 66 | 0.598 89 | 0.696 83 | 0.614 50 | 0.872 74 | 0.799 68 | 0.567 84
Yiqun Lin, Zizheng Yan, Haibin Huang, Dong Du, Ligang Liu, Shuguang Cui, Xiaoguang Han: FPConv: Learning Local Flattening for Point Convolution. CVPR 2020 | ||||||||||||||||||||||
PD-Net | 0.638 73 | 0.797 42 | 0.769 54 | 0.641 96 | 0.590 74 | 0.820 58 | 0.461 62 | 0.537 104 | 0.637 35 | 0.536 67 | 0.947 63 | 0.388 94 | 0.206 82 | 0.656 42 | 0.668 78 | 0.647 72 | 0.732 70 | 0.585 66 | 0.868 79 | 0.793 75 | 0.473 107 | |
PointSPNet | 0.637 74 | 0.734 71 | 0.692 90 | 0.714 72 | 0.576 80 | 0.797 77 | 0.446 69 | 0.743 66 | 0.598 54 | 0.437 96 | 0.942 79 | 0.403 90 | 0.150 105 | 0.626 53 | 0.800 55 | 0.649 69 | 0.697 82 | 0.557 78 | 0.846 87 | 0.777 87 | 0.563 85 | |
SConv | 0.636 75 | 0.830 33 | 0.697 86 | 0.752 65 | 0.572 82 | 0.780 86 | 0.445 71 | 0.716 73 | 0.529 77 | 0.530 69 | 0.951 51 | 0.446 75 | 0.170 97 | 0.507 88 | 0.666 79 | 0.636 78 | 0.682 88 | 0.541 88 | 0.886 59 | 0.799 68 | 0.594 75 | |
Supervoxel-CNN | 0.635 76 | 0.656 93 | 0.711 78 | 0.719 70 | 0.613 67 | 0.757 95 | 0.444 74 | 0.765 58 | 0.534 76 | 0.566 58 | 0.928 97 | 0.478 58 | 0.272 52 | 0.636 48 | 0.531 92 | 0.664 63 | 0.645 98 | 0.508 96 | 0.864 81 | 0.792 78 | 0.611 64 | |
joint point-based | | 0.634 77 | 0.614 101 | 0.778 47 | 0.667 87 | 0.633 64 | 0.825 51 | 0.420 82 | 0.804 48 | 0.467 96 | 0.561 59 | 0.951 51 | 0.494 50 | 0.291 39 | 0.566 70 | 0.458 98 | 0.579 95 | 0.764 51 | 0.559 77 | 0.838 88 | 0.814 60 | 0.598 73
Hung-Yueh Chiang, Yen-Liang Lin, Yueh-Cheng Liu, Winston H. Hsu: A Unified Point-Based Framework for 3D Segmentation. 3DV 2019 | ||||||||||||||||||||||
PointMTL | 0.632 78 | 0.731 73 | 0.688 93 | 0.675 82 | 0.591 73 | 0.784 83 | 0.444 74 | 0.565 101 | 0.610 45 | 0.492 82 | 0.949 59 | 0.456 67 | 0.254 63 | 0.587 62 | 0.706 69 | 0.599 88 | 0.665 94 | 0.612 53 | 0.868 79 | 0.791 81 | 0.579 78 | |
PointNet2-SFPN | 0.631 79 | 0.771 55 | 0.692 90 | 0.672 83 | 0.524 92 | 0.837 38 | 0.440 76 | 0.706 78 | 0.538 75 | 0.446 93 | 0.944 75 | 0.421 86 | 0.219 76 | 0.552 76 | 0.751 61 | 0.591 91 | 0.737 67 | 0.543 87 | 0.901 49 | 0.768 90 | 0.557 88 | |
3DSM_DMMF | 0.631 79 | 0.626 98 | 0.745 68 | 0.801 47 | 0.607 68 | 0.751 96 | 0.506 38 | 0.729 71 | 0.565 69 | 0.491 83 | 0.866 113 | 0.434 76 | 0.197 89 | 0.595 60 | 0.630 83 | 0.709 45 | 0.705 80 | 0.560 75 | 0.875 69 | 0.740 98 | 0.491 102 | |
APCF-Net | 0.631 79 | 0.742 68 | 0.687 95 | 0.672 83 | 0.557 86 | 0.792 81 | 0.408 84 | 0.665 87 | 0.545 74 | 0.508 78 | 0.952 49 | 0.428 81 | 0.186 92 | 0.634 50 | 0.702 71 | 0.620 81 | 0.706 79 | 0.555 79 | 0.873 72 | 0.798 70 | 0.581 77 | |
Haojia Lin: Adaptive Pyramid Context Fusion for Point Cloud Perception. GRSL | ||||||||||||||||||||||
FusionAwareConv | 0.630 82 | 0.604 103 | 0.741 72 | 0.766 61 | 0.590 74 | 0.747 97 | 0.501 41 | 0.734 69 | 0.503 86 | 0.527 70 | 0.919 103 | 0.454 68 | 0.323 25 | 0.550 78 | 0.420 102 | 0.678 58 | 0.688 86 | 0.544 85 | 0.896 52 | 0.795 72 | 0.627 62 | |
Jiazhao Zhang, Chenyang Zhu, Lintao Zheng, Kai Xu: Fusion-Aware Point Convolution for Online Semantic 3D Scene Segmentation. CVPR 2020 | ||||||||||||||||||||||
DenSeR | 0.628 83 | 0.800 41 | 0.625 105 | 0.719 70 | 0.545 89 | 0.806 71 | 0.445 71 | 0.597 95 | 0.448 101 | 0.519 76 | 0.938 85 | 0.481 56 | 0.328 23 | 0.489 92 | 0.499 97 | 0.657 67 | 0.759 56 | 0.592 61 | 0.881 63 | 0.797 71 | 0.634 59 | |
SegGroup_sem | | 0.627 84 | 0.818 37 | 0.747 67 | 0.701 74 | 0.602 70 | 0.764 92 | 0.385 96 | 0.629 92 | 0.490 89 | 0.508 78 | 0.931 96 | 0.409 89 | 0.201 86 | 0.564 71 | 0.725 65 | 0.618 82 | 0.692 84 | 0.539 89 | 0.873 72 | 0.794 73 | 0.548 91
An Tao, Yueqi Duan, Yi Wei, Jiwen Lu, Jie Zhou: SegGroup: Seg-Level Supervision for 3D Instance and Semantic Segmentation. TIP 2022 | ||||||||||||||||||||||
SIConv | 0.625 85 | 0.830 33 | 0.694 88 | 0.757 63 | 0.563 84 | 0.772 90 | 0.448 68 | 0.647 90 | 0.520 80 | 0.509 77 | 0.949 59 | 0.431 79 | 0.191 90 | 0.496 90 | 0.614 85 | 0.647 72 | 0.672 92 | 0.535 92 | 0.876 68 | 0.783 84 | 0.571 80 | |
Weakly-Openseg v3 | 0.625 85 | 0.924 8 | 0.787 42 | 0.620 98 | 0.555 88 | 0.811 69 | 0.393 91 | 0.666 86 | 0.382 109 | 0.520 75 | 0.953 44 | 0.250 113 | 0.208 80 | 0.604 59 | 0.670 76 | 0.644 76 | 0.742 65 | 0.538 90 | 0.919 36 | 0.803 66 | 0.513 99 | |
dtc_net | 0.625 85 | 0.703 86 | 0.751 64 | 0.794 50 | 0.535 90 | 0.848 27 | 0.480 53 | 0.676 84 | 0.528 78 | 0.469 88 | 0.944 75 | 0.454 68 | 0.004 118 | 0.464 95 | 0.636 82 | 0.704 48 | 0.758 57 | 0.548 84 | 0.924 30 | 0.787 82 | 0.492 101 | |
HPEIN | 0.618 88 | 0.729 74 | 0.668 96 | 0.647 93 | 0.597 72 | 0.766 91 | 0.414 83 | 0.680 81 | 0.520 80 | 0.525 71 | 0.946 66 | 0.432 77 | 0.215 78 | 0.493 91 | 0.599 86 | 0.638 77 | 0.617 103 | 0.570 69 | 0.897 51 | 0.806 64 | 0.605 70 | |
Li Jiang, Hengshuang Zhao, Shu Liu, Xiaoyong Shen, Chi-Wing Fu, Jiaya Jia: Hierarchical Point-Edge Interaction Network for Point Cloud Semantic Segmentation. ICCV 2019 | ||||||||||||||||||||||
SPH3D-GCN | | 0.610 89 | 0.858 26 | 0.772 50 | 0.489 111 | 0.532 91 | 0.792 81 | 0.404 87 | 0.643 91 | 0.570 68 | 0.507 80 | 0.935 89 | 0.414 88 | 0.046 115 | 0.510 86 | 0.702 71 | 0.602 87 | 0.705 80 | 0.549 83 | 0.859 83 | 0.773 89 | 0.534 94
Huan Lei, Naveed Akhtar, and Ajmal Mian: Spherical Kernel for Efficient Graph Convolution on 3D Point Clouds. TPAMI 2020 | ||||||||||||||||||||||
AttAN | 0.609 90 | 0.760 60 | 0.667 97 | 0.649 92 | 0.521 93 | 0.793 79 | 0.457 63 | 0.648 89 | 0.528 78 | 0.434 98 | 0.947 63 | 0.401 91 | 0.153 104 | 0.454 96 | 0.721 67 | 0.648 71 | 0.717 74 | 0.536 91 | 0.904 44 | 0.765 91 | 0.485 103 | |
Gege Zhang, Qinghua Ma, Licheng Jiao, Fang Liu, Qigong Sun: AttAN: Attention Adversarial Networks for 3D Point Cloud Semantic Segmentation. IJCAI 2020 | ||||||||||||||||||||||
wsss-transformer | 0.600 91 | 0.634 97 | 0.743 70 | 0.697 77 | 0.601 71 | 0.781 84 | 0.437 78 | 0.585 98 | 0.493 88 | 0.446 93 | 0.933 94 | 0.394 92 | 0.011 117 | 0.654 43 | 0.661 81 | 0.603 86 | 0.733 69 | 0.526 93 | 0.832 89 | 0.761 93 | 0.480 104 | |
LAP-D | 0.594 92 | 0.720 80 | 0.692 90 | 0.637 97 | 0.456 102 | 0.773 89 | 0.391 94 | 0.730 70 | 0.587 58 | 0.445 95 | 0.940 83 | 0.381 95 | 0.288 40 | 0.434 99 | 0.453 100 | 0.591 91 | 0.649 96 | 0.581 67 | 0.777 97 | 0.749 97 | 0.610 66 | |
DPC | 0.592 93 | 0.720 80 | 0.700 84 | 0.602 102 | 0.480 98 | 0.762 94 | 0.380 97 | 0.713 76 | 0.585 61 | 0.437 96 | 0.940 83 | 0.369 97 | 0.288 40 | 0.434 99 | 0.509 96 | 0.590 93 | 0.639 101 | 0.567 73 | 0.772 98 | 0.755 95 | 0.592 76 | |
Francis Engelmann, Theodora Kontogianni, Bastian Leibe: Dilated Point Convolutions: On the Receptive Field Size of Point Convolutions on 3D Point Clouds. ICRA 2020 | ||||||||||||||||||||||
CCRFNet | 0.589 94 | 0.766 59 | 0.659 100 | 0.683 80 | 0.470 101 | 0.740 99 | 0.387 95 | 0.620 94 | 0.490 89 | 0.476 86 | 0.922 101 | 0.355 100 | 0.245 67 | 0.511 85 | 0.511 95 | 0.571 96 | 0.643 99 | 0.493 100 | 0.872 74 | 0.762 92 | 0.600 72 | |
ROSMRF | 0.580 95 | 0.772 54 | 0.707 80 | 0.681 81 | 0.563 84 | 0.764 92 | 0.362 99 | 0.515 106 | 0.465 97 | 0.465 90 | 0.936 88 | 0.427 83 | 0.207 81 | 0.438 97 | 0.577 88 | 0.536 99 | 0.675 91 | 0.486 101 | 0.723 104 | 0.779 85 | 0.524 96 | |
SD-DETR | 0.576 96 | 0.746 65 | 0.609 109 | 0.445 115 | 0.517 94 | 0.643 110 | 0.366 98 | 0.714 75 | 0.456 99 | 0.468 89 | 0.870 112 | 0.432 77 | 0.264 59 | 0.558 74 | 0.674 75 | 0.586 94 | 0.688 86 | 0.482 102 | 0.739 102 | 0.733 100 | 0.537 93 | |
SQN_0.1% | 0.569 97 | 0.676 90 | 0.696 87 | 0.657 89 | 0.497 95 | 0.779 87 | 0.424 80 | 0.548 102 | 0.515 82 | 0.376 103 | 0.902 110 | 0.422 85 | 0.357 10 | 0.379 104 | 0.456 99 | 0.596 90 | 0.659 95 | 0.544 85 | 0.685 107 | 0.665 111 | 0.556 89 | |
TextureNet | ![]() | 0.566 98 | 0.672 92 | 0.664 98 | 0.671 85 | 0.494 96 | 0.719 100 | 0.445 71 | 0.678 83 | 0.411 107 | 0.396 101 | 0.935 89 | 0.356 99 | 0.225 73 | 0.412 101 | 0.535 91 | 0.565 97 | 0.636 102 | 0.464 104 | 0.794 96 | 0.680 108 | 0.568 83 |
Jingwei Huang, Haotian Zhang, Li Yi, Thomas Funkhouser, Matthias Niessner, Leonidas Guibas: TextureNet: Consistent Local Parametrizations for Learning from High-Resolution Signals on Meshes. CVPR 2019 | ||||||||||||||||||||||
DVVNet | 0.562 99 | 0.648 94 | 0.700 84 | 0.770 58 | 0.586 77 | 0.687 104 | 0.333 103 | 0.650 88 | 0.514 83 | 0.475 87 | 0.906 107 | 0.359 98 | 0.223 75 | 0.340 106 | 0.442 101 | 0.422 110 | 0.668 93 | 0.501 97 | 0.708 105 | 0.779 85 | 0.534 94 | |
Pointnet++ & Feature | ![]() | 0.557 100 | 0.735 70 | 0.661 99 | 0.686 79 | 0.491 97 | 0.744 98 | 0.392 92 | 0.539 103 | 0.451 100 | 0.375 104 | 0.946 66 | 0.376 96 | 0.205 83 | 0.403 102 | 0.356 106 | 0.553 98 | 0.643 99 | 0.497 98 | 0.824 92 | 0.756 94 | 0.515 97 |
GMLPs | 0.538 101 | 0.495 111 | 0.693 89 | 0.647 93 | 0.471 100 | 0.793 79 | 0.300 106 | 0.477 107 | 0.505 85 | 0.358 105 | 0.903 109 | 0.327 103 | 0.081 112 | 0.472 94 | 0.529 93 | 0.448 108 | 0.710 75 | 0.509 94 | 0.746 100 | 0.737 99 | 0.554 90 | |
PanopticFusion-label | 0.529 102 | 0.491 112 | 0.688 93 | 0.604 101 | 0.386 107 | 0.632 111 | 0.225 117 | 0.705 79 | 0.434 104 | 0.293 111 | 0.815 115 | 0.348 101 | 0.241 68 | 0.499 89 | 0.669 77 | 0.507 101 | 0.649 96 | 0.442 110 | 0.796 95 | 0.602 115 | 0.561 86 | |
Gaku Narita, Takashi Seno, Tomoya Ishikawa, Yohsuke Kaji: PanopticFusion: Online Volumetric Semantic Mapping at the Level of Stuff and Things. IROS 2019 | ||||||||||||||||||||||
subcloud_weak | 0.516 103 | 0.676 90 | 0.591 112 | 0.609 99 | 0.442 103 | 0.774 88 | 0.335 102 | 0.597 95 | 0.422 106 | 0.357 106 | 0.932 95 | 0.341 102 | 0.094 111 | 0.298 108 | 0.528 94 | 0.473 106 | 0.676 90 | 0.495 99 | 0.602 113 | 0.721 103 | 0.349 115 | |
Online SegFusion | 0.515 104 | 0.607 102 | 0.644 103 | 0.579 104 | 0.434 104 | 0.630 112 | 0.353 100 | 0.628 93 | 0.440 102 | 0.410 99 | 0.762 118 | 0.307 105 | 0.167 99 | 0.520 83 | 0.403 104 | 0.516 100 | 0.565 106 | 0.447 108 | 0.678 108 | 0.701 105 | 0.514 98 | |
Davide Menini, Suryansh Kumar, Martin R. Oswald, Erik Sandstroem, Cristian Sminchisescu, Luc van Gool: A Real-Time Learning Framework for Joint 3D Reconstruction and Semantic Segmentation. IEEE Robotics and Automation Letters | ||||||||||||||||||||||
3DMV, FTSDF | 0.501 105 | 0.558 107 | 0.608 110 | 0.424 117 | 0.478 99 | 0.690 103 | 0.246 113 | 0.586 97 | 0.468 95 | 0.450 92 | 0.911 105 | 0.394 92 | 0.160 102 | 0.438 97 | 0.212 113 | 0.432 109 | 0.541 111 | 0.475 103 | 0.742 101 | 0.727 101 | 0.477 105 | |
PCNN | 0.498 106 | 0.559 106 | 0.644 103 | 0.560 106 | 0.420 106 | 0.711 102 | 0.229 115 | 0.414 108 | 0.436 103 | 0.352 107 | 0.941 81 | 0.324 104 | 0.155 103 | 0.238 113 | 0.387 105 | 0.493 102 | 0.529 112 | 0.509 94 | 0.813 94 | 0.751 96 | 0.504 100 | |
3DMV | 0.484 107 | 0.484 113 | 0.538 115 | 0.643 95 | 0.424 105 | 0.606 115 | 0.310 104 | 0.574 99 | 0.433 105 | 0.378 102 | 0.796 116 | 0.301 106 | 0.214 79 | 0.537 81 | 0.208 114 | 0.472 107 | 0.507 115 | 0.413 113 | 0.693 106 | 0.602 115 | 0.539 92 | |
Angela Dai, Matthias Niessner: 3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation. ECCV 2018 | ||||||||||||||||||||||
PointCNN with RGB | ![]() | 0.458 108 | 0.577 105 | 0.611 108 | 0.356 119 | 0.321 115 | 0.715 101 | 0.299 108 | 0.376 112 | 0.328 115 | 0.319 109 | 0.944 75 | 0.285 108 | 0.164 100 | 0.216 116 | 0.229 111 | 0.484 104 | 0.545 110 | 0.456 106 | 0.755 99 | 0.709 104 | 0.475 106 |
Yangyan Li, Rui Bu, Mingchao Sun, Baoquan Chen: PointCNN. NeurIPS 2018 | ||||||||||||||||||||||
FCPN | ![]() | 0.447 109 | 0.679 89 | 0.604 111 | 0.578 105 | 0.380 108 | 0.682 105 | 0.291 109 | 0.106 119 | 0.483 92 | 0.258 117 | 0.920 102 | 0.258 112 | 0.025 116 | 0.231 115 | 0.325 107 | 0.480 105 | 0.560 108 | 0.463 105 | 0.725 103 | 0.666 110 | 0.231 119 |
Dario Rethage, Johanna Wald, Jürgen Sturm, Nassir Navab, Federico Tombari: Fully-Convolutional Point Networks for Large-Scale Point Clouds. ECCV 2018 | ||||||||||||||||||||||
DGCNN_reproduce | ![]() | 0.446 110 | 0.474 114 | 0.623 106 | 0.463 113 | 0.366 110 | 0.651 108 | 0.310 104 | 0.389 111 | 0.349 113 | 0.330 108 | 0.937 86 | 0.271 110 | 0.126 108 | 0.285 109 | 0.224 112 | 0.350 115 | 0.577 105 | 0.445 109 | 0.625 111 | 0.723 102 | 0.394 111 |
Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma, Michael M. Bronstein, Justin M. Solomon: Dynamic Graph CNN for Learning on Point Clouds. TOG 2019 | ||||||||||||||||||||||
PNET2 | 0.442 111 | 0.548 108 | 0.548 114 | 0.597 103 | 0.363 111 | 0.628 113 | 0.300 106 | 0.292 114 | 0.374 110 | 0.307 110 | 0.881 111 | 0.268 111 | 0.186 92 | 0.238 113 | 0.204 115 | 0.407 111 | 0.506 116 | 0.449 107 | 0.667 109 | 0.620 114 | 0.462 109 | |
SurfaceConvPF | 0.442 111 | 0.505 110 | 0.622 107 | 0.380 118 | 0.342 113 | 0.654 107 | 0.227 116 | 0.397 110 | 0.367 111 | 0.276 113 | 0.924 99 | 0.240 114 | 0.198 88 | 0.359 105 | 0.262 109 | 0.366 112 | 0.581 104 | 0.435 111 | 0.640 110 | 0.668 109 | 0.398 110 | |
Hao Pan, Shilin Liu, Yang Liu, Xin Tong: Convolutional Neural Networks on 3D Surfaces Using Parallel Frames. | ||||||||||||||||||||||
Tangent Convolutions | ![]() | 0.438 113 | 0.437 116 | 0.646 102 | 0.474 112 | 0.369 109 | 0.645 109 | 0.353 100 | 0.258 116 | 0.282 118 | 0.279 112 | 0.918 104 | 0.298 107 | 0.147 107 | 0.283 110 | 0.294 108 | 0.487 103 | 0.562 107 | 0.427 112 | 0.619 112 | 0.633 113 | 0.352 114 |
Maxim Tatarchenko, Jaesik Park, Vladlen Koltun, Qian-Yi Zhou: Tangent Convolutions for Dense Prediction in 3D. CVPR 2018 | ||||||||||||||||||||||
3DWSSS | 0.425 114 | 0.525 109 | 0.647 101 | 0.522 107 | 0.324 114 | 0.488 119 | 0.077 120 | 0.712 77 | 0.353 112 | 0.401 100 | 0.636 120 | 0.281 109 | 0.176 95 | 0.340 106 | 0.565 89 | 0.175 119 | 0.551 109 | 0.398 114 | 0.370 120 | 0.602 115 | 0.361 113 | |
SPLAT Net | ![]() | 0.393 115 | 0.472 115 | 0.511 116 | 0.606 100 | 0.311 116 | 0.656 106 | 0.245 114 | 0.405 109 | 0.328 115 | 0.197 118 | 0.927 98 | 0.227 116 | 0.000 120 | 0.001 121 | 0.249 110 | 0.271 118 | 0.510 113 | 0.383 116 | 0.593 114 | 0.699 106 | 0.267 117 |
Hang Su, Varun Jampani, Deqing Sun, Subhransu Maji, Evangelos Kalogerakis, Ming-Hsuan Yang, Jan Kautz: SPLATNet: Sparse Lattice Networks for Point Cloud Processing. CVPR 2018 | ||||||||||||||||||||||
ScanNet+FTSDF | 0.383 116 | 0.297 118 | 0.491 117 | 0.432 116 | 0.358 112 | 0.612 114 | 0.274 111 | 0.116 118 | 0.411 107 | 0.265 114 | 0.904 108 | 0.229 115 | 0.079 113 | 0.250 111 | 0.185 116 | 0.320 116 | 0.510 113 | 0.385 115 | 0.548 115 | 0.597 118 | 0.394 111 | |
PointNet++ | ![]() | 0.339 117 | 0.584 104 | 0.478 118 | 0.458 114 | 0.256 118 | 0.360 120 | 0.250 112 | 0.247 117 | 0.278 119 | 0.261 116 | 0.677 119 | 0.183 117 | 0.117 109 | 0.212 117 | 0.145 118 | 0.364 113 | 0.346 120 | 0.232 120 | 0.548 115 | 0.523 119 | 0.252 118 |
Charles R. Qi, Li Yi, Hao Su, Leonidas J. Guibas: PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. NeurIPS 2017 | ||||||||||||||||||||||
GrowSP++ | 0.323 118 | 0.114 120 | 0.589 113 | 0.499 109 | 0.147 120 | 0.555 116 | 0.290 110 | 0.336 113 | 0.290 117 | 0.262 115 | 0.865 114 | 0.102 120 | 0.000 120 | 0.037 119 | 0.000 121 | 0.000 121 | 0.462 117 | 0.381 117 | 0.389 119 | 0.664 112 | 0.473 107 | |
SSC-UNet | ![]() | 0.308 119 | 0.353 117 | 0.290 120 | 0.278 120 | 0.166 119 | 0.553 117 | 0.169 119 | 0.286 115 | 0.147 120 | 0.148 120 | 0.908 106 | 0.182 118 | 0.064 114 | 0.023 120 | 0.018 120 | 0.354 114 | 0.363 118 | 0.345 118 | 0.546 117 | 0.685 107 | 0.278 116 |
ScanNet | ![]() | 0.306 120 | 0.203 119 | 0.366 119 | 0.501 108 | 0.311 116 | 0.524 118 | 0.211 118 | 0.002 121 | 0.342 114 | 0.189 119 | 0.786 117 | 0.145 119 | 0.102 110 | 0.245 112 | 0.152 117 | 0.318 117 | 0.348 119 | 0.300 119 | 0.460 118 | 0.437 120 | 0.182 120 |
Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, Matthias Nießner: ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes. CVPR 2017 | ||||||||||||||||||||||
ERROR | 0.054 121 | 0.000 121 | 0.041 121 | 0.172 121 | 0.030 121 | 0.062 122 | 0.001 121 | 0.035 120 | 0.004 121 | 0.051 121 | 0.143 121 | 0.019 121 | 0.003 119 | 0.041 118 | 0.050 119 | 0.003 120 | 0.054 121 | 0.018 121 | 0.005 122 | 0.264 121 | 0.082 121 | |
MVF-GNN | 0.014 122 | 0.000 121 | 0.000 122 | 0.000 122 | 0.007 122 | 0.086 121 | 0.000 122 | 0.000 122 | 0.001 122 | 0.000 122 | 0.029 122 | 0.001 122 | 0.000 120 | 0.000 122 | 0.000 121 | 0.000 121 | 0.000 122 | 0.018 121 | 0.015 121 | 0.115 122 | 0.000 122 | |
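All per-class scores in the table follow the benchmark's metric, IoU = TP/(TP+FP+FN), computed over per-vertex labels on the scan mesh, and "avg iou" is the mean over the 20 classes. The following is a minimal sketch of that computation, not the official evaluation script; the function names and the NaN handling for absent classes are illustrative assumptions.

```python
import numpy as np

def per_class_iou(gt, pred, num_classes):
    """IoU = TP / (TP + FP + FN) per class, over per-vertex label arrays.

    gt, pred: integer arrays of equal length, one label per mesh vertex.
    Returns a dict mapping class index -> IoU (NaN if the class is absent
    from both ground truth and prediction).
    """
    ious = {}
    for c in range(num_classes):
        tp = np.sum((gt == c) & (pred == c))
        fp = np.sum((gt != c) & (pred == c))
        fn = np.sum((gt == c) & (pred != c))
        denom = tp + fp + fn
        ious[c] = tp / denom if denom > 0 else float("nan")
    return ious

def mean_iou(ious):
    """Average IoU over the classes that actually occur (non-NaN entries)."""
    vals = [v for v in ious.values() if not np.isnan(v)]
    return sum(vals) / len(vals)
```

For 3D approaches operating on grids or points rather than meshes, the predicted labels would first be mapped onto the mesh vertices (as described in the evaluation notes above) before being passed to a routine like this.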