3D Semantic Label Benchmark
The 3D semantic labeling task involves predicting a semantic label for each vertex of a 3D scan mesh.
Evaluation and metrics
Our evaluation ranks all methods according to the PASCAL VOC intersection-over-union metric (IoU): IoU = TP/(TP+FP+FN), where TP, FP, and FN are the numbers of true positive, false positive, and false negative vertices, respectively. Predicted labels are evaluated per vertex over the respective 3D scan mesh; for 3D approaches that operate on other representations such as grids or points, the predicted labels should be mapped onto the mesh vertices (e.g., an example mapping from grid to mesh vertices is provided in the evaluation helpers).
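For concreteness, below is a minimal sketch of both steps: computing IoU = TP/(TP+FP+FN) per class from per-vertex labels, and transferring predictions from a grid or point representation onto mesh vertices via a nearest-neighbor lookup. This is not the official evaluation helper; the class count, ignore label, and function names are illustrative assumptions.

```python
# Minimal sketch, not the official evaluation helper; NUM_CLASSES, IGNORE_LABEL and
# the function names are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

NUM_CLASSES = 20    # the 20 benchmark classes (bathtub, bed, ..., window)
IGNORE_LABEL = -1   # vertices without a valid ground-truth annotation

def per_class_iou(pred, gt, num_classes=NUM_CLASSES, ignore=IGNORE_LABEL):
    """IoU = TP / (TP + FP + FN), computed per class over mesh vertices."""
    valid = gt != ignore
    pred, gt = pred[valid], gt[valid]
    ious = np.full(num_classes, np.nan)  # NaN marks classes absent from this scan
    for c in range(num_classes):
        tp = np.sum((pred == c) & (gt == c))
        fp = np.sum((pred == c) & (gt != c))
        fn = np.sum((pred != c) & (gt == c))
        if tp + fp + fn > 0:
            ious[c] = tp / (tp + fp + fn)
    return ious

def map_predictions_to_vertices(pred_points, pred_labels, mesh_vertices):
    """Assign each mesh vertex the label of its nearest predicted point/voxel center."""
    _, nearest = cKDTree(pred_points).query(mesh_vertices)
    return pred_labels[nearest]
```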
This table lists the benchmark results for the 3D semantic label scenario. Methods are ordered by their average IoU over the 20 classes (the avg iou column); each cell shows the IoU score followed by the method's rank in that column.
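As a quick sanity check of this reading (an inference from the table, not an official definition), averaging the 20 per-class IoUs reported for the top-ranked method reproduces its avg iou up to rounding:

```python
# Sanity-check sketch: "avg iou" appears to be the unweighted mean of the 20 per-class
# IoUs; the small integer after each score is the method's 1-based rank in that column.
import numpy as np

# Per-class IoUs of PTv3-PPT-ALC, copied from the first row of the table below.
ptv3_ppt_alc = np.array([
    0.911, 0.812, 0.854, 0.770, 0.856, 0.555, 0.943, 0.660, 0.735, 0.979,
    0.606, 0.492, 0.792, 0.934, 0.841, 0.819, 0.716, 0.947, 0.906, 0.822,
])
print(ptv3_ppt_alc.mean())  # ~0.7975, matching the reported avg iou of 0.798
```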
Method | avg iou | bathtub | bed | bookshelf | cabinet | chair | counter | curtain | desk | door | floor | otherfurniture | picture | refrigerator | shower curtain | sink | sofa | table | toilet | wall | window |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
PTv3-PPT-ALC | 0.798 1 | 0.911 10 | 0.812 21 | 0.854 7 | 0.770 12 | 0.856 14 | 0.555 15 | 0.943 1 | 0.660 24 | 0.735 2 | 0.979 1 | 0.606 7 | 0.492 1 | 0.792 4 | 0.934 3 | 0.841 2 | 0.819 5 | 0.716 8 | 0.947 10 | 0.906 1 | 0.822 1 | |
PTv3 ScanNet | 0.794 2 | 0.941 3 | 0.813 20 | 0.851 9 | 0.782 6 | 0.890 3 | 0.597 1 | 0.916 5 | 0.696 9 | 0.713 5 | 0.979 1 | 0.635 2 | 0.384 3 | 0.793 3 | 0.907 10 | 0.821 5 | 0.790 33 | 0.696 13 | 0.967 3 | 0.903 2 | 0.805 2 | |
Xiaoyang Wu, Li Jiang, Peng-Shuai Wang, Zhijian Liu, Xihui Liu, Yu Qiao, Wanli Ouyang, Tong He, Hengshuang Zhao: Point Transformer V3: Simpler, Faster, Stronger. CVPR 2024 (Oral) | ||||||||||||||||||||||
DITR ScanNet | 0.793 3 | 0.811 40 | 0.852 2 | 0.889 1 | 0.774 9 | 0.907 1 | 0.592 2 | 0.927 3 | 0.719 1 | 0.718 3 | 0.961 17 | 0.652 1 | 0.348 12 | 0.817 1 | 0.927 5 | 0.795 9 | 0.824 2 | 0.749 1 | 0.948 9 | 0.887 7 | 0.771 11 | |
PonderV2 | 0.785 4 | 0.978 1 | 0.800 29 | 0.833 26 | 0.788 4 | 0.853 19 | 0.545 19 | 0.910 8 | 0.713 2 | 0.705 6 | 0.979 1 | 0.596 9 | 0.390 2 | 0.769 15 | 0.832 44 | 0.821 5 | 0.792 32 | 0.730 2 | 0.975 1 | 0.897 5 | 0.785 6 | |
Haoyi Zhu, Honghui Yang, Xiaoyang Wu, Di Huang, Sha Zhang, Xianglong He, Tong He, Hengshuang Zhao, Chunhua Shen, Yu Qiao, Wanli Ouyang: PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm. | ||||||||||||||||||||||
Mix3D | 0.781 5 | 0.964 2 | 0.855 1 | 0.843 18 | 0.781 7 | 0.858 13 | 0.575 7 | 0.831 36 | 0.685 15 | 0.714 4 | 0.979 1 | 0.594 10 | 0.310 29 | 0.801 2 | 0.892 18 | 0.841 2 | 0.819 5 | 0.723 5 | 0.940 15 | 0.887 7 | 0.725 27 | |
Alexey Nekrasov, Jonas Schult, Or Litany, Bastian Leibe, Francis Engelmann: Mix3D: Out-of-Context Data Augmentation for 3D Scenes. 3DV 2021 (Oral) | ||||||||||||||||||||||
Swin3D | 0.779 6 | 0.861 22 | 0.818 15 | 0.836 23 | 0.790 3 | 0.875 5 | 0.576 6 | 0.905 9 | 0.704 6 | 0.739 1 | 0.969 11 | 0.611 3 | 0.349 11 | 0.756 25 | 0.958 1 | 0.702 48 | 0.805 16 | 0.708 9 | 0.916 35 | 0.898 4 | 0.801 3 | |
TTT-KD | 0.773 7 | 0.646 94 | 0.818 15 | 0.809 38 | 0.774 9 | 0.878 4 | 0.581 3 | 0.943 1 | 0.687 13 | 0.704 7 | 0.978 5 | 0.607 6 | 0.336 18 | 0.775 11 | 0.912 8 | 0.838 4 | 0.823 3 | 0.694 14 | 0.967 3 | 0.899 3 | 0.794 5 | |
Lisa Weijler, Muhammad Jehanzeb Mirza, Leon Sick, Can Ekkazan, Pedro Hermosilla: TTT-KD: Test-Time Training for 3D Semantic Segmentation through Knowledge Distillation from Foundation Models. | ||||||||||||||||||||||
ResLFE_HDS | 0.772 8 | 0.939 4 | 0.824 7 | 0.854 7 | 0.771 11 | 0.840 33 | 0.564 11 | 0.900 11 | 0.686 14 | 0.677 14 | 0.961 17 | 0.537 34 | 0.348 12 | 0.769 15 | 0.903 12 | 0.785 13 | 0.815 8 | 0.676 25 | 0.939 16 | 0.880 13 | 0.772 10 | |
OctFormer | 0.766 9 | 0.925 7 | 0.808 25 | 0.849 11 | 0.786 5 | 0.846 29 | 0.566 10 | 0.876 18 | 0.690 11 | 0.674 16 | 0.960 19 | 0.576 20 | 0.226 70 | 0.753 27 | 0.904 11 | 0.777 15 | 0.815 8 | 0.722 6 | 0.923 30 | 0.877 16 | 0.776 9 | |
Peng-Shuai Wang: OctFormer: Octree-based Transformers for 3D Point Clouds. SIGGRAPH 2023 | ||||||||||||||||||||||
PPT-SpUNet-Joint | 0.766 9 | 0.932 5 | 0.794 35 | 0.829 28 | 0.751 25 | 0.854 17 | 0.540 23 | 0.903 10 | 0.630 37 | 0.672 17 | 0.963 15 | 0.565 24 | 0.357 9 | 0.788 5 | 0.900 14 | 0.737 28 | 0.802 17 | 0.685 19 | 0.950 7 | 0.887 7 | 0.780 7 | |
Xiaoyang Wu, Zhuotao Tian, Xin Wen, Bohao Peng, Xihui Liu, Kaicheng Yu, Hengshuang Zhao: Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training. CVPR 2024 | ||||||||||||||||||||||
OccuSeg+Semantic | 0.764 11 | 0.758 61 | 0.796 33 | 0.839 21 | 0.746 28 | 0.907 1 | 0.562 12 | 0.850 28 | 0.680 17 | 0.672 17 | 0.978 5 | 0.610 4 | 0.335 20 | 0.777 9 | 0.819 48 | 0.847 1 | 0.830 1 | 0.691 16 | 0.972 2 | 0.885 10 | 0.727 25 | |
CU-Hybrid Net | 0.764 11 | 0.924 8 | 0.819 13 | 0.840 20 | 0.757 20 | 0.853 19 | 0.580 4 | 0.848 29 | 0.709 4 | 0.643 27 | 0.958 23 | 0.587 15 | 0.295 36 | 0.753 27 | 0.884 22 | 0.758 22 | 0.815 8 | 0.725 4 | 0.927 26 | 0.867 25 | 0.743 18 | |
O-CNN | 0.762 13 | 0.924 8 | 0.823 8 | 0.844 17 | 0.770 12 | 0.852 21 | 0.577 5 | 0.847 31 | 0.711 3 | 0.640 31 | 0.958 23 | 0.592 11 | 0.217 76 | 0.762 20 | 0.888 19 | 0.758 22 | 0.813 12 | 0.726 3 | 0.932 24 | 0.868 24 | 0.744 17 | |
Peng-Shuai Wang, Yang Liu, Yu-Xiao Guo, Chun-Yu Sun, Xin Tong: O-CNN: Octree-based Convolutional Neural Networks for 3D Shape Analysis. SIGGRAPH 2017 | ||||||||||||||||||||||
DiffSegNet | 0.758 14 | 0.725 77 | 0.789 40 | 0.843 18 | 0.762 16 | 0.856 14 | 0.562 12 | 0.920 4 | 0.657 27 | 0.658 21 | 0.958 23 | 0.589 13 | 0.337 17 | 0.782 6 | 0.879 23 | 0.787 11 | 0.779 38 | 0.678 21 | 0.926 28 | 0.880 13 | 0.799 4 | |
DTC | 0.757 15 | 0.843 28 | 0.820 11 | 0.847 14 | 0.791 2 | 0.862 11 | 0.511 36 | 0.870 20 | 0.707 5 | 0.652 23 | 0.954 38 | 0.604 8 | 0.279 47 | 0.760 21 | 0.942 2 | 0.734 29 | 0.766 47 | 0.701 12 | 0.884 57 | 0.874 22 | 0.736 19 | |
OA-CNN-L_ScanNet20 | 0.756 16 | 0.783 47 | 0.826 6 | 0.858 5 | 0.776 8 | 0.837 36 | 0.548 18 | 0.896 14 | 0.649 29 | 0.675 15 | 0.962 16 | 0.586 16 | 0.335 20 | 0.771 14 | 0.802 52 | 0.770 18 | 0.787 35 | 0.691 16 | 0.936 19 | 0.880 13 | 0.761 13 | |
PNE | 0.755 17 | 0.786 45 | 0.835 5 | 0.834 25 | 0.758 18 | 0.849 24 | 0.570 9 | 0.836 35 | 0.648 30 | 0.668 19 | 0.978 5 | 0.581 19 | 0.367 7 | 0.683 38 | 0.856 32 | 0.804 7 | 0.801 21 | 0.678 21 | 0.961 5 | 0.889 6 | 0.716 32 | |
P. Hermosilla: Point Neighborhood Embeddings. | ||||||||||||||||||||||
ConDaFormer | 0.755 17 | 0.927 6 | 0.822 9 | 0.836 23 | 0.801 1 | 0.849 24 | 0.516 33 | 0.864 25 | 0.651 28 | 0.680 13 | 0.958 23 | 0.584 18 | 0.282 44 | 0.759 23 | 0.855 34 | 0.728 31 | 0.802 17 | 0.678 21 | 0.880 62 | 0.873 23 | 0.756 15 | |
Lunhao Duan, Shanshan Zhao, Nan Xue, Mingming Gong, Guisong Xia, Dacheng Tao: ConDaFormer: Disassembled Transformer with Local Structure Enhancement for 3D Point Cloud Understanding. NeurIPS 2023 | ||||||||||||||||||||||
PointTransformerV2 | 0.752 19 | 0.742 68 | 0.809 24 | 0.872 2 | 0.758 18 | 0.860 12 | 0.552 16 | 0.891 16 | 0.610 44 | 0.687 8 | 0.960 19 | 0.559 28 | 0.304 32 | 0.766 18 | 0.926 6 | 0.767 19 | 0.797 25 | 0.644 36 | 0.942 13 | 0.876 19 | 0.722 29 | |
Xiaoyang Wu, Yixing Lao, Li Jiang, Xihui Liu, Hengshuang Zhao: Point Transformer V2: Grouped Vector Attention and Partition-based Pooling. NeurIPS 2022 | ||||||||||||||||||||||
DMF-Net | 0.752 19 | 0.906 13 | 0.793 37 | 0.802 44 | 0.689 43 | 0.825 49 | 0.556 14 | 0.867 21 | 0.681 16 | 0.602 47 | 0.960 19 | 0.555 30 | 0.365 8 | 0.779 8 | 0.859 29 | 0.747 25 | 0.795 29 | 0.717 7 | 0.917 34 | 0.856 33 | 0.764 12 | |
C. Yang, Y. Yan, W. Zhao, J. Ye, X. Yang, A. Hussain, B. Dong, K. Huang: Towards Deeper and Better Multi-view Feature Fusion for 3D Semantic Segmentation. ICONIP 2023 | ||||||||||||||||||||||
PointConvFormer | 0.749 21 | 0.793 43 | 0.790 38 | 0.807 40 | 0.750 27 | 0.856 14 | 0.524 29 | 0.881 17 | 0.588 56 | 0.642 30 | 0.977 9 | 0.591 12 | 0.274 50 | 0.781 7 | 0.929 4 | 0.804 7 | 0.796 26 | 0.642 37 | 0.947 10 | 0.885 10 | 0.715 33 | |
Wenxuan Wu, Qi Shan, Li Fuxin: PointConvFormer: Revenge of the Point-based Convolution. | ||||||||||||||||||||||
BPNet | 0.749 21 | 0.909 11 | 0.818 15 | 0.811 36 | 0.752 23 | 0.839 35 | 0.485 50 | 0.842 32 | 0.673 19 | 0.644 26 | 0.957 28 | 0.528 40 | 0.305 31 | 0.773 12 | 0.859 29 | 0.788 10 | 0.818 7 | 0.693 15 | 0.916 35 | 0.856 33 | 0.723 28 | |
Wenbo Hu, Hengshuang Zhao, Li Jiang, Jiaya Jia, Tien-Tsin Wong: Bidirectional Projection Network for Cross Dimension Scene Understanding. CVPR 2021 (Oral) | ||||||||||||||||||||||
MSP | 0.748 23 | 0.623 97 | 0.804 27 | 0.859 4 | 0.745 29 | 0.824 51 | 0.501 40 | 0.912 7 | 0.690 11 | 0.685 10 | 0.956 29 | 0.567 23 | 0.320 26 | 0.768 17 | 0.918 7 | 0.720 36 | 0.802 17 | 0.676 25 | 0.921 32 | 0.881 12 | 0.779 8 | |
StratifiedFormer | 0.747 24 | 0.901 14 | 0.803 28 | 0.845 16 | 0.757 20 | 0.846 29 | 0.512 35 | 0.825 39 | 0.696 9 | 0.645 25 | 0.956 29 | 0.576 20 | 0.262 61 | 0.744 32 | 0.861 28 | 0.742 26 | 0.770 45 | 0.705 10 | 0.899 47 | 0.860 30 | 0.734 20 | |
Xin Lai*, Jianhui Liu*, Li Jiang, Liwei Wang, Hengshuang Zhao, Shu Liu, Xiaojuan Qi, Jiaya Jia: Stratified Transformer for 3D Point Cloud Segmentation. CVPR 2022 | ||||||||||||||||||||||
Virtual MVFusion | 0.746 25 | 0.771 55 | 0.819 13 | 0.848 13 | 0.702 40 | 0.865 10 | 0.397 88 | 0.899 12 | 0.699 7 | 0.664 20 | 0.948 58 | 0.588 14 | 0.330 22 | 0.746 31 | 0.851 38 | 0.764 20 | 0.796 26 | 0.704 11 | 0.935 20 | 0.866 26 | 0.728 23 | |
Abhijit Kundu, Xiaoqi Yin, Alireza Fathi, David Ross, Brian Brewington, Thomas Funkhouser, Caroline Pantofaru: Virtual Multi-view Fusion for 3D Semantic Segmentation. ECCV 2020 | ||||||||||||||||||||||
VMNet | 0.746 25 | 0.870 20 | 0.838 3 | 0.858 5 | 0.729 34 | 0.850 23 | 0.501 40 | 0.874 19 | 0.587 57 | 0.658 21 | 0.956 29 | 0.564 25 | 0.299 34 | 0.765 19 | 0.900 14 | 0.716 39 | 0.812 13 | 0.631 42 | 0.939 16 | 0.858 31 | 0.709 34 | |
Zeyu Hu, Xuyang Bai, Jiaxiang Shang, Runze Zhang, Jiayu Dong, Xin Wang, Guangyuan Sun, Hongbo Fu, Chiew-Lan Tai: VMNet: Voxel-Mesh Network for Geodesic-Aware 3D Semantic Segmentation. ICCV 2021 (Oral) | ||||||||||||||||||||||
DiffSeg3D2 | 0.745 27 | 0.725 77 | 0.814 19 | 0.837 22 | 0.751 25 | 0.831 43 | 0.514 34 | 0.896 14 | 0.674 18 | 0.684 11 | 0.960 19 | 0.564 25 | 0.303 33 | 0.773 12 | 0.820 47 | 0.713 42 | 0.798 24 | 0.690 18 | 0.923 30 | 0.875 20 | 0.757 14 | |
Retro-FPN | 0.744 28 | 0.842 29 | 0.800 29 | 0.767 58 | 0.740 30 | 0.836 38 | 0.541 21 | 0.914 6 | 0.672 20 | 0.626 35 | 0.958 23 | 0.552 31 | 0.272 52 | 0.777 9 | 0.886 21 | 0.696 49 | 0.801 21 | 0.674 28 | 0.941 14 | 0.858 31 | 0.717 30 | |
Peng Xiang*, Xin Wen*, Yu-Shen Liu, Hui Zhang, Yi Fang, Zhizhong Han: Retrospective Feature Pyramid Network for Point Cloud Semantic Segmentation. ICCV 2023 | ||||||||||||||||||||||
EQ-Net | 0.743 29 | 0.620 98 | 0.799 32 | 0.849 11 | 0.730 33 | 0.822 53 | 0.493 47 | 0.897 13 | 0.664 21 | 0.681 12 | 0.955 32 | 0.562 27 | 0.378 4 | 0.760 21 | 0.903 12 | 0.738 27 | 0.801 21 | 0.673 29 | 0.907 39 | 0.877 16 | 0.745 16 | |
Zetong Yang*, Li Jiang*, Yanan Sun, Bernt Schiele, Jiaya Jia: A Unified Query-based Paradigm for Point Cloud Understanding. CVPR 2022 | ||||||||||||||||||||||
LRPNet | 0.742 30 | 0.816 37 | 0.806 26 | 0.807 40 | 0.752 23 | 0.828 47 | 0.575 7 | 0.839 34 | 0.699 7 | 0.637 32 | 0.954 38 | 0.520 43 | 0.320 26 | 0.755 26 | 0.834 42 | 0.760 21 | 0.772 42 | 0.676 25 | 0.915 37 | 0.862 28 | 0.717 30 | |
SAT | 0.742 30 | 0.860 23 | 0.765 52 | 0.819 31 | 0.769 14 | 0.848 26 | 0.533 25 | 0.829 37 | 0.663 22 | 0.631 34 | 0.955 32 | 0.586 16 | 0.274 50 | 0.753 27 | 0.896 16 | 0.729 30 | 0.760 53 | 0.666 31 | 0.921 32 | 0.855 35 | 0.733 21 | |
LargeKernel3D | 0.739 32 | 0.909 11 | 0.820 11 | 0.806 42 | 0.740 30 | 0.852 21 | 0.545 19 | 0.826 38 | 0.594 55 | 0.643 27 | 0.955 32 | 0.541 33 | 0.263 60 | 0.723 36 | 0.858 31 | 0.775 17 | 0.767 46 | 0.678 21 | 0.933 22 | 0.848 40 | 0.694 39 | |
Yukang Chen*, Jianhui Liu*, Xiangyu Zhang, Xiaojuan Qi, Jiaya Jia: LargeKernel3D: Scaling up Kernels in 3D Sparse CNNs. CVPR 2023 | ||||||||||||||||||||||
RPN | 0.736 33 | 0.776 51 | 0.790 38 | 0.851 9 | 0.754 22 | 0.854 17 | 0.491 49 | 0.866 23 | 0.596 54 | 0.686 9 | 0.955 32 | 0.536 35 | 0.342 15 | 0.624 53 | 0.869 25 | 0.787 11 | 0.802 17 | 0.628 43 | 0.927 26 | 0.875 20 | 0.704 36 | |
MinkowskiNet | 0.736 33 | 0.859 24 | 0.818 15 | 0.832 27 | 0.709 38 | 0.840 33 | 0.521 31 | 0.853 27 | 0.660 24 | 0.643 27 | 0.951 48 | 0.544 32 | 0.286 42 | 0.731 34 | 0.893 17 | 0.675 58 | 0.772 42 | 0.683 20 | 0.874 69 | 0.852 38 | 0.727 25 | |
C. Choy, J. Gwak, S. Savarese: 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. CVPR 2019 | ||||||||||||||||||||||
IPCA | 0.731 35 | 0.890 16 | 0.837 4 | 0.864 3 | 0.726 35 | 0.873 6 | 0.530 28 | 0.824 40 | 0.489 90 | 0.647 24 | 0.978 5 | 0.609 5 | 0.336 18 | 0.624 53 | 0.733 61 | 0.758 22 | 0.776 40 | 0.570 68 | 0.949 8 | 0.877 16 | 0.728 23 | |
SparseConvNet | 0.725 36 | 0.647 93 | 0.821 10 | 0.846 15 | 0.721 36 | 0.869 7 | 0.533 25 | 0.754 61 | 0.603 50 | 0.614 39 | 0.955 32 | 0.572 22 | 0.325 24 | 0.710 37 | 0.870 24 | 0.724 34 | 0.823 3 | 0.628 43 | 0.934 21 | 0.865 27 | 0.683 42 | |
PointTransformer++ | 0.725 36 | 0.727 76 | 0.811 23 | 0.819 31 | 0.765 15 | 0.841 32 | 0.502 39 | 0.814 45 | 0.621 40 | 0.623 37 | 0.955 32 | 0.556 29 | 0.284 43 | 0.620 55 | 0.866 26 | 0.781 14 | 0.757 57 | 0.648 34 | 0.932 24 | 0.862 28 | 0.709 34 | |
MatchingNet | 0.724 38 | 0.812 39 | 0.812 21 | 0.810 37 | 0.735 32 | 0.834 40 | 0.495 46 | 0.860 26 | 0.572 64 | 0.602 47 | 0.954 38 | 0.512 45 | 0.280 46 | 0.757 24 | 0.845 40 | 0.725 33 | 0.780 37 | 0.606 53 | 0.937 18 | 0.851 39 | 0.700 38 | |
INS-Conv-semantic | 0.717 39 | 0.751 64 | 0.759 56 | 0.812 35 | 0.704 39 | 0.868 8 | 0.537 24 | 0.842 32 | 0.609 46 | 0.608 43 | 0.953 42 | 0.534 37 | 0.293 37 | 0.616 56 | 0.864 27 | 0.719 38 | 0.793 30 | 0.640 38 | 0.933 22 | 0.845 44 | 0.663 48 | |
PointMetaBase | 0.714 40 | 0.835 30 | 0.785 41 | 0.821 29 | 0.684 45 | 0.846 29 | 0.531 27 | 0.865 24 | 0.614 41 | 0.596 51 | 0.953 42 | 0.500 48 | 0.246 66 | 0.674 39 | 0.888 19 | 0.692 50 | 0.764 49 | 0.624 45 | 0.849 85 | 0.844 45 | 0.675 44 | |
contrastBoundary | 0.705 41 | 0.769 58 | 0.775 46 | 0.809 38 | 0.687 44 | 0.820 56 | 0.439 76 | 0.812 46 | 0.661 23 | 0.591 53 | 0.945 66 | 0.515 44 | 0.171 95 | 0.633 50 | 0.856 32 | 0.720 36 | 0.796 26 | 0.668 30 | 0.889 54 | 0.847 41 | 0.689 40 | |
Liyao Tang, Yibing Zhan, Zhe Chen, Baosheng Yu, Dacheng Tao: Contrastive Boundary Learning for Point Cloud Segmentation. CVPR 2022 | ||||||||||||||||||||||
ClickSeg_Semantic | 0.703 42 | 0.774 53 | 0.800 29 | 0.793 49 | 0.760 17 | 0.847 28 | 0.471 54 | 0.802 49 | 0.463 97 | 0.634 33 | 0.968 13 | 0.491 51 | 0.271 54 | 0.726 35 | 0.910 9 | 0.706 44 | 0.815 8 | 0.551 80 | 0.878 63 | 0.833 46 | 0.570 80 | |
RFCR | 0.702 43 | 0.889 17 | 0.745 67 | 0.813 34 | 0.672 48 | 0.818 61 | 0.493 47 | 0.815 44 | 0.623 38 | 0.610 41 | 0.947 60 | 0.470 60 | 0.249 65 | 0.594 59 | 0.848 39 | 0.705 45 | 0.779 38 | 0.646 35 | 0.892 52 | 0.823 52 | 0.611 63 | |
Jingyu Gong, Jiachen Xu, Xin Tan, Haichuan Song, Yanyun Qu, Yuan Xie, Lizhuang Ma: Omni-Supervised Point Cloud Segmentation via Gradual Receptive Field Component Reasoning. CVPR 2021 | ||||||||||||||||||||||
One Thing One Click | 0.701 44 | 0.825 34 | 0.796 33 | 0.723 65 | 0.716 37 | 0.832 42 | 0.433 78 | 0.816 42 | 0.634 35 | 0.609 42 | 0.969 11 | 0.418 86 | 0.344 14 | 0.559 71 | 0.833 43 | 0.715 40 | 0.808 15 | 0.560 74 | 0.902 44 | 0.847 41 | 0.680 43 | |
JSENet | 0.699 45 | 0.881 19 | 0.762 53 | 0.821 29 | 0.667 49 | 0.800 73 | 0.522 30 | 0.792 52 | 0.613 42 | 0.607 44 | 0.935 86 | 0.492 50 | 0.205 81 | 0.576 64 | 0.853 36 | 0.691 52 | 0.758 55 | 0.652 33 | 0.872 72 | 0.828 49 | 0.649 52 | |
Zeyu Hu, Mingmin Zhen, Xuyang Bai, Hongbo Fu, Chiew-Lan Tai: JSENet: Joint Semantic Segmentation and Edge Detection Network for 3D Point Clouds. ECCV 2020 | ||||||||||||||||||||||
One-Thing-One-Click | 0.693 46 | 0.743 67 | 0.794 35 | 0.655 88 | 0.684 45 | 0.822 53 | 0.497 45 | 0.719 71 | 0.622 39 | 0.617 38 | 0.977 9 | 0.447 73 | 0.339 16 | 0.750 30 | 0.664 78 | 0.703 47 | 0.790 33 | 0.596 58 | 0.946 12 | 0.855 35 | 0.647 53 | |
Zhengzhe Liu, Xiaojuan Qi, Chi-Wing Fu: One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation. CVPR 2021 | ||||||||||||||||||||||
PicassoNet-II | 0.692 47 | 0.732 72 | 0.772 47 | 0.786 50 | 0.677 47 | 0.866 9 | 0.517 32 | 0.848 29 | 0.509 83 | 0.626 35 | 0.952 46 | 0.536 35 | 0.225 72 | 0.545 77 | 0.704 69 | 0.689 55 | 0.810 14 | 0.564 73 | 0.903 43 | 0.854 37 | 0.729 22 | |
Huan Lei, Naveed Akhtar, Mubarak Shah, and Ajmal Mian: Geometric feature learning for 3D meshes. | ||||||||||||||||||||||
Feature_GeometricNet | 0.690 48 | 0.884 18 | 0.754 60 | 0.795 47 | 0.647 56 | 0.818 61 | 0.422 80 | 0.802 49 | 0.612 43 | 0.604 45 | 0.945 66 | 0.462 63 | 0.189 89 | 0.563 70 | 0.853 36 | 0.726 32 | 0.765 48 | 0.632 41 | 0.904 41 | 0.821 55 | 0.606 67 | |
Kangcheng Liu, Ben M. Chen: https://arxiv.org/abs/2012.09439. arXiv Preprint | ||||||||||||||||||||||
FusionNet | 0.688 49 | 0.704 83 | 0.741 71 | 0.754 62 | 0.656 51 | 0.829 45 | 0.501 40 | 0.741 66 | 0.609 46 | 0.548 61 | 0.950 52 | 0.522 42 | 0.371 5 | 0.633 50 | 0.756 56 | 0.715 40 | 0.771 44 | 0.623 46 | 0.861 80 | 0.814 58 | 0.658 49 | |
Feihu Zhang, Jin Fang, Benjamin Wah, Philip Torr: Deep FusionNet for Point Cloud Semantic Segmentation. ECCV 2020 | ||||||||||||||||||||||
Feature-Geometry Net | 0.685 50 | 0.866 21 | 0.748 64 | 0.819 31 | 0.645 58 | 0.794 76 | 0.450 66 | 0.802 49 | 0.587 57 | 0.604 45 | 0.945 66 | 0.464 62 | 0.201 84 | 0.554 73 | 0.840 41 | 0.723 35 | 0.732 68 | 0.602 56 | 0.907 39 | 0.822 54 | 0.603 70 | |
DGNet | 0.684 51 | 0.712 82 | 0.784 42 | 0.782 54 | 0.658 50 | 0.835 39 | 0.499 44 | 0.823 41 | 0.641 32 | 0.597 50 | 0.950 52 | 0.487 53 | 0.281 45 | 0.575 65 | 0.619 82 | 0.647 71 | 0.764 49 | 0.620 48 | 0.871 75 | 0.846 43 | 0.688 41 | |
VACNN++ | 0.684 51 | 0.728 75 | 0.757 59 | 0.776 55 | 0.690 41 | 0.804 71 | 0.464 59 | 0.816 42 | 0.577 63 | 0.587 54 | 0.945 66 | 0.508 47 | 0.276 49 | 0.671 40 | 0.710 67 | 0.663 63 | 0.750 61 | 0.589 63 | 0.881 60 | 0.832 48 | 0.653 51 | |
KP-FCNN | 0.684 51 | 0.847 27 | 0.758 58 | 0.784 52 | 0.647 56 | 0.814 64 | 0.473 53 | 0.772 55 | 0.605 48 | 0.594 52 | 0.935 86 | 0.450 71 | 0.181 92 | 0.587 60 | 0.805 51 | 0.690 53 | 0.785 36 | 0.614 49 | 0.882 59 | 0.819 56 | 0.632 59 | |
H. Thomas, C. Qi, J. Deschaud, B. Marcotegui, F. Goulette, L. Guibas: KPConv: Flexible and Deformable Convolution for Point Clouds. ICCV 2019 | ||||||||||||||||||||||
PointContrast_LA_SEM | 0.683 54 | 0.757 62 | 0.784 42 | 0.786 50 | 0.639 60 | 0.824 51 | 0.408 83 | 0.775 54 | 0.604 49 | 0.541 63 | 0.934 90 | 0.532 38 | 0.269 56 | 0.552 74 | 0.777 54 | 0.645 74 | 0.793 30 | 0.640 38 | 0.913 38 | 0.824 51 | 0.671 45 | |
Superpoint Network | 0.683 54 | 0.851 26 | 0.728 75 | 0.800 46 | 0.653 53 | 0.806 69 | 0.468 56 | 0.804 47 | 0.572 64 | 0.602 47 | 0.946 63 | 0.453 70 | 0.239 69 | 0.519 83 | 0.822 45 | 0.689 55 | 0.762 52 | 0.595 60 | 0.895 50 | 0.827 50 | 0.630 60 | |
VI-PointConv | 0.676 56 | 0.770 57 | 0.754 60 | 0.783 53 | 0.621 64 | 0.814 64 | 0.552 16 | 0.758 59 | 0.571 66 | 0.557 59 | 0.954 38 | 0.529 39 | 0.268 58 | 0.530 80 | 0.682 73 | 0.675 58 | 0.719 71 | 0.603 55 | 0.888 55 | 0.833 46 | 0.665 47 | |
Xingyi Li, Wenxuan Wu, Xiaoli Z. Fern, Li Fuxin: The Devils in the Point Clouds: Studying the Robustness of Point Cloud Convolutions. | ||||||||||||||||||||||
ROSMRF3D | 0.673 57 | 0.789 44 | 0.748 64 | 0.763 60 | 0.635 62 | 0.814 64 | 0.407 85 | 0.747 63 | 0.581 61 | 0.573 56 | 0.950 52 | 0.484 54 | 0.271 54 | 0.607 57 | 0.754 57 | 0.649 68 | 0.774 41 | 0.596 58 | 0.883 58 | 0.823 52 | 0.606 67 | |
SALANet | 0.670 58 | 0.816 37 | 0.770 50 | 0.768 57 | 0.652 54 | 0.807 68 | 0.451 63 | 0.747 63 | 0.659 26 | 0.545 62 | 0.924 97 | 0.473 59 | 0.149 105 | 0.571 67 | 0.811 50 | 0.635 77 | 0.746 62 | 0.623 46 | 0.892 52 | 0.794 71 | 0.570 80 | |
O3DSeg | 0.668 59 | 0.822 35 | 0.771 49 | 0.496 109 | 0.651 55 | 0.833 41 | 0.541 21 | 0.761 58 | 0.555 72 | 0.611 40 | 0.966 14 | 0.489 52 | 0.370 6 | 0.388 102 | 0.580 85 | 0.776 16 | 0.751 59 | 0.570 68 | 0.956 6 | 0.817 57 | 0.646 54 | |
PointConv | 0.666 60 | 0.781 48 | 0.759 56 | 0.699 73 | 0.644 59 | 0.822 53 | 0.475 52 | 0.779 53 | 0.564 69 | 0.504 80 | 0.953 42 | 0.428 80 | 0.203 83 | 0.586 62 | 0.754 57 | 0.661 64 | 0.753 58 | 0.588 64 | 0.902 44 | 0.813 60 | 0.642 55 | |
Wenxuan Wu, Zhongang Qi, Li Fuxin: PointConv: Deep Convolutional Networks on 3D Point Clouds. CVPR 2019 | ||||||||||||||||||||||
PointASNL | 0.666 60 | 0.703 84 | 0.781 44 | 0.751 64 | 0.655 52 | 0.830 44 | 0.471 54 | 0.769 56 | 0.474 93 | 0.537 65 | 0.951 48 | 0.475 58 | 0.279 47 | 0.635 48 | 0.698 72 | 0.675 58 | 0.751 59 | 0.553 79 | 0.816 92 | 0.806 62 | 0.703 37 | |
Xu Yan, Chaoda Zheng, Zhen Li, Sheng Wang, Shuguang Cui: PointASNL: Robust Point Clouds Processing using Nonlocal Neural Networks with Adaptive Sampling. CVPR 2020 | ||||||||||||||||||||||
PPCNN++ | 0.663 62 | 0.746 65 | 0.708 78 | 0.722 66 | 0.638 61 | 0.820 56 | 0.451 63 | 0.566 99 | 0.599 52 | 0.541 63 | 0.950 52 | 0.510 46 | 0.313 28 | 0.648 45 | 0.819 48 | 0.616 82 | 0.682 86 | 0.590 62 | 0.869 76 | 0.810 61 | 0.656 50 | |
Pyunghwan Ahn, Juyoung Yang, Eojindl Yi, Chanho Lee, Junmo Kim: Projection-based Point Convolution for Efficient Point Cloud Segmentation. IEEE Access | ||||||||||||||||||||||
DCM-Net | 0.658 63 | 0.778 49 | 0.702 81 | 0.806 42 | 0.619 65 | 0.813 67 | 0.468 56 | 0.693 79 | 0.494 86 | 0.524 72 | 0.941 78 | 0.449 72 | 0.298 35 | 0.510 85 | 0.821 46 | 0.675 58 | 0.727 70 | 0.568 71 | 0.826 90 | 0.803 65 | 0.637 57 | |
Jonas Schult*, Francis Engelmann*, Theodora Kontogianni, Bastian Leibe: DualConvMesh-Net: Joint Geodesic and Euclidean Convolutions on 3D Meshes. CVPR 2020 (Oral) | ||||||||||||||||||||||
MVF-GNN | 0.658 63 | 0.558 105 | 0.751 62 | 0.655 88 | 0.690 41 | 0.722 98 | 0.453 62 | 0.867 21 | 0.579 62 | 0.576 55 | 0.893 109 | 0.523 41 | 0.293 37 | 0.733 33 | 0.571 87 | 0.692 50 | 0.659 93 | 0.606 53 | 0.875 66 | 0.804 64 | 0.668 46 | |
HPGCNN | 0.656 65 | 0.698 86 | 0.743 69 | 0.650 90 | 0.564 82 | 0.820 56 | 0.505 38 | 0.758 59 | 0.631 36 | 0.479 84 | 0.945 66 | 0.480 56 | 0.226 70 | 0.572 66 | 0.774 55 | 0.690 53 | 0.735 66 | 0.614 49 | 0.853 84 | 0.776 86 | 0.597 73 | |
Jisheng Dang, Qingyong Hu, Yulan Guo, Jun Yang: HPGCNN. | ||||||||||||||||||||||
SAFNet-seg | 0.654 66 | 0.752 63 | 0.734 73 | 0.664 86 | 0.583 77 | 0.815 63 | 0.399 87 | 0.754 61 | 0.639 33 | 0.535 67 | 0.942 76 | 0.470 60 | 0.309 30 | 0.665 41 | 0.539 89 | 0.650 67 | 0.708 76 | 0.635 40 | 0.857 83 | 0.793 73 | 0.642 55 | |
Linqing Zhao, Jiwen Lu, Jie Zhou: Similarity-Aware Fusion Network for 3D Semantic Segmentation. IROS 2021 | ||||||||||||||||||||||
RandLA-Net | 0.645 67 | 0.778 49 | 0.731 74 | 0.699 73 | 0.577 78 | 0.829 45 | 0.446 68 | 0.736 67 | 0.477 92 | 0.523 74 | 0.945 66 | 0.454 67 | 0.269 56 | 0.484 92 | 0.749 60 | 0.618 80 | 0.738 63 | 0.599 57 | 0.827 89 | 0.792 76 | 0.621 62 | |
PointConv-SFPN | 0.641 68 | 0.776 51 | 0.703 80 | 0.721 67 | 0.557 85 | 0.826 48 | 0.451 63 | 0.672 85 | 0.563 70 | 0.483 83 | 0.943 75 | 0.425 83 | 0.162 100 | 0.644 46 | 0.726 62 | 0.659 65 | 0.709 75 | 0.572 67 | 0.875 66 | 0.786 81 | 0.559 86 | |
MVPNet | 0.641 68 | 0.831 31 | 0.715 76 | 0.671 83 | 0.590 73 | 0.781 82 | 0.394 89 | 0.679 82 | 0.642 31 | 0.553 60 | 0.937 83 | 0.462 63 | 0.256 62 | 0.649 44 | 0.406 102 | 0.626 78 | 0.691 83 | 0.666 31 | 0.877 64 | 0.792 76 | 0.608 66 | |
Maximilian Jaritz, Jiayuan Gu, Hao Su: Multi-view PointNet for 3D Scene Understanding. GMDL Workshop, ICCV 2019 | ||||||||||||||||||||||
PointMRNet | 0.640 70 | 0.717 81 | 0.701 82 | 0.692 76 | 0.576 79 | 0.801 72 | 0.467 58 | 0.716 72 | 0.563 70 | 0.459 90 | 0.953 42 | 0.429 79 | 0.169 97 | 0.581 63 | 0.854 35 | 0.605 83 | 0.710 73 | 0.550 81 | 0.894 51 | 0.793 73 | 0.575 78 | |
FPConv | 0.639 71 | 0.785 46 | 0.760 55 | 0.713 71 | 0.603 68 | 0.798 74 | 0.392 90 | 0.534 104 | 0.603 50 | 0.524 72 | 0.948 58 | 0.457 65 | 0.250 64 | 0.538 78 | 0.723 65 | 0.598 87 | 0.696 81 | 0.614 49 | 0.872 72 | 0.799 66 | 0.567 83 | |
Yiqun Lin, Zizheng Yan, Haibin Huang, Dong Du, Ligang Liu, Shuguang Cui, Xiaoguang Han: FPConv: Learning Local Flattening for Point Convolution. CVPR 2020 | ||||||||||||||||||||||
PD-Net | 0.638 72 | 0.797 42 | 0.769 51 | 0.641 95 | 0.590 73 | 0.820 56 | 0.461 60 | 0.537 103 | 0.637 34 | 0.536 66 | 0.947 60 | 0.388 93 | 0.206 80 | 0.656 42 | 0.668 76 | 0.647 71 | 0.732 68 | 0.585 65 | 0.868 77 | 0.793 73 | 0.473 106 | |
PointSPNet | 0.637 73 | 0.734 71 | 0.692 89 | 0.714 70 | 0.576 79 | 0.797 75 | 0.446 68 | 0.743 65 | 0.598 53 | 0.437 95 | 0.942 76 | 0.403 89 | 0.150 104 | 0.626 52 | 0.800 53 | 0.649 68 | 0.697 80 | 0.557 77 | 0.846 86 | 0.777 85 | 0.563 84 | |
SConv | 0.636 74 | 0.830 32 | 0.697 85 | 0.752 63 | 0.572 81 | 0.780 84 | 0.445 70 | 0.716 72 | 0.529 76 | 0.530 68 | 0.951 48 | 0.446 74 | 0.170 96 | 0.507 87 | 0.666 77 | 0.636 76 | 0.682 86 | 0.541 87 | 0.886 56 | 0.799 66 | 0.594 74 | |
Supervoxel-CNN | 0.635 75 | 0.656 91 | 0.711 77 | 0.719 68 | 0.613 66 | 0.757 93 | 0.444 73 | 0.765 57 | 0.534 75 | 0.566 57 | 0.928 95 | 0.478 57 | 0.272 52 | 0.636 47 | 0.531 91 | 0.664 62 | 0.645 97 | 0.508 95 | 0.864 79 | 0.792 76 | 0.611 63 | |
joint point-based | 0.634 76 | 0.614 99 | 0.778 45 | 0.667 85 | 0.633 63 | 0.825 49 | 0.420 81 | 0.804 47 | 0.467 95 | 0.561 58 | 0.951 48 | 0.494 49 | 0.291 39 | 0.566 68 | 0.458 97 | 0.579 94 | 0.764 49 | 0.559 76 | 0.838 87 | 0.814 58 | 0.598 72 | |
Hung-Yueh Chiang, Yen-Liang Lin, Yueh-Cheng Liu, Winston H. Hsu: A Unified Point-Based Framework for 3D Segmentation. 3DV 2019 | ||||||||||||||||||||||
PointMTL | 0.632 77 | 0.731 73 | 0.688 92 | 0.675 80 | 0.591 72 | 0.784 81 | 0.444 73 | 0.565 100 | 0.610 44 | 0.492 81 | 0.949 56 | 0.456 66 | 0.254 63 | 0.587 60 | 0.706 68 | 0.599 86 | 0.665 92 | 0.612 52 | 0.868 77 | 0.791 79 | 0.579 77 | |
PointNet2-SFPN | 0.631 78 | 0.771 55 | 0.692 89 | 0.672 81 | 0.524 90 | 0.837 36 | 0.440 75 | 0.706 77 | 0.538 74 | 0.446 92 | 0.944 72 | 0.421 85 | 0.219 75 | 0.552 74 | 0.751 59 | 0.591 90 | 0.737 64 | 0.543 86 | 0.901 46 | 0.768 89 | 0.557 87 | |
APCF-Net | 0.631 78 | 0.742 68 | 0.687 94 | 0.672 81 | 0.557 85 | 0.792 79 | 0.408 83 | 0.665 86 | 0.545 73 | 0.508 77 | 0.952 46 | 0.428 80 | 0.186 90 | 0.634 49 | 0.702 70 | 0.620 79 | 0.706 77 | 0.555 78 | 0.873 70 | 0.798 68 | 0.581 76 | |
Haojia Lin: Adaptive Pyramid Context Fusion for Point Cloud Perception. GRSL | ||||||||||||||||||||||
3DSM_DMMF | 0.631 78 | 0.626 96 | 0.745 67 | 0.801 45 | 0.607 67 | 0.751 94 | 0.506 37 | 0.729 70 | 0.565 68 | 0.491 82 | 0.866 112 | 0.434 75 | 0.197 87 | 0.595 58 | 0.630 81 | 0.709 43 | 0.705 78 | 0.560 74 | 0.875 66 | 0.740 97 | 0.491 101 | |
FusionAwareConv | 0.630 81 | 0.604 101 | 0.741 71 | 0.766 59 | 0.590 73 | 0.747 95 | 0.501 40 | 0.734 68 | 0.503 85 | 0.527 70 | 0.919 101 | 0.454 67 | 0.323 25 | 0.550 76 | 0.420 101 | 0.678 57 | 0.688 84 | 0.544 84 | 0.896 49 | 0.795 70 | 0.627 61 | |
Jiazhao Zhang, Chenyang Zhu, Lintao Zheng, Kai Xu: Fusion-Aware Point Convolution for Online Semantic 3D Scene Segmentation. CVPR 2020 | ||||||||||||||||||||||
DenSeR | 0.628 82 | 0.800 41 | 0.625 104 | 0.719 68 | 0.545 87 | 0.806 69 | 0.445 70 | 0.597 94 | 0.448 100 | 0.519 75 | 0.938 82 | 0.481 55 | 0.328 23 | 0.489 91 | 0.499 96 | 0.657 66 | 0.759 54 | 0.592 61 | 0.881 60 | 0.797 69 | 0.634 58 | |
SegGroup_sem | 0.627 83 | 0.818 36 | 0.747 66 | 0.701 72 | 0.602 69 | 0.764 90 | 0.385 94 | 0.629 91 | 0.490 88 | 0.508 77 | 0.931 94 | 0.409 88 | 0.201 84 | 0.564 69 | 0.725 63 | 0.618 80 | 0.692 82 | 0.539 88 | 0.873 70 | 0.794 71 | 0.548 90 | |
An Tao, Yueqi Duan, Yi Wei, Jiwen Lu, Jie Zhou: SegGroup: Seg-Level Supervision for 3D Instance and Semantic Segmentation. TIP 2022 | ||||||||||||||||||||||
SIConv | 0.625 84 | 0.830 32 | 0.694 87 | 0.757 61 | 0.563 83 | 0.772 88 | 0.448 67 | 0.647 89 | 0.520 79 | 0.509 76 | 0.949 56 | 0.431 78 | 0.191 88 | 0.496 89 | 0.614 83 | 0.647 71 | 0.672 90 | 0.535 90 | 0.876 65 | 0.783 82 | 0.571 79 | |
dtc_net | 0.625 84 | 0.703 84 | 0.751 62 | 0.794 48 | 0.535 88 | 0.848 26 | 0.480 51 | 0.676 84 | 0.528 77 | 0.469 87 | 0.944 72 | 0.454 67 | 0.004 117 | 0.464 94 | 0.636 80 | 0.704 46 | 0.758 55 | 0.548 83 | 0.924 29 | 0.787 80 | 0.492 100 | |
HPEIN | 0.618 86 | 0.729 74 | 0.668 95 | 0.647 92 | 0.597 71 | 0.766 89 | 0.414 82 | 0.680 81 | 0.520 79 | 0.525 71 | 0.946 63 | 0.432 76 | 0.215 77 | 0.493 90 | 0.599 84 | 0.638 75 | 0.617 102 | 0.570 68 | 0.897 48 | 0.806 62 | 0.605 69 | |
Li Jiang, Hengshuang Zhao, Shu Liu, Xiaoyong Shen, Chi-Wing Fu, Jiaya Jia: Hierarchical Point-Edge Interaction Network for Point Cloud Semantic Segmentation. ICCV 2019 | ||||||||||||||||||||||
SPH3D-GCN | 0.610 87 | 0.858 25 | 0.772 47 | 0.489 110 | 0.532 89 | 0.792 79 | 0.404 86 | 0.643 90 | 0.570 67 | 0.507 79 | 0.935 86 | 0.414 87 | 0.046 114 | 0.510 85 | 0.702 70 | 0.602 85 | 0.705 78 | 0.549 82 | 0.859 81 | 0.773 87 | 0.534 93 | |
Huan Lei, Naveed Akhtar, and Ajmal Mian: Spherical Kernel for Efficient Graph Convolution on 3D Point Clouds. TPAMI 2020 | ||||||||||||||||||||||
AttAN | 0.609 88 | 0.760 60 | 0.667 96 | 0.649 91 | 0.521 91 | 0.793 77 | 0.457 61 | 0.648 88 | 0.528 77 | 0.434 97 | 0.947 60 | 0.401 90 | 0.153 103 | 0.454 95 | 0.721 66 | 0.648 70 | 0.717 72 | 0.536 89 | 0.904 41 | 0.765 90 | 0.485 102 | |
Gege Zhang, Qinghua Ma, Licheng Jiao, Fang Liu and Qigong Sun: AttAN: Attention Adversarial Networks for 3D Point Cloud Semantic Segmentation. IJCAI 2020 | ||||||||||||||||||||||
Weakly-Openseg v3 | 0.604 89 | 0.901 14 | 0.762 53 | 0.627 97 | 0.478 97 | 0.820 56 | 0.346 100 | 0.689 80 | 0.353 110 | 0.528 69 | 0.933 91 | 0.217 115 | 0.172 94 | 0.530 80 | 0.725 63 | 0.593 89 | 0.737 64 | 0.515 92 | 0.858 82 | 0.772 88 | 0.515 96 | |
wsss-transformer | 0.600 90 | 0.634 95 | 0.743 69 | 0.697 75 | 0.601 70 | 0.781 82 | 0.437 77 | 0.585 97 | 0.493 87 | 0.446 92 | 0.933 91 | 0.394 91 | 0.011 116 | 0.654 43 | 0.661 79 | 0.603 84 | 0.733 67 | 0.526 91 | 0.832 88 | 0.761 92 | 0.480 103 | |
LAP-D | 0.594 91 | 0.720 79 | 0.692 89 | 0.637 96 | 0.456 101 | 0.773 87 | 0.391 92 | 0.730 69 | 0.587 57 | 0.445 94 | 0.940 80 | 0.381 94 | 0.288 40 | 0.434 98 | 0.453 99 | 0.591 90 | 0.649 95 | 0.581 66 | 0.777 96 | 0.749 96 | 0.610 65 | |
DPC | 0.592 92 | 0.720 79 | 0.700 83 | 0.602 101 | 0.480 96 | 0.762 92 | 0.380 95 | 0.713 75 | 0.585 60 | 0.437 95 | 0.940 80 | 0.369 96 | 0.288 40 | 0.434 98 | 0.509 95 | 0.590 92 | 0.639 100 | 0.567 72 | 0.772 97 | 0.755 94 | 0.592 75 | |
Francis Engelmann, Theodora Kontogianni, Bastian Leibe: Dilated Point Convolutions: On the Receptive Field Size of Point Convolutions on 3D Point Clouds. ICRA 2020 | ||||||||||||||||||||||
CCRFNet | 0.589 93 | 0.766 59 | 0.659 99 | 0.683 78 | 0.470 100 | 0.740 97 | 0.387 93 | 0.620 93 | 0.490 88 | 0.476 85 | 0.922 99 | 0.355 99 | 0.245 67 | 0.511 84 | 0.511 94 | 0.571 95 | 0.643 98 | 0.493 99 | 0.872 72 | 0.762 91 | 0.600 71 | |
ROSMRF | 0.580 94 | 0.772 54 | 0.707 79 | 0.681 79 | 0.563 83 | 0.764 90 | 0.362 97 | 0.515 105 | 0.465 96 | 0.465 89 | 0.936 85 | 0.427 82 | 0.207 79 | 0.438 96 | 0.577 86 | 0.536 98 | 0.675 89 | 0.486 100 | 0.723 103 | 0.779 83 | 0.524 95 | |
SD-DETR | 0.576 95 | 0.746 65 | 0.609 108 | 0.445 114 | 0.517 92 | 0.643 109 | 0.366 96 | 0.714 74 | 0.456 98 | 0.468 88 | 0.870 111 | 0.432 76 | 0.264 59 | 0.558 72 | 0.674 74 | 0.586 93 | 0.688 84 | 0.482 101 | 0.739 101 | 0.733 99 | 0.537 92 | |
SQN_0.1% | 0.569 96 | 0.676 88 | 0.696 86 | 0.657 87 | 0.497 93 | 0.779 85 | 0.424 79 | 0.548 101 | 0.515 81 | 0.376 102 | 0.902 108 | 0.422 84 | 0.357 9 | 0.379 103 | 0.456 98 | 0.596 88 | 0.659 93 | 0.544 84 | 0.685 106 | 0.665 110 | 0.556 88 | |
TextureNet | 0.566 97 | 0.672 90 | 0.664 97 | 0.671 83 | 0.494 94 | 0.719 99 | 0.445 70 | 0.678 83 | 0.411 106 | 0.396 100 | 0.935 86 | 0.356 98 | 0.225 72 | 0.412 100 | 0.535 90 | 0.565 96 | 0.636 101 | 0.464 103 | 0.794 95 | 0.680 107 | 0.568 82 | |
Jingwei Huang, Haotian Zhang, Li Yi, Thomas Funkhouser, Matthias Niessner, Leonidas Guibas: TextureNet: Consistent Local Parametrizations for Learning from High-Resolution Signals on Meshes. CVPR 2019 | ||||||||||||||||||||||
DVVNet | 0.562 98 | 0.648 92 | 0.700 83 | 0.770 56 | 0.586 76 | 0.687 103 | 0.333 102 | 0.650 87 | 0.514 82 | 0.475 86 | 0.906 105 | 0.359 97 | 0.223 74 | 0.340 105 | 0.442 100 | 0.422 109 | 0.668 91 | 0.501 96 | 0.708 104 | 0.779 83 | 0.534 93 | |
Pointnet++ & Feature | 0.557 99 | 0.735 70 | 0.661 98 | 0.686 77 | 0.491 95 | 0.744 96 | 0.392 90 | 0.539 102 | 0.451 99 | 0.375 103 | 0.946 63 | 0.376 95 | 0.205 81 | 0.403 101 | 0.356 105 | 0.553 97 | 0.643 98 | 0.497 97 | 0.824 91 | 0.756 93 | 0.515 96 | |
GMLPs | 0.538 100 | 0.495 110 | 0.693 88 | 0.647 92 | 0.471 99 | 0.793 77 | 0.300 105 | 0.477 106 | 0.505 84 | 0.358 104 | 0.903 107 | 0.327 102 | 0.081 111 | 0.472 93 | 0.529 92 | 0.448 107 | 0.710 73 | 0.509 93 | 0.746 99 | 0.737 98 | 0.554 89 | |
PanopticFusion-label | 0.529 101 | 0.491 111 | 0.688 92 | 0.604 100 | 0.386 106 | 0.632 110 | 0.225 116 | 0.705 78 | 0.434 103 | 0.293 110 | 0.815 114 | 0.348 100 | 0.241 68 | 0.499 88 | 0.669 75 | 0.507 100 | 0.649 95 | 0.442 109 | 0.796 94 | 0.602 114 | 0.561 85 | |
Gaku Narita, Takashi Seno, Tomoya Ishikawa, Yohsuke Kaji: PanopticFusion: Online Volumetric Semantic Mapping at the Level of Stuff and Things. IROS 2019 | ||||||||||||||||||||||
subcloud_weak | 0.516 102 | 0.676 88 | 0.591 111 | 0.609 98 | 0.442 102 | 0.774 86 | 0.335 101 | 0.597 94 | 0.422 105 | 0.357 105 | 0.932 93 | 0.341 101 | 0.094 110 | 0.298 107 | 0.528 93 | 0.473 105 | 0.676 88 | 0.495 98 | 0.602 112 | 0.721 102 | 0.349 114 | |
Online SegFusion | 0.515 103 | 0.607 100 | 0.644 102 | 0.579 103 | 0.434 103 | 0.630 111 | 0.353 98 | 0.628 92 | 0.440 101 | 0.410 98 | 0.762 117 | 0.307 104 | 0.167 98 | 0.520 82 | 0.403 103 | 0.516 99 | 0.565 105 | 0.447 107 | 0.678 107 | 0.701 104 | 0.514 98 | |
Davide Menini, Suryansh Kumar, Martin R. Oswald, Erik Sandstroem, Cristian Sminchisescu, Luc van Gool: A Real-Time Learning Framework for Joint 3D Reconstruction and Semantic Segmentation. Robotics and Automation Letters Submission | ||||||||||||||||||||||
3DMV, FTSDF | 0.501 104 | 0.558 105 | 0.608 109 | 0.424 116 | 0.478 97 | 0.690 102 | 0.246 112 | 0.586 96 | 0.468 94 | 0.450 91 | 0.911 103 | 0.394 91 | 0.160 101 | 0.438 96 | 0.212 112 | 0.432 108 | 0.541 110 | 0.475 102 | 0.742 100 | 0.727 100 | 0.477 104 | |
PCNN | 0.498 105 | 0.559 104 | 0.644 102 | 0.560 105 | 0.420 105 | 0.711 101 | 0.229 114 | 0.414 107 | 0.436 102 | 0.352 106 | 0.941 78 | 0.324 103 | 0.155 102 | 0.238 112 | 0.387 104 | 0.493 101 | 0.529 111 | 0.509 93 | 0.813 93 | 0.751 95 | 0.504 99 | |
3DMV | 0.484 106 | 0.484 112 | 0.538 114 | 0.643 94 | 0.424 104 | 0.606 114 | 0.310 103 | 0.574 98 | 0.433 104 | 0.378 101 | 0.796 115 | 0.301 105 | 0.214 78 | 0.537 79 | 0.208 113 | 0.472 106 | 0.507 114 | 0.413 112 | 0.693 105 | 0.602 114 | 0.539 91 | |
Angela Dai, Matthias Niessner: 3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation. ECCV'18 | ||||||||||||||||||||||
PointCNN with RGB | 0.458 107 | 0.577 103 | 0.611 107 | 0.356 118 | 0.321 114 | 0.715 100 | 0.299 107 | 0.376 111 | 0.328 114 | 0.319 108 | 0.944 72 | 0.285 107 | 0.164 99 | 0.216 115 | 0.229 110 | 0.484 103 | 0.545 109 | 0.456 105 | 0.755 98 | 0.709 103 | 0.475 105 | |
Yangyan Li, Rui Bu, Mingchao Sun, Baoquan Chen: PointCNN. NeurIPS 2018 | ||||||||||||||||||||||
FCPN | 0.447 108 | 0.679 87 | 0.604 110 | 0.578 104 | 0.380 107 | 0.682 104 | 0.291 108 | 0.106 118 | 0.483 91 | 0.258 116 | 0.920 100 | 0.258 111 | 0.025 115 | 0.231 114 | 0.325 106 | 0.480 104 | 0.560 107 | 0.463 104 | 0.725 102 | 0.666 109 | 0.231 118 | |
Dario Rethage, Johanna Wald, Jürgen Sturm, Nassir Navab, Federico Tombari: Fully-Convolutional Point Networks for Large-Scale Point Clouds. ECCV 2018 | ||||||||||||||||||||||
DGCNN_reproduce | 0.446 109 | 0.474 113 | 0.623 105 | 0.463 112 | 0.366 109 | 0.651 107 | 0.310 103 | 0.389 110 | 0.349 112 | 0.330 107 | 0.937 83 | 0.271 109 | 0.126 107 | 0.285 108 | 0.224 111 | 0.350 114 | 0.577 104 | 0.445 108 | 0.625 110 | 0.723 101 | 0.394 110 | |
Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma, Michael M. Bronstein, Justin M. Solomon: Dynamic Graph CNN for Learning on Point Clouds. TOG 2019 | ||||||||||||||||||||||
PNET2 | 0.442 110 | 0.548 107 | 0.548 113 | 0.597 102 | 0.363 110 | 0.628 112 | 0.300 105 | 0.292 113 | 0.374 108 | 0.307 109 | 0.881 110 | 0.268 110 | 0.186 90 | 0.238 112 | 0.204 114 | 0.407 110 | 0.506 115 | 0.449 106 | 0.667 108 | 0.620 113 | 0.462 108 | |
SurfaceConvPF | 0.442 110 | 0.505 109 | 0.622 106 | 0.380 117 | 0.342 112 | 0.654 106 | 0.227 115 | 0.397 109 | 0.367 109 | 0.276 112 | 0.924 97 | 0.240 112 | 0.198 86 | 0.359 104 | 0.262 108 | 0.366 111 | 0.581 103 | 0.435 110 | 0.640 109 | 0.668 108 | 0.398 109 | |
Hao Pan, Shilin Liu, Yang Liu, Xin Tong: Convolutional Neural Networks on 3D Surfaces Using Parallel Frames. | ||||||||||||||||||||||
Tangent Convolutions | 0.438 112 | 0.437 115 | 0.646 101 | 0.474 111 | 0.369 108 | 0.645 108 | 0.353 98 | 0.258 115 | 0.282 117 | 0.279 111 | 0.918 102 | 0.298 106 | 0.147 106 | 0.283 109 | 0.294 107 | 0.487 102 | 0.562 106 | 0.427 111 | 0.619 111 | 0.633 112 | 0.352 113 | |
Maxim Tatarchenko, Jaesik Park, Vladlen Koltun, Qian-Yi Zhou: Tangent Convolutions for Dense Prediction in 3D. CVPR 2018 | ||||||||||||||||||||||
3DWSSS | 0.425 113 | 0.525 108 | 0.647 100 | 0.522 106 | 0.324 113 | 0.488 118 | 0.077 119 | 0.712 76 | 0.353 110 | 0.401 99 | 0.636 119 | 0.281 108 | 0.176 93 | 0.340 105 | 0.565 88 | 0.175 118 | 0.551 108 | 0.398 113 | 0.370 119 | 0.602 114 | 0.361 112 | |
SPLAT Net | 0.393 114 | 0.472 114 | 0.511 115 | 0.606 99 | 0.311 115 | 0.656 105 | 0.245 113 | 0.405 108 | 0.328 114 | 0.197 117 | 0.927 96 | 0.227 114 | 0.000 119 | 0.001 120 | 0.249 109 | 0.271 117 | 0.510 112 | 0.383 115 | 0.593 113 | 0.699 105 | 0.267 116 | |
Hang Su, Varun Jampani, Deqing Sun, Subhransu Maji, Evangelos Kalogerakis, Ming-Hsuan Yang, Jan Kautz: SPLATNet: Sparse Lattice Networks for Point Cloud Processing. CVPR 2018 | ||||||||||||||||||||||
ScanNet+FTSDF | 0.383 115 | 0.297 117 | 0.491 116 | 0.432 115 | 0.358 111 | 0.612 113 | 0.274 110 | 0.116 117 | 0.411 106 | 0.265 113 | 0.904 106 | 0.229 113 | 0.079 112 | 0.250 110 | 0.185 115 | 0.320 115 | 0.510 112 | 0.385 114 | 0.548 114 | 0.597 117 | 0.394 110 | |
PointNet++ | 0.339 116 | 0.584 102 | 0.478 117 | 0.458 113 | 0.256 117 | 0.360 119 | 0.250 111 | 0.247 116 | 0.278 118 | 0.261 115 | 0.677 118 | 0.183 116 | 0.117 108 | 0.212 116 | 0.145 117 | 0.364 112 | 0.346 119 | 0.232 119 | 0.548 114 | 0.523 118 | 0.252 117 | |
Charles R. Qi, Li Yi, Hao Su, Leonidas J. Guibas: PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. NeurIPS 2017 | ||||||||||||||||||||||
GrowSP++ | 0.323 117 | 0.114 119 | 0.589 112 | 0.499 108 | 0.147 119 | 0.555 115 | 0.290 109 | 0.336 112 | 0.290 116 | 0.262 114 | 0.865 113 | 0.102 119 | 0.000 119 | 0.037 118 | 0.000 120 | 0.000 120 | 0.462 116 | 0.381 116 | 0.389 118 | 0.664 111 | 0.473 106 | |
SSC-UNet | 0.308 118 | 0.353 116 | 0.290 119 | 0.278 119 | 0.166 118 | 0.553 116 | 0.169 118 | 0.286 114 | 0.147 119 | 0.148 119 | 0.908 104 | 0.182 117 | 0.064 113 | 0.023 119 | 0.018 119 | 0.354 113 | 0.363 117 | 0.345 117 | 0.546 116 | 0.685 106 | 0.278 115 | |
ScanNet | 0.306 119 | 0.203 118 | 0.366 118 | 0.501 107 | 0.311 115 | 0.524 117 | 0.211 117 | 0.002 120 | 0.342 113 | 0.189 118 | 0.786 116 | 0.145 118 | 0.102 109 | 0.245 111 | 0.152 116 | 0.318 116 | 0.348 118 | 0.300 118 | 0.460 117 | 0.437 119 | 0.182 119 | |
Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, Matthias Nießner: ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes. CVPR'17 | ||||||||||||||||||||||
ERROR | 0.054 120 | 0.000 120 | 0.041 120 | 0.172 120 | 0.030 120 | 0.062 120 | 0.001 120 | 0.035 119 | 0.004 120 | 0.051 120 | 0.143 120 | 0.019 120 | 0.003 118 | 0.041 117 | 0.050 118 | 0.003 119 | 0.054 120 | 0.018 120 | 0.005 120 | 0.264 120 | 0.082 120 | |