3D Semantic Label Benchmark
The 3D semantic labeling task involves predicting a semantic label for every vertex of a 3D scan mesh.
Evaluation and metrics
Our evaluation ranks all methods according to the PASCAL VOC intersection-over-union (IoU) metric: IoU = TP/(TP+FP+FN), where TP, FP, and FN are the numbers of true positive, false positive, and false negative vertices, respectively. Predicted labels are evaluated per vertex over the respective 3D scan mesh; for 3D approaches that operate on other representations, such as grids or points, the predicted labels should be mapped onto the mesh vertices (an example mapping from grid to mesh vertices is provided in the evaluation helpers).
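The metric above can be sketched in a few lines of NumPy. This is an illustrative implementation, not the benchmark's official evaluation script; the function names and the `ignore_label` convention are assumptions.

```python
import numpy as np

NUM_CLASSES = 20  # the benchmark evaluates 20 semantic classes

def per_class_iou(pred, gt, num_classes=NUM_CLASSES, ignore_label=-1):
    """PASCAL-VOC-style IoU = TP / (TP + FP + FN), computed per class.

    pred, gt: integer label arrays of shape (num_vertices,).
    Vertices whose ground truth equals `ignore_label` are skipped.
    Classes absent from both pred and gt get NaN so they do not
    distort the average.
    """
    valid = gt != ignore_label
    pred, gt = pred[valid], gt[valid]
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        tp = np.sum((pred == c) & (gt == c))  # true positives
        fp = np.sum((pred == c) & (gt != c))  # false positives
        fn = np.sum((pred != c) & (gt == c))  # false negatives
        denom = tp + fp + fn
        if denom > 0:
            ious[c] = tp / denom
    return ious

def mean_iou(ious):
    """Average IoU over the classes that actually occur."""
    return np.nanmean(ious)
```

For example, with `pred = [0, 0, 1, 1]` and `gt = [0, 1, 1, 1]`, class 0 has TP=1, FP=1, FN=0 (IoU 0.5) and class 1 has TP=2, FP=0, FN=1 (IoU 2/3).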
The table below lists the benchmark results for the 3D semantic label scenario. In each cell, the first number is the method's IoU for that column (or the average over all classes) and the second is its rank in that column; for example, "0.779 4" means an IoU of 0.779, ranked 4th.
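As noted above, methods that predict on grids or point clouds must transfer their labels onto the mesh vertices before evaluation. A minimal nearest-neighbor sketch of such a transfer is shown below (brute-force and purely illustrative — the benchmark's evaluation helpers provide the official grid-to-vertex mapping; all names here are assumptions):

```python
import numpy as np

def map_labels_to_vertices(source_xyz, source_labels, vertex_xyz):
    """Assign each mesh vertex the label of its nearest prediction site.

    source_xyz:    (N, 3) coordinates the model predicted on
                   (e.g. points or voxel centers).
    source_labels: (N,) integer labels at those coordinates.
    vertex_xyz:    (M, 3) mesh vertex coordinates.
    Returns an (M,) array of per-vertex labels.
    """
    # squared distance from every vertex to every prediction site
    d2 = ((vertex_xyz[:, None, :] - source_xyz[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)  # index of the closest site per vertex
    return source_labels[nearest]
```

The O(M·N) distance matrix is fine for a sketch; at ScanNet scale a KD-tree (e.g. `scipy.spatial.cKDTree`) would be the practical choice.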
Method | Info | avg IoU | bathtub | bed | bookshelf | cabinet | chair | counter | curtain | desk | door | floor | otherfurniture | picture | refrigerator | shower curtain | sink | sofa | table | toilet | wall | window |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Swin3D | ![]() | 0.779 4 | 0.861 19 | 0.818 12 | 0.836 17 | 0.790 2 | 0.875 3 | 0.576 4 | 0.905 5 | 0.704 4 | 0.739 1 | 0.969 9 | 0.611 2 | 0.349 9 | 0.756 18 | 0.958 1 | 0.702 40 | 0.805 12 | 0.708 7 | 0.916 28 | 0.898 2 | 0.801 2 |
Mix3D | ![]() | 0.781 3 | 0.964 2 | 0.855 1 | 0.843 14 | 0.781 6 | 0.858 10 | 0.575 5 | 0.831 28 | 0.685 11 | 0.714 2 | 0.979 1 | 0.594 6 | 0.310 23 | 0.801 1 | 0.892 13 | 0.841 2 | 0.819 3 | 0.723 4 | 0.940 11 | 0.887 5 | 0.725 20 |
Alexey Nekrasov, Jonas Schult, Or Litany, Bastian Leibe, Francis Engelmann: Mix3D: Out-of-Context Data Augmentation for 3D Scenes. 3DV 2021 (Oral)
PTv3 ScanNet | 0.794 1 | 0.941 3 | 0.813 15 | 0.851 6 | 0.782 5 | 0.890 2 | 0.597 1 | 0.916 1 | 0.696 7 | 0.713 3 | 0.979 1 | 0.635 1 | 0.384 2 | 0.793 2 | 0.907 6 | 0.821 3 | 0.790 28 | 0.696 10 | 0.967 3 | 0.903 1 | 0.805 1 | |
PonderV2 | 0.785 2 | 0.978 1 | 0.800 23 | 0.833 20 | 0.788 3 | 0.853 14 | 0.545 14 | 0.910 4 | 0.713 1 | 0.705 4 | 0.979 1 | 0.596 5 | 0.390 1 | 0.769 10 | 0.832 38 | 0.821 3 | 0.792 27 | 0.730 1 | 0.975 1 | 0.897 3 | 0.785 3 | |
Haoyi Zhu, Honghui Yang, Xiaoyang Wu, Di Huang, Sha Zhang, Xianglong He, Tong He, Hengshuang Zhao, Chunhua Shen, Yu Qiao, Wanli Ouyang: PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm.
PointTransformerV2 | 0.752 13 | 0.742 62 | 0.809 18 | 0.872 1 | 0.758 12 | 0.860 9 | 0.552 11 | 0.891 10 | 0.610 37 | 0.687 5 | 0.960 14 | 0.559 22 | 0.304 26 | 0.766 12 | 0.926 3 | 0.767 13 | 0.797 20 | 0.644 29 | 0.942 9 | 0.876 14 | 0.722 22 | |
Xiaoyang Wu, Yixing Lao, Li Jiang, Xihui Liu, Hengshuang Zhao: Point Transformer V2: Grouped Vector Attention and Partition-based Pooling. NeurIPS 2022
RPN | 0.736 26 | 0.776 45 | 0.790 32 | 0.851 6 | 0.754 16 | 0.854 12 | 0.491 41 | 0.866 15 | 0.596 47 | 0.686 6 | 0.955 25 | 0.536 28 | 0.342 11 | 0.624 45 | 0.869 19 | 0.787 8 | 0.802 13 | 0.628 36 | 0.927 21 | 0.875 15 | 0.704 29 | |
MSP | 0.748 17 | 0.623 88 | 0.804 21 | 0.859 3 | 0.745 22 | 0.824 43 | 0.501 32 | 0.912 3 | 0.690 9 | 0.685 7 | 0.956 22 | 0.567 18 | 0.320 20 | 0.768 11 | 0.918 4 | 0.720 29 | 0.802 13 | 0.676 19 | 0.921 25 | 0.881 9 | 0.779 5 | |
EQ-Net | 0.743 22 | 0.620 89 | 0.799 26 | 0.849 8 | 0.730 26 | 0.822 45 | 0.493 39 | 0.897 8 | 0.664 16 | 0.681 8 | 0.955 25 | 0.562 21 | 0.378 3 | 0.760 15 | 0.903 8 | 0.738 21 | 0.801 17 | 0.673 22 | 0.907 32 | 0.877 11 | 0.745 10 | |
Zetong Yang*, Li Jiang*, Yanan Sun, Bernt Schiele, Jiaya Jia: A Unified Query-based Paradigm for Point Cloud Understanding. CVPR 2022
ConDaFormer | 0.755 11 | 0.927 5 | 0.822 7 | 0.836 17 | 0.801 1 | 0.849 19 | 0.516 27 | 0.864 17 | 0.651 21 | 0.680 9 | 0.958 17 | 0.584 13 | 0.282 36 | 0.759 16 | 0.855 28 | 0.728 24 | 0.802 13 | 0.678 16 | 0.880 54 | 0.873 16 | 0.756 9 | |
Lunhao Duan, Shanshan Zhao, Nan Xue, Mingming Gong, Guisong Xia, Dacheng Tao: ConDaFormer: Disassembled Transformer with Local Structure Enhancement for 3D Point Cloud Understanding. NeurIPS 2023
OA-CNN-L_ScanNet20 | 0.756 10 | 0.783 41 | 0.826 5 | 0.858 4 | 0.776 7 | 0.837 30 | 0.548 13 | 0.896 9 | 0.649 22 | 0.675 10 | 0.962 13 | 0.586 11 | 0.335 14 | 0.771 9 | 0.802 45 | 0.770 12 | 0.787 30 | 0.691 12 | 0.936 14 | 0.880 10 | 0.761 8 | |
OctFormer | ![]() | 0.766 5 | 0.925 6 | 0.808 19 | 0.849 8 | 0.786 4 | 0.846 24 | 0.566 8 | 0.876 12 | 0.690 9 | 0.674 11 | 0.960 14 | 0.576 15 | 0.226 61 | 0.753 20 | 0.904 7 | 0.777 10 | 0.815 5 | 0.722 5 | 0.923 24 | 0.877 11 | 0.776 6 |
Peng-Shuai Wang: OctFormer: Octree-based Transformers for 3D Point Clouds. SIGGRAPH 2023
OccuSeg+Semantic | 0.764 7 | 0.758 55 | 0.796 27 | 0.839 16 | 0.746 21 | 0.907 1 | 0.562 9 | 0.850 20 | 0.680 13 | 0.672 12 | 0.978 4 | 0.610 3 | 0.335 14 | 0.777 6 | 0.819 41 | 0.847 1 | 0.830 1 | 0.691 12 | 0.972 2 | 0.885 7 | 0.727 18 | |
PPT-SpUNet-Joint | 0.766 5 | 0.932 4 | 0.794 29 | 0.829 22 | 0.751 19 | 0.854 12 | 0.540 17 | 0.903 6 | 0.630 30 | 0.672 12 | 0.963 12 | 0.565 19 | 0.357 7 | 0.788 3 | 0.900 9 | 0.737 22 | 0.802 13 | 0.685 14 | 0.950 5 | 0.887 5 | 0.780 4 | |
Xiaoyang Wu, Zhuotao Tian, Xin Wen, Bohao Peng, Xihui Liu, Kaicheng Yu, Hengshuang Zhao: Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training.
PNE | 0.755 11 | 0.786 39 | 0.835 4 | 0.834 19 | 0.758 12 | 0.849 19 | 0.570 7 | 0.836 27 | 0.648 23 | 0.668 14 | 0.978 4 | 0.581 14 | 0.367 5 | 0.683 30 | 0.856 26 | 0.804 5 | 0.801 17 | 0.678 16 | 0.961 4 | 0.889 4 | 0.716 25 | |
P. Hermosilla: Point Neighborhood Embeddings.
Virtual MVFusion | 0.746 19 | 0.771 49 | 0.819 10 | 0.848 10 | 0.702 33 | 0.865 8 | 0.397 79 | 0.899 7 | 0.699 5 | 0.664 15 | 0.948 50 | 0.588 9 | 0.330 16 | 0.746 24 | 0.851 32 | 0.764 14 | 0.796 21 | 0.704 9 | 0.935 15 | 0.866 19 | 0.728 16 | |
Abhijit Kundu, Xiaoqi Yin, Alireza Fathi, David Ross, Brian Brewington, Thomas Funkhouser, Caroline Pantofaru: Virtual Multi-view Fusion for 3D Semantic Segmentation. ECCV 2020
VMNet | ![]() | 0.746 19 | 0.870 17 | 0.838 2 | 0.858 4 | 0.729 27 | 0.850 18 | 0.501 32 | 0.874 13 | 0.587 50 | 0.658 16 | 0.956 22 | 0.564 20 | 0.299 27 | 0.765 13 | 0.900 9 | 0.716 32 | 0.812 9 | 0.631 35 | 0.939 12 | 0.858 24 | 0.709 27 |
Zeyu Hu, Xuyang Bai, Jiaxiang Shang, Runze Zhang, Jiayu Dong, Xin Wang, Guangyuan Sun, Hongbo Fu, Chiew-Lan Tai: VMNet: Voxel-Mesh Network for Geodesic-Aware 3D Semantic Segmentation. ICCV 2021 (Oral)
IPCA | 0.731 28 | 0.890 13 | 0.837 3 | 0.864 2 | 0.726 28 | 0.873 4 | 0.530 22 | 0.824 32 | 0.489 81 | 0.647 17 | 0.978 4 | 0.609 4 | 0.336 13 | 0.624 45 | 0.733 54 | 0.758 16 | 0.776 34 | 0.570 60 | 0.949 6 | 0.877 11 | 0.728 16 | |
StratifiedFormer | ![]() | 0.747 18 | 0.901 12 | 0.803 22 | 0.845 12 | 0.757 14 | 0.846 24 | 0.512 28 | 0.825 31 | 0.696 7 | 0.645 18 | 0.956 22 | 0.576 15 | 0.262 52 | 0.744 25 | 0.861 22 | 0.742 20 | 0.770 39 | 0.705 8 | 0.899 40 | 0.860 23 | 0.734 13 |
Xin Lai*, Jianhui Liu*, Li Jiang, Liwei Wang, Hengshuang Zhao, Shu Liu, Xiaojuan Qi, Jiaya Jia: Stratified Transformer for 3D Point Cloud Segmentation. CVPR 2022
BPNet | ![]() | 0.749 15 | 0.909 9 | 0.818 12 | 0.811 30 | 0.752 17 | 0.839 29 | 0.485 42 | 0.842 24 | 0.673 14 | 0.644 19 | 0.957 21 | 0.528 33 | 0.305 25 | 0.773 8 | 0.859 23 | 0.788 7 | 0.818 4 | 0.693 11 | 0.916 28 | 0.856 26 | 0.723 21 |
Wenbo Hu, Hengshuang Zhao, Li Jiang, Jiaya Jia, Tien-Tsin Wong: Bidirectional Projection Network for Cross Dimension Scene Understanding. CVPR 2021 (Oral)
CU-Hybrid Net | 0.764 7 | 0.924 7 | 0.819 10 | 0.840 15 | 0.757 14 | 0.853 14 | 0.580 2 | 0.848 21 | 0.709 3 | 0.643 20 | 0.958 17 | 0.587 10 | 0.295 29 | 0.753 20 | 0.884 17 | 0.758 16 | 0.815 5 | 0.725 3 | 0.927 21 | 0.867 18 | 0.743 12 | |
LargeKernel3D | 0.739 25 | 0.909 9 | 0.820 9 | 0.806 35 | 0.740 23 | 0.852 16 | 0.545 14 | 0.826 30 | 0.594 48 | 0.643 20 | 0.955 25 | 0.541 27 | 0.263 51 | 0.723 28 | 0.858 25 | 0.775 11 | 0.767 40 | 0.678 16 | 0.933 17 | 0.848 33 | 0.694 32 | |
Yukang Chen*, Jianhui Liu*, Xiangyu Zhang, Xiaojuan Qi, Jiaya Jia: LargeKernel3D: Scaling up Kernels in 3D Sparse CNNs. CVPR 2023
MinkowskiNet | ![]() | 0.736 26 | 0.859 21 | 0.818 12 | 0.832 21 | 0.709 31 | 0.840 28 | 0.521 25 | 0.853 19 | 0.660 19 | 0.643 20 | 0.951 40 | 0.544 26 | 0.286 34 | 0.731 26 | 0.893 12 | 0.675 49 | 0.772 36 | 0.683 15 | 0.874 60 | 0.852 31 | 0.727 18 |
C. Choy, J. Gwak, S. Savarese: 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. CVPR 2019
PointConvFormer | 0.749 15 | 0.793 37 | 0.790 32 | 0.807 33 | 0.750 20 | 0.856 11 | 0.524 23 | 0.881 11 | 0.588 49 | 0.642 23 | 0.977 7 | 0.591 8 | 0.274 41 | 0.781 4 | 0.929 2 | 0.804 5 | 0.796 21 | 0.642 30 | 0.947 7 | 0.885 7 | 0.715 26 | |
Wenxuan Wu, Qi Shan, Li Fuxin: PointConvFormer: Revenge of the Point-based Convolution.
O-CNN | ![]() | 0.762 9 | 0.924 7 | 0.823 6 | 0.844 13 | 0.770 8 | 0.852 16 | 0.577 3 | 0.847 23 | 0.711 2 | 0.640 24 | 0.958 17 | 0.592 7 | 0.217 67 | 0.762 14 | 0.888 14 | 0.758 16 | 0.813 8 | 0.726 2 | 0.932 19 | 0.868 17 | 0.744 11 |
Peng-Shuai Wang, Yang Liu, Yu-Xiao Guo, Chun-Yu Sun, Xin Tong: O-CNN: Octree-based Convolutional Neural Networks for 3D Shape Analysis. SIGGRAPH 2017
LRPNet | 0.742 23 | 0.816 32 | 0.806 20 | 0.807 33 | 0.752 17 | 0.828 39 | 0.575 5 | 0.839 26 | 0.699 5 | 0.637 25 | 0.954 31 | 0.520 35 | 0.320 20 | 0.755 19 | 0.834 36 | 0.760 15 | 0.772 36 | 0.676 19 | 0.915 30 | 0.862 21 | 0.717 23 | |
ClickSeg_Semantic | 0.703 35 | 0.774 47 | 0.800 23 | 0.793 42 | 0.760 11 | 0.847 23 | 0.471 46 | 0.802 41 | 0.463 88 | 0.634 26 | 0.968 11 | 0.491 43 | 0.271 45 | 0.726 27 | 0.910 5 | 0.706 36 | 0.815 5 | 0.551 71 | 0.878 55 | 0.833 39 | 0.570 71 | |
SAT | 0.742 23 | 0.860 20 | 0.765 44 | 0.819 25 | 0.769 9 | 0.848 21 | 0.533 19 | 0.829 29 | 0.663 17 | 0.631 27 | 0.955 25 | 0.586 11 | 0.274 41 | 0.753 20 | 0.896 11 | 0.729 23 | 0.760 46 | 0.666 24 | 0.921 25 | 0.855 28 | 0.733 14 | |
PicassoNet-II | ![]() | 0.692 40 | 0.732 66 | 0.772 40 | 0.786 43 | 0.677 39 | 0.866 7 | 0.517 26 | 0.848 21 | 0.509 74 | 0.626 28 | 0.952 38 | 0.536 28 | 0.225 63 | 0.545 69 | 0.704 61 | 0.689 46 | 0.810 10 | 0.564 64 | 0.903 36 | 0.854 30 | 0.729 15 |
Huan Lei, Naveed Akhtar, Mubarak Shah, and Ajmal Mian: Geometric feature learning for 3D meshes.
Retro-FPN | 0.744 21 | 0.842 25 | 0.800 23 | 0.767 51 | 0.740 23 | 0.836 32 | 0.541 16 | 0.914 2 | 0.672 15 | 0.626 28 | 0.958 17 | 0.552 25 | 0.272 43 | 0.777 6 | 0.886 16 | 0.696 41 | 0.801 17 | 0.674 21 | 0.941 10 | 0.858 24 | 0.717 23 | |
Peng Xiang*, Xin Wen*, Yu-Shen Liu, Hui Zhang, Yi Fang, Zhizhong Han: Retrospective Feature Pyramid Network for Point Cloud Semantic Segmentation. ICCV 2023
PointTransformer++ | 0.725 29 | 0.727 70 | 0.811 17 | 0.819 25 | 0.765 10 | 0.841 27 | 0.502 31 | 0.814 37 | 0.621 33 | 0.623 30 | 0.955 25 | 0.556 23 | 0.284 35 | 0.620 47 | 0.866 20 | 0.781 9 | 0.757 50 | 0.648 27 | 0.932 19 | 0.862 21 | 0.709 27 | |
One-Thing-One-Click | 0.693 39 | 0.743 61 | 0.794 29 | 0.655 81 | 0.684 37 | 0.822 45 | 0.497 37 | 0.719 62 | 0.622 32 | 0.617 31 | 0.977 7 | 0.447 64 | 0.339 12 | 0.750 23 | 0.664 70 | 0.703 39 | 0.790 28 | 0.596 50 | 0.946 8 | 0.855 28 | 0.647 45 | |
Zhengzhe Liu, Xiaojuan Qi, Chi-Wing Fu: One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation. CVPR 2021
SparseConvNet | 0.725 29 | 0.647 85 | 0.821 8 | 0.846 11 | 0.721 29 | 0.869 5 | 0.533 19 | 0.754 52 | 0.603 43 | 0.614 32 | 0.955 25 | 0.572 17 | 0.325 18 | 0.710 29 | 0.870 18 | 0.724 27 | 0.823 2 | 0.628 36 | 0.934 16 | 0.865 20 | 0.683 35 | |
RFCR | 0.702 36 | 0.889 14 | 0.745 57 | 0.813 28 | 0.672 40 | 0.818 52 | 0.493 39 | 0.815 36 | 0.623 31 | 0.610 33 | 0.947 52 | 0.470 51 | 0.249 56 | 0.594 51 | 0.848 33 | 0.705 37 | 0.779 33 | 0.646 28 | 0.892 45 | 0.823 45 | 0.611 54 | |
Jingyu Gong, Jiachen Xu, Xin Tan, Haichuan Song, Yanyun Qu, Yuan Xie, Lizhuang Ma: Omni-Supervised Point Cloud Segmentation via Gradual Receptive Field Component Reasoning. CVPR 2021
One Thing One Click | 0.701 37 | 0.825 30 | 0.796 27 | 0.723 58 | 0.716 30 | 0.832 35 | 0.433 69 | 0.816 34 | 0.634 28 | 0.609 34 | 0.969 9 | 0.418 77 | 0.344 10 | 0.559 63 | 0.833 37 | 0.715 33 | 0.808 11 | 0.560 65 | 0.902 37 | 0.847 34 | 0.680 36 | |
INS-Conv-semantic | 0.717 32 | 0.751 58 | 0.759 47 | 0.812 29 | 0.704 32 | 0.868 6 | 0.537 18 | 0.842 24 | 0.609 39 | 0.608 35 | 0.953 34 | 0.534 30 | 0.293 30 | 0.616 48 | 0.864 21 | 0.719 31 | 0.793 25 | 0.640 31 | 0.933 17 | 0.845 37 | 0.663 40 | |
JSENet | ![]() | 0.699 38 | 0.881 16 | 0.762 45 | 0.821 23 | 0.667 41 | 0.800 64 | 0.522 24 | 0.792 44 | 0.613 35 | 0.607 36 | 0.935 78 | 0.492 42 | 0.205 72 | 0.576 56 | 0.853 30 | 0.691 43 | 0.758 48 | 0.652 26 | 0.872 63 | 0.828 42 | 0.649 44 |
Zeyu Hu, Mingmin Zhen, Xuyang Bai, Hongbo Fu, Chiew-Lan Tai: JSENet: Joint Semantic Segmentation and Edge Detection Network for 3D Point Clouds. ECCV 2020
Feature-Geometry Net | ![]() | 0.685 43 | 0.866 18 | 0.748 54 | 0.819 25 | 0.645 49 | 0.794 67 | 0.450 57 | 0.802 41 | 0.587 50 | 0.604 37 | 0.945 58 | 0.464 53 | 0.201 75 | 0.554 65 | 0.840 35 | 0.723 28 | 0.732 59 | 0.602 48 | 0.907 32 | 0.822 47 | 0.603 61 |
Feature_GeometricNet | ![]() | 0.690 41 | 0.884 15 | 0.754 51 | 0.795 40 | 0.647 47 | 0.818 52 | 0.422 71 | 0.802 41 | 0.612 36 | 0.604 37 | 0.945 58 | 0.462 54 | 0.189 80 | 0.563 62 | 0.853 30 | 0.726 25 | 0.765 41 | 0.632 34 | 0.904 34 | 0.821 48 | 0.606 58 |
Kangcheng Liu, Ben M. Chen: https://arxiv.org/abs/2012.09439. arXiv preprint
MatchingNet | 0.724 31 | 0.812 34 | 0.812 16 | 0.810 31 | 0.735 25 | 0.834 34 | 0.495 38 | 0.860 18 | 0.572 56 | 0.602 39 | 0.954 31 | 0.512 37 | 0.280 38 | 0.757 17 | 0.845 34 | 0.725 26 | 0.780 32 | 0.606 46 | 0.937 13 | 0.851 32 | 0.700 31 | |
DMF-Net | 0.752 13 | 0.906 11 | 0.793 31 | 0.802 37 | 0.689 35 | 0.825 41 | 0.556 10 | 0.867 14 | 0.681 12 | 0.602 39 | 0.960 14 | 0.555 24 | 0.365 6 | 0.779 5 | 0.859 23 | 0.747 19 | 0.795 24 | 0.717 6 | 0.917 27 | 0.856 26 | 0.764 7 | |
C. Yang, Y. Yan, W. Zhao, J. Ye, X. Yang, A. Hussain, B. Dong, K. Huang: Towards Deeper and Better Multi-view Feature Fusion for 3D Semantic Segmentation. ICONIP 2023
Superpoint Network | 0.683 47 | 0.851 23 | 0.728 65 | 0.800 39 | 0.653 45 | 0.806 60 | 0.468 48 | 0.804 39 | 0.572 56 | 0.602 39 | 0.946 55 | 0.453 61 | 0.239 60 | 0.519 74 | 0.822 39 | 0.689 46 | 0.762 45 | 0.595 52 | 0.895 43 | 0.827 43 | 0.630 51 | |
DGNet | 0.684 44 | 0.712 74 | 0.784 35 | 0.782 47 | 0.658 42 | 0.835 33 | 0.499 36 | 0.823 33 | 0.641 25 | 0.597 42 | 0.950 44 | 0.487 44 | 0.281 37 | 0.575 57 | 0.619 74 | 0.647 62 | 0.764 42 | 0.620 41 | 0.871 66 | 0.846 36 | 0.688 34 | |
PointMetaBase | 0.714 33 | 0.835 26 | 0.785 34 | 0.821 23 | 0.684 37 | 0.846 24 | 0.531 21 | 0.865 16 | 0.614 34 | 0.596 43 | 0.953 34 | 0.500 40 | 0.246 57 | 0.674 31 | 0.888 14 | 0.692 42 | 0.764 42 | 0.624 38 | 0.849 75 | 0.844 38 | 0.675 37 | |
KP-FCNN | 0.684 44 | 0.847 24 | 0.758 49 | 0.784 45 | 0.647 47 | 0.814 55 | 0.473 45 | 0.772 47 | 0.605 41 | 0.594 44 | 0.935 78 | 0.450 62 | 0.181 83 | 0.587 52 | 0.805 44 | 0.690 44 | 0.785 31 | 0.614 42 | 0.882 51 | 0.819 49 | 0.632 50 | |
H. Thomas, C. Qi, J. Deschaud, B. Marcotegui, F. Goulette, L. Guibas: KPConv: Flexible and Deformable Convolution for Point Clouds. ICCV 2019
contrastBoundary | ![]() | 0.705 34 | 0.769 52 | 0.775 39 | 0.809 32 | 0.687 36 | 0.820 48 | 0.439 67 | 0.812 38 | 0.661 18 | 0.591 45 | 0.945 58 | 0.515 36 | 0.171 85 | 0.633 42 | 0.856 26 | 0.720 29 | 0.796 21 | 0.668 23 | 0.889 47 | 0.847 34 | 0.689 33 |
Liyao Tang, Yibing Zhan, Zhe Chen, Baosheng Yu, Dacheng Tao: Contrastive Boundary Learning for Point Cloud Segmentation. CVPR 2022
VACNN++ | 0.684 44 | 0.728 69 | 0.757 50 | 0.776 48 | 0.690 34 | 0.804 62 | 0.464 51 | 0.816 34 | 0.577 55 | 0.587 46 | 0.945 58 | 0.508 39 | 0.276 40 | 0.671 32 | 0.710 59 | 0.663 54 | 0.750 53 | 0.589 55 | 0.881 52 | 0.832 41 | 0.653 43 | |
ROSMRF3D | 0.673 50 | 0.789 38 | 0.748 54 | 0.763 53 | 0.635 53 | 0.814 55 | 0.407 76 | 0.747 54 | 0.581 54 | 0.573 47 | 0.950 44 | 0.484 45 | 0.271 45 | 0.607 49 | 0.754 50 | 0.649 59 | 0.774 35 | 0.596 50 | 0.883 50 | 0.823 45 | 0.606 58 | |
Supervoxel-CNN | 0.635 66 | 0.656 83 | 0.711 67 | 0.719 61 | 0.613 57 | 0.757 84 | 0.444 64 | 0.765 49 | 0.534 66 | 0.566 48 | 0.928 86 | 0.478 48 | 0.272 43 | 0.636 39 | 0.531 81 | 0.664 53 | 0.645 87 | 0.508 85 | 0.864 70 | 0.792 67 | 0.611 54 | |
joint point-based | ![]() | 0.634 67 | 0.614 90 | 0.778 38 | 0.667 78 | 0.633 54 | 0.825 41 | 0.420 72 | 0.804 39 | 0.467 86 | 0.561 49 | 0.951 40 | 0.494 41 | 0.291 31 | 0.566 60 | 0.458 87 | 0.579 84 | 0.764 42 | 0.559 67 | 0.838 77 | 0.814 50 | 0.598 63 |
Hung-Yueh Chiang, Yen-Liang Lin, Yueh-Cheng Liu, Winston H. Hsu: A Unified Point-Based Framework for 3D Segmentation. 3DV 2019
VI-PointConv | 0.676 49 | 0.770 51 | 0.754 51 | 0.783 46 | 0.621 55 | 0.814 55 | 0.552 11 | 0.758 50 | 0.571 58 | 0.557 50 | 0.954 31 | 0.529 32 | 0.268 49 | 0.530 72 | 0.682 65 | 0.675 49 | 0.719 62 | 0.603 47 | 0.888 48 | 0.833 39 | 0.665 39 | |
Xingyi Li, Wenxuan Wu, Xiaoli Z. Fern, Li Fuxin: The Devils in the Point Clouds: Studying the Robustness of Point Cloud Convolutions.
MVPNet | ![]() | 0.641 59 | 0.831 27 | 0.715 66 | 0.671 76 | 0.590 64 | 0.781 73 | 0.394 80 | 0.679 72 | 0.642 24 | 0.553 51 | 0.937 75 | 0.462 54 | 0.256 53 | 0.649 36 | 0.406 92 | 0.626 69 | 0.691 74 | 0.666 24 | 0.877 56 | 0.792 67 | 0.608 57 |
Maximilian Jaritz, Jiayuan Gu, Hao Su: Multi-view PointNet for 3D Scene Understanding. GMDL Workshop, ICCV 2019
FusionNet | 0.688 42 | 0.704 75 | 0.741 61 | 0.754 55 | 0.656 43 | 0.829 37 | 0.501 32 | 0.741 57 | 0.609 39 | 0.548 52 | 0.950 44 | 0.522 34 | 0.371 4 | 0.633 42 | 0.756 49 | 0.715 33 | 0.771 38 | 0.623 39 | 0.861 71 | 0.814 50 | 0.658 41 | |
Feihu Zhang, Jin Fang, Benjamin Wah, Philip Torr: Deep FusionNet for Point Cloud Semantic Segmentation. ECCV 2020
SALANet | 0.670 51 | 0.816 32 | 0.770 42 | 0.768 50 | 0.652 46 | 0.807 59 | 0.451 54 | 0.747 54 | 0.659 20 | 0.545 53 | 0.924 88 | 0.473 50 | 0.149 95 | 0.571 59 | 0.811 43 | 0.635 68 | 0.746 54 | 0.623 39 | 0.892 45 | 0.794 62 | 0.570 71 | |
PointContrast_LA_SEM | 0.683 47 | 0.757 56 | 0.784 35 | 0.786 43 | 0.639 51 | 0.824 43 | 0.408 74 | 0.775 46 | 0.604 42 | 0.541 54 | 0.934 82 | 0.532 31 | 0.269 47 | 0.552 66 | 0.777 47 | 0.645 65 | 0.793 25 | 0.640 31 | 0.913 31 | 0.824 44 | 0.671 38 | |
PPCNN++ | ![]() | 0.663 54 | 0.746 59 | 0.708 68 | 0.722 59 | 0.638 52 | 0.820 48 | 0.451 54 | 0.566 89 | 0.599 45 | 0.541 54 | 0.950 44 | 0.510 38 | 0.313 22 | 0.648 37 | 0.819 41 | 0.616 73 | 0.682 77 | 0.590 54 | 0.869 67 | 0.810 53 | 0.656 42 |
Pyunghwan Ahn, Juyoung Yang, Eojindl Yi, Chanho Lee, Junmo Kim: Projection-based Point Convolution for Efficient Point Cloud Segmentation. IEEE Access
PointASNL | ![]() | 0.666 52 | 0.703 76 | 0.781 37 | 0.751 57 | 0.655 44 | 0.830 36 | 0.471 46 | 0.769 48 | 0.474 84 | 0.537 56 | 0.951 40 | 0.475 49 | 0.279 39 | 0.635 40 | 0.698 64 | 0.675 49 | 0.751 52 | 0.553 70 | 0.816 82 | 0.806 54 | 0.703 30 |
Xu Yan, Chaoda Zheng, Zhen Li, Sheng Wang, Shuguang Cui: PointASNL: Robust Point Clouds Processing using Nonlocal Neural Networks with Adaptive Sampling. CVPR 2020
PD-Net | 0.638 63 | 0.797 36 | 0.769 43 | 0.641 87 | 0.590 64 | 0.820 48 | 0.461 52 | 0.537 93 | 0.637 27 | 0.536 57 | 0.947 52 | 0.388 84 | 0.206 71 | 0.656 34 | 0.668 68 | 0.647 62 | 0.732 59 | 0.585 57 | 0.868 68 | 0.793 64 | 0.473 96 | |
SAFNet-seg | ![]() | 0.654 57 | 0.752 57 | 0.734 63 | 0.664 79 | 0.583 68 | 0.815 54 | 0.399 78 | 0.754 52 | 0.639 26 | 0.535 58 | 0.942 68 | 0.470 51 | 0.309 24 | 0.665 33 | 0.539 79 | 0.650 58 | 0.708 67 | 0.635 33 | 0.857 73 | 0.793 64 | 0.642 46 |
Linqing Zhao, Jiwen Lu, Jie Zhou: Similarity-Aware Fusion Network for 3D Semantic Segmentation. IROS 2021
SConv | 0.636 65 | 0.830 28 | 0.697 75 | 0.752 56 | 0.572 72 | 0.780 75 | 0.445 61 | 0.716 63 | 0.529 67 | 0.530 59 | 0.951 40 | 0.446 65 | 0.170 86 | 0.507 78 | 0.666 69 | 0.636 67 | 0.682 77 | 0.541 78 | 0.886 49 | 0.799 57 | 0.594 65 | |
FusionAwareConv | 0.630 72 | 0.604 92 | 0.741 61 | 0.766 52 | 0.590 64 | 0.747 86 | 0.501 32 | 0.734 59 | 0.503 76 | 0.527 60 | 0.919 92 | 0.454 58 | 0.323 19 | 0.550 68 | 0.420 91 | 0.678 48 | 0.688 75 | 0.544 75 | 0.896 42 | 0.795 61 | 0.627 52 | |
Jiazhao Zhang, Chenyang Zhu, Lintao Zheng, Kai Xu: Fusion-Aware Point Convolution for Online Semantic 3D Scene Segmentation. CVPR 2020
HPEIN | 0.618 77 | 0.729 68 | 0.668 85 | 0.647 84 | 0.597 62 | 0.766 80 | 0.414 73 | 0.680 71 | 0.520 70 | 0.525 61 | 0.946 55 | 0.432 67 | 0.215 68 | 0.493 81 | 0.599 76 | 0.638 66 | 0.617 92 | 0.570 60 | 0.897 41 | 0.806 54 | 0.605 60 | |
Li Jiang, Hengshuang Zhao, Shu Liu, Xiaoyong Shen, Chi-Wing Fu, Jiaya Jia: Hierarchical Point-Edge Interaction Network for Point Cloud Semantic Segmentation. ICCV 2019
DCM-Net | 0.658 55 | 0.778 43 | 0.702 71 | 0.806 35 | 0.619 56 | 0.813 58 | 0.468 48 | 0.693 70 | 0.494 77 | 0.524 62 | 0.941 70 | 0.449 63 | 0.298 28 | 0.510 76 | 0.821 40 | 0.675 49 | 0.727 61 | 0.568 62 | 0.826 80 | 0.803 56 | 0.637 48 | |
Jonas Schult*, Francis Engelmann*, Theodora Kontogianni, Bastian Leibe: DualConvMesh-Net: Joint Geodesic and Euclidean Convolutions on 3D Meshes. CVPR 2020 (Oral)
FPConv | ![]() | 0.639 62 | 0.785 40 | 0.760 46 | 0.713 64 | 0.603 59 | 0.798 65 | 0.392 81 | 0.534 94 | 0.603 43 | 0.524 62 | 0.948 50 | 0.457 56 | 0.250 55 | 0.538 70 | 0.723 57 | 0.598 78 | 0.696 72 | 0.614 42 | 0.872 63 | 0.799 57 | 0.567 74 |
Yiqun Lin, Zizheng Yan, Haibin Huang, Dong Du, Ligang Liu, Shuguang Cui, Xiaoguang Han: FPConv: Learning Local Flattening for Point Convolution. CVPR 2020
RandLA-Net | ![]() | 0.645 58 | 0.778 43 | 0.731 64 | 0.699 66 | 0.577 69 | 0.829 37 | 0.446 59 | 0.736 58 | 0.477 83 | 0.523 64 | 0.945 58 | 0.454 58 | 0.269 47 | 0.484 83 | 0.749 53 | 0.618 71 | 0.738 55 | 0.599 49 | 0.827 79 | 0.792 67 | 0.621 53 |
DenSeR | 0.628 73 | 0.800 35 | 0.625 94 | 0.719 61 | 0.545 78 | 0.806 60 | 0.445 61 | 0.597 84 | 0.448 91 | 0.519 65 | 0.938 74 | 0.481 46 | 0.328 17 | 0.489 82 | 0.499 86 | 0.657 57 | 0.759 47 | 0.592 53 | 0.881 52 | 0.797 60 | 0.634 49 | |
SIConv | 0.625 75 | 0.830 28 | 0.694 77 | 0.757 54 | 0.563 74 | 0.772 79 | 0.448 58 | 0.647 79 | 0.520 70 | 0.509 66 | 0.949 48 | 0.431 69 | 0.191 79 | 0.496 80 | 0.614 75 | 0.647 62 | 0.672 81 | 0.535 81 | 0.876 57 | 0.783 73 | 0.571 70 | |
APCF-Net | 0.631 69 | 0.742 62 | 0.687 84 | 0.672 74 | 0.557 76 | 0.792 70 | 0.408 74 | 0.665 76 | 0.545 64 | 0.508 67 | 0.952 38 | 0.428 71 | 0.186 81 | 0.634 41 | 0.702 62 | 0.620 70 | 0.706 68 | 0.555 69 | 0.873 61 | 0.798 59 | 0.581 67 | |
Haojia Lin: Adaptive Pyramid Context Fusion for Point Cloud Perception. GRSL
SegGroup_sem | ![]() | 0.627 74 | 0.818 31 | 0.747 56 | 0.701 65 | 0.602 60 | 0.764 81 | 0.385 85 | 0.629 81 | 0.490 79 | 0.508 67 | 0.931 85 | 0.409 79 | 0.201 75 | 0.564 61 | 0.725 56 | 0.618 71 | 0.692 73 | 0.539 79 | 0.873 61 | 0.794 62 | 0.548 81 |
An Tao, Yueqi Duan, Yi Wei, Jiwen Lu, Jie Zhou: SegGroup: Seg-Level Supervision for 3D Instance and Semantic Segmentation. TIP 2022
SPH3D-GCN | ![]() | 0.610 78 | 0.858 22 | 0.772 40 | 0.489 99 | 0.532 80 | 0.792 70 | 0.404 77 | 0.643 80 | 0.570 59 | 0.507 69 | 0.935 78 | 0.414 78 | 0.046 104 | 0.510 76 | 0.702 62 | 0.602 76 | 0.705 69 | 0.549 73 | 0.859 72 | 0.773 78 | 0.534 84 |
Huan Lei, Naveed Akhtar, and Ajmal Mian: Spherical Kernel for Efficient Graph Convolution on 3D Point Clouds. TPAMI 2020
PointConv | ![]() | 0.666 52 | 0.781 42 | 0.759 47 | 0.699 66 | 0.644 50 | 0.822 45 | 0.475 44 | 0.779 45 | 0.564 61 | 0.504 70 | 0.953 34 | 0.428 71 | 0.203 74 | 0.586 54 | 0.754 50 | 0.661 55 | 0.753 51 | 0.588 56 | 0.902 37 | 0.813 52 | 0.642 46 |
Wenxuan Wu, Zhongang Qi, Li Fuxin: PointConv: Deep Convolutional Networks on 3D Point Clouds. CVPR 2019
PointMTL | 0.632 68 | 0.731 67 | 0.688 82 | 0.675 73 | 0.591 63 | 0.784 72 | 0.444 64 | 0.565 90 | 0.610 37 | 0.492 71 | 0.949 48 | 0.456 57 | 0.254 54 | 0.587 52 | 0.706 60 | 0.599 77 | 0.665 83 | 0.612 45 | 0.868 68 | 0.791 70 | 0.579 68 | |
3DSM_DMMF | 0.631 69 | 0.626 87 | 0.745 57 | 0.801 38 | 0.607 58 | 0.751 85 | 0.506 29 | 0.729 61 | 0.565 60 | 0.491 72 | 0.866 102 | 0.434 66 | 0.197 78 | 0.595 50 | 0.630 73 | 0.709 35 | 0.705 69 | 0.560 65 | 0.875 58 | 0.740 87 | 0.491 91 | |
PointConv-SFPN | 0.641 59 | 0.776 45 | 0.703 70 | 0.721 60 | 0.557 76 | 0.826 40 | 0.451 54 | 0.672 75 | 0.563 62 | 0.483 73 | 0.943 67 | 0.425 74 | 0.162 90 | 0.644 38 | 0.726 55 | 0.659 56 | 0.709 66 | 0.572 59 | 0.875 58 | 0.786 72 | 0.559 77 | |
HPGCNN | 0.656 56 | 0.698 78 | 0.743 59 | 0.650 82 | 0.564 73 | 0.820 48 | 0.505 30 | 0.758 50 | 0.631 29 | 0.479 74 | 0.945 58 | 0.480 47 | 0.226 61 | 0.572 58 | 0.774 48 | 0.690 44 | 0.735 57 | 0.614 42 | 0.853 74 | 0.776 77 | 0.597 64 | |
Jisheng Dang, Qingyong Hu, Yulan Guo, Jun Yang: HPGCNN.
CCRFNet | 0.589 83 | 0.766 53 | 0.659 89 | 0.683 71 | 0.470 90 | 0.740 88 | 0.387 84 | 0.620 83 | 0.490 79 | 0.476 75 | 0.922 90 | 0.355 90 | 0.245 58 | 0.511 75 | 0.511 84 | 0.571 85 | 0.643 88 | 0.493 89 | 0.872 63 | 0.762 81 | 0.600 62 | |
DVVNet | 0.562 88 | 0.648 84 | 0.700 73 | 0.770 49 | 0.586 67 | 0.687 93 | 0.333 92 | 0.650 77 | 0.514 73 | 0.475 76 | 0.906 96 | 0.359 88 | 0.223 65 | 0.340 95 | 0.442 90 | 0.422 99 | 0.668 82 | 0.501 86 | 0.708 94 | 0.779 74 | 0.534 84 | |
dtc_net | 0.625 75 | 0.703 76 | 0.751 53 | 0.794 41 | 0.535 79 | 0.848 21 | 0.480 43 | 0.676 74 | 0.528 68 | 0.469 77 | 0.944 64 | 0.454 58 | 0.004 107 | 0.464 85 | 0.636 72 | 0.704 38 | 0.758 48 | 0.548 74 | 0.924 23 | 0.787 71 | 0.492 90 | |
SD-DETR | 0.576 85 | 0.746 59 | 0.609 98 | 0.445 103 | 0.517 83 | 0.643 99 | 0.366 87 | 0.714 65 | 0.456 89 | 0.468 78 | 0.870 101 | 0.432 67 | 0.264 50 | 0.558 64 | 0.674 66 | 0.586 83 | 0.688 75 | 0.482 91 | 0.739 91 | 0.733 89 | 0.537 83 | |
ROSMRF | 0.580 84 | 0.772 48 | 0.707 69 | 0.681 72 | 0.563 74 | 0.764 81 | 0.362 88 | 0.515 95 | 0.465 87 | 0.465 79 | 0.936 77 | 0.427 73 | 0.207 70 | 0.438 87 | 0.577 77 | 0.536 88 | 0.675 80 | 0.486 90 | 0.723 93 | 0.779 74 | 0.524 86 | |
PointMRNet | 0.640 61 | 0.717 73 | 0.701 72 | 0.692 69 | 0.576 70 | 0.801 63 | 0.467 50 | 0.716 63 | 0.563 62 | 0.459 80 | 0.953 34 | 0.429 70 | 0.169 87 | 0.581 55 | 0.854 29 | 0.605 74 | 0.710 64 | 0.550 72 | 0.894 44 | 0.793 64 | 0.575 69 | |
3DMV, FTSDF | 0.501 94 | 0.558 96 | 0.608 99 | 0.424 105 | 0.478 88 | 0.690 92 | 0.246 101 | 0.586 86 | 0.468 85 | 0.450 81 | 0.911 94 | 0.394 82 | 0.160 91 | 0.438 87 | 0.212 102 | 0.432 98 | 0.541 100 | 0.475 92 | 0.742 90 | 0.727 90 | 0.477 94 | |
PointNet2-SFPN | 0.631 69 | 0.771 49 | 0.692 79 | 0.672 74 | 0.524 81 | 0.837 30 | 0.440 66 | 0.706 68 | 0.538 65 | 0.446 82 | 0.944 64 | 0.421 76 | 0.219 66 | 0.552 66 | 0.751 52 | 0.591 80 | 0.737 56 | 0.543 77 | 0.901 39 | 0.768 79 | 0.557 78 | |
wsss-transformer | 0.600 80 | 0.634 86 | 0.743 59 | 0.697 68 | 0.601 61 | 0.781 73 | 0.437 68 | 0.585 87 | 0.493 78 | 0.446 82 | 0.933 83 | 0.394 82 | 0.011 106 | 0.654 35 | 0.661 71 | 0.603 75 | 0.733 58 | 0.526 82 | 0.832 78 | 0.761 82 | 0.480 93 | |
LAP-D | 0.594 81 | 0.720 71 | 0.692 79 | 0.637 88 | 0.456 91 | 0.773 78 | 0.391 83 | 0.730 60 | 0.587 50 | 0.445 84 | 0.940 72 | 0.381 85 | 0.288 32 | 0.434 89 | 0.453 89 | 0.591 80 | 0.649 85 | 0.581 58 | 0.777 86 | 0.749 86 | 0.610 56 | |
DPC | 0.592 82 | 0.720 71 | 0.700 73 | 0.602 92 | 0.480 87 | 0.762 83 | 0.380 86 | 0.713 66 | 0.585 53 | 0.437 85 | 0.940 72 | 0.369 87 | 0.288 32 | 0.434 89 | 0.509 85 | 0.590 82 | 0.639 90 | 0.567 63 | 0.772 87 | 0.755 84 | 0.592 66 | |
Francis Engelmann, Theodora Kontogianni, Bastian Leibe: Dilated Point Convolutions: On the Receptive Field Size of Point Convolutions on 3D Point Clouds. ICRA 2020
PointSPNet | 0.637 64 | 0.734 65 | 0.692 79 | 0.714 63 | 0.576 70 | 0.797 66 | 0.446 59 | 0.743 56 | 0.598 46 | 0.437 85 | 0.942 68 | 0.403 80 | 0.150 94 | 0.626 44 | 0.800 46 | 0.649 59 | 0.697 71 | 0.557 68 | 0.846 76 | 0.777 76 | 0.563 75 | |
AttAN | 0.609 79 | 0.760 54 | 0.667 86 | 0.649 83 | 0.521 82 | 0.793 68 | 0.457 53 | 0.648 78 | 0.528 68 | 0.434 87 | 0.947 52 | 0.401 81 | 0.153 93 | 0.454 86 | 0.721 58 | 0.648 61 | 0.717 63 | 0.536 80 | 0.904 34 | 0.765 80 | 0.485 92 | |
Gege Zhang, Qinghua Ma, Licheng Jiao, Fang Liu, and Qigong Sun: AttAN: Attention Adversarial Networks for 3D Point Cloud Semantic Segmentation. IJCAI 2020
Online SegFusion | 0.515 93 | 0.607 91 | 0.644 92 | 0.579 94 | 0.434 93 | 0.630 101 | 0.353 89 | 0.628 82 | 0.440 92 | 0.410 88 | 0.762 106 | 0.307 95 | 0.167 88 | 0.520 73 | 0.403 93 | 0.516 89 | 0.565 95 | 0.447 97 | 0.678 97 | 0.701 94 | 0.514 88 | |
Davide Menini, Suryansh Kumar, Martin R. Oswald, Erik Sandstroem, Cristian Sminchisescu, Luc van Gool: A Real-Time Learning Framework for Joint 3D Reconstruction and Semantic Segmentation. Robotics and Automation Letters Submission
3DWSSS | 0.425 103 | 0.525 98 | 0.647 90 | 0.522 97 | 0.324 103 | 0.488 107 | 0.077 108 | 0.712 67 | 0.353 101 | 0.401 89 | 0.636 108 | 0.281 99 | 0.176 84 | 0.340 95 | 0.565 78 | 0.175 108 | 0.551 98 | 0.398 103 | 0.370 108 | 0.602 103 | 0.361 101 | |
TextureNet | ![]() | 0.566 87 | 0.672 82 | 0.664 87 | 0.671 76 | 0.494 85 | 0.719 89 | 0.445 61 | 0.678 73 | 0.411 97 | 0.396 90 | 0.935 78 | 0.356 89 | 0.225 63 | 0.412 91 | 0.535 80 | 0.565 86 | 0.636 91 | 0.464 93 | 0.794 85 | 0.680 97 | 0.568 73 |
Jingwei Huang, Haotian Zhang, Li Yi, Thomas Funkhouser, Matthias Niessner, Leonidas Guibas: TextureNet: Consistent Local Parametrizations for Learning from High-Resolution Signals on Meshes. CVPR 2019
3DMV | 0.484 96 | 0.484 102 | 0.538 103 | 0.643 86 | 0.424 94 | 0.606 104 | 0.310 93 | 0.574 88 | 0.433 95 | 0.378 91 | 0.796 104 | 0.301 96 | 0.214 69 | 0.537 71 | 0.208 103 | 0.472 96 | 0.507 104 | 0.413 102 | 0.693 95 | 0.602 103 | 0.539 82 | |
Angela Dai, Matthias Niessner: 3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation. ECCV 2018
SQN_0.1% | 0.569 86 | 0.676 80 | 0.696 76 | 0.657 80 | 0.497 84 | 0.779 76 | 0.424 70 | 0.548 91 | 0.515 72 | 0.376 92 | 0.902 99 | 0.422 75 | 0.357 7 | 0.379 93 | 0.456 88 | 0.596 79 | 0.659 84 | 0.544 75 | 0.685 96 | 0.665 100 | 0.556 79 | |
Pointnet++ & Feature | ![]() | 0.557 89 | 0.735 64 | 0.661 88 | 0.686 70 | 0.491 86 | 0.744 87 | 0.392 81 | 0.539 92 | 0.451 90 | 0.375 93 | 0.946 55 | 0.376 86 | 0.205 72 | 0.403 92 | 0.356 95 | 0.553 87 | 0.643 88 | 0.497 87 | 0.824 81 | 0.756 83 | 0.515 87 |
GMLPs | 0.538 90 | 0.495 100 | 0.693 78 | 0.647 84 | 0.471 89 | 0.793 68 | 0.300 95 | 0.477 96 | 0.505 75 | 0.358 94 | 0.903 98 | 0.327 93 | 0.081 101 | 0.472 84 | 0.529 82 | 0.448 97 | 0.710 64 | 0.509 83 | 0.746 89 | 0.737 88 | 0.554 80 | |
subcloud_weak | 0.516 92 | 0.676 80 | 0.591 101 | 0.609 89 | 0.442 92 | 0.774 77 | 0.335 91 | 0.597 84 | 0.422 96 | 0.357 95 | 0.932 84 | 0.341 92 | 0.094 100 | 0.298 97 | 0.528 83 | 0.473 95 | 0.676 79 | 0.495 88 | 0.602 102 | 0.721 92 | 0.349 103 | |
PCNN | 0.498 95 | 0.559 95 | 0.644 92 | 0.560 96 | 0.420 95 | 0.711 91 | 0.229 103 | 0.414 97 | 0.436 93 | 0.352 96 | 0.941 70 | 0.324 94 | 0.155 92 | 0.238 102 | 0.387 94 | 0.493 91 | 0.529 101 | 0.509 83 | 0.813 83 | 0.751 85 | 0.504 89 | |
DGCNN_reproduce | ![]() | 0.446 99 | 0.474 103 | 0.623 95 | 0.463 101 | 0.366 99 | 0.651 97 | 0.310 93 | 0.389 100 | 0.349 102 | 0.330 97 | 0.937 75 | 0.271 100 | 0.126 97 | 0.285 98 | 0.224 101 | 0.350 104 | 0.577 94 | 0.445 98 | 0.625 100 | 0.723 91 | 0.394 99 |
Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma, Michael M. Bronstein, Justin M. Solomon: Dynamic Graph CNN for Learning on Point Clouds. TOG 2019 | ||||||||||||||||||||||
PointCNN with RGB | ![]() | 0.458 97 | 0.577 94 | 0.611 97 | 0.356 107 | 0.321 104 | 0.715 90 | 0.299 97 | 0.376 101 | 0.328 104 | 0.319 98 | 0.944 64 | 0.285 98 | 0.164 89 | 0.216 105 | 0.229 100 | 0.484 93 | 0.545 99 | 0.456 95 | 0.755 88 | 0.709 93 | 0.475 95 |
Yangyan Li, Rui Bu, Mingchao Sun, Baoquan Chen: PointCNN. NeurIPS 2018 | ||||||||||||||||||||||
PNET2 | 0.442 100 | 0.548 97 | 0.548 102 | 0.597 93 | 0.363 100 | 0.628 102 | 0.300 95 | 0.292 102 | 0.374 99 | 0.307 99 | 0.881 100 | 0.268 101 | 0.186 81 | 0.238 102 | 0.204 104 | 0.407 100 | 0.506 105 | 0.449 96 | 0.667 98 | 0.620 102 | 0.462 97 | |
PanopticFusion-label | 0.529 91 | 0.491 101 | 0.688 82 | 0.604 91 | 0.386 96 | 0.632 100 | 0.225 105 | 0.705 69 | 0.434 94 | 0.293 100 | 0.815 103 | 0.348 91 | 0.241 59 | 0.499 79 | 0.669 67 | 0.507 90 | 0.649 85 | 0.442 99 | 0.796 84 | 0.602 103 | 0.561 76 | |
Gaku Narita, Takashi Seno, Tomoya Ishikawa, Yohsuke Kaji: PanopticFusion: Online Volumetric Semantic Mapping at the Level of Stuff and Things. IROS 2019 | ||||||||||||||||||||||
Tangent Convolutions | ![]() | 0.438 102 | 0.437 105 | 0.646 91 | 0.474 100 | 0.369 98 | 0.645 98 | 0.353 89 | 0.258 104 | 0.282 106 | 0.279 101 | 0.918 93 | 0.298 97 | 0.147 96 | 0.283 99 | 0.294 97 | 0.487 92 | 0.562 96 | 0.427 101 | 0.619 101 | 0.633 101 | 0.352 102 |
Maxim Tatarchenko, Jaesik Park, Vladlen Koltun, Qian-Yi Zhou: Tangent Convolutions for Dense Prediction in 3D. CVPR 2018 | ||||||||||||||||||||||
SurfaceConvPF | 0.442 100 | 0.505 99 | 0.622 96 | 0.380 106 | 0.342 102 | 0.654 96 | 0.227 104 | 0.397 99 | 0.367 100 | 0.276 102 | 0.924 88 | 0.240 103 | 0.198 77 | 0.359 94 | 0.262 98 | 0.366 101 | 0.581 93 | 0.435 100 | 0.640 99 | 0.668 98 | 0.398 98 | |
Hao Pan, Shilin Liu, Yang Liu, Xin Tong: Convolutional Neural Networks on 3D Surfaces Using Parallel Frames. | ||||||||||||||||||||||
ScanNet+FTSDF | 0.383 105 | 0.297 107 | 0.491 105 | 0.432 104 | 0.358 101 | 0.612 103 | 0.274 99 | 0.116 106 | 0.411 97 | 0.265 103 | 0.904 97 | 0.229 104 | 0.079 102 | 0.250 100 | 0.185 105 | 0.320 105 | 0.510 102 | 0.385 104 | 0.548 104 | 0.597 106 | 0.394 99 | |
PointNet++ | ![]() | 0.339 106 | 0.584 93 | 0.478 106 | 0.458 102 | 0.256 107 | 0.360 108 | 0.250 100 | 0.247 105 | 0.278 107 | 0.261 104 | 0.677 107 | 0.183 106 | 0.117 98 | 0.212 106 | 0.145 107 | 0.364 102 | 0.346 108 | 0.232 108 | 0.548 104 | 0.523 107 | 0.252 106 |
Charles R. Qi, Li Yi, Hao Su, Leonidas J. Guibas: PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. NeurIPS 2017 | ||||||||||||||||||||||
FCPN | ![]() | 0.447 98 | 0.679 79 | 0.604 100 | 0.578 95 | 0.380 97 | 0.682 94 | 0.291 98 | 0.106 107 | 0.483 82 | 0.258 105 | 0.920 91 | 0.258 102 | 0.025 105 | 0.231 104 | 0.325 96 | 0.480 94 | 0.560 97 | 0.463 94 | 0.725 92 | 0.666 99 | 0.231 107 |
Dario Rethage, Johanna Wald, Jürgen Sturm, Nassir Navab, Federico Tombari: Fully-Convolutional Point Networks for Large-Scale Point Clouds. ECCV 2018 | ||||||||||||||||||||||
SPLATNet | ![]() | 0.393 104 | 0.472 104 | 0.511 104 | 0.606 90 | 0.311 105 | 0.656 95 | 0.245 102 | 0.405 98 | 0.328 104 | 0.197 106 | 0.927 87 | 0.000 109 | 0.227 105 | 0.001 109 | 0.249 99 | 0.271 107 | 0.510 102 | 0.383 105 | 0.593 103 | 0.699 95 | 0.267 105 |
Hang Su, Varun Jampani, Deqing Sun, Subhransu Maji, Evangelos Kalogerakis, Ming-Hsuan Yang, Jan Kautz: SPLATNet: Sparse Lattice Networks for Point Cloud Processing. CVPR 2018 | ||||||||||||||||||||||
ScanNet | ![]() | 0.306 108 | 0.203 108 | 0.366 107 | 0.501 98 | 0.311 105 | 0.524 106 | 0.211 106 | 0.002 109 | 0.342 103 | 0.189 107 | 0.786 105 | 0.145 108 | 0.102 99 | 0.245 101 | 0.152 106 | 0.318 106 | 0.348 107 | 0.300 107 | 0.460 107 | 0.437 108 | 0.182 108 |
Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, Matthias Nießner: ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes. CVPR'17 | ||||||||||||||||||||||
SSC-UNet | ![]() | 0.308 107 | 0.353 106 | 0.290 108 | 0.278 108 | 0.166 108 | 0.553 105 | 0.169 107 | 0.286 103 | 0.147 108 | 0.148 108 | 0.908 95 | 0.182 107 | 0.064 103 | 0.023 108 | 0.018 109 | 0.354 103 | 0.363 106 | 0.345 106 | 0.546 106 | 0.685 96 | 0.278 104 |
ERROR | 0.054 109 | 0.000 109 | 0.041 109 | 0.172 109 | 0.030 109 | 0.062 109 | 0.001 109 | 0.035 108 | 0.004 109 | 0.051 109 | 0.143 109 | 0.019 109 | 0.003 108 | 0.041 107 | 0.050 108 | 0.003 109 | 0.054 109 | 0.018 109 | 0.005 109 | 0.264 109 | 0.082 109 | |
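The per-class columns above are the PASCAL VOC IoU scores defined in the evaluation description, IoU = TP/(TP+FP+FN), accumulated per mesh vertex, and the "avg iou" column used for ranking is the mean over the 20 classes. A minimal sketch of that computation (the confusion-matrix helper and class count are illustrative, not the official evaluation script):

```python
import numpy as np

NUM_CLASSES = 20  # the 20 benchmark classes, bathtub ... window


def confusion_matrix(gt, pred, num_classes=NUM_CLASSES):
    """Accumulate a num_classes x num_classes confusion matrix from
    per-vertex ground-truth and predicted label arrays."""
    mask = (gt >= 0) & (gt < num_classes)  # ignore unlabeled vertices
    idx = num_classes * gt[mask] + pred[mask]
    return np.bincount(idx, minlength=num_classes ** 2).reshape(
        num_classes, num_classes)


def per_class_iou(conf):
    """IoU_c = TP_c / (TP_c + FP_c + FN_c) for each class c."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp  # predicted as c, ground truth differs
    fn = conf.sum(axis=1) - tp  # ground truth c, predicted differently
    denom = tp + fp + fn
    return np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)


# Toy example with 2 classes for brevity:
gt = np.array([0, 0, 1, 1])
pred = np.array([0, 1, 1, 1])
conf = confusion_matrix(gt, pred, num_classes=2)
ious = per_class_iou(conf)        # [0.5, 0.6667]
avg_iou = np.nanmean(ious)        # the ranking metric ("avg iou")
```

For 3D approaches operating on grids or points, `pred` would be obtained by mapping the predicted labels onto the mesh vertices first, as noted in the evaluation description.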