Each cell below gives an IoU score followed by that method's rank for the column; the line after each method row lists its publication, where available. A minimal row-parsing sketch follows the table. Column order (per the ScanNet 3D semantic label benchmark):
Method | | avg iou | bathtub | bed | bookshelf | cabinet | chair | counter | curtain | desk | door | floor | otherfurniture | picture | refrigerator | shower curtain | sink | sofa | table | toilet | wall | window |
PointTransformerV2 | | 0.752 5 | 0.742 53 | 0.809 14 | 0.872 1 | 0.758 5 | 0.860 6 | 0.552 7 | 0.891 5 | 0.610 30 | 0.687 2 | 0.960 8 | 0.559 14 | 0.304 20 | 0.766 8 | 0.926 2 | 0.767 8 | 0.797 14 | 0.644 22 | 0.942 6 | 0.876 7 | 0.722 14 |
Xiaoyang Wu, Yixing Lao, Li Jiang, Xihui Liu, Hengshuang Zhao: Point Transformer V2: Grouped Vector Attention and Partition-based Pooling. NeurIPS 2022 |
IPCA | | 0.731 20 | 0.890 8 | 0.837 3 | 0.864 2 | 0.726 18 | 0.873 2 | 0.530 15 | 0.824 23 | 0.489 73 | 0.647 9 | 0.978 2 | 0.609 2 | 0.336 7 | 0.624 37 | 0.733 46 | 0.758 11 | 0.776 25 | 0.570 53 | 0.949 2 | 0.877 5 | 0.728 8 |
|
MSP | | 0.748 9 | 0.623 79 | 0.804 16 | 0.859 3 | 0.745 12 | 0.824 33 | 0.501 23 | 0.912 2 | 0.690 6 | 0.685 3 | 0.956 14 | 0.567 11 | 0.320 13 | 0.768 7 | 0.918 3 | 0.720 22 | 0.802 10 | 0.676 12 | 0.921 18 | 0.881 4 | 0.779 1 |
|
VMNet |  | 0.746 11 | 0.870 12 | 0.838 2 | 0.858 4 | 0.729 17 | 0.850 12 | 0.501 23 | 0.874 8 | 0.587 42 | 0.658 8 | 0.956 14 | 0.564 12 | 0.299 21 | 0.765 9 | 0.900 5 | 0.716 25 | 0.812 8 | 0.631 28 | 0.939 9 | 0.858 16 | 0.709 18 |
Zeyu Hu, Xuyang Bai, Jiaxiang Shang, Runze Zhang, Jiayu Dong, Xin Wang, Guangyuan Sun, Hongbo Fu, Chiew-Lan Tai: VMNet: Voxel-Mesh Network for Geodesic-Aware 3D Semantic Segmentation. ICCV 2021 (Oral) |
EQ-Net | | 0.743 14 | 0.620 80 | 0.799 19 | 0.849 5 | 0.730 16 | 0.822 35 | 0.493 30 | 0.897 4 | 0.664 12 | 0.681 4 | 0.955 18 | 0.562 13 | 0.378 1 | 0.760 11 | 0.903 4 | 0.738 16 | 0.801 11 | 0.673 15 | 0.907 26 | 0.877 5 | 0.745 3 |
Zetong Yang*, Li Jiang*, Yanan Sun, Bernt Schiele, Jiaya Jia: A Unified Query-based Paradigm for Point Cloud Understanding. CVPR 2022 |
Virtual MVFusion | | 0.746 11 | 0.771 40 | 0.819 8 | 0.848 6 | 0.702 25 | 0.865 5 | 0.397 70 | 0.899 3 | 0.699 3 | 0.664 7 | 0.948 41 | 0.588 6 | 0.330 9 | 0.746 17 | 0.851 24 | 0.764 9 | 0.796 15 | 0.704 6 | 0.935 11 | 0.866 11 | 0.728 8 |
Abhijit Kundu, Xiaoqi Yin, Alireza Fathi, David Ross, Brian Brewington, Thomas Funkhouser, Caroline Pantofaru: Virtual Multi-view Fusion for 3D Semantic Segmentation. ECCV 2020 |
SparseConvNet | | 0.725 21 | 0.647 76 | 0.821 6 | 0.846 7 | 0.721 19 | 0.869 3 | 0.533 12 | 0.754 43 | 0.603 36 | 0.614 23 | 0.955 18 | 0.572 10 | 0.325 11 | 0.710 21 | 0.870 13 | 0.724 20 | 0.823 2 | 0.628 29 | 0.934 12 | 0.865 12 | 0.683 27 |
|
StratifiedFormer |  | 0.747 10 | 0.901 7 | 0.803 17 | 0.845 8 | 0.757 6 | 0.846 14 | 0.512 19 | 0.825 22 | 0.696 5 | 0.645 10 | 0.956 14 | 0.576 9 | 0.262 44 | 0.744 18 | 0.861 16 | 0.742 15 | 0.770 30 | 0.705 5 | 0.899 33 | 0.860 15 | 0.734 6 |
Xin Lai*, Jianhui Liu*, Li Jiang, Liwei Wang, Hengshuang Zhao, Shu Liu, Xiaojuan Qi, Jiaya Jia: Stratified Transformer for 3D Point Cloud Segmentation. CVPR 2022 |
O-CNN |  | 0.762 4 | 0.924 2 | 0.823 5 | 0.844 9 | 0.770 2 | 0.852 10 | 0.577 2 | 0.847 15 | 0.711 1 | 0.640 16 | 0.958 10 | 0.592 4 | 0.217 58 | 0.762 10 | 0.888 9 | 0.758 11 | 0.813 7 | 0.726 1 | 0.932 15 | 0.868 9 | 0.744 4 |
Peng-Shuai Wang, Yang Liu, Yu-Xiao Guo, Chun-Yu Sun, Xin Tong: O-CNN: Octree-based Convolutional Neural Networks for 3D Shape Analysis. SIGGRAPH 2017 |
Mix3D |  | 0.781 1 | 0.964 1 | 0.855 1 | 0.843 10 | 0.781 1 | 0.858 7 | 0.575 3 | 0.831 19 | 0.685 7 | 0.714 1 | 0.979 1 | 0.594 3 | 0.310 17 | 0.801 1 | 0.892 8 | 0.841 2 | 0.819 3 | 0.723 3 | 0.940 8 | 0.887 1 | 0.725 12 |
Alexey Nekrasov, Jonas Schult, Or Litany, Bastian Leibe, Francis Engelmann: Mix3D: Out-of-Context Data Augmentation for 3D Scenes. 3DV 2021 (Oral) |
CU-Hybrid Net | | 0.764 2 | 0.924 2 | 0.819 8 | 0.840 11 | 0.757 6 | 0.853 9 | 0.580 1 | 0.848 14 | 0.709 2 | 0.643 12 | 0.958 10 | 0.587 7 | 0.295 23 | 0.753 14 | 0.884 12 | 0.758 11 | 0.815 6 | 0.725 2 | 0.927 17 | 0.867 10 | 0.743 5 |
|
OccuSeg+Semantic | | 0.764 2 | 0.758 46 | 0.796 20 | 0.839 12 | 0.746 11 | 0.907 1 | 0.562 5 | 0.850 13 | 0.680 9 | 0.672 5 | 0.978 2 | 0.610 1 | 0.335 8 | 0.777 4 | 0.819 32 | 0.847 1 | 0.830 1 | 0.691 8 | 0.972 1 | 0.885 2 | 0.727 10 |
|
MinkowskiNet |  | 0.736 19 | 0.859 15 | 0.818 10 | 0.832 13 | 0.709 22 | 0.840 18 | 0.521 18 | 0.853 12 | 0.660 15 | 0.643 12 | 0.951 31 | 0.544 19 | 0.286 28 | 0.731 19 | 0.893 7 | 0.675 39 | 0.772 27 | 0.683 9 | 0.874 51 | 0.852 22 | 0.727 10 |
C. Choy, J. Gwak, S. Savarese: 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. CVPR 2019 |
JSENet |  | 0.699 29 | 0.881 11 | 0.762 36 | 0.821 14 | 0.667 32 | 0.800 55 | 0.522 17 | 0.792 35 | 0.613 28 | 0.607 27 | 0.935 69 | 0.492 34 | 0.205 63 | 0.576 47 | 0.853 22 | 0.691 34 | 0.758 39 | 0.652 19 | 0.872 54 | 0.828 33 | 0.649 36 |
Zeyu Hu, Mingmin Zhen, Xuyang Bai, Hongbo Fu, Chiew-Lan Tai: JSENet: Joint Semantic Segmentation and Edge Detection Network for 3D Point Clouds. ECCV 2020 |
PointMetaBase | | 0.714 25 | 0.835 21 | 0.785 26 | 0.821 14 | 0.684 29 | 0.846 14 | 0.531 14 | 0.865 10 | 0.614 27 | 0.596 34 | 0.953 26 | 0.500 32 | 0.246 50 | 0.674 23 | 0.888 9 | 0.692 33 | 0.764 33 | 0.624 30 | 0.849 66 | 0.844 30 | 0.675 29 |
|
Feature-Geometry Net |  | 0.685 34 | 0.866 13 | 0.748 44 | 0.819 16 | 0.645 40 | 0.794 58 | 0.450 47 | 0.802 33 | 0.587 42 | 0.604 28 | 0.945 49 | 0.464 44 | 0.201 66 | 0.554 56 | 0.840 27 | 0.723 21 | 0.732 49 | 0.602 41 | 0.907 26 | 0.822 38 | 0.603 53 |
|
SAT | | 0.742 15 | 0.860 14 | 0.765 35 | 0.819 16 | 0.769 3 | 0.848 13 | 0.533 12 | 0.829 20 | 0.663 13 | 0.631 18 | 0.955 18 | 0.586 8 | 0.274 34 | 0.753 14 | 0.896 6 | 0.729 17 | 0.760 37 | 0.666 17 | 0.921 18 | 0.855 20 | 0.733 7 |
|
PointTransformer++ | | 0.725 21 | 0.727 60 | 0.811 13 | 0.819 16 | 0.765 4 | 0.841 17 | 0.502 22 | 0.814 29 | 0.621 25 | 0.623 20 | 0.955 18 | 0.556 15 | 0.284 29 | 0.620 38 | 0.866 14 | 0.781 5 | 0.757 40 | 0.648 20 | 0.932 15 | 0.862 13 | 0.709 18 |
|
RFCR | | 0.702 27 | 0.889 9 | 0.745 47 | 0.813 19 | 0.672 31 | 0.818 42 | 0.493 30 | 0.815 27 | 0.623 23 | 0.610 24 | 0.947 43 | 0.470 42 | 0.249 49 | 0.594 42 | 0.848 25 | 0.705 29 | 0.779 24 | 0.646 21 | 0.892 38 | 0.823 36 | 0.611 46 |
Jingyu Gong, Jiachen Xu, Xin Tan, Haichuan Song, Yanyun Qu, Yuan Xie, Lizhuang Ma: Omni-Supervised Point Cloud Segmentation via Gradual Receptive Field Component Reasoning. CVPR 2021 |
INS-Conv-semantic | | 0.717 24 | 0.751 49 | 0.759 38 | 0.812 20 | 0.704 24 | 0.868 4 | 0.537 11 | 0.842 16 | 0.609 32 | 0.608 26 | 0.953 26 | 0.534 21 | 0.293 24 | 0.616 39 | 0.864 15 | 0.719 24 | 0.793 19 | 0.640 24 | 0.933 13 | 0.845 29 | 0.663 32 |
|
BPNet |  | 0.749 7 | 0.909 4 | 0.818 10 | 0.811 21 | 0.752 8 | 0.839 19 | 0.485 32 | 0.842 16 | 0.673 10 | 0.644 11 | 0.957 13 | 0.528 25 | 0.305 19 | 0.773 6 | 0.859 17 | 0.788 4 | 0.818 5 | 0.693 7 | 0.916 21 | 0.856 18 | 0.723 13 |
Wenbo Hu, Hengshuang Zhao, Li Jiang, Jiaya Jia, Tien-Tsin Wong: Bidirectional Projection Network for Cross Dimension Scene Understanding. CVPR 2021 (Oral) |
MatchingNet | | 0.724 23 | 0.812 29 | 0.812 12 | 0.810 22 | 0.735 15 | 0.834 24 | 0.495 29 | 0.860 11 | 0.572 48 | 0.602 30 | 0.954 23 | 0.512 29 | 0.280 31 | 0.757 12 | 0.845 26 | 0.725 19 | 0.780 23 | 0.606 39 | 0.937 10 | 0.851 23 | 0.700 21 |
|
contrastBoundary |  | 0.705 26 | 0.769 43 | 0.775 31 | 0.809 23 | 0.687 28 | 0.820 38 | 0.439 58 | 0.812 30 | 0.661 14 | 0.591 36 | 0.945 49 | 0.515 28 | 0.171 76 | 0.633 34 | 0.856 20 | 0.720 22 | 0.796 15 | 0.668 16 | 0.889 40 | 0.847 26 | 0.689 25 |
Liyao Tang, Yibing Zhan, Zhe Chen, Baosheng Yu, Dacheng Tao: Contrastive Boundary Learning for Point Cloud Segmentation. CVPR 2022 |
PointConvFormer | | 0.749 7 | 0.793 32 | 0.790 24 | 0.807 24 | 0.750 10 | 0.856 8 | 0.524 16 | 0.881 7 | 0.588 41 | 0.642 15 | 0.977 4 | 0.591 5 | 0.274 34 | 0.781 2 | 0.929 1 | 0.804 3 | 0.796 15 | 0.642 23 | 0.947 3 | 0.885 2 | 0.715 17 |
Wenxuan Wu, Qi Shan, Li Fuxin: PointConvFormer: Revenge of the Point-based Convolution. |
LRPNet | | 0.742 15 | 0.816 27 | 0.806 15 | 0.807 24 | 0.752 8 | 0.828 29 | 0.575 3 | 0.839 18 | 0.699 3 | 0.637 17 | 0.954 23 | 0.520 27 | 0.320 13 | 0.755 13 | 0.834 28 | 0.760 10 | 0.772 27 | 0.676 12 | 0.915 22 | 0.862 13 | 0.717 15 |
|
LargeKernel3D | | 0.739 18 | 0.909 4 | 0.820 7 | 0.806 26 | 0.740 13 | 0.852 10 | 0.545 9 | 0.826 21 | 0.594 40 | 0.643 12 | 0.955 18 | 0.541 20 | 0.263 43 | 0.723 20 | 0.858 19 | 0.775 7 | 0.767 31 | 0.678 11 | 0.933 13 | 0.848 24 | 0.694 24 |
|
DCM-Net | | 0.658 46 | 0.778 36 | 0.702 62 | 0.806 26 | 0.619 47 | 0.813 48 | 0.468 37 | 0.693 61 | 0.494 68 | 0.524 53 | 0.941 60 | 0.449 53 | 0.298 22 | 0.510 67 | 0.821 31 | 0.675 39 | 0.727 51 | 0.568 55 | 0.826 71 | 0.803 47 | 0.637 40 |
Jonas Schult*, Francis Engelmann*, Theodora Kontogianni, Bastian Leibe: DualConvMesh-Net: Joint Geodesic and Euclidean Convolutions on 3D Meshes. CVPR 2020 (Oral) |
TXC | | 0.740 17 | 0.842 19 | 0.832 4 | 0.805 28 | 0.715 21 | 0.846 14 | 0.473 34 | 0.885 6 | 0.615 26 | 0.671 6 | 0.971 6 | 0.547 18 | 0.320 13 | 0.697 22 | 0.799 37 | 0.777 6 | 0.819 3 | 0.682 10 | 0.946 4 | 0.871 8 | 0.696 23 |
|
DMF-Net | | 0.752 5 | 0.906 6 | 0.793 23 | 0.802 29 | 0.689 27 | 0.825 31 | 0.556 6 | 0.867 9 | 0.681 8 | 0.602 30 | 0.960 8 | 0.555 16 | 0.365 3 | 0.779 3 | 0.859 17 | 0.747 14 | 0.795 18 | 0.717 4 | 0.917 20 | 0.856 18 | 0.764 2 |
|
3DSM_DMMF | | 0.631 60 | 0.626 78 | 0.745 47 | 0.801 30 | 0.607 49 | 0.751 76 | 0.506 20 | 0.729 52 | 0.565 52 | 0.491 64 | 0.866 93 | 0.434 56 | 0.197 69 | 0.595 41 | 0.630 63 | 0.709 28 | 0.705 59 | 0.560 57 | 0.875 49 | 0.740 78 | 0.491 82 |
|
Superpoint Network | | 0.683 38 | 0.851 17 | 0.728 55 | 0.800 31 | 0.653 36 | 0.806 50 | 0.468 37 | 0.804 31 | 0.572 48 | 0.602 30 | 0.946 46 | 0.453 51 | 0.239 53 | 0.519 65 | 0.822 30 | 0.689 37 | 0.762 36 | 0.595 45 | 0.895 36 | 0.827 34 | 0.630 43 |
|
Feature_GeometricNet |  | 0.690 32 | 0.884 10 | 0.754 42 | 0.795 32 | 0.647 38 | 0.818 42 | 0.422 62 | 0.802 33 | 0.612 29 | 0.604 28 | 0.945 49 | 0.462 45 | 0.189 71 | 0.563 53 | 0.853 22 | 0.726 18 | 0.765 32 | 0.632 27 | 0.904 28 | 0.821 39 | 0.606 50 |
Kangcheng Liu, Ben M. Chen: https://arxiv.org/abs/2012.09439. arXiv Preprint |
PicassoNet-II |  | 0.696 30 | 0.704 65 | 0.790 24 | 0.787 33 | 0.709 22 | 0.837 20 | 0.459 42 | 0.815 27 | 0.543 57 | 0.615 22 | 0.956 14 | 0.529 23 | 0.250 47 | 0.551 59 | 0.790 38 | 0.703 30 | 0.799 13 | 0.619 34 | 0.908 25 | 0.848 24 | 0.700 21 |
Huan Lei, Naveed Akhtar, Mubarak Shah, Ajmal Mian: Geometric Feature Learning for 3D Meshes. |
PointContrast_LA_SEM | | 0.683 38 | 0.757 47 | 0.784 27 | 0.786 34 | 0.639 42 | 0.824 33 | 0.408 65 | 0.775 37 | 0.604 35 | 0.541 45 | 0.934 73 | 0.532 22 | 0.269 39 | 0.552 57 | 0.777 39 | 0.645 55 | 0.793 19 | 0.640 24 | 0.913 23 | 0.824 35 | 0.671 30 |
|
KP-FCNN | | 0.684 35 | 0.847 18 | 0.758 40 | 0.784 35 | 0.647 38 | 0.814 45 | 0.473 34 | 0.772 38 | 0.605 34 | 0.594 35 | 0.935 69 | 0.450 52 | 0.181 74 | 0.587 43 | 0.805 35 | 0.690 35 | 0.785 22 | 0.614 35 | 0.882 44 | 0.819 40 | 0.632 42 |
H. Thomas, C. Qi, J. Deschaud, B. Marcotegui, F. Goulette, L. Guibas: KPConv: Flexible and Deformable Convolution for Point Clouds. ICCV 2019 |
VI-PointConv | | 0.676 40 | 0.770 42 | 0.754 42 | 0.783 36 | 0.621 46 | 0.814 45 | 0.552 7 | 0.758 41 | 0.571 50 | 0.557 41 | 0.954 23 | 0.529 23 | 0.268 41 | 0.530 63 | 0.682 56 | 0.675 39 | 0.719 52 | 0.603 40 | 0.888 41 | 0.833 31 | 0.665 31 |
Xingyi Li, Wenxuan Wu, Xiaoli Z. Fern, Li Fuxin: The Devils in the Point Clouds: Studying the Robustness of Point Cloud Convolutions. |
DGNet | | 0.684 35 | 0.712 64 | 0.784 27 | 0.782 37 | 0.658 33 | 0.835 23 | 0.499 27 | 0.823 24 | 0.641 18 | 0.597 33 | 0.950 35 | 0.487 35 | 0.281 30 | 0.575 48 | 0.619 64 | 0.647 52 | 0.764 33 | 0.620 33 | 0.871 57 | 0.846 28 | 0.688 26 |
|
VACNN++ | | 0.684 35 | 0.728 59 | 0.757 41 | 0.776 38 | 0.690 26 | 0.804 52 | 0.464 40 | 0.816 25 | 0.577 47 | 0.587 37 | 0.945 49 | 0.508 31 | 0.276 33 | 0.671 24 | 0.710 51 | 0.663 44 | 0.750 43 | 0.589 48 | 0.881 45 | 0.832 32 | 0.653 35 |
|
DVVNet | | 0.562 79 | 0.648 75 | 0.700 64 | 0.770 39 | 0.586 58 | 0.687 84 | 0.333 83 | 0.650 67 | 0.514 65 | 0.475 68 | 0.906 87 | 0.359 79 | 0.223 56 | 0.340 86 | 0.442 81 | 0.422 90 | 0.668 73 | 0.501 77 | 0.708 85 | 0.779 65 | 0.534 75 |
|
SALANet | | 0.670 42 | 0.816 27 | 0.770 33 | 0.768 40 | 0.652 37 | 0.807 49 | 0.451 44 | 0.747 45 | 0.659 16 | 0.545 44 | 0.924 79 | 0.473 41 | 0.149 86 | 0.571 50 | 0.811 34 | 0.635 58 | 0.746 44 | 0.623 31 | 0.892 38 | 0.794 53 | 0.570 63 |
|
Retro-FPN | | 0.744 13 | 0.842 19 | 0.800 18 | 0.767 41 | 0.740 13 | 0.836 22 | 0.541 10 | 0.914 1 | 0.672 11 | 0.626 19 | 0.958 10 | 0.552 17 | 0.272 36 | 0.777 4 | 0.886 11 | 0.696 32 | 0.801 11 | 0.674 14 | 0.941 7 | 0.858 16 | 0.717 15 |
|
FusionAwareConv | | 0.630 63 | 0.604 83 | 0.741 51 | 0.766 42 | 0.590 55 | 0.747 77 | 0.501 23 | 0.734 50 | 0.503 67 | 0.527 51 | 0.919 83 | 0.454 49 | 0.323 12 | 0.550 60 | 0.420 82 | 0.678 38 | 0.688 66 | 0.544 65 | 0.896 35 | 0.795 52 | 0.627 44 |
Jiazhao Zhang, Chenyang Zhu, Lintao Zheng, Kai Xu: Fusion-Aware Point Convolution for Online Semantic 3D Scene Segmentation. CVPR 2020 |
ROSMRF3D | | 0.673 41 | 0.789 33 | 0.748 44 | 0.763 43 | 0.635 44 | 0.814 45 | 0.407 67 | 0.747 45 | 0.581 46 | 0.573 38 | 0.950 35 | 0.484 36 | 0.271 38 | 0.607 40 | 0.754 42 | 0.649 49 | 0.774 26 | 0.596 43 | 0.883 43 | 0.823 36 | 0.606 50 |
|
SIConv | | 0.625 66 | 0.830 23 | 0.694 68 | 0.757 44 | 0.563 65 | 0.772 70 | 0.448 48 | 0.647 69 | 0.520 62 | 0.509 57 | 0.949 39 | 0.431 59 | 0.191 70 | 0.496 71 | 0.614 65 | 0.647 52 | 0.672 72 | 0.535 71 | 0.876 48 | 0.783 64 | 0.571 62 |
|
FusionNet | | 0.688 33 | 0.704 65 | 0.741 51 | 0.754 45 | 0.656 34 | 0.829 27 | 0.501 23 | 0.741 48 | 0.609 32 | 0.548 43 | 0.950 35 | 0.522 26 | 0.371 2 | 0.633 34 | 0.756 41 | 0.715 26 | 0.771 29 | 0.623 31 | 0.861 62 | 0.814 41 | 0.658 33 |
Feihu Zhang, Jin Fang, Benjamin Wah, Philip Torr: Deep FusionNet for Point Cloud Semantic Segmentation. ECCV 2020 |
SConv | | 0.636 56 | 0.830 23 | 0.697 66 | 0.752 46 | 0.572 63 | 0.780 66 | 0.445 51 | 0.716 54 | 0.529 60 | 0.530 50 | 0.951 31 | 0.446 55 | 0.170 77 | 0.507 69 | 0.666 60 | 0.636 57 | 0.682 68 | 0.541 68 | 0.886 42 | 0.799 48 | 0.594 57 |
|
PointASNL |  | 0.666 43 | 0.703 67 | 0.781 29 | 0.751 47 | 0.655 35 | 0.830 26 | 0.471 36 | 0.769 39 | 0.474 76 | 0.537 47 | 0.951 31 | 0.475 40 | 0.279 32 | 0.635 32 | 0.698 55 | 0.675 39 | 0.751 42 | 0.553 62 | 0.816 73 | 0.806 45 | 0.703 20 |
Xu Yan, Chaoda Zheng, Zhen Li, Sheng Wang, Shuguang Cui: PointASNL: Robust Point Clouds Processing using Nonlocal Neural Networks with Adaptive Sampling. CVPR 2020 |
One Thing One Click | | 0.701 28 | 0.825 25 | 0.796 20 | 0.723 48 | 0.716 20 | 0.832 25 | 0.433 60 | 0.816 25 | 0.634 21 | 0.609 25 | 0.969 7 | 0.418 67 | 0.344 5 | 0.559 54 | 0.833 29 | 0.715 26 | 0.808 9 | 0.560 57 | 0.902 30 | 0.847 26 | 0.680 28 |
|
PPCNN++ |  | 0.663 45 | 0.746 50 | 0.708 59 | 0.722 49 | 0.638 43 | 0.820 38 | 0.451 44 | 0.566 80 | 0.599 38 | 0.541 45 | 0.950 35 | 0.510 30 | 0.313 16 | 0.648 29 | 0.819 32 | 0.616 63 | 0.682 68 | 0.590 47 | 0.869 58 | 0.810 44 | 0.656 34 |
Pyunghwan Ahn, Juyoung Yang, Eojindl Yi, Chanho Lee, Junmo Kim: Projection-based Point Convolution for Efficient Point Cloud Segmentation. IEEE Access |
PointConv-SFPN | | 0.641 50 | 0.776 38 | 0.703 61 | 0.721 50 | 0.557 67 | 0.826 30 | 0.451 44 | 0.672 65 | 0.563 54 | 0.483 65 | 0.943 57 | 0.425 64 | 0.162 81 | 0.644 30 | 0.726 47 | 0.659 46 | 0.709 56 | 0.572 52 | 0.875 49 | 0.786 63 | 0.559 68 |
|
DenSeR | | 0.628 64 | 0.800 30 | 0.625 85 | 0.719 51 | 0.545 70 | 0.806 50 | 0.445 51 | 0.597 75 | 0.448 82 | 0.519 56 | 0.938 65 | 0.481 37 | 0.328 10 | 0.489 73 | 0.499 77 | 0.657 47 | 0.759 38 | 0.592 46 | 0.881 45 | 0.797 51 | 0.634 41 |
|
Supervoxel-CNN | | 0.635 57 | 0.656 74 | 0.711 58 | 0.719 51 | 0.613 48 | 0.757 75 | 0.444 54 | 0.765 40 | 0.534 59 | 0.566 39 | 0.928 77 | 0.478 39 | 0.272 36 | 0.636 31 | 0.531 72 | 0.664 43 | 0.645 78 | 0.508 76 | 0.864 61 | 0.792 58 | 0.611 46 |
|
dtc_net | | 0.596 71 | 0.683 69 | 0.725 56 | 0.715 53 | 0.549 69 | 0.803 53 | 0.444 54 | 0.647 69 | 0.493 69 | 0.495 62 | 0.941 60 | 0.409 69 | 0.000 99 | 0.424 81 | 0.544 69 | 0.598 68 | 0.703 61 | 0.522 73 | 0.912 24 | 0.792 58 | 0.520 78 |
|
PointSPNet | | 0.637 55 | 0.734 56 | 0.692 70 | 0.714 54 | 0.576 61 | 0.797 57 | 0.446 49 | 0.743 47 | 0.598 39 | 0.437 76 | 0.942 58 | 0.403 71 | 0.150 85 | 0.626 36 | 0.800 36 | 0.649 49 | 0.697 62 | 0.557 60 | 0.846 67 | 0.777 67 | 0.563 66 |
|
FPConv |  | 0.639 53 | 0.785 34 | 0.760 37 | 0.713 55 | 0.603 50 | 0.798 56 | 0.392 72 | 0.534 85 | 0.603 36 | 0.524 53 | 0.948 41 | 0.457 47 | 0.250 47 | 0.538 61 | 0.723 49 | 0.598 68 | 0.696 63 | 0.614 35 | 0.872 54 | 0.799 48 | 0.567 65 |
Yiqun Lin, Zizheng Yan, Haibin Huang, Dong Du, Ligang Liu, Shuguang Cui, Xiaoguang Han: FPConv: Learning Local Flattening for Point Convolution. CVPR 2020 |
SegGroup_sem |  | 0.627 65 | 0.818 26 | 0.747 46 | 0.701 56 | 0.602 51 | 0.764 72 | 0.385 76 | 0.629 72 | 0.490 71 | 0.508 58 | 0.931 76 | 0.409 69 | 0.201 66 | 0.564 52 | 0.725 48 | 0.618 61 | 0.692 64 | 0.539 69 | 0.873 52 | 0.794 53 | 0.548 72 |
An Tao, Yueqi Duan, Yi Wei, Jiwen Lu, Jie Zhou: SegGroup: Seg-Level Supervision for 3D Instance and Semantic Segmentation. TIP 2022 |
PointConv |  | 0.666 43 | 0.781 35 | 0.759 38 | 0.699 57 | 0.644 41 | 0.822 35 | 0.475 33 | 0.779 36 | 0.564 53 | 0.504 61 | 0.953 26 | 0.428 61 | 0.203 65 | 0.586 45 | 0.754 42 | 0.661 45 | 0.753 41 | 0.588 49 | 0.902 30 | 0.813 43 | 0.642 38 |
Wenxuan Wu, Zhongang Qi, Li Fuxin: PointConv: Deep Convolutional Networks on 3D Point Clouds. CVPR 2019 |
RandLA-Net |  | 0.645 49 | 0.778 36 | 0.731 54 | 0.699 57 | 0.577 60 | 0.829 27 | 0.446 49 | 0.736 49 | 0.477 75 | 0.523 55 | 0.945 49 | 0.454 49 | 0.269 39 | 0.484 74 | 0.749 45 | 0.618 61 | 0.738 45 | 0.599 42 | 0.827 70 | 0.792 58 | 0.621 45 |
|
wsss-transformer | | 0.600 70 | 0.634 77 | 0.743 49 | 0.697 59 | 0.601 52 | 0.781 64 | 0.437 59 | 0.585 78 | 0.493 69 | 0.446 73 | 0.933 74 | 0.394 73 | 0.011 97 | 0.654 27 | 0.661 62 | 0.603 65 | 0.733 48 | 0.526 72 | 0.832 69 | 0.761 73 | 0.480 84 |
|
PointMRNet | | 0.640 52 | 0.717 63 | 0.701 63 | 0.692 60 | 0.576 61 | 0.801 54 | 0.467 39 | 0.716 54 | 0.563 54 | 0.459 71 | 0.953 26 | 0.429 60 | 0.169 78 | 0.581 46 | 0.854 21 | 0.605 64 | 0.710 54 | 0.550 63 | 0.894 37 | 0.793 55 | 0.575 61 |
|
Pointnet++ & Feature |  | 0.557 80 | 0.735 55 | 0.661 79 | 0.686 61 | 0.491 77 | 0.744 78 | 0.392 72 | 0.539 83 | 0.451 81 | 0.375 84 | 0.946 46 | 0.376 77 | 0.205 63 | 0.403 83 | 0.356 86 | 0.553 78 | 0.643 79 | 0.497 78 | 0.824 72 | 0.756 74 | 0.515 79 |
|
CCRFNet | | 0.589 74 | 0.766 44 | 0.659 80 | 0.683 62 | 0.470 81 | 0.740 79 | 0.387 75 | 0.620 74 | 0.490 71 | 0.476 67 | 0.922 81 | 0.355 81 | 0.245 51 | 0.511 66 | 0.511 75 | 0.571 76 | 0.643 79 | 0.493 80 | 0.872 54 | 0.762 72 | 0.600 54 |
|
ROSMRF | | 0.580 75 | 0.772 39 | 0.707 60 | 0.681 63 | 0.563 65 | 0.764 72 | 0.362 79 | 0.515 86 | 0.465 79 | 0.465 70 | 0.936 68 | 0.427 63 | 0.207 61 | 0.438 77 | 0.577 67 | 0.536 79 | 0.675 71 | 0.486 81 | 0.723 84 | 0.779 65 | 0.524 77 |
|
PointMTL | | 0.632 59 | 0.731 57 | 0.688 73 | 0.675 64 | 0.591 54 | 0.784 63 | 0.444 54 | 0.565 81 | 0.610 30 | 0.492 63 | 0.949 39 | 0.456 48 | 0.254 46 | 0.587 43 | 0.706 52 | 0.599 67 | 0.665 74 | 0.612 38 | 0.868 59 | 0.791 62 | 0.579 60 |
|
APCF-Net | | 0.631 60 | 0.742 53 | 0.687 75 | 0.672 65 | 0.557 67 | 0.792 61 | 0.408 65 | 0.665 66 | 0.545 56 | 0.508 58 | 0.952 30 | 0.428 61 | 0.186 72 | 0.634 33 | 0.702 53 | 0.620 60 | 0.706 58 | 0.555 61 | 0.873 52 | 0.798 50 | 0.581 59 |
Haojia Lin: Adaptive Pyramid Context Fusion for Point Cloud Perception. GRSL |
PointNet2-SFPN | | 0.631 60 | 0.771 40 | 0.692 70 | 0.672 65 | 0.524 72 | 0.837 20 | 0.440 57 | 0.706 59 | 0.538 58 | 0.446 73 | 0.944 55 | 0.421 66 | 0.219 57 | 0.552 57 | 0.751 44 | 0.591 71 | 0.737 46 | 0.543 67 | 0.901 32 | 0.768 70 | 0.557 69 |
|
MVPNet |  | 0.641 50 | 0.831 22 | 0.715 57 | 0.671 67 | 0.590 55 | 0.781 64 | 0.394 71 | 0.679 63 | 0.642 17 | 0.553 42 | 0.937 66 | 0.462 45 | 0.256 45 | 0.649 28 | 0.406 83 | 0.626 59 | 0.691 65 | 0.666 17 | 0.877 47 | 0.792 58 | 0.608 49 |
Maximilian Jaritz, Jiayuan Gu, Hao Su: Multi-view PointNet for 3D Scene Understanding. GMDL Workshop, ICCV 2019 |
TextureNet |  | 0.566 78 | 0.672 73 | 0.664 78 | 0.671 67 | 0.494 76 | 0.719 80 | 0.445 51 | 0.678 64 | 0.411 88 | 0.396 81 | 0.935 69 | 0.356 80 | 0.225 55 | 0.412 82 | 0.535 71 | 0.565 77 | 0.636 82 | 0.464 84 | 0.794 76 | 0.680 88 | 0.568 64 |
Jingwei Huang, Haotian Zhang, Li Yi, Thomas Funkhouser, Matthias Niessner, Leonidas Guibas: TextureNet: Consistent Local Parametrizations for Learning from High-Resolution Signals on Meshes. CVPR |
joint point-based |  | 0.634 58 | 0.614 81 | 0.778 30 | 0.667 69 | 0.633 45 | 0.825 31 | 0.420 63 | 0.804 31 | 0.467 78 | 0.561 40 | 0.951 31 | 0.494 33 | 0.291 25 | 0.566 51 | 0.458 78 | 0.579 75 | 0.764 33 | 0.559 59 | 0.838 68 | 0.814 41 | 0.598 55 |
Hung-Yueh Chiang, Yen-Liang Lin, Yueh-Cheng Liu, Winston H. Hsu: A Unified Point-Based Framework for 3D Segmentation. 3DV 2019 |
SAFNet-seg |  | 0.654 48 | 0.752 48 | 0.734 53 | 0.664 70 | 0.583 59 | 0.815 44 | 0.399 69 | 0.754 43 | 0.639 19 | 0.535 49 | 0.942 58 | 0.470 42 | 0.309 18 | 0.665 25 | 0.539 70 | 0.650 48 | 0.708 57 | 0.635 26 | 0.857 64 | 0.793 55 | 0.642 38 |
Linqing Zhao, Jiwen Lu, Jie Zhou: Similarity-Aware Fusion Network for 3D Semantic Segmentation. IROS 2021 |
SQN_0.1% | | 0.569 77 | 0.676 71 | 0.696 67 | 0.657 71 | 0.497 75 | 0.779 67 | 0.424 61 | 0.548 82 | 0.515 64 | 0.376 83 | 0.902 90 | 0.422 65 | 0.357 4 | 0.379 84 | 0.456 79 | 0.596 70 | 0.659 75 | 0.544 65 | 0.685 87 | 0.665 91 | 0.556 70 |
|
One-Thing-One-Click | | 0.693 31 | 0.743 52 | 0.794 22 | 0.655 72 | 0.684 29 | 0.822 35 | 0.497 28 | 0.719 53 | 0.622 24 | 0.617 21 | 0.977 4 | 0.447 54 | 0.339 6 | 0.750 16 | 0.664 61 | 0.703 30 | 0.790 21 | 0.596 43 | 0.946 4 | 0.855 20 | 0.647 37 |
Zhengzhe Liu, Xiaojuan Qi, Chi-Wing Fu: One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation. CVPR 2021 |
HPGCNN | | 0.656 47 | 0.698 68 | 0.743 49 | 0.650 73 | 0.564 64 | 0.820 38 | 0.505 21 | 0.758 41 | 0.631 22 | 0.479 66 | 0.945 49 | 0.480 38 | 0.226 54 | 0.572 49 | 0.774 40 | 0.690 35 | 0.735 47 | 0.614 35 | 0.853 65 | 0.776 68 | 0.597 56 |
Jisheng Dang, Qingyong Hu, Yulan Guo, Jun Yang: HPGCNN. |
AttAN | | 0.609 69 | 0.760 45 | 0.667 77 | 0.649 74 | 0.521 73 | 0.793 59 | 0.457 43 | 0.648 68 | 0.528 61 | 0.434 78 | 0.947 43 | 0.401 72 | 0.153 84 | 0.454 76 | 0.721 50 | 0.648 51 | 0.717 53 | 0.536 70 | 0.904 28 | 0.765 71 | 0.485 83 |
Gege Zhang, Qinghua Ma, Licheng Jiao, Fang Liu, Qigong Sun: AttAN: Attention Adversarial Networks for 3D Point Cloud Semantic Segmentation. IJCAI 2020 |
GMLPs | | 0.538 81 | 0.495 91 | 0.693 69 | 0.647 75 | 0.471 80 | 0.793 59 | 0.300 86 | 0.477 87 | 0.505 66 | 0.358 85 | 0.903 89 | 0.327 84 | 0.081 92 | 0.472 75 | 0.529 73 | 0.448 88 | 0.710 54 | 0.509 74 | 0.746 80 | 0.737 79 | 0.554 71 |
|
HPEIN | | 0.618 67 | 0.729 58 | 0.668 76 | 0.647 75 | 0.597 53 | 0.766 71 | 0.414 64 | 0.680 62 | 0.520 62 | 0.525 52 | 0.946 46 | 0.432 57 | 0.215 59 | 0.493 72 | 0.599 66 | 0.638 56 | 0.617 83 | 0.570 53 | 0.897 34 | 0.806 45 | 0.605 52 |
Li Jiang, Hengshuang Zhao, Shu Liu, Xiaoyong Shen, Chi-Wing Fu, Jiaya Jia: Hierarchical Point-Edge Interaction Network for Point Cloud Semantic Segmentation. ICCV 2019 |
3DMV | | 0.484 87 | 0.484 93 | 0.538 94 | 0.643 77 | 0.424 85 | 0.606 95 | 0.310 84 | 0.574 79 | 0.433 86 | 0.378 82 | 0.796 95 | 0.301 87 | 0.214 60 | 0.537 62 | 0.208 94 | 0.472 87 | 0.507 95 | 0.413 93 | 0.693 86 | 0.602 94 | 0.539 73 |
Angela Dai, Matthias Niessner: 3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation. ECCV 2018 |
PD-Net | | 0.638 54 | 0.797 31 | 0.769 34 | 0.641 78 | 0.590 55 | 0.820 38 | 0.461 41 | 0.537 84 | 0.637 20 | 0.536 48 | 0.947 43 | 0.388 75 | 0.206 62 | 0.656 26 | 0.668 59 | 0.647 52 | 0.732 49 | 0.585 50 | 0.868 59 | 0.793 55 | 0.473 87 |
|
LAP-D | | 0.594 72 | 0.720 61 | 0.692 70 | 0.637 79 | 0.456 82 | 0.773 69 | 0.391 74 | 0.730 51 | 0.587 42 | 0.445 75 | 0.940 63 | 0.381 76 | 0.288 26 | 0.434 79 | 0.453 80 | 0.591 71 | 0.649 76 | 0.581 51 | 0.777 77 | 0.749 77 | 0.610 48 |
|
subcloud_weak | | 0.516 83 | 0.676 71 | 0.591 92 | 0.609 80 | 0.442 83 | 0.774 68 | 0.335 82 | 0.597 75 | 0.422 87 | 0.357 86 | 0.932 75 | 0.341 83 | 0.094 91 | 0.298 88 | 0.528 74 | 0.473 86 | 0.676 70 | 0.495 79 | 0.602 93 | 0.721 83 | 0.349 94 |
|
SPLAT Net |  | 0.393 95 | 0.472 95 | 0.511 95 | 0.606 81 | 0.311 96 | 0.656 86 | 0.245 93 | 0.405 89 | 0.328 95 | 0.197 97 | 0.927 78 | 0.227 96 | 0.000 99 | 0.001 100 | 0.249 90 | 0.271 98 | 0.510 93 | 0.383 96 | 0.593 94 | 0.699 86 | 0.267 96 |
Hang Su, Varun Jampani, Deqing Sun, Subhransu Maji, Evangelos Kalogerakis, Ming-Hsuan Yang, Jan Kautz: SPLATNet: Sparse Lattice Networks for Point Cloud Processing. CVPR 2018 |
PanopticFusion-label | | 0.529 82 | 0.491 92 | 0.688 73 | 0.604 82 | 0.386 87 | 0.632 91 | 0.225 96 | 0.705 60 | 0.434 85 | 0.293 91 | 0.815 94 | 0.348 82 | 0.241 52 | 0.499 70 | 0.669 58 | 0.507 81 | 0.649 76 | 0.442 90 | 0.796 75 | 0.602 94 | 0.561 67 |
Gaku Narita, Takashi Seno, Tomoya Ishikawa, Yohsuke Kaji: PanopticFusion: Online Volumetric Semantic Mapping at the Level of Stuff and Things. IROS 2019 |
DPC | | 0.592 73 | 0.720 61 | 0.700 64 | 0.602 83 | 0.480 78 | 0.762 74 | 0.380 77 | 0.713 57 | 0.585 45 | 0.437 76 | 0.940 63 | 0.369 78 | 0.288 26 | 0.434 79 | 0.509 76 | 0.590 73 | 0.639 81 | 0.567 56 | 0.772 78 | 0.755 75 | 0.592 58 |
Francis Engelmann, Theodora Kontogianni, Bastian Leibe: Dilated Point Convolutions: On the Receptive Field Size of Point Convolutions on 3D Point Clouds. ICRA 2020 |
PNET2 | | 0.442 91 | 0.548 88 | 0.548 93 | 0.597 84 | 0.363 91 | 0.628 93 | 0.300 86 | 0.292 93 | 0.374 90 | 0.307 90 | 0.881 91 | 0.268 92 | 0.186 72 | 0.238 93 | 0.204 95 | 0.407 91 | 0.506 96 | 0.449 87 | 0.667 89 | 0.620 93 | 0.462 88 |
|
Online SegFusion | | 0.515 84 | 0.607 82 | 0.644 83 | 0.579 85 | 0.434 84 | 0.630 92 | 0.353 80 | 0.628 73 | 0.440 83 | 0.410 79 | 0.762 97 | 0.307 86 | 0.167 79 | 0.520 64 | 0.403 84 | 0.516 80 | 0.565 86 | 0.447 88 | 0.678 88 | 0.701 85 | 0.514 80 |
Davide Menini, Suryansh Kumar, Martin R. Oswald, Erik Sandstroem, Cristian Sminchisescu, Luc van Gool: A Real-Time Learning Framework for Joint 3D Reconstruction and Semantic Segmentation. Robotics and Automation Letters Submission |
FCPN |  | 0.447 89 | 0.679 70 | 0.604 91 | 0.578 86 | 0.380 88 | 0.682 85 | 0.291 89 | 0.106 98 | 0.483 74 | 0.258 96 | 0.920 82 | 0.258 93 | 0.025 96 | 0.231 95 | 0.325 87 | 0.480 85 | 0.560 88 | 0.463 85 | 0.725 83 | 0.666 90 | 0.231 98 |
Dario Rethage, Johanna Wald, Jürgen Sturm, Nassir Navab, Federico Tombari: Fully-Convolutional Point Networks for Large-Scale Point Clouds. ECCV 2018 |
PCNN | | 0.498 86 | 0.559 86 | 0.644 83 | 0.560 87 | 0.420 86 | 0.711 82 | 0.229 94 | 0.414 88 | 0.436 84 | 0.352 87 | 0.941 60 | 0.324 85 | 0.155 83 | 0.238 93 | 0.387 85 | 0.493 82 | 0.529 92 | 0.509 74 | 0.813 74 | 0.751 76 | 0.504 81 |
|
3DWSSS | | 0.425 94 | 0.525 89 | 0.647 81 | 0.522 88 | 0.324 94 | 0.488 98 | 0.077 99 | 0.712 58 | 0.353 92 | 0.401 80 | 0.636 99 | 0.281 90 | 0.176 75 | 0.340 86 | 0.565 68 | 0.175 99 | 0.551 89 | 0.398 94 | 0.370 99 | 0.602 94 | 0.361 92 |
|
ScanNet |  | 0.306 99 | 0.203 99 | 0.366 98 | 0.501 89 | 0.311 96 | 0.524 97 | 0.211 97 | 0.002 100 | 0.342 94 | 0.189 98 | 0.786 96 | 0.145 99 | 0.102 90 | 0.245 92 | 0.152 97 | 0.318 97 | 0.348 98 | 0.300 98 | 0.460 98 | 0.437 99 | 0.182 99 |
Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, Matthias Nießner: ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes. CVPR 2017 |
SPH3D-GCN |  | 0.610 68 | 0.858 16 | 0.772 32 | 0.489 90 | 0.532 71 | 0.792 61 | 0.404 68 | 0.643 71 | 0.570 51 | 0.507 60 | 0.935 69 | 0.414 68 | 0.046 95 | 0.510 67 | 0.702 53 | 0.602 66 | 0.705 59 | 0.549 64 | 0.859 63 | 0.773 69 | 0.534 75 |
Huan Lei, Naveed Akhtar, and Ajmal Mian: Spherical Kernel for Efficient Graph Convolution on 3D Point Clouds. TPAMI 2020 |
Tangent Convolutions |  | 0.438 93 | 0.437 96 | 0.646 82 | 0.474 91 | 0.369 89 | 0.645 89 | 0.353 80 | 0.258 95 | 0.282 97 | 0.279 92 | 0.918 84 | 0.298 88 | 0.147 87 | 0.283 90 | 0.294 88 | 0.487 83 | 0.562 87 | 0.427 92 | 0.619 92 | 0.633 92 | 0.352 93 |
Maxim Tatarchenko, Jaesik Park, Vladlen Koltun, Qian-Yi Zhou: Tangent Convolutions for Dense Prediction in 3D. CVPR 2018 |
DGCNN_reproduce |  | 0.446 90 | 0.474 94 | 0.623 86 | 0.463 92 | 0.366 90 | 0.651 88 | 0.310 84 | 0.389 91 | 0.349 93 | 0.330 88 | 0.937 66 | 0.271 91 | 0.126 88 | 0.285 89 | 0.224 92 | 0.350 95 | 0.577 85 | 0.445 89 | 0.625 91 | 0.723 82 | 0.394 90 |
Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma, Michael M. Bronstein, Justin M. Solomon: Dynamic Graph CNN for Learning on Point Clouds. TOG 2019 |
PointNet++ |  | 0.339 97 | 0.584 84 | 0.478 97 | 0.458 93 | 0.256 98 | 0.360 99 | 0.250 91 | 0.247 96 | 0.278 98 | 0.261 95 | 0.677 98 | 0.183 97 | 0.117 89 | 0.212 97 | 0.145 98 | 0.364 93 | 0.346 99 | 0.232 99 | 0.548 95 | 0.523 98 | 0.252 97 |
Charles R. Qi, Li Yi, Hao Su, Leonidas J. Guibas: PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. |
SD-DETR | | 0.576 76 | 0.746 50 | 0.609 89 | 0.445 94 | 0.517 74 | 0.643 90 | 0.366 78 | 0.714 56 | 0.456 80 | 0.468 69 | 0.870 92 | 0.432 57 | 0.264 42 | 0.558 55 | 0.674 57 | 0.586 74 | 0.688 66 | 0.482 82 | 0.739 82 | 0.733 80 | 0.537 74 |
|
ScanNet+FTSDF | | 0.383 96 | 0.297 98 | 0.491 96 | 0.432 95 | 0.358 92 | 0.612 94 | 0.274 90 | 0.116 97 | 0.411 88 | 0.265 94 | 0.904 88 | 0.229 95 | 0.079 93 | 0.250 91 | 0.185 96 | 0.320 96 | 0.510 93 | 0.385 95 | 0.548 95 | 0.597 97 | 0.394 90 |
|
3DMV, FTSDF | | 0.501 85 | 0.558 87 | 0.608 90 | 0.424 96 | 0.478 79 | 0.690 83 | 0.246 92 | 0.586 77 | 0.468 77 | 0.450 72 | 0.911 85 | 0.394 73 | 0.160 82 | 0.438 77 | 0.212 93 | 0.432 89 | 0.541 91 | 0.475 83 | 0.742 81 | 0.727 81 | 0.477 85 |
|
SurfaceConvPF | | 0.442 91 | 0.505 90 | 0.622 87 | 0.380 97 | 0.342 93 | 0.654 87 | 0.227 95 | 0.397 90 | 0.367 91 | 0.276 93 | 0.924 79 | 0.240 94 | 0.198 68 | 0.359 85 | 0.262 89 | 0.366 92 | 0.581 84 | 0.435 91 | 0.640 90 | 0.668 89 | 0.398 89 |
Hao Pan, Shilin Liu, Yang Liu, Xin Tong: Convolutional Neural Networks on 3D Surfaces Using Parallel Frames. |
PointCNN with RGB |  | 0.458 88 | 0.577 85 | 0.611 88 | 0.356 98 | 0.321 95 | 0.715 81 | 0.299 88 | 0.376 92 | 0.328 95 | 0.319 89 | 0.944 55 | 0.285 89 | 0.164 80 | 0.216 96 | 0.229 91 | 0.484 84 | 0.545 90 | 0.456 86 | 0.755 79 | 0.709 84 | 0.475 86 |
Yangyan Li, Rui Bu, Mingchao Sun, Baoquan Chen: PointCNN. NeurIPS 2018 |
SSC-UNet |  | 0.308 98 | 0.353 97 | 0.290 99 | 0.278 99 | 0.166 99 | 0.553 96 | 0.169 98 | 0.286 94 | 0.147 99 | 0.148 99 | 0.908 86 | 0.182 98 | 0.064 94 | 0.023 99 | 0.018 100 | 0.354 94 | 0.363 97 | 0.345 97 | 0.546 97 | 0.685 87 | 0.278 95 |
|
ERROR | | 0.054 100 | 0.000 100 | 0.041 100 | 0.172 100 | 0.030 100 | 0.062 100 | 0.001 100 | 0.035 99 | 0.004 100 | 0.051 100 | 0.143 100 | 0.019 100 | 0.003 98 | 0.041 98 | 0.050 99 | 0.003 100 | 0.054 100 | 0.018 100 | 0.005 100 | 0.264 100 | 0.082 100 |
|
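The rows above follow a simple pipe-separated layout in which every scored cell holds an IoU value followed by a rank. Assuming that layout, the sketch below (a hypothetical helper, not part of any benchmark tooling) parses one row into the method name and its list of (score, rank) pairs:

```python
import re

def parse_row(line: str):
    """Split one pipe-separated row into (method, [(iou, rank), ...])."""
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    method = cells[0]
    pairs = []
    for cell in cells[1:]:
        # Match "score rank" cells; the empty info cell and blanks are skipped.
        m = re.fullmatch(r"([0-9.]+)\s+(\d+)", cell)
        if m:
            pairs.append((float(m.group(1)), int(m.group(2))))
    return method, pairs

# Example with a truncated Mix3D row from the table above:
name, pairs = parse_row("Mix3D |  | 0.781 1 | 0.964 1 | 0.855 1 |")
print(name, pairs)  # Mix3D [(0.781, 1), (0.964, 1), (0.855, 1)]
```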