3D Semantic Label Benchmark
The 3D semantic labeling task involves predicting a semantic label for every vertex of a 3D scan mesh.
Evaluation and metrics

Our evaluation ranks all methods by the PASCAL VOC intersection-over-union metric (IoU): IoU = TP/(TP+FP+FN), where TP, FP, and FN are the numbers of true-positive, false-positive, and false-negative vertices, respectively. Predicted labels are evaluated per vertex over the respective 3D scan mesh; for 3D approaches that operate on other representations, such as grids or points, the predicted labels should be mapped onto the mesh vertices (an example of mapping grid predictions to mesh vertices is provided in the evaluation helpers).
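As a concrete illustration of the metric, the per-class IoU and the averaged score can be sketched as below. This is not the official evaluation script; the toy labels and three-class setup are invented for the example (the benchmark itself uses 20 classes over mesh vertices).

```python
# Sketch of the benchmark metric: per-class IoU = TP / (TP + FP + FN),
# averaged over classes to get the "avg iou" used for ranking.

def per_class_iou(pred, gt, num_classes):
    """Return one IoU per class (None for classes absent from pred and gt)."""
    ious = []
    for c in range(num_classes):
        tp = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        fp = sum(1 for p, g in zip(pred, gt) if p == c and g != c)
        fn = sum(1 for p, g in zip(pred, gt) if p != c and g == c)
        denom = tp + fp + fn
        ious.append(tp / denom if denom > 0 else None)
    return ious

def mean_iou(pred, gt, num_classes):
    """Average the per-class IoUs, skipping classes that never occur."""
    ious = [i for i in per_class_iou(pred, gt, num_classes) if i is not None]
    return sum(ious) / len(ious)

# Toy example: 3 classes, 6 "vertices".
pred = [0, 0, 1, 1, 2, 2]
gt   = [0, 1, 1, 1, 2, 0]
print(per_class_iou(pred, gt, 3))
print(mean_iou(pred, gt, 3))
```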
This table lists the benchmark results for the 3D semantic label scenario. Each cell shows a method's IoU for the given class followed by its rank for that class; the "avg iou" column is the mean IoU over the 20 classes and determines the overall ranking.
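For methods that predict labels on a point cloud or voxel grid rather than directly on the mesh, those predictions must be transferred to the mesh vertices before scoring (the evaluation helpers include a grid-to-vertex example). A brute-force nearest-neighbor transfer, purely as an illustration of the idea and far too slow for real scans, might look like:

```python
# Hypothetical helper: assign each mesh vertex the label of the nearest
# predicted point (or voxel center). Illustrative only -- real pipelines
# would use a spatial index (e.g. a k-d tree) instead of a double loop.

def transfer_labels(vertices, points, point_labels):
    """For each vertex (x, y, z), take the label of the closest point."""
    vertex_labels = []
    for vx, vy, vz in vertices:
        best_idx, best_d2 = 0, float("inf")
        for i, (px, py, pz) in enumerate(points):
            d2 = (vx - px) ** 2 + (vy - py) ** 2 + (vz - pz) ** 2
            if d2 < best_d2:
                best_idx, best_d2 = i, d2
        vertex_labels.append(point_labels[best_idx])
    return vertex_labels

# Toy example: two predicted points carrying made-up class ids 7 and 2.
points = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
labels = [7, 2]
vertices = [(0.1, 0.0, 0.0), (0.9, 1.0, 1.1)]
print(transfer_labels(vertices, points, labels))  # [7, 2]
```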
Method | Info | avg IoU | bathtub | bed | bookshelf | cabinet | chair | counter | curtain | desk | door | floor | otherfurniture | picture | refrigerator | shower curtain | sink | sofa | table | toilet | wall | window |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Mix3D | | 0.781 1 | 0.964 1 | 0.855 1 | 0.843 11 | 0.781 1 | 0.858 7 | 0.575 3 | 0.831 20 | 0.685 7 | 0.714 1 | 0.979 1 | 0.594 3 | 0.310 18 | 0.801 1 | 0.892 8 | 0.841 2 | 0.819 3 | 0.723 3 | 0.940 8 | 0.887 1 | 0.725 13 |
Alexey Nekrasov, Jonas Schult, Or Litany, Bastian Leibe, Francis Engelmann: Mix3D: Out-of-Context Data Augmentation for 3D Scenes. 3DV 2021 (Oral)
CU-Hybrid Net | 0.764 2 | 0.924 2 | 0.819 9 | 0.840 12 | 0.757 7 | 0.853 9 | 0.580 1 | 0.848 15 | 0.709 2 | 0.643 13 | 0.958 11 | 0.587 7 | 0.295 24 | 0.753 15 | 0.884 12 | 0.758 12 | 0.815 6 | 0.725 2 | 0.927 18 | 0.867 11 | 0.743 6 | |
OccuSeg+Semantic | 0.764 2 | 0.758 47 | 0.796 21 | 0.839 13 | 0.746 12 | 0.907 1 | 0.562 5 | 0.850 14 | 0.680 9 | 0.672 6 | 0.978 2 | 0.610 1 | 0.335 8 | 0.777 4 | 0.819 32 | 0.847 1 | 0.830 1 | 0.691 8 | 0.972 1 | 0.885 2 | 0.727 11 | |
O-CNN | | 0.762 4 | 0.924 2 | 0.823 6 | 0.844 10 | 0.770 3 | 0.852 10 | 0.577 2 | 0.847 16 | 0.711 1 | 0.640 17 | 0.958 11 | 0.592 4 | 0.217 59 | 0.762 11 | 0.888 9 | 0.758 12 | 0.813 7 | 0.726 1 | 0.932 16 | 0.868 10 | 0.744 5 |
Peng-Shuai Wang, Yang Liu, Yu-Xiao Guo, Chun-Yu Sun, Xin Tong: O-CNN: Octree-based Convolutional Neural Networks for 3D Shape Analysis. SIGGRAPH 2017
OA-CNN-L_ScanNet20 | 0.756 5 | 0.783 35 | 0.826 5 | 0.858 4 | 0.776 2 | 0.837 20 | 0.548 9 | 0.896 5 | 0.649 17 | 0.675 5 | 0.962 8 | 0.586 8 | 0.335 8 | 0.771 7 | 0.802 36 | 0.770 8 | 0.787 22 | 0.691 8 | 0.936 11 | 0.880 5 | 0.761 3 | |
PointTransformerV2 | 0.752 6 | 0.742 54 | 0.809 15 | 0.872 1 | 0.758 6 | 0.860 6 | 0.552 7 | 0.891 6 | 0.610 31 | 0.687 2 | 0.960 9 | 0.559 15 | 0.304 21 | 0.766 9 | 0.926 2 | 0.767 9 | 0.797 14 | 0.644 23 | 0.942 6 | 0.876 8 | 0.722 15 | |
Xiaoyang Wu, Yixing Lao, Li Jiang, Xihui Liu, Hengshuang Zhao: Point Transformer V2: Grouped Vector Attention and Partition-based Pooling. NeurIPS 2022
DMF-Net | 0.752 6 | 0.906 6 | 0.793 24 | 0.802 30 | 0.689 28 | 0.825 32 | 0.556 6 | 0.867 10 | 0.681 8 | 0.602 31 | 0.960 9 | 0.555 17 | 0.365 3 | 0.779 3 | 0.859 17 | 0.747 15 | 0.795 18 | 0.717 4 | 0.917 21 | 0.856 19 | 0.764 2 | |
PointConvFormer | 0.749 8 | 0.793 32 | 0.790 25 | 0.807 25 | 0.750 11 | 0.856 8 | 0.524 17 | 0.881 8 | 0.588 42 | 0.642 16 | 0.977 4 | 0.591 5 | 0.274 35 | 0.781 2 | 0.929 1 | 0.804 3 | 0.796 15 | 0.642 24 | 0.947 3 | 0.885 2 | 0.715 18 | |
Wenxuan Wu, Qi Shan, Li Fuxin: PointConvFormer: Revenge of the Point-based Convolution.
BPNet | | 0.749 8 | 0.909 4 | 0.818 11 | 0.811 22 | 0.752 9 | 0.839 19 | 0.485 33 | 0.842 17 | 0.673 10 | 0.644 12 | 0.957 14 | 0.528 26 | 0.305 20 | 0.773 6 | 0.859 17 | 0.788 4 | 0.818 5 | 0.693 7 | 0.916 22 | 0.856 19 | 0.723 14 |
Wenbo Hu, Hengshuang Zhao, Li Jiang, Jiaya Jia, Tien-Tsin Wong: Bidirectional Projection Network for Cross Dimension Scene Understanding. CVPR 2021 (Oral)
MSP | 0.748 10 | 0.623 80 | 0.804 17 | 0.859 3 | 0.745 13 | 0.824 34 | 0.501 24 | 0.912 2 | 0.690 6 | 0.685 3 | 0.956 15 | 0.567 12 | 0.320 14 | 0.768 8 | 0.918 3 | 0.720 23 | 0.802 10 | 0.676 13 | 0.921 19 | 0.881 4 | 0.779 1 | |
StratifiedFormer | | 0.747 11 | 0.901 7 | 0.803 18 | 0.845 9 | 0.757 7 | 0.846 14 | 0.512 20 | 0.825 23 | 0.696 5 | 0.645 11 | 0.956 15 | 0.576 10 | 0.262 45 | 0.744 19 | 0.861 16 | 0.742 16 | 0.770 31 | 0.705 5 | 0.899 34 | 0.860 16 | 0.734 7 |
Xin Lai*, Jianhui Liu*, Li Jiang, Liwei Wang, Hengshuang Zhao, Shu Liu, Xiaojuan Qi, Jiaya Jia: Stratified Transformer for 3D Point Cloud Segmentation. CVPR 2022
Virtual MVFusion | 0.746 12 | 0.771 41 | 0.819 9 | 0.848 7 | 0.702 26 | 0.865 5 | 0.397 71 | 0.899 3 | 0.699 3 | 0.664 8 | 0.948 42 | 0.588 6 | 0.330 10 | 0.746 18 | 0.851 24 | 0.764 10 | 0.796 15 | 0.704 6 | 0.935 12 | 0.866 12 | 0.728 9 | |
Abhijit Kundu, Xiaoqi Yin, Alireza Fathi, David Ross, Brian Brewington, Thomas Funkhouser, Caroline Pantofaru: Virtual Multi-view Fusion for 3D Semantic Segmentation. ECCV 2020
VMNet | | 0.746 12 | 0.870 12 | 0.838 2 | 0.858 4 | 0.729 18 | 0.850 12 | 0.501 24 | 0.874 9 | 0.587 43 | 0.658 9 | 0.956 15 | 0.564 13 | 0.299 22 | 0.765 10 | 0.900 5 | 0.716 26 | 0.812 8 | 0.631 29 | 0.939 9 | 0.858 17 | 0.709 19 |
Zeyu Hu, Xuyang Bai, Jiaxiang Shang, Runze Zhang, Jiayu Dong, Xin Wang, Guangyuan Sun, Hongbo Fu, Chiew-Lan Tai: VMNet: Voxel-Mesh Network for Geodesic-Aware 3D Semantic Segmentation. ICCV 2021 (Oral)
Retro-FPN | 0.744 14 | 0.842 19 | 0.800 19 | 0.767 42 | 0.740 14 | 0.836 23 | 0.541 11 | 0.914 1 | 0.672 11 | 0.626 20 | 0.958 11 | 0.552 18 | 0.272 37 | 0.777 4 | 0.886 11 | 0.696 33 | 0.801 11 | 0.674 15 | 0.941 7 | 0.858 17 | 0.717 16 | |
EQ-Net | 0.743 15 | 0.620 81 | 0.799 20 | 0.849 6 | 0.730 17 | 0.822 36 | 0.493 31 | 0.897 4 | 0.664 12 | 0.681 4 | 0.955 19 | 0.562 14 | 0.378 1 | 0.760 12 | 0.903 4 | 0.738 17 | 0.801 11 | 0.673 16 | 0.907 27 | 0.877 6 | 0.745 4 | |
Zetong Yang*, Li Jiang*, Yanan Sun, Bernt Schiele, Jiaya Jia: A Unified Query-based Paradigm for Point Cloud Understanding. CVPR 2022
SAT | 0.742 16 | 0.860 14 | 0.765 36 | 0.819 17 | 0.769 4 | 0.848 13 | 0.533 13 | 0.829 21 | 0.663 13 | 0.631 19 | 0.955 19 | 0.586 8 | 0.274 35 | 0.753 15 | 0.896 6 | 0.729 18 | 0.760 38 | 0.666 18 | 0.921 19 | 0.855 21 | 0.733 8 | |
LRPNet | 0.742 16 | 0.816 27 | 0.806 16 | 0.807 25 | 0.752 9 | 0.828 30 | 0.575 3 | 0.839 19 | 0.699 3 | 0.637 18 | 0.954 24 | 0.520 28 | 0.320 14 | 0.755 14 | 0.834 28 | 0.760 11 | 0.772 28 | 0.676 13 | 0.915 23 | 0.862 14 | 0.717 16 | |
TXC | 0.740 18 | 0.842 19 | 0.832 4 | 0.805 29 | 0.715 22 | 0.846 14 | 0.473 35 | 0.885 7 | 0.615 27 | 0.671 7 | 0.971 6 | 0.547 19 | 0.320 14 | 0.697 23 | 0.799 38 | 0.777 6 | 0.819 3 | 0.682 11 | 0.946 4 | 0.871 9 | 0.696 24 | |
LargeKernel3D | 0.739 19 | 0.909 4 | 0.820 8 | 0.806 27 | 0.740 14 | 0.852 10 | 0.545 10 | 0.826 22 | 0.594 41 | 0.643 13 | 0.955 19 | 0.541 21 | 0.263 44 | 0.723 21 | 0.858 19 | 0.775 7 | 0.767 32 | 0.678 12 | 0.933 14 | 0.848 25 | 0.694 25 | |
MinkowskiNet | | 0.736 20 | 0.859 15 | 0.818 11 | 0.832 14 | 0.709 23 | 0.840 18 | 0.521 19 | 0.853 13 | 0.660 15 | 0.643 13 | 0.951 32 | 0.544 20 | 0.286 29 | 0.731 20 | 0.893 7 | 0.675 40 | 0.772 28 | 0.683 10 | 0.874 52 | 0.852 23 | 0.727 11 |
C. Choy, J. Gwak, S. Savarese: 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. CVPR 2019
IPCA | 0.731 21 | 0.890 8 | 0.837 3 | 0.864 2 | 0.726 19 | 0.873 2 | 0.530 16 | 0.824 24 | 0.489 74 | 0.647 10 | 0.978 2 | 0.609 2 | 0.336 7 | 0.624 38 | 0.733 47 | 0.758 12 | 0.776 26 | 0.570 54 | 0.949 2 | 0.877 6 | 0.728 9 | |
SparseConvNet | 0.725 22 | 0.647 77 | 0.821 7 | 0.846 8 | 0.721 20 | 0.869 3 | 0.533 13 | 0.754 44 | 0.603 37 | 0.614 24 | 0.955 19 | 0.572 11 | 0.325 12 | 0.710 22 | 0.870 13 | 0.724 21 | 0.823 2 | 0.628 30 | 0.934 13 | 0.865 13 | 0.683 28 | |
PointTransformer++ | 0.725 22 | 0.727 61 | 0.811 14 | 0.819 17 | 0.765 5 | 0.841 17 | 0.502 23 | 0.814 30 | 0.621 26 | 0.623 21 | 0.955 19 | 0.556 16 | 0.284 30 | 0.620 39 | 0.866 14 | 0.781 5 | 0.757 41 | 0.648 21 | 0.932 16 | 0.862 14 | 0.709 19 | |
MatchingNet | 0.724 24 | 0.812 29 | 0.812 13 | 0.810 23 | 0.735 16 | 0.834 25 | 0.495 30 | 0.860 12 | 0.572 49 | 0.602 31 | 0.954 24 | 0.512 30 | 0.280 32 | 0.757 13 | 0.845 26 | 0.725 20 | 0.780 24 | 0.606 40 | 0.937 10 | 0.851 24 | 0.700 22 | |
INS-Conv-semantic | 0.717 25 | 0.751 50 | 0.759 39 | 0.812 21 | 0.704 25 | 0.868 4 | 0.537 12 | 0.842 17 | 0.609 33 | 0.608 27 | 0.953 27 | 0.534 22 | 0.293 25 | 0.616 40 | 0.864 15 | 0.719 25 | 0.793 19 | 0.640 25 | 0.933 14 | 0.845 30 | 0.663 33 | |
PointMetaBase | 0.714 26 | 0.835 21 | 0.785 27 | 0.821 15 | 0.684 30 | 0.846 14 | 0.531 15 | 0.865 11 | 0.614 28 | 0.596 35 | 0.953 27 | 0.500 33 | 0.246 51 | 0.674 24 | 0.888 9 | 0.692 34 | 0.764 34 | 0.624 31 | 0.849 67 | 0.844 31 | 0.675 30 | |
contrastBoundary | | 0.705 27 | 0.769 44 | 0.775 32 | 0.809 24 | 0.687 29 | 0.820 39 | 0.439 59 | 0.812 31 | 0.661 14 | 0.591 37 | 0.945 50 | 0.515 29 | 0.171 77 | 0.633 35 | 0.856 20 | 0.720 23 | 0.796 15 | 0.668 17 | 0.889 41 | 0.847 27 | 0.689 26 |
Liyao Tang, Yibing Zhan, Zhe Chen, Baosheng Yu, Dacheng Tao: Contrastive Boundary Learning for Point Cloud Segmentation. CVPR 2022
RFCR | 0.702 28 | 0.889 9 | 0.745 48 | 0.813 20 | 0.672 32 | 0.818 43 | 0.493 31 | 0.815 28 | 0.623 24 | 0.610 25 | 0.947 44 | 0.470 43 | 0.249 50 | 0.594 43 | 0.848 25 | 0.705 30 | 0.779 25 | 0.646 22 | 0.892 39 | 0.823 37 | 0.611 47 | |
Jingyu Gong, Jiachen Xu, Xin Tan, Haichuan Song, Yanyun Qu, Yuan Xie, Lizhuang Ma: Omni-Supervised Point Cloud Segmentation via Gradual Receptive Field Component Reasoning. CVPR 2021
One Thing One Click | 0.701 29 | 0.825 25 | 0.796 21 | 0.723 49 | 0.716 21 | 0.832 26 | 0.433 61 | 0.816 26 | 0.634 22 | 0.609 26 | 0.969 7 | 0.418 68 | 0.344 5 | 0.559 55 | 0.833 29 | 0.715 27 | 0.808 9 | 0.560 58 | 0.902 31 | 0.847 27 | 0.680 29 | |
JSENet | | 0.699 30 | 0.881 11 | 0.762 37 | 0.821 15 | 0.667 33 | 0.800 56 | 0.522 18 | 0.792 36 | 0.613 29 | 0.607 28 | 0.935 70 | 0.492 35 | 0.205 64 | 0.576 48 | 0.853 22 | 0.691 35 | 0.758 40 | 0.652 20 | 0.872 55 | 0.828 34 | 0.649 37 |
Zeyu Hu, Mingmin Zhen, Xuyang Bai, Hongbo Fu, Chiew-Lan Tai: JSENet: Joint Semantic Segmentation and Edge Detection Network for 3D Point Clouds. ECCV 2020
PicassoNet-II | | 0.696 31 | 0.704 66 | 0.790 25 | 0.787 34 | 0.709 23 | 0.837 20 | 0.459 43 | 0.815 28 | 0.543 58 | 0.615 23 | 0.956 15 | 0.529 24 | 0.250 48 | 0.551 60 | 0.790 39 | 0.703 31 | 0.799 13 | 0.619 35 | 0.908 26 | 0.848 25 | 0.700 22 |
Huan Lei, Naveed Akhtar, Mubarak Shah, and Ajmal Mian: Geometric feature learning for 3D meshes.
One-Thing-One-Click | 0.693 32 | 0.743 53 | 0.794 23 | 0.655 73 | 0.684 30 | 0.822 36 | 0.497 29 | 0.719 54 | 0.622 25 | 0.617 22 | 0.977 4 | 0.447 55 | 0.339 6 | 0.750 17 | 0.664 62 | 0.703 31 | 0.790 21 | 0.596 44 | 0.946 4 | 0.855 21 | 0.647 38 | |
Zhengzhe Liu, Xiaojuan Qi, Chi-Wing Fu: One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation. CVPR 2021
Feature_GeometricNet | | 0.690 33 | 0.884 10 | 0.754 43 | 0.795 33 | 0.647 39 | 0.818 43 | 0.422 63 | 0.802 34 | 0.612 30 | 0.604 29 | 0.945 50 | 0.462 46 | 0.189 72 | 0.563 54 | 0.853 22 | 0.726 19 | 0.765 33 | 0.632 28 | 0.904 29 | 0.821 40 | 0.606 51 |
Kangcheng Liu, Ben M. Chen: https://arxiv.org/abs/2012.09439. arXiv Preprint
FusionNet | 0.688 34 | 0.704 66 | 0.741 52 | 0.754 46 | 0.656 35 | 0.829 28 | 0.501 24 | 0.741 49 | 0.609 33 | 0.548 44 | 0.950 36 | 0.522 27 | 0.371 2 | 0.633 35 | 0.756 42 | 0.715 27 | 0.771 30 | 0.623 32 | 0.861 63 | 0.814 42 | 0.658 34 | |
Feihu Zhang, Jin Fang, Benjamin Wah, Philip Torr: Deep FusionNet for Point Cloud Semantic Segmentation. ECCV 2020
Feature-Geometry Net | | 0.685 35 | 0.866 13 | 0.748 45 | 0.819 17 | 0.645 41 | 0.794 59 | 0.450 48 | 0.802 34 | 0.587 43 | 0.604 29 | 0.945 50 | 0.464 45 | 0.201 67 | 0.554 57 | 0.840 27 | 0.723 22 | 0.732 50 | 0.602 42 | 0.907 27 | 0.822 39 | 0.603 54 |
VACNN++ | 0.684 36 | 0.728 60 | 0.757 42 | 0.776 39 | 0.690 27 | 0.804 53 | 0.464 41 | 0.816 26 | 0.577 48 | 0.587 38 | 0.945 50 | 0.508 32 | 0.276 34 | 0.671 25 | 0.710 52 | 0.663 45 | 0.750 44 | 0.589 49 | 0.881 46 | 0.832 33 | 0.653 36 | |
KP-FCNN | 0.684 36 | 0.847 18 | 0.758 41 | 0.784 36 | 0.647 39 | 0.814 46 | 0.473 35 | 0.772 39 | 0.605 35 | 0.594 36 | 0.935 70 | 0.450 53 | 0.181 75 | 0.587 44 | 0.805 35 | 0.690 36 | 0.785 23 | 0.614 36 | 0.882 45 | 0.819 41 | 0.632 43 | |
H. Thomas, C. Qi, J. Deschaud, B. Marcotegui, F. Goulette, L. Guibas: KPConv: Flexible and Deformable Convolution for Point Clouds. ICCV 2019
DGNet | 0.684 36 | 0.712 65 | 0.784 28 | 0.782 38 | 0.658 34 | 0.835 24 | 0.499 28 | 0.823 25 | 0.641 19 | 0.597 34 | 0.950 36 | 0.487 36 | 0.281 31 | 0.575 49 | 0.619 65 | 0.647 53 | 0.764 34 | 0.620 34 | 0.871 58 | 0.846 29 | 0.688 27 | |
Superpoint Network | 0.683 39 | 0.851 17 | 0.728 56 | 0.800 32 | 0.653 37 | 0.806 51 | 0.468 38 | 0.804 32 | 0.572 49 | 0.602 31 | 0.946 47 | 0.453 52 | 0.239 54 | 0.519 66 | 0.822 30 | 0.689 38 | 0.762 37 | 0.595 46 | 0.895 37 | 0.827 35 | 0.630 44 | |
PointContrast_LA_SEM | 0.683 39 | 0.757 48 | 0.784 28 | 0.786 35 | 0.639 43 | 0.824 34 | 0.408 66 | 0.775 38 | 0.604 36 | 0.541 46 | 0.934 74 | 0.532 23 | 0.269 40 | 0.552 58 | 0.777 40 | 0.645 56 | 0.793 19 | 0.640 25 | 0.913 24 | 0.824 36 | 0.671 31 | |
VI-PointConv | 0.676 41 | 0.770 43 | 0.754 43 | 0.783 37 | 0.621 47 | 0.814 46 | 0.552 7 | 0.758 42 | 0.571 51 | 0.557 42 | 0.954 24 | 0.529 24 | 0.268 42 | 0.530 64 | 0.682 57 | 0.675 40 | 0.719 53 | 0.603 41 | 0.888 42 | 0.833 32 | 0.665 32 | |
Xingyi Li, Wenxuan Wu, Xiaoli Z. Fern, Li Fuxin: The Devils in the Point Clouds: Studying the Robustness of Point Cloud Convolutions.
ROSMRF3D | 0.673 42 | 0.789 33 | 0.748 45 | 0.763 44 | 0.635 45 | 0.814 46 | 0.407 68 | 0.747 46 | 0.581 47 | 0.573 39 | 0.950 36 | 0.484 37 | 0.271 39 | 0.607 41 | 0.754 43 | 0.649 50 | 0.774 27 | 0.596 44 | 0.883 44 | 0.823 37 | 0.606 51 | |
SALANet | 0.670 43 | 0.816 27 | 0.770 34 | 0.768 41 | 0.652 38 | 0.807 50 | 0.451 45 | 0.747 46 | 0.659 16 | 0.545 45 | 0.924 80 | 0.473 42 | 0.149 87 | 0.571 51 | 0.811 34 | 0.635 59 | 0.746 45 | 0.623 32 | 0.892 39 | 0.794 54 | 0.570 64 | |
PointConv | | 0.666 44 | 0.781 36 | 0.759 39 | 0.699 58 | 0.644 42 | 0.822 36 | 0.475 34 | 0.779 37 | 0.564 54 | 0.504 62 | 0.953 27 | 0.428 62 | 0.203 66 | 0.586 46 | 0.754 43 | 0.661 46 | 0.753 42 | 0.588 50 | 0.902 31 | 0.813 44 | 0.642 39 |
Wenxuan Wu, Zhongang Qi, Li Fuxin: PointConv: Deep Convolutional Networks on 3D Point Clouds. CVPR 2019
PointASNL | | 0.666 44 | 0.703 68 | 0.781 30 | 0.751 48 | 0.655 36 | 0.830 27 | 0.471 37 | 0.769 40 | 0.474 77 | 0.537 48 | 0.951 32 | 0.475 41 | 0.279 33 | 0.635 33 | 0.698 56 | 0.675 40 | 0.751 43 | 0.553 63 | 0.816 74 | 0.806 46 | 0.703 21 |
Xu Yan, Chaoda Zheng, Zhen Li, Sheng Wang, Shuguang Cui: PointASNL: Robust Point Clouds Processing using Nonlocal Neural Networks with Adaptive Sampling. CVPR 2020
PPCNN++ | | 0.663 46 | 0.746 51 | 0.708 60 | 0.722 50 | 0.638 44 | 0.820 39 | 0.451 45 | 0.566 81 | 0.599 39 | 0.541 46 | 0.950 36 | 0.510 31 | 0.313 17 | 0.648 30 | 0.819 32 | 0.616 64 | 0.682 69 | 0.590 48 | 0.869 59 | 0.810 45 | 0.656 35 |
Pyunghwan Ahn, Juyoung Yang, Eojindl Yi, Chanho Lee, Junmo Kim: Projection-based Point Convolution for Efficient Point Cloud Segmentation. IEEE Access
DCM-Net | 0.658 47 | 0.778 37 | 0.702 63 | 0.806 27 | 0.619 48 | 0.813 49 | 0.468 38 | 0.693 62 | 0.494 69 | 0.524 54 | 0.941 61 | 0.449 54 | 0.298 23 | 0.510 68 | 0.821 31 | 0.675 40 | 0.727 52 | 0.568 56 | 0.826 72 | 0.803 48 | 0.637 41 | |
Jonas Schult*, Francis Engelmann*, Theodora Kontogianni, Bastian Leibe: DualConvMesh-Net: Joint Geodesic and Euclidean Convolutions on 3D Meshes. CVPR 2020 (Oral)
HPGCNN | 0.656 48 | 0.698 69 | 0.743 50 | 0.650 74 | 0.564 65 | 0.820 39 | 0.505 22 | 0.758 42 | 0.631 23 | 0.479 67 | 0.945 50 | 0.480 39 | 0.226 55 | 0.572 50 | 0.774 41 | 0.690 36 | 0.735 48 | 0.614 36 | 0.853 66 | 0.776 69 | 0.597 57 | |
Jisheng Dang, Qingyong Hu, Yulan Guo, Jun Yang: HPGCNN.
SAFNet-seg | | 0.654 49 | 0.752 49 | 0.734 54 | 0.664 71 | 0.583 60 | 0.815 45 | 0.399 70 | 0.754 44 | 0.639 20 | 0.535 50 | 0.942 59 | 0.470 43 | 0.309 19 | 0.665 26 | 0.539 71 | 0.650 49 | 0.708 58 | 0.635 27 | 0.857 65 | 0.793 56 | 0.642 39 |
Linqing Zhao, Jiwen Lu, Jie Zhou: Similarity-Aware Fusion Network for 3D Semantic Segmentation. IROS 2021
RandLA-Net | | 0.645 50 | 0.778 37 | 0.731 55 | 0.699 58 | 0.577 61 | 0.829 28 | 0.446 50 | 0.736 50 | 0.477 76 | 0.523 56 | 0.945 50 | 0.454 50 | 0.269 40 | 0.484 75 | 0.749 46 | 0.618 62 | 0.738 46 | 0.599 43 | 0.827 71 | 0.792 59 | 0.621 46 |
MVPNet | | 0.641 51 | 0.831 22 | 0.715 58 | 0.671 68 | 0.590 56 | 0.781 65 | 0.394 72 | 0.679 64 | 0.642 18 | 0.553 43 | 0.937 67 | 0.462 46 | 0.256 46 | 0.649 29 | 0.406 84 | 0.626 60 | 0.691 66 | 0.666 18 | 0.877 48 | 0.792 59 | 0.608 50 |
Maximilian Jaritz, Jiayuan Gu, Hao Su: Multi-view PointNet for 3D Scene Understanding. GMDL Workshop, ICCV 2019
PointConv-SFPN | 0.641 51 | 0.776 39 | 0.703 62 | 0.721 51 | 0.557 68 | 0.826 31 | 0.451 45 | 0.672 66 | 0.563 55 | 0.483 66 | 0.943 58 | 0.425 65 | 0.162 82 | 0.644 31 | 0.726 48 | 0.659 47 | 0.709 57 | 0.572 53 | 0.875 50 | 0.786 64 | 0.559 69 | |
PointMRNet | 0.640 53 | 0.717 64 | 0.701 64 | 0.692 61 | 0.576 62 | 0.801 55 | 0.467 40 | 0.716 55 | 0.563 55 | 0.459 72 | 0.953 27 | 0.429 61 | 0.169 79 | 0.581 47 | 0.854 21 | 0.605 65 | 0.710 55 | 0.550 64 | 0.894 38 | 0.793 56 | 0.575 62 | |
FPConv | | 0.639 54 | 0.785 34 | 0.760 38 | 0.713 56 | 0.603 51 | 0.798 57 | 0.392 73 | 0.534 86 | 0.603 37 | 0.524 54 | 0.948 42 | 0.457 48 | 0.250 48 | 0.538 62 | 0.723 50 | 0.598 69 | 0.696 64 | 0.614 36 | 0.872 55 | 0.799 49 | 0.567 66 |
Yiqun Lin, Zizheng Yan, Haibin Huang, Dong Du, Ligang Liu, Shuguang Cui, Xiaoguang Han: FPConv: Learning Local Flattening for Point Convolution. CVPR 2020
PD-Net | 0.638 55 | 0.797 31 | 0.769 35 | 0.641 79 | 0.590 56 | 0.820 39 | 0.461 42 | 0.537 85 | 0.637 21 | 0.536 49 | 0.947 44 | 0.388 76 | 0.206 63 | 0.656 27 | 0.668 60 | 0.647 53 | 0.732 50 | 0.585 51 | 0.868 60 | 0.793 56 | 0.473 88 | |
PointSPNet | 0.637 56 | 0.734 57 | 0.692 71 | 0.714 55 | 0.576 62 | 0.797 58 | 0.446 50 | 0.743 48 | 0.598 40 | 0.437 77 | 0.942 59 | 0.403 72 | 0.150 86 | 0.626 37 | 0.800 37 | 0.649 50 | 0.697 63 | 0.557 61 | 0.846 68 | 0.777 68 | 0.563 67 | |
SConv | 0.636 57 | 0.830 23 | 0.697 67 | 0.752 47 | 0.572 64 | 0.780 67 | 0.445 52 | 0.716 55 | 0.529 61 | 0.530 51 | 0.951 32 | 0.446 56 | 0.170 78 | 0.507 70 | 0.666 61 | 0.636 58 | 0.682 69 | 0.541 69 | 0.886 43 | 0.799 49 | 0.594 58 | |
Supervoxel-CNN | 0.635 58 | 0.656 75 | 0.711 59 | 0.719 52 | 0.613 49 | 0.757 76 | 0.444 55 | 0.765 41 | 0.534 60 | 0.566 40 | 0.928 78 | 0.478 40 | 0.272 37 | 0.636 32 | 0.531 73 | 0.664 44 | 0.645 79 | 0.508 77 | 0.864 62 | 0.792 59 | 0.611 47 | |
joint point-based | | 0.634 59 | 0.614 82 | 0.778 31 | 0.667 70 | 0.633 46 | 0.825 32 | 0.420 64 | 0.804 32 | 0.467 79 | 0.561 41 | 0.951 32 | 0.494 34 | 0.291 26 | 0.566 52 | 0.458 79 | 0.579 76 | 0.764 34 | 0.559 60 | 0.838 69 | 0.814 42 | 0.598 56 |
Hung-Yueh Chiang, Yen-Liang Lin, Yueh-Cheng Liu, Winston H. Hsu: A Unified Point-Based Framework for 3D Segmentation. 3DV 2019
PointMTL | 0.632 60 | 0.731 58 | 0.688 74 | 0.675 65 | 0.591 55 | 0.784 64 | 0.444 55 | 0.565 82 | 0.610 31 | 0.492 64 | 0.949 40 | 0.456 49 | 0.254 47 | 0.587 44 | 0.706 53 | 0.599 68 | 0.665 75 | 0.612 39 | 0.868 60 | 0.791 63 | 0.579 61 | |
3DSM_DMMF | 0.631 61 | 0.626 79 | 0.745 48 | 0.801 31 | 0.607 50 | 0.751 77 | 0.506 21 | 0.729 53 | 0.565 53 | 0.491 65 | 0.866 94 | 0.434 57 | 0.197 70 | 0.595 42 | 0.630 64 | 0.709 29 | 0.705 60 | 0.560 58 | 0.875 50 | 0.740 79 | 0.491 83 | |
PointNet2-SFPN | 0.631 61 | 0.771 41 | 0.692 71 | 0.672 66 | 0.524 73 | 0.837 20 | 0.440 58 | 0.706 60 | 0.538 59 | 0.446 74 | 0.944 56 | 0.421 67 | 0.219 58 | 0.552 58 | 0.751 45 | 0.591 72 | 0.737 47 | 0.543 68 | 0.901 33 | 0.768 71 | 0.557 70 | |
APCF-Net | 0.631 61 | 0.742 54 | 0.687 76 | 0.672 66 | 0.557 68 | 0.792 62 | 0.408 66 | 0.665 67 | 0.545 57 | 0.508 59 | 0.952 31 | 0.428 62 | 0.186 73 | 0.634 34 | 0.702 54 | 0.620 61 | 0.706 59 | 0.555 62 | 0.873 53 | 0.798 51 | 0.581 60 | |
Haojia Lin: Adaptive Pyramid Context Fusion for Point Cloud Perception. GRSL
FusionAwareConv | 0.630 64 | 0.604 84 | 0.741 52 | 0.766 43 | 0.590 56 | 0.747 78 | 0.501 24 | 0.734 51 | 0.503 68 | 0.527 52 | 0.919 84 | 0.454 50 | 0.323 13 | 0.550 61 | 0.420 83 | 0.678 39 | 0.688 67 | 0.544 66 | 0.896 36 | 0.795 53 | 0.627 45 | |
Jiazhao Zhang, Chenyang Zhu, Lintao Zheng, Kai Xu: Fusion-Aware Point Convolution for Online Semantic 3D Scene Segmentation. CVPR 2020
DenSeR | 0.628 65 | 0.800 30 | 0.625 86 | 0.719 52 | 0.545 71 | 0.806 51 | 0.445 52 | 0.597 76 | 0.448 83 | 0.519 57 | 0.938 66 | 0.481 38 | 0.328 11 | 0.489 74 | 0.499 78 | 0.657 48 | 0.759 39 | 0.592 47 | 0.881 46 | 0.797 52 | 0.634 42 | |
SegGroup_sem | | 0.627 66 | 0.818 26 | 0.747 47 | 0.701 57 | 0.602 52 | 0.764 73 | 0.385 77 | 0.629 73 | 0.490 72 | 0.508 59 | 0.931 77 | 0.409 70 | 0.201 67 | 0.564 53 | 0.725 49 | 0.618 62 | 0.692 65 | 0.539 70 | 0.873 53 | 0.794 54 | 0.548 73 |
An Tao, Yueqi Duan, Yi Wei, Jiwen Lu, Jie Zhou: SegGroup: Seg-Level Supervision for 3D Instance and Semantic Segmentation. TIP 2022
SIConv | 0.625 67 | 0.830 23 | 0.694 69 | 0.757 45 | 0.563 66 | 0.772 71 | 0.448 49 | 0.647 70 | 0.520 63 | 0.509 58 | 0.949 40 | 0.431 60 | 0.191 71 | 0.496 72 | 0.614 66 | 0.647 53 | 0.672 73 | 0.535 72 | 0.876 49 | 0.783 65 | 0.571 63 | |
HPEIN | 0.618 68 | 0.729 59 | 0.668 77 | 0.647 76 | 0.597 54 | 0.766 72 | 0.414 65 | 0.680 63 | 0.520 63 | 0.525 53 | 0.946 47 | 0.432 58 | 0.215 60 | 0.493 73 | 0.599 67 | 0.638 57 | 0.617 84 | 0.570 54 | 0.897 35 | 0.806 46 | 0.605 53 | |
Li Jiang, Hengshuang Zhao, Shu Liu, Xiaoyong Shen, Chi-Wing Fu, Jiaya Jia: Hierarchical Point-Edge Interaction Network for Point Cloud Semantic Segmentation. ICCV 2019
SPH3D-GCN | | 0.610 69 | 0.858 16 | 0.772 33 | 0.489 91 | 0.532 72 | 0.792 62 | 0.404 69 | 0.643 72 | 0.570 52 | 0.507 61 | 0.935 70 | 0.414 69 | 0.046 96 | 0.510 68 | 0.702 54 | 0.602 67 | 0.705 60 | 0.549 65 | 0.859 64 | 0.773 70 | 0.534 76 |
Huan Lei, Naveed Akhtar, and Ajmal Mian: Spherical Kernel for Efficient Graph Convolution on 3D Point Clouds. TPAMI 2020
AttAN | 0.609 70 | 0.760 46 | 0.667 78 | 0.649 75 | 0.521 74 | 0.793 60 | 0.457 44 | 0.648 69 | 0.528 62 | 0.434 79 | 0.947 44 | 0.401 73 | 0.153 85 | 0.454 77 | 0.721 51 | 0.648 52 | 0.717 54 | 0.536 71 | 0.904 29 | 0.765 72 | 0.485 84 | |
Gege Zhang, Qinghua Ma, Licheng Jiao, Fang Liu, and Qigong Sun: AttAN: Attention Adversarial Networks for 3D Point Cloud Semantic Segmentation. IJCAI 2020
wsss-transformer | 0.600 71 | 0.634 78 | 0.743 50 | 0.697 60 | 0.601 53 | 0.781 65 | 0.437 60 | 0.585 79 | 0.493 70 | 0.446 74 | 0.933 75 | 0.394 74 | 0.011 98 | 0.654 28 | 0.661 63 | 0.603 66 | 0.733 49 | 0.526 73 | 0.832 70 | 0.761 74 | 0.480 85 | |
dtc_net | 0.596 72 | 0.683 70 | 0.725 57 | 0.715 54 | 0.549 70 | 0.803 54 | 0.444 55 | 0.647 70 | 0.493 70 | 0.495 63 | 0.941 61 | 0.409 70 | 0.000 100 | 0.424 82 | 0.544 70 | 0.598 69 | 0.703 62 | 0.522 74 | 0.912 25 | 0.792 59 | 0.520 79 | |
LAP-D | 0.594 73 | 0.720 62 | 0.692 71 | 0.637 80 | 0.456 83 | 0.773 70 | 0.391 75 | 0.730 52 | 0.587 43 | 0.445 76 | 0.940 64 | 0.381 77 | 0.288 27 | 0.434 80 | 0.453 81 | 0.591 72 | 0.649 77 | 0.581 52 | 0.777 78 | 0.749 78 | 0.610 49 | |
DPC | 0.592 74 | 0.720 62 | 0.700 65 | 0.602 84 | 0.480 79 | 0.762 75 | 0.380 78 | 0.713 58 | 0.585 46 | 0.437 77 | 0.940 64 | 0.369 79 | 0.288 27 | 0.434 80 | 0.509 77 | 0.590 74 | 0.639 82 | 0.567 57 | 0.772 79 | 0.755 76 | 0.592 59 | |
Francis Engelmann, Theodora Kontogianni, Bastian Leibe: Dilated Point Convolutions: On the Receptive Field Size of Point Convolutions on 3D Point Clouds. ICRA 2020
CCRFNet | 0.589 75 | 0.766 45 | 0.659 81 | 0.683 63 | 0.470 82 | 0.740 80 | 0.387 76 | 0.620 75 | 0.490 72 | 0.476 68 | 0.922 82 | 0.355 82 | 0.245 52 | 0.511 67 | 0.511 76 | 0.571 77 | 0.643 80 | 0.493 81 | 0.872 55 | 0.762 73 | 0.600 55 | |
ROSMRF | 0.580 76 | 0.772 40 | 0.707 61 | 0.681 64 | 0.563 66 | 0.764 73 | 0.362 80 | 0.515 87 | 0.465 80 | 0.465 71 | 0.936 69 | 0.427 64 | 0.207 62 | 0.438 78 | 0.577 68 | 0.536 80 | 0.675 72 | 0.486 82 | 0.723 85 | 0.779 66 | 0.524 78 | |
SD-DETR | 0.576 77 | 0.746 51 | 0.609 90 | 0.445 95 | 0.517 75 | 0.643 91 | 0.366 79 | 0.714 57 | 0.456 81 | 0.468 70 | 0.870 93 | 0.432 58 | 0.264 43 | 0.558 56 | 0.674 58 | 0.586 75 | 0.688 67 | 0.482 83 | 0.739 83 | 0.733 81 | 0.537 75 | |
SQN_0.1% | 0.569 78 | 0.676 72 | 0.696 68 | 0.657 72 | 0.497 76 | 0.779 68 | 0.424 62 | 0.548 83 | 0.515 65 | 0.376 84 | 0.902 91 | 0.422 66 | 0.357 4 | 0.379 85 | 0.456 80 | 0.596 71 | 0.659 76 | 0.544 66 | 0.685 88 | 0.665 92 | 0.556 71 | |
TextureNet | | 0.566 79 | 0.672 74 | 0.664 79 | 0.671 68 | 0.494 77 | 0.719 81 | 0.445 52 | 0.678 65 | 0.411 89 | 0.396 82 | 0.935 70 | 0.356 81 | 0.225 56 | 0.412 83 | 0.535 72 | 0.565 78 | 0.636 83 | 0.464 85 | 0.794 77 | 0.680 89 | 0.568 65 |
Jingwei Huang, Haotian Zhang, Li Yi, Thomas Funkhouser, Matthias Niessner, Leonidas Guibas: TextureNet: Consistent Local Parametrizations for Learning from High-Resolution Signals on Meshes. CVPR
DVVNet | 0.562 80 | 0.648 76 | 0.700 65 | 0.770 40 | 0.586 59 | 0.687 85 | 0.333 84 | 0.650 68 | 0.514 66 | 0.475 69 | 0.906 88 | 0.359 80 | 0.223 57 | 0.340 87 | 0.442 82 | 0.422 91 | 0.668 74 | 0.501 78 | 0.708 86 | 0.779 66 | 0.534 76 | |
Pointnet++ & Feature | | 0.557 81 | 0.735 56 | 0.661 80 | 0.686 62 | 0.491 78 | 0.744 79 | 0.392 73 | 0.539 84 | 0.451 82 | 0.375 85 | 0.946 47 | 0.376 78 | 0.205 64 | 0.403 84 | 0.356 87 | 0.553 79 | 0.643 80 | 0.497 79 | 0.824 73 | 0.756 75 | 0.515 80 |
GMLPs | 0.538 82 | 0.495 92 | 0.693 70 | 0.647 76 | 0.471 81 | 0.793 60 | 0.300 87 | 0.477 88 | 0.505 67 | 0.358 86 | 0.903 90 | 0.327 85 | 0.081 93 | 0.472 76 | 0.529 74 | 0.448 89 | 0.710 55 | 0.509 75 | 0.746 81 | 0.737 80 | 0.554 72 | |
PanopticFusion-label | 0.529 83 | 0.491 93 | 0.688 74 | 0.604 83 | 0.386 88 | 0.632 92 | 0.225 97 | 0.705 61 | 0.434 86 | 0.293 92 | 0.815 95 | 0.348 83 | 0.241 53 | 0.499 71 | 0.669 59 | 0.507 82 | 0.649 77 | 0.442 91 | 0.796 76 | 0.602 95 | 0.561 68 | |
Gaku Narita, Takashi Seno, Tomoya Ishikawa, Yohsuke Kaji: PanopticFusion: Online Volumetric Semantic Mapping at the Level of Stuff and Things. IROS 2019 (to appear)
subcloud_weak | 0.516 84 | 0.676 72 | 0.591 93 | 0.609 81 | 0.442 84 | 0.774 69 | 0.335 83 | 0.597 76 | 0.422 88 | 0.357 87 | 0.932 76 | 0.341 84 | 0.094 92 | 0.298 89 | 0.528 75 | 0.473 87 | 0.676 71 | 0.495 80 | 0.602 94 | 0.721 84 | 0.349 95 | |
Online SegFusion | 0.515 85 | 0.607 83 | 0.644 84 | 0.579 86 | 0.434 85 | 0.630 93 | 0.353 81 | 0.628 74 | 0.440 84 | 0.410 80 | 0.762 98 | 0.307 87 | 0.167 80 | 0.520 65 | 0.403 85 | 0.516 81 | 0.565 87 | 0.447 89 | 0.678 89 | 0.701 86 | 0.514 81 | |
Davide Menini, Suryansh Kumar, Martin R. Oswald, Erik Sandstroem, Cristian Sminchisescu, Luc van Gool: A Real-Time Learning Framework for Joint 3D Reconstruction and Semantic Segmentation. Robotics and Automation Letters Submission
3DMV, FTSDF | 0.501 86 | 0.558 88 | 0.608 91 | 0.424 97 | 0.478 80 | 0.690 84 | 0.246 93 | 0.586 78 | 0.468 78 | 0.450 73 | 0.911 86 | 0.394 74 | 0.160 83 | 0.438 78 | 0.212 94 | 0.432 90 | 0.541 92 | 0.475 84 | 0.742 82 | 0.727 82 | 0.477 86 | |
PCNN | 0.498 87 | 0.559 87 | 0.644 84 | 0.560 88 | 0.420 87 | 0.711 83 | 0.229 95 | 0.414 89 | 0.436 85 | 0.352 88 | 0.941 61 | 0.324 86 | 0.155 84 | 0.238 94 | 0.387 86 | 0.493 83 | 0.529 93 | 0.509 75 | 0.813 75 | 0.751 77 | 0.504 82 | |
3DMV | 0.484 88 | 0.484 94 | 0.538 95 | 0.643 78 | 0.424 86 | 0.606 96 | 0.310 85 | 0.574 80 | 0.433 87 | 0.378 83 | 0.796 96 | 0.301 88 | 0.214 61 | 0.537 63 | 0.208 95 | 0.472 88 | 0.507 96 | 0.413 94 | 0.693 87 | 0.602 95 | 0.539 74 | |
Angela Dai, Matthias Niessner: 3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation. ECCV'18
PointCNN with RGB | | 0.458 89 | 0.577 86 | 0.611 89 | 0.356 99 | 0.321 96 | 0.715 82 | 0.299 89 | 0.376 93 | 0.328 96 | 0.319 90 | 0.944 56 | 0.285 90 | 0.164 81 | 0.216 97 | 0.229 92 | 0.484 85 | 0.545 91 | 0.456 87 | 0.755 80 | 0.709 85 | 0.475 87 |
Yangyan Li, Rui Bu, Mingchao Sun, Baoquan Chen: PointCNN. NeurIPS 2018
FCPN | | 0.447 90 | 0.679 71 | 0.604 92 | 0.578 87 | 0.380 89 | 0.682 86 | 0.291 90 | 0.106 99 | 0.483 75 | 0.258 97 | 0.920 83 | 0.258 94 | 0.025 97 | 0.231 96 | 0.325 88 | 0.480 86 | 0.560 89 | 0.463 86 | 0.725 84 | 0.666 91 | 0.231 99 |
Dario Rethage, Johanna Wald, Jürgen Sturm, Nassir Navab, Federico Tombari: Fully-Convolutional Point Networks for Large-Scale Point Clouds. ECCV 2018
DGCNN_reproduce | | 0.446 91 | 0.474 95 | 0.623 87 | 0.463 93 | 0.366 91 | 0.651 89 | 0.310 85 | 0.389 92 | 0.349 94 | 0.330 89 | 0.937 67 | 0.271 92 | 0.126 89 | 0.285 90 | 0.224 93 | 0.350 96 | 0.577 86 | 0.445 90 | 0.625 92 | 0.723 83 | 0.394 91 |
Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma, Michael M. Bronstein, Justin M. Solomon: Dynamic Graph CNN for Learning on Point Clouds. TOG 2019
SurfaceConvPF | 0.442 92 | 0.505 91 | 0.622 88 | 0.380 98 | 0.342 94 | 0.654 88 | 0.227 96 | 0.397 91 | 0.367 92 | 0.276 94 | 0.924 80 | 0.240 95 | 0.198 69 | 0.359 86 | 0.262 90 | 0.366 93 | 0.581 85 | 0.435 92 | 0.640 91 | 0.668 90 | 0.398 90 | |
Hao Pan, Shilin Liu, Yang Liu, Xin Tong: Convolutional Neural Networks on 3D Surfaces Using Parallel Frames.
PNET2 | 0.442 92 | 0.548 89 | 0.548 94 | 0.597 85 | 0.363 92 | 0.628 94 | 0.300 87 | 0.292 94 | 0.374 91 | 0.307 91 | 0.881 92 | 0.268 93 | 0.186 73 | 0.238 94 | 0.204 96 | 0.407 92 | 0.506 97 | 0.449 88 | 0.667 90 | 0.620 94 | 0.462 89 | |
Tangent Convolutions | | 0.438 94 | 0.437 97 | 0.646 83 | 0.474 92 | 0.369 90 | 0.645 90 | 0.353 81 | 0.258 96 | 0.282 98 | 0.279 93 | 0.918 85 | 0.298 89 | 0.147 88 | 0.283 91 | 0.294 89 | 0.487 84 | 0.562 88 | 0.427 93 | 0.619 93 | 0.633 93 | 0.352 94 |
Maxim Tatarchenko, Jaesik Park, Vladlen Koltun, Qian-Yi Zhou: Tangent Convolutions for Dense Prediction in 3D. CVPR 2018
3DWSSS | 0.425 95 | 0.525 90 | 0.647 82 | 0.522 89 | 0.324 95 | 0.488 99 | 0.077 100 | 0.712 59 | 0.353 93 | 0.401 81 | 0.636 100 | 0.281 91 | 0.176 76 | 0.340 87 | 0.565 69 | 0.175 100 | 0.551 90 | 0.398 95 | 0.370 100 | 0.602 95 | 0.361 93 | |
SPLAT Net | | 0.393 96 | 0.472 96 | 0.511 96 | 0.606 82 | 0.311 97 | 0.656 87 | 0.245 94 | 0.405 90 | 0.328 96 | 0.197 98 | 0.927 79 | 0.227 97 | 0.000 100 | 0.001 101 | 0.249 91 | 0.271 99 | 0.510 94 | 0.383 97 | 0.593 95 | 0.699 87 | 0.267 97 |
Hang Su, Varun Jampani, Deqing Sun, Subhransu Maji, Evangelos Kalogerakis, Ming-Hsuan Yang, Jan Kautz: SPLATNet: Sparse Lattice Networks for Point Cloud Processing. CVPR 2018
ScanNet+FTSDF | 0.383 97 | 0.297 99 | 0.491 97 | 0.432 96 | 0.358 93 | 0.612 95 | 0.274 91 | 0.116 98 | 0.411 89 | 0.265 95 | 0.904 89 | 0.229 96 | 0.079 94 | 0.250 92 | 0.185 97 | 0.320 97 | 0.510 94 | 0.385 96 | 0.548 96 | 0.597 98 | 0.394 91 | |
PointNet++ | | 0.339 98 | 0.584 85 | 0.478 98 | 0.458 94 | 0.256 99 | 0.360 100 | 0.250 92 | 0.247 97 | 0.278 99 | 0.261 96 | 0.677 99 | 0.183 98 | 0.117 90 | 0.212 98 | 0.145 99 | 0.364 94 | 0.346 100 | 0.232 100 | 0.548 96 | 0.523 99 | 0.252 98 |
Charles R. Qi, Li Yi, Hao Su, Leonidas J. Guibas: PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space.
SSC-UNet | | 0.308 99 | 0.353 98 | 0.290 100 | 0.278 100 | 0.166 100 | 0.553 97 | 0.169 99 | 0.286 95 | 0.147 100 | 0.148 100 | 0.908 87 | 0.182 99 | 0.064 95 | 0.023 100 | 0.018 101 | 0.354 95 | 0.363 98 | 0.345 98 | 0.546 98 | 0.685 88 | 0.278 96 |
ScanNet | | 0.306 100 | 0.203 100 | 0.366 99 | 0.501 90 | 0.311 97 | 0.524 98 | 0.211 98 | 0.002 101 | 0.342 95 | 0.189 99 | 0.786 97 | 0.145 100 | 0.102 91 | 0.245 93 | 0.152 98 | 0.318 98 | 0.348 99 | 0.300 99 | 0.460 99 | 0.437 100 | 0.182 100 |
Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, Matthias Nießner: ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes. CVPR'17
ERROR | 0.054 101 | 0.000 101 | 0.041 101 | 0.172 101 | 0.030 101 | 0.062 101 | 0.001 101 | 0.035 100 | 0.004 101 | 0.051 101 | 0.143 101 | 0.019 101 | 0.003 99 | 0.041 99 | 0.050 100 | 0.003 101 | 0.054 101 | 0.018 101 | 0.005 101 | 0.264 101 | 0.082 101 | |