3D Semantic Label Benchmark
The 3D semantic labeling task involves predicting a semantic label for every vertex of a 3D scan mesh.
Evaluation and metrics

Our evaluation ranks all methods according to the PASCAL VOC intersection-over-union metric (IoU): IoU = TP/(TP+FP+FN), where TP, FP, and FN are the numbers of true positive, false positive, and false negative vertices, respectively. Predicted labels are evaluated per vertex over the respective 3D scan mesh; for 3D approaches that operate on other representations, such as voxel grids or point clouds, the predicted labels should be mapped onto the mesh vertices (an example of mapping grid predictions to mesh vertices is provided in the evaluation helpers).
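The two steps above can be sketched as follows. This is a minimal NumPy sketch, not the official evaluation script; the function names are illustrative, and the nearest-neighbor transfer is brute force for clarity (a KD-tree would be used at scale):

```python
import numpy as np

def per_class_iou(gt, pred, num_classes):
    """IoU = TP / (TP + FP + FN) per class, from a confusion matrix.

    gt, pred: 1-D integer label arrays of equal length (one entry per vertex).
    Returns an array of length num_classes (NaN for classes absent from both).
    """
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(conf, (gt, pred), 1)          # rows: ground truth, cols: prediction
    tp = np.diag(conf).astype(np.float64)
    fp = conf.sum(axis=0) - tp              # predicted as c but labeled otherwise
    fn = conf.sum(axis=1) - tp              # labeled c but predicted otherwise
    denom = tp + fp + fn
    return np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)

def transfer_labels(source_xyz, source_labels, vertex_xyz):
    """Map per-point (or voxel-center) predictions onto mesh vertices by
    assigning each vertex the label of its nearest source location."""
    d2 = ((vertex_xyz[:, None, :] - source_xyz[None, :, :]) ** 2).sum(axis=-1)
    return source_labels[d2.argmin(axis=1)]
```

The "avg IoU" column in the table below corresponds to the mean of the per-class values over the 20 evaluated classes, e.g. `float(np.nanmean(per_class_iou(gt, pred, 20)))`.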
This table lists the benchmark results for the 3D semantic label scenario. In each class column, the first number is the IoU score and the second is the method's rank for that class.
Method | Info | avg IoU | bathtub | bed | bookshelf | cabinet | chair | counter | curtain | desk | door | floor | otherfurniture | picture | refrigerator | shower curtain | sink | sofa | table | toilet | wall | window |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Mix3D | ![]() | 0.781 1 | 0.964 1 | 0.855 1 | 0.843 12 | 0.781 3 | 0.858 9 | 0.575 4 | 0.831 24 | 0.685 9 | 0.714 2 | 0.979 1 | 0.594 4 | 0.310 20 | 0.801 1 | 0.892 11 | 0.841 2 | 0.819 3 | 0.723 3 | 0.940 9 | 0.887 2 | 0.725 17 |
Alexey Nekrasov, Jonas Schult, Or Litany, Bastian Leibe, Francis Engelmann: Mix3D: Out-of-Context Data Augmentation for 3D Scenes. 3DV 2021 (Oral) | ||||||||||||||||||||||
Swin3D | ![]() | 0.779 2 | 0.861 16 | 0.818 11 | 0.836 15 | 0.790 1 | 0.875 2 | 0.576 3 | 0.905 3 | 0.704 3 | 0.739 1 | 0.969 7 | 0.611 1 | 0.349 6 | 0.756 15 | 0.958 1 | 0.702 34 | 0.805 12 | 0.708 6 | 0.916 24 | 0.898 1 | 0.801 1 |
PPT-SpUNet-Joint | 0.766 3 | 0.932 2 | 0.794 25 | 0.829 17 | 0.751 13 | 0.854 11 | 0.540 14 | 0.903 4 | 0.630 26 | 0.672 8 | 0.963 9 | 0.565 15 | 0.357 4 | 0.788 2 | 0.900 7 | 0.737 19 | 0.802 13 | 0.685 12 | 0.950 2 | 0.887 2 | 0.780 2 | |
OctFormer | ![]() | 0.766 3 | 0.925 3 | 0.808 17 | 0.849 6 | 0.786 2 | 0.846 17 | 0.566 6 | 0.876 11 | 0.690 7 | 0.674 7 | 0.960 11 | 0.576 11 | 0.226 56 | 0.753 17 | 0.904 5 | 0.777 6 | 0.815 6 | 0.722 4 | 0.923 20 | 0.877 8 | 0.776 4 |
Peng-Shuai Wang: OctFormer: Octree-based Transformers for 3D Point Clouds. SIGGRAPH 2023 | ||||||||||||||||||||||
CU-Hybrid Net | 0.764 5 | 0.924 4 | 0.819 9 | 0.840 13 | 0.757 9 | 0.853 12 | 0.580 1 | 0.848 18 | 0.709 2 | 0.643 16 | 0.958 14 | 0.587 8 | 0.295 26 | 0.753 17 | 0.884 15 | 0.758 13 | 0.815 6 | 0.725 2 | 0.927 19 | 0.867 14 | 0.743 9 | |
OccuSeg+Semantic | 0.764 5 | 0.758 50 | 0.796 23 | 0.839 14 | 0.746 15 | 0.907 1 | 0.562 7 | 0.850 17 | 0.680 11 | 0.672 8 | 0.978 2 | 0.610 2 | 0.335 10 | 0.777 5 | 0.819 35 | 0.847 1 | 0.830 1 | 0.691 10 | 0.972 1 | 0.885 4 | 0.727 15 | |
O-CNN | ![]() | 0.762 7 | 0.924 4 | 0.823 6 | 0.844 11 | 0.770 5 | 0.852 13 | 0.577 2 | 0.847 20 | 0.711 1 | 0.640 20 | 0.958 14 | 0.592 5 | 0.217 62 | 0.762 12 | 0.888 12 | 0.758 13 | 0.813 8 | 0.726 1 | 0.932 17 | 0.868 13 | 0.744 8 |
Peng-Shuai Wang, Yang Liu, Yu-Xiao Guo, Chun-Yu Sun, Xin Tong: O-CNN: Octree-based Convolutional Neural Networks for 3D Shape Analysis. SIGGRAPH 2017 | ||||||||||||||||||||||
OA-CNN-L_ScanNet20 | 0.756 8 | 0.783 38 | 0.826 5 | 0.858 4 | 0.776 4 | 0.837 24 | 0.548 11 | 0.896 7 | 0.649 19 | 0.675 6 | 0.962 10 | 0.586 9 | 0.335 10 | 0.771 8 | 0.802 39 | 0.770 9 | 0.787 25 | 0.691 10 | 0.936 12 | 0.880 7 | 0.761 6 | |
PointTransformerV2 | 0.752 9 | 0.742 57 | 0.809 16 | 0.872 1 | 0.758 8 | 0.860 8 | 0.552 9 | 0.891 8 | 0.610 34 | 0.687 3 | 0.960 11 | 0.559 18 | 0.304 23 | 0.766 10 | 0.926 3 | 0.767 10 | 0.797 17 | 0.644 26 | 0.942 7 | 0.876 11 | 0.722 19 | |
Xiaoyang Wu, Yixing Lao, Li Jiang, Xihui Liu, Hengshuang Zhao: Point Transformer V2: Grouped Vector Attention and Partition-based Pooling. NeurIPS 2022 | ||||||||||||||||||||||
DMF-Net | 0.752 9 | 0.906 8 | 0.793 27 | 0.802 33 | 0.689 30 | 0.825 35 | 0.556 8 | 0.867 13 | 0.681 10 | 0.602 34 | 0.960 11 | 0.555 20 | 0.365 3 | 0.779 4 | 0.859 20 | 0.747 16 | 0.795 21 | 0.717 5 | 0.917 23 | 0.856 22 | 0.764 5 | |
PointConvFormer | 0.749 11 | 0.793 35 | 0.790 28 | 0.807 28 | 0.750 14 | 0.856 10 | 0.524 20 | 0.881 10 | 0.588 45 | 0.642 19 | 0.977 4 | 0.591 6 | 0.274 37 | 0.781 3 | 0.929 2 | 0.804 3 | 0.796 18 | 0.642 27 | 0.947 4 | 0.885 4 | 0.715 22 | |
Wenxuan Wu, Qi Shan, Li Fuxin: PointConvFormer: Revenge of the Point-based Convolution. | ||||||||||||||||||||||
BPNet | ![]() | 0.749 11 | 0.909 6 | 0.818 11 | 0.811 25 | 0.752 11 | 0.839 23 | 0.485 37 | 0.842 21 | 0.673 12 | 0.644 15 | 0.957 17 | 0.528 29 | 0.305 22 | 0.773 7 | 0.859 20 | 0.788 4 | 0.818 5 | 0.693 9 | 0.916 24 | 0.856 22 | 0.723 18 |
Wenbo Hu, Hengshuang Zhao, Li Jiang, Jiaya Jia, Tien-Tsin Wong: Bidirectional Projection Network for Cross Dimension Scene Understanding. CVPR 2021 (Oral) | ||||||||||||||||||||||
MSP | 0.748 13 | 0.623 83 | 0.804 19 | 0.859 3 | 0.745 16 | 0.824 37 | 0.501 28 | 0.912 2 | 0.690 7 | 0.685 4 | 0.956 18 | 0.567 14 | 0.320 16 | 0.768 9 | 0.918 4 | 0.720 25 | 0.802 13 | 0.676 16 | 0.921 21 | 0.881 6 | 0.779 3 | |
StratifiedFormer | ![]() | 0.747 14 | 0.901 9 | 0.803 20 | 0.845 10 | 0.757 9 | 0.846 17 | 0.512 24 | 0.825 27 | 0.696 6 | 0.645 14 | 0.956 18 | 0.576 11 | 0.262 47 | 0.744 22 | 0.861 19 | 0.742 17 | 0.770 34 | 0.705 7 | 0.899 37 | 0.860 19 | 0.734 10 |
Xin Lai*, Jianhui Liu*, Li Jiang, Liwei Wang, Hengshuang Zhao, Shu Liu, Xiaojuan Qi, Jiaya Jia: Stratified Transformer for 3D Point Cloud Segmentation. CVPR 2022 | ||||||||||||||||||||||
VMNet | ![]() | 0.746 15 | 0.870 14 | 0.838 2 | 0.858 4 | 0.729 21 | 0.850 15 | 0.501 28 | 0.874 12 | 0.587 46 | 0.658 12 | 0.956 18 | 0.564 16 | 0.299 24 | 0.765 11 | 0.900 7 | 0.716 28 | 0.812 9 | 0.631 32 | 0.939 10 | 0.858 20 | 0.709 23 |
Zeyu Hu, Xuyang Bai, Jiaxiang Shang, Runze Zhang, Jiayu Dong, Xin Wang, Guangyuan Sun, Hongbo Fu, Chiew-Lan Tai: VMNet: Voxel-Mesh Network for Geodesic-Aware 3D Semantic Segmentation. ICCV 2021 (Oral) | ||||||||||||||||||||||
Virtual MVFusion | 0.746 15 | 0.771 44 | 0.819 9 | 0.848 8 | 0.702 28 | 0.865 7 | 0.397 74 | 0.899 5 | 0.699 4 | 0.664 11 | 0.948 45 | 0.588 7 | 0.330 12 | 0.746 21 | 0.851 27 | 0.764 11 | 0.796 18 | 0.704 8 | 0.935 13 | 0.866 15 | 0.728 13 | |
Abhijit Kundu, Xiaoqi Yin, Alireza Fathi, David Ross, Brian Brewington, Thomas Funkhouser, Caroline Pantofaru: Virtual Multi-view Fusion for 3D Semantic Segmentation. ECCV 2020 | ||||||||||||||||||||||
Retro-FPN | 0.744 17 | 0.842 22 | 0.800 21 | 0.767 45 | 0.740 17 | 0.836 26 | 0.541 13 | 0.914 1 | 0.672 13 | 0.626 23 | 0.958 14 | 0.552 21 | 0.272 39 | 0.777 5 | 0.886 14 | 0.696 35 | 0.801 15 | 0.674 18 | 0.941 8 | 0.858 20 | 0.717 20 | |
EQ-Net | 0.743 18 | 0.620 84 | 0.799 22 | 0.849 6 | 0.730 20 | 0.822 39 | 0.493 35 | 0.897 6 | 0.664 14 | 0.681 5 | 0.955 21 | 0.562 17 | 0.378 1 | 0.760 13 | 0.903 6 | 0.738 18 | 0.801 15 | 0.673 19 | 0.907 29 | 0.877 8 | 0.745 7 | |
Zetong Yang*, Li Jiang*, Yanan Sun, Bernt Schiele, Jiaya Jia: A Unified Query-based Paradigm for Point Cloud Understanding. CVPR 2022 | ||||||||||||||||||||||
SAT | 0.742 19 | 0.860 17 | 0.765 39 | 0.819 20 | 0.769 6 | 0.848 16 | 0.533 16 | 0.829 25 | 0.663 15 | 0.631 22 | 0.955 21 | 0.586 9 | 0.274 37 | 0.753 17 | 0.896 9 | 0.729 20 | 0.760 41 | 0.666 21 | 0.921 21 | 0.855 24 | 0.733 11 | |
LRPNet | 0.742 19 | 0.816 30 | 0.806 18 | 0.807 28 | 0.752 11 | 0.828 33 | 0.575 4 | 0.839 23 | 0.699 4 | 0.637 21 | 0.954 26 | 0.520 31 | 0.320 16 | 0.755 16 | 0.834 31 | 0.760 12 | 0.772 31 | 0.676 16 | 0.915 26 | 0.862 17 | 0.717 20 | |
TXC | 0.740 21 | 0.842 22 | 0.832 4 | 0.805 32 | 0.715 25 | 0.846 17 | 0.473 39 | 0.885 9 | 0.615 30 | 0.671 10 | 0.971 6 | 0.547 22 | 0.320 16 | 0.697 26 | 0.799 41 | 0.777 6 | 0.819 3 | 0.682 14 | 0.946 5 | 0.871 12 | 0.696 27 | |
LargeKernel3D | 0.739 22 | 0.909 6 | 0.820 8 | 0.806 30 | 0.740 17 | 0.852 13 | 0.545 12 | 0.826 26 | 0.594 44 | 0.643 16 | 0.955 21 | 0.541 24 | 0.263 46 | 0.723 24 | 0.858 22 | 0.775 8 | 0.767 35 | 0.678 15 | 0.933 15 | 0.848 29 | 0.694 28 | |
Yukang Chen*, Jianhui Liu*, Xiangyu Zhang, Xiaojuan Qi, Jiaya Jia: LargeKernel3D: Scaling up Kernels in 3D Sparse CNNs. CVPR 2023 | ||||||||||||||||||||||
MinkowskiNet | ![]() | 0.736 23 | 0.859 18 | 0.818 11 | 0.832 16 | 0.709 26 | 0.840 22 | 0.521 22 | 0.853 16 | 0.660 17 | 0.643 16 | 0.951 35 | 0.544 23 | 0.286 31 | 0.731 23 | 0.893 10 | 0.675 43 | 0.772 31 | 0.683 13 | 0.874 55 | 0.852 27 | 0.727 15 |
C. Choy, J. Gwak, S. Savarese: 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. CVPR 2019 | ||||||||||||||||||||||
IPCA | 0.731 24 | 0.890 10 | 0.837 3 | 0.864 2 | 0.726 22 | 0.873 3 | 0.530 19 | 0.824 28 | 0.489 77 | 0.647 13 | 0.978 2 | 0.609 3 | 0.336 9 | 0.624 41 | 0.733 49 | 0.758 13 | 0.776 29 | 0.570 56 | 0.949 3 | 0.877 8 | 0.728 13 | |
PointTransformer++ | 0.725 25 | 0.727 65 | 0.811 15 | 0.819 20 | 0.765 7 | 0.841 21 | 0.502 27 | 0.814 33 | 0.621 29 | 0.623 25 | 0.955 21 | 0.556 19 | 0.284 32 | 0.620 42 | 0.866 17 | 0.781 5 | 0.757 44 | 0.648 24 | 0.932 17 | 0.862 17 | 0.709 23 | |
SparseConvNet | 0.725 25 | 0.647 80 | 0.821 7 | 0.846 9 | 0.721 23 | 0.869 4 | 0.533 16 | 0.754 47 | 0.603 40 | 0.614 27 | 0.955 21 | 0.572 13 | 0.325 14 | 0.710 25 | 0.870 16 | 0.724 23 | 0.823 2 | 0.628 33 | 0.934 14 | 0.865 16 | 0.683 31 | |
MatchingNet | 0.724 27 | 0.812 32 | 0.812 14 | 0.810 26 | 0.735 19 | 0.834 28 | 0.495 34 | 0.860 15 | 0.572 52 | 0.602 34 | 0.954 26 | 0.512 33 | 0.280 34 | 0.757 14 | 0.845 29 | 0.725 22 | 0.780 27 | 0.606 42 | 0.937 11 | 0.851 28 | 0.700 26 | |
INS-Conv-semantic | 0.717 28 | 0.751 53 | 0.759 42 | 0.812 24 | 0.704 27 | 0.868 5 | 0.537 15 | 0.842 21 | 0.609 36 | 0.608 30 | 0.953 29 | 0.534 26 | 0.293 27 | 0.616 43 | 0.864 18 | 0.719 27 | 0.793 22 | 0.640 28 | 0.933 15 | 0.845 33 | 0.663 36 | |
PointMetaBase | 0.714 29 | 0.835 24 | 0.785 29 | 0.821 18 | 0.684 32 | 0.846 17 | 0.531 18 | 0.865 14 | 0.614 31 | 0.596 38 | 0.953 29 | 0.500 36 | 0.246 52 | 0.674 27 | 0.888 12 | 0.692 36 | 0.764 37 | 0.624 34 | 0.849 70 | 0.844 34 | 0.675 33 | |
contrastBoundary | ![]() | 0.705 30 | 0.769 47 | 0.775 34 | 0.809 27 | 0.687 31 | 0.820 42 | 0.439 62 | 0.812 34 | 0.661 16 | 0.591 40 | 0.945 53 | 0.515 32 | 0.171 80 | 0.633 38 | 0.856 23 | 0.720 25 | 0.796 18 | 0.668 20 | 0.889 44 | 0.847 30 | 0.689 29 |
Liyao Tang, Yibing Zhan, Zhe Chen, Baosheng Yu, Dacheng Tao: Contrastive Boundary Learning for Point Cloud Segmentation. CVPR 2022 | ||||||||||||||||||||||
RFCR | 0.702 31 | 0.889 11 | 0.745 51 | 0.813 23 | 0.672 35 | 0.818 46 | 0.493 35 | 0.815 32 | 0.623 27 | 0.610 28 | 0.947 47 | 0.470 46 | 0.249 51 | 0.594 46 | 0.848 28 | 0.705 32 | 0.779 28 | 0.646 25 | 0.892 42 | 0.823 40 | 0.611 50 | |
Jingyu Gong, Jiachen Xu, Xin Tan, Haichuan Song, Yanyun Qu, Yuan Xie, Lizhuang Ma: Omni-Supervised Point Cloud Segmentation via Gradual Receptive Field Component Reasoning. CVPR 2021 | ||||||||||||||||||||||
One Thing One Click | 0.701 32 | 0.825 28 | 0.796 23 | 0.723 52 | 0.716 24 | 0.832 29 | 0.433 64 | 0.816 30 | 0.634 24 | 0.609 29 | 0.969 7 | 0.418 71 | 0.344 7 | 0.559 58 | 0.833 32 | 0.715 29 | 0.808 11 | 0.560 61 | 0.902 34 | 0.847 30 | 0.680 32 | |
JSENet | ![]() | 0.699 33 | 0.881 13 | 0.762 40 | 0.821 18 | 0.667 36 | 0.800 59 | 0.522 21 | 0.792 39 | 0.613 32 | 0.607 31 | 0.935 73 | 0.492 38 | 0.205 67 | 0.576 51 | 0.853 25 | 0.691 37 | 0.758 43 | 0.652 23 | 0.872 58 | 0.828 37 | 0.649 40 |
Zeyu Hu, Mingmin Zhen, Xuyang Bai, Hongbo Fu, Chiew-Lan Tai: JSENet: Joint Semantic Segmentation and Edge Detection Network for 3D Point Clouds. ECCV 2020 | ||||||||||||||||||||||
One-Thing-One-Click | 0.693 34 | 0.743 56 | 0.794 25 | 0.655 76 | 0.684 32 | 0.822 39 | 0.497 33 | 0.719 57 | 0.622 28 | 0.617 26 | 0.977 4 | 0.447 58 | 0.339 8 | 0.750 20 | 0.664 65 | 0.703 33 | 0.790 24 | 0.596 46 | 0.946 5 | 0.855 24 | 0.647 41 | |
Zhengzhe Liu, Xiaojuan Qi, Chi-Wing Fu: One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation. CVPR 2021 | ||||||||||||||||||||||
PicassoNet-II | ![]() | 0.692 35 | 0.732 61 | 0.772 35 | 0.786 37 | 0.677 34 | 0.866 6 | 0.517 23 | 0.848 18 | 0.509 69 | 0.626 23 | 0.952 33 | 0.536 25 | 0.225 58 | 0.545 64 | 0.704 56 | 0.689 40 | 0.810 10 | 0.564 60 | 0.903 33 | 0.854 26 | 0.729 12 |
Huan Lei, Naveed Akhtar, Mubarak Shah, Ajmal Mian: Geometric Feature Learning for 3D Meshes. | ||||||||||||||||||||||
Feature_GeometricNet | ![]() | 0.690 36 | 0.884 12 | 0.754 46 | 0.795 36 | 0.647 42 | 0.818 46 | 0.422 66 | 0.802 37 | 0.612 33 | 0.604 32 | 0.945 53 | 0.462 49 | 0.189 75 | 0.563 57 | 0.853 25 | 0.726 21 | 0.765 36 | 0.632 31 | 0.904 31 | 0.821 43 | 0.606 54 |
Kangcheng Liu, Ben M. Chen: arXiv preprint, https://arxiv.org/abs/2012.09439 | ||||||||||||||||||||||
FusionNet | 0.688 37 | 0.704 70 | 0.741 55 | 0.754 49 | 0.656 38 | 0.829 31 | 0.501 28 | 0.741 52 | 0.609 36 | 0.548 47 | 0.950 39 | 0.522 30 | 0.371 2 | 0.633 38 | 0.756 44 | 0.715 29 | 0.771 33 | 0.623 35 | 0.861 66 | 0.814 45 | 0.658 37 | |
Feihu Zhang, Jin Fang, Benjamin Wah, Philip Torr: Deep FusionNet for Point Cloud Semantic Segmentation. ECCV 2020 | ||||||||||||||||||||||
Feature-Geometry Net | ![]() | 0.685 38 | 0.866 15 | 0.748 48 | 0.819 20 | 0.645 44 | 0.794 62 | 0.450 51 | 0.802 37 | 0.587 46 | 0.604 32 | 0.945 53 | 0.464 48 | 0.201 70 | 0.554 60 | 0.840 30 | 0.723 24 | 0.732 53 | 0.602 44 | 0.907 29 | 0.822 42 | 0.603 57 |
KP-FCNN | 0.684 39 | 0.847 21 | 0.758 44 | 0.784 39 | 0.647 42 | 0.814 49 | 0.473 39 | 0.772 42 | 0.605 38 | 0.594 39 | 0.935 73 | 0.450 56 | 0.181 78 | 0.587 47 | 0.805 38 | 0.690 38 | 0.785 26 | 0.614 38 | 0.882 48 | 0.819 44 | 0.632 46 | |
H. Thomas, C. Qi, J. Deschaud, B. Marcotegui, F. Goulette, L. Guibas: KPConv: Flexible and Deformable Convolution for Point Clouds. ICCV 2019 | ||||||||||||||||||||||
DGNet | 0.684 39 | 0.712 69 | 0.784 30 | 0.782 41 | 0.658 37 | 0.835 27 | 0.499 32 | 0.823 29 | 0.641 21 | 0.597 37 | 0.950 39 | 0.487 39 | 0.281 33 | 0.575 52 | 0.619 68 | 0.647 56 | 0.764 37 | 0.620 37 | 0.871 61 | 0.846 32 | 0.688 30 | |
VACNN++ | 0.684 39 | 0.728 64 | 0.757 45 | 0.776 42 | 0.690 29 | 0.804 56 | 0.464 45 | 0.816 30 | 0.577 51 | 0.587 41 | 0.945 53 | 0.508 35 | 0.276 36 | 0.671 28 | 0.710 54 | 0.663 48 | 0.750 47 | 0.589 51 | 0.881 49 | 0.832 36 | 0.653 39 | |
Superpoint Network | 0.683 42 | 0.851 20 | 0.728 59 | 0.800 35 | 0.653 40 | 0.806 54 | 0.468 42 | 0.804 35 | 0.572 52 | 0.602 34 | 0.946 50 | 0.453 55 | 0.239 55 | 0.519 69 | 0.822 33 | 0.689 40 | 0.762 40 | 0.595 48 | 0.895 40 | 0.827 38 | 0.630 47 | |
PointContrast_LA_SEM | 0.683 42 | 0.757 51 | 0.784 30 | 0.786 37 | 0.639 46 | 0.824 37 | 0.408 69 | 0.775 41 | 0.604 39 | 0.541 49 | 0.934 77 | 0.532 27 | 0.269 42 | 0.552 61 | 0.777 42 | 0.645 59 | 0.793 22 | 0.640 28 | 0.913 27 | 0.824 39 | 0.671 34 | |
VI-PointConv | 0.676 44 | 0.770 46 | 0.754 46 | 0.783 40 | 0.621 50 | 0.814 49 | 0.552 9 | 0.758 45 | 0.571 54 | 0.557 45 | 0.954 26 | 0.529 28 | 0.268 44 | 0.530 67 | 0.682 60 | 0.675 43 | 0.719 56 | 0.603 43 | 0.888 45 | 0.833 35 | 0.665 35 | |
Xingyi Li, Wenxuan Wu, Xiaoli Z. Fern, Li Fuxin: The Devils in the Point Clouds: Studying the Robustness of Point Cloud Convolutions. | ||||||||||||||||||||||
ROSMRF3D | 0.673 45 | 0.789 36 | 0.748 48 | 0.763 47 | 0.635 48 | 0.814 49 | 0.407 71 | 0.747 49 | 0.581 50 | 0.573 42 | 0.950 39 | 0.484 40 | 0.271 41 | 0.607 44 | 0.754 45 | 0.649 53 | 0.774 30 | 0.596 46 | 0.883 47 | 0.823 40 | 0.606 54 | |
SALANet | 0.670 46 | 0.816 30 | 0.770 37 | 0.768 44 | 0.652 41 | 0.807 53 | 0.451 48 | 0.747 49 | 0.659 18 | 0.545 48 | 0.924 83 | 0.473 45 | 0.149 90 | 0.571 54 | 0.811 37 | 0.635 62 | 0.746 48 | 0.623 35 | 0.892 42 | 0.794 57 | 0.570 67 | |
PointConv | ![]() | 0.666 47 | 0.781 39 | 0.759 42 | 0.699 61 | 0.644 45 | 0.822 39 | 0.475 38 | 0.779 40 | 0.564 57 | 0.504 65 | 0.953 29 | 0.428 65 | 0.203 69 | 0.586 49 | 0.754 45 | 0.661 49 | 0.753 45 | 0.588 52 | 0.902 34 | 0.813 47 | 0.642 42 |
Wenxuan Wu, Zhongang Qi, Li Fuxin: PointConv: Deep Convolutional Networks on 3D Point Clouds. CVPR 2019 | ||||||||||||||||||||||
PointASNL | ![]() | 0.666 47 | 0.703 71 | 0.781 32 | 0.751 51 | 0.655 39 | 0.830 30 | 0.471 41 | 0.769 43 | 0.474 80 | 0.537 51 | 0.951 35 | 0.475 44 | 0.279 35 | 0.635 36 | 0.698 59 | 0.675 43 | 0.751 46 | 0.553 66 | 0.816 77 | 0.806 49 | 0.703 25 |
Xu Yan, Chaoda Zheng, Zhen Li, Sheng Wang, Shuguang Cui: PointASNL: Robust Point Clouds Processing using Nonlocal Neural Networks with Adaptive Sampling. CVPR 2020 | ||||||||||||||||||||||
PPCNN++ | ![]() | 0.663 49 | 0.746 54 | 0.708 63 | 0.722 53 | 0.638 47 | 0.820 42 | 0.451 48 | 0.566 84 | 0.599 42 | 0.541 49 | 0.950 39 | 0.510 34 | 0.313 19 | 0.648 33 | 0.819 35 | 0.616 67 | 0.682 72 | 0.590 50 | 0.869 62 | 0.810 48 | 0.656 38 |
Pyunghwan Ahn, Juyoung Yang, Eojindl Yi, Chanho Lee, Junmo Kim: Projection-based Point Convolution for Efficient Point Cloud Segmentation. IEEE Access | ||||||||||||||||||||||
DCM-Net | 0.658 50 | 0.778 40 | 0.702 66 | 0.806 30 | 0.619 51 | 0.813 52 | 0.468 42 | 0.693 65 | 0.494 72 | 0.524 57 | 0.941 64 | 0.449 57 | 0.298 25 | 0.510 71 | 0.821 34 | 0.675 43 | 0.727 55 | 0.568 58 | 0.826 75 | 0.803 51 | 0.637 44 | |
Jonas Schult*, Francis Engelmann*, Theodora Kontogianni, Bastian Leibe: DualConvMesh-Net: Joint Geodesic and Euclidean Convolutions on 3D Meshes. CVPR 2020 [Oral] | ||||||||||||||||||||||
HPGCNN | 0.656 51 | 0.698 72 | 0.743 53 | 0.650 77 | 0.564 68 | 0.820 42 | 0.505 26 | 0.758 45 | 0.631 25 | 0.479 70 | 0.945 53 | 0.480 42 | 0.226 56 | 0.572 53 | 0.774 43 | 0.690 38 | 0.735 51 | 0.614 38 | 0.853 69 | 0.776 72 | 0.597 60 | |
Jisheng Dang, Qingyong Hu, Yulan Guo, Jun Yang: HPGCNN. | ||||||||||||||||||||||
SAFNet-seg | ![]() | 0.654 52 | 0.752 52 | 0.734 57 | 0.664 74 | 0.583 63 | 0.815 48 | 0.399 73 | 0.754 47 | 0.639 22 | 0.535 53 | 0.942 62 | 0.470 46 | 0.309 21 | 0.665 29 | 0.539 74 | 0.650 52 | 0.708 61 | 0.635 30 | 0.857 68 | 0.793 59 | 0.642 42 |
Linqing Zhao, Jiwen Lu, Jie Zhou: Similarity-Aware Fusion Network for 3D Semantic Segmentation. IROS 2021 | ||||||||||||||||||||||
RandLA-Net | ![]() | 0.645 53 | 0.778 40 | 0.731 58 | 0.699 61 | 0.577 64 | 0.829 31 | 0.446 53 | 0.736 53 | 0.477 79 | 0.523 59 | 0.945 53 | 0.454 53 | 0.269 42 | 0.484 78 | 0.749 48 | 0.618 65 | 0.738 49 | 0.599 45 | 0.827 74 | 0.792 62 | 0.621 49 |
MVPNet | ![]() | 0.641 54 | 0.831 25 | 0.715 61 | 0.671 71 | 0.590 59 | 0.781 68 | 0.394 75 | 0.679 67 | 0.642 20 | 0.553 46 | 0.937 70 | 0.462 49 | 0.256 48 | 0.649 32 | 0.406 87 | 0.626 63 | 0.691 69 | 0.666 21 | 0.877 51 | 0.792 62 | 0.608 53 |
Maximilian Jaritz, Jiayuan Gu, Hao Su: Multi-view PointNet for 3D Scene Understanding. GMDL Workshop, ICCV 2019 | ||||||||||||||||||||||
PointConv-SFPN | 0.641 54 | 0.776 42 | 0.703 65 | 0.721 54 | 0.557 71 | 0.826 34 | 0.451 48 | 0.672 69 | 0.563 58 | 0.483 69 | 0.943 61 | 0.425 68 | 0.162 85 | 0.644 34 | 0.726 50 | 0.659 50 | 0.709 60 | 0.572 55 | 0.875 53 | 0.786 67 | 0.559 72 | |
PointMRNet | 0.640 56 | 0.717 68 | 0.701 67 | 0.692 64 | 0.576 65 | 0.801 58 | 0.467 44 | 0.716 58 | 0.563 58 | 0.459 75 | 0.953 29 | 0.429 64 | 0.169 82 | 0.581 50 | 0.854 24 | 0.605 68 | 0.710 58 | 0.550 67 | 0.894 41 | 0.793 59 | 0.575 65 | |
FPConv | ![]() | 0.639 57 | 0.785 37 | 0.760 41 | 0.713 59 | 0.603 54 | 0.798 60 | 0.392 76 | 0.534 89 | 0.603 40 | 0.524 57 | 0.948 45 | 0.457 51 | 0.250 50 | 0.538 65 | 0.723 52 | 0.598 72 | 0.696 67 | 0.614 38 | 0.872 58 | 0.799 52 | 0.567 69 |
Yiqun Lin, Zizheng Yan, Haibin Huang, Dong Du, Ligang Liu, Shuguang Cui, Xiaoguang Han: FPConv: Learning Local Flattening for Point Convolution. CVPR 2020 | ||||||||||||||||||||||
PD-Net | 0.638 58 | 0.797 34 | 0.769 38 | 0.641 82 | 0.590 59 | 0.820 42 | 0.461 46 | 0.537 88 | 0.637 23 | 0.536 52 | 0.947 47 | 0.388 79 | 0.206 66 | 0.656 30 | 0.668 63 | 0.647 56 | 0.732 53 | 0.585 53 | 0.868 63 | 0.793 59 | 0.473 91 | |
PointSPNet | 0.637 59 | 0.734 60 | 0.692 74 | 0.714 58 | 0.576 65 | 0.797 61 | 0.446 53 | 0.743 51 | 0.598 43 | 0.437 80 | 0.942 62 | 0.403 75 | 0.150 89 | 0.626 40 | 0.800 40 | 0.649 53 | 0.697 66 | 0.557 64 | 0.846 71 | 0.777 71 | 0.563 70 | |
SConv | 0.636 60 | 0.830 26 | 0.697 70 | 0.752 50 | 0.572 67 | 0.780 70 | 0.445 55 | 0.716 58 | 0.529 63 | 0.530 54 | 0.951 35 | 0.446 59 | 0.170 81 | 0.507 73 | 0.666 64 | 0.636 61 | 0.682 72 | 0.541 72 | 0.886 46 | 0.799 52 | 0.594 61 | |
Supervoxel-CNN | 0.635 61 | 0.656 78 | 0.711 62 | 0.719 55 | 0.613 52 | 0.757 79 | 0.444 58 | 0.765 44 | 0.534 62 | 0.566 43 | 0.928 81 | 0.478 43 | 0.272 39 | 0.636 35 | 0.531 76 | 0.664 47 | 0.645 82 | 0.508 80 | 0.864 65 | 0.792 62 | 0.611 50 | |
joint point-based | ![]() | 0.634 62 | 0.614 85 | 0.778 33 | 0.667 73 | 0.633 49 | 0.825 35 | 0.420 67 | 0.804 35 | 0.467 82 | 0.561 44 | 0.951 35 | 0.494 37 | 0.291 28 | 0.566 55 | 0.458 82 | 0.579 79 | 0.764 37 | 0.559 63 | 0.838 72 | 0.814 45 | 0.598 59 |
Hung-Yueh Chiang, Yen-Liang Lin, Yueh-Cheng Liu, Winston H. Hsu: A Unified Point-Based Framework for 3D Segmentation. 3DV 2019 | ||||||||||||||||||||||
PointMTL | 0.632 63 | 0.731 62 | 0.688 77 | 0.675 68 | 0.591 58 | 0.784 67 | 0.444 58 | 0.565 85 | 0.610 34 | 0.492 67 | 0.949 43 | 0.456 52 | 0.254 49 | 0.587 47 | 0.706 55 | 0.599 71 | 0.665 78 | 0.612 41 | 0.868 63 | 0.791 66 | 0.579 64 | |
APCF-Net | 0.631 64 | 0.742 57 | 0.687 79 | 0.672 69 | 0.557 71 | 0.792 65 | 0.408 69 | 0.665 70 | 0.545 60 | 0.508 62 | 0.952 33 | 0.428 65 | 0.186 76 | 0.634 37 | 0.702 57 | 0.620 64 | 0.706 62 | 0.555 65 | 0.873 56 | 0.798 54 | 0.581 63 | |
Haojia Lin: Adaptive Pyramid Context Fusion for Point Cloud Perception. GRSL | ||||||||||||||||||||||
3DSM_DMMF | 0.631 64 | 0.626 82 | 0.745 51 | 0.801 34 | 0.607 53 | 0.751 80 | 0.506 25 | 0.729 56 | 0.565 56 | 0.491 68 | 0.866 97 | 0.434 60 | 0.197 73 | 0.595 45 | 0.630 67 | 0.709 31 | 0.705 63 | 0.560 61 | 0.875 53 | 0.740 82 | 0.491 86 | |
PointNet2-SFPN | 0.631 64 | 0.771 44 | 0.692 74 | 0.672 69 | 0.524 76 | 0.837 24 | 0.440 61 | 0.706 63 | 0.538 61 | 0.446 77 | 0.944 59 | 0.421 70 | 0.219 61 | 0.552 61 | 0.751 47 | 0.591 75 | 0.737 50 | 0.543 71 | 0.901 36 | 0.768 74 | 0.557 73 | |
FusionAwareConv | 0.630 67 | 0.604 87 | 0.741 55 | 0.766 46 | 0.590 59 | 0.747 81 | 0.501 28 | 0.734 54 | 0.503 71 | 0.527 55 | 0.919 87 | 0.454 53 | 0.323 15 | 0.550 63 | 0.420 86 | 0.678 42 | 0.688 70 | 0.544 69 | 0.896 39 | 0.795 56 | 0.627 48 | |
Jiazhao Zhang, Chenyang Zhu, Lintao Zheng, Kai Xu: Fusion-Aware Point Convolution for Online Semantic 3D Scene Segmentation. CVPR 2020 | ||||||||||||||||||||||
DenSeR | 0.628 68 | 0.800 33 | 0.625 89 | 0.719 55 | 0.545 74 | 0.806 54 | 0.445 55 | 0.597 79 | 0.448 86 | 0.519 60 | 0.938 69 | 0.481 41 | 0.328 13 | 0.489 77 | 0.499 81 | 0.657 51 | 0.759 42 | 0.592 49 | 0.881 49 | 0.797 55 | 0.634 45 | |
SegGroup_sem | ![]() | 0.627 69 | 0.818 29 | 0.747 50 | 0.701 60 | 0.602 55 | 0.764 76 | 0.385 80 | 0.629 76 | 0.490 75 | 0.508 62 | 0.931 80 | 0.409 73 | 0.201 70 | 0.564 56 | 0.725 51 | 0.618 65 | 0.692 68 | 0.539 73 | 0.873 56 | 0.794 57 | 0.548 76 |
An Tao, Yueqi Duan, Yi Wei, Jiwen Lu, Jie Zhou: SegGroup: Seg-Level Supervision for 3D Instance and Semantic Segmentation. TIP 2022 | ||||||||||||||||||||||
SIConv | 0.625 70 | 0.830 26 | 0.694 72 | 0.757 48 | 0.563 69 | 0.772 74 | 0.448 52 | 0.647 73 | 0.520 65 | 0.509 61 | 0.949 43 | 0.431 63 | 0.191 74 | 0.496 75 | 0.614 69 | 0.647 56 | 0.672 76 | 0.535 75 | 0.876 52 | 0.783 68 | 0.571 66 | |
HPEIN | 0.618 71 | 0.729 63 | 0.668 80 | 0.647 79 | 0.597 57 | 0.766 75 | 0.414 68 | 0.680 66 | 0.520 65 | 0.525 56 | 0.946 50 | 0.432 61 | 0.215 63 | 0.493 76 | 0.599 70 | 0.638 60 | 0.617 87 | 0.570 56 | 0.897 38 | 0.806 49 | 0.605 56 | |
Li Jiang, Hengshuang Zhao, Shu Liu, Xiaoyong Shen, Chi-Wing Fu, Jiaya Jia: Hierarchical Point-Edge Interaction Network for Point Cloud Semantic Segmentation. ICCV 2019 | ||||||||||||||||||||||
SPH3D-GCN | ![]() | 0.610 72 | 0.858 19 | 0.772 35 | 0.489 94 | 0.532 75 | 0.792 65 | 0.404 72 | 0.643 75 | 0.570 55 | 0.507 64 | 0.935 73 | 0.414 72 | 0.046 99 | 0.510 71 | 0.702 57 | 0.602 70 | 0.705 63 | 0.549 68 | 0.859 67 | 0.773 73 | 0.534 79 |
Huan Lei, Naveed Akhtar, and Ajmal Mian: Spherical Kernel for Efficient Graph Convolution on 3D Point Clouds. TPAMI 2020 | ||||||||||||||||||||||
AttAN | 0.609 73 | 0.760 49 | 0.667 81 | 0.649 78 | 0.521 77 | 0.793 63 | 0.457 47 | 0.648 72 | 0.528 64 | 0.434 82 | 0.947 47 | 0.401 76 | 0.153 88 | 0.454 80 | 0.721 53 | 0.648 55 | 0.717 57 | 0.536 74 | 0.904 31 | 0.765 75 | 0.485 87 | |
Gege Zhang, Qinghua Ma, Licheng Jiao, Fang Liu, Qigong Sun: AttAN: Attention Adversarial Networks for 3D Point Cloud Semantic Segmentation. IJCAI 2020 | ||||||||||||||||||||||
wsss-transformer | 0.600 74 | 0.634 81 | 0.743 53 | 0.697 63 | 0.601 56 | 0.781 68 | 0.437 63 | 0.585 82 | 0.493 73 | 0.446 77 | 0.933 78 | 0.394 77 | 0.011 101 | 0.654 31 | 0.661 66 | 0.603 69 | 0.733 52 | 0.526 76 | 0.832 73 | 0.761 77 | 0.480 88 | |
dtc_net | 0.596 75 | 0.683 73 | 0.725 60 | 0.715 57 | 0.549 73 | 0.803 57 | 0.444 58 | 0.647 73 | 0.493 73 | 0.495 66 | 0.941 64 | 0.409 73 | 0.000 103 | 0.424 85 | 0.544 73 | 0.598 72 | 0.703 65 | 0.522 77 | 0.912 28 | 0.792 62 | 0.520 82 | |
LAP-D | 0.594 76 | 0.720 66 | 0.692 74 | 0.637 83 | 0.456 86 | 0.773 73 | 0.391 78 | 0.730 55 | 0.587 46 | 0.445 79 | 0.940 67 | 0.381 80 | 0.288 29 | 0.434 83 | 0.453 84 | 0.591 75 | 0.649 80 | 0.581 54 | 0.777 81 | 0.749 81 | 0.610 52 | |
DPC | 0.592 77 | 0.720 66 | 0.700 68 | 0.602 87 | 0.480 82 | 0.762 78 | 0.380 81 | 0.713 61 | 0.585 49 | 0.437 80 | 0.940 67 | 0.369 82 | 0.288 29 | 0.434 83 | 0.509 80 | 0.590 77 | 0.639 85 | 0.567 59 | 0.772 82 | 0.755 79 | 0.592 62 | |
Francis Engelmann, Theodora Kontogianni, Bastian Leibe: Dilated Point Convolutions: On the Receptive Field Size of Point Convolutions on 3D Point Clouds. ICRA 2020 | ||||||||||||||||||||||
CCRFNet | 0.589 78 | 0.766 48 | 0.659 84 | 0.683 66 | 0.470 85 | 0.740 83 | 0.387 79 | 0.620 78 | 0.490 75 | 0.476 71 | 0.922 85 | 0.355 85 | 0.245 53 | 0.511 70 | 0.511 79 | 0.571 80 | 0.643 83 | 0.493 84 | 0.872 58 | 0.762 76 | 0.600 58 | |
ROSMRF | 0.580 79 | 0.772 43 | 0.707 64 | 0.681 67 | 0.563 69 | 0.764 76 | 0.362 83 | 0.515 90 | 0.465 83 | 0.465 74 | 0.936 72 | 0.427 67 | 0.207 65 | 0.438 81 | 0.577 71 | 0.536 83 | 0.675 75 | 0.486 85 | 0.723 88 | 0.779 69 | 0.524 81 | |
SD-DETR | 0.576 80 | 0.746 54 | 0.609 93 | 0.445 98 | 0.517 78 | 0.643 94 | 0.366 82 | 0.714 60 | 0.456 84 | 0.468 73 | 0.870 96 | 0.432 61 | 0.264 45 | 0.558 59 | 0.674 61 | 0.586 78 | 0.688 70 | 0.482 86 | 0.739 86 | 0.733 84 | 0.537 78 | |
SQN_0.1% | 0.569 81 | 0.676 75 | 0.696 71 | 0.657 75 | 0.497 79 | 0.779 71 | 0.424 65 | 0.548 86 | 0.515 67 | 0.376 87 | 0.902 94 | 0.422 69 | 0.357 4 | 0.379 88 | 0.456 83 | 0.596 74 | 0.659 79 | 0.544 69 | 0.685 91 | 0.665 95 | 0.556 74 | |
TextureNet | ![]() | 0.566 82 | 0.672 77 | 0.664 82 | 0.671 71 | 0.494 80 | 0.719 84 | 0.445 55 | 0.678 68 | 0.411 92 | 0.396 85 | 0.935 73 | 0.356 84 | 0.225 58 | 0.412 86 | 0.535 75 | 0.565 81 | 0.636 86 | 0.464 88 | 0.794 80 | 0.680 92 | 0.568 68 |
Jingwei Huang, Haotian Zhang, Li Yi, Thomas Funkhouser, Matthias Niessner, Leonidas Guibas: TextureNet: Consistent Local Parametrizations for Learning from High-Resolution Signals on Meshes. CVPR 2019 | ||||||||||||||||||||||
DVVNet | 0.562 83 | 0.648 79 | 0.700 68 | 0.770 43 | 0.586 62 | 0.687 88 | 0.333 87 | 0.650 71 | 0.514 68 | 0.475 72 | 0.906 91 | 0.359 83 | 0.223 60 | 0.340 90 | 0.442 85 | 0.422 94 | 0.668 77 | 0.501 81 | 0.708 89 | 0.779 69 | 0.534 79 | |
Pointnet++ & Feature | ![]() | 0.557 84 | 0.735 59 | 0.661 83 | 0.686 65 | 0.491 81 | 0.744 82 | 0.392 76 | 0.539 87 | 0.451 85 | 0.375 88 | 0.946 50 | 0.376 81 | 0.205 67 | 0.403 87 | 0.356 90 | 0.553 82 | 0.643 83 | 0.497 82 | 0.824 76 | 0.756 78 | 0.515 83 |
GMLPs | 0.538 85 | 0.495 95 | 0.693 73 | 0.647 79 | 0.471 84 | 0.793 63 | 0.300 90 | 0.477 91 | 0.505 70 | 0.358 89 | 0.903 93 | 0.327 88 | 0.081 96 | 0.472 79 | 0.529 77 | 0.448 92 | 0.710 58 | 0.509 78 | 0.746 84 | 0.737 83 | 0.554 75 | |
PanopticFusion-label | 0.529 86 | 0.491 96 | 0.688 77 | 0.604 86 | 0.386 91 | 0.632 95 | 0.225 100 | 0.705 64 | 0.434 89 | 0.293 95 | 0.815 98 | 0.348 86 | 0.241 54 | 0.499 74 | 0.669 62 | 0.507 85 | 0.649 80 | 0.442 94 | 0.796 79 | 0.602 98 | 0.561 71 | |
Gaku Narita, Takashi Seno, Tomoya Ishikawa, Yohsuke Kaji: PanopticFusion: Online Volumetric Semantic Mapping at the Level of Stuff and Things. IROS 2019 | ||||||||||||||||||||||
subcloud_weak | 0.516 87 | 0.676 75 | 0.591 96 | 0.609 84 | 0.442 87 | 0.774 72 | 0.335 86 | 0.597 79 | 0.422 91 | 0.357 90 | 0.932 79 | 0.341 87 | 0.094 95 | 0.298 92 | 0.528 78 | 0.473 90 | 0.676 74 | 0.495 83 | 0.602 97 | 0.721 87 | 0.349 98 | |
Online SegFusion | 0.515 88 | 0.607 86 | 0.644 87 | 0.579 89 | 0.434 88 | 0.630 96 | 0.353 84 | 0.628 77 | 0.440 87 | 0.410 83 | 0.762 101 | 0.307 90 | 0.167 83 | 0.520 68 | 0.403 88 | 0.516 84 | 0.565 90 | 0.447 92 | 0.678 92 | 0.701 89 | 0.514 84 | |
Davide Menini, Suryansh Kumar, Martin R. Oswald, Erik Sandstroem, Cristian Sminchisescu, Luc van Gool: A Real-Time Learning Framework for Joint 3D Reconstruction and Semantic Segmentation. Robotics and Automation Letters Submission | ||||||||||||||||||||||
3DMV, FTSDF | 0.501 89 | 0.558 91 | 0.608 94 | 0.424 100 | 0.478 83 | 0.690 87 | 0.246 96 | 0.586 81 | 0.468 81 | 0.450 76 | 0.911 89 | 0.394 77 | 0.160 86 | 0.438 81 | 0.212 97 | 0.432 93 | 0.541 95 | 0.475 87 | 0.742 85 | 0.727 85 | 0.477 89 | |
PCNN | 0.498 90 | 0.559 90 | 0.644 87 | 0.560 91 | 0.420 90 | 0.711 86 | 0.229 98 | 0.414 92 | 0.436 88 | 0.352 91 | 0.941 64 | 0.324 89 | 0.155 87 | 0.238 97 | 0.387 89 | 0.493 86 | 0.529 96 | 0.509 78 | 0.813 78 | 0.751 80 | 0.504 85 | |
3DMV | 0.484 91 | 0.484 97 | 0.538 98 | 0.643 81 | 0.424 89 | 0.606 99 | 0.310 88 | 0.574 83 | 0.433 90 | 0.378 86 | 0.796 99 | 0.301 91 | 0.214 64 | 0.537 66 | 0.208 98 | 0.472 91 | 0.507 99 | 0.413 97 | 0.693 90 | 0.602 98 | 0.539 77 | |
Angela Dai, Matthias Niessner: 3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation. ECCV'18 | ||||||||||||||||||||||
PointCNN with RGB | ![]() | 0.458 92 | 0.577 89 | 0.611 92 | 0.356 102 | 0.321 99 | 0.715 85 | 0.299 92 | 0.376 96 | 0.328 99 | 0.319 93 | 0.944 59 | 0.285 93 | 0.164 84 | 0.216 100 | 0.229 95 | 0.484 88 | 0.545 94 | 0.456 90 | 0.755 83 | 0.709 88 | 0.475 90 |
Yangyan Li, Rui Bu, Mingchao Sun, Baoquan Chen: PointCNN. NeurIPS 2018 | ||||||||||||||||||||||
FCPN | ![]() | 0.447 93 | 0.679 74 | 0.604 95 | 0.578 90 | 0.380 92 | 0.682 89 | 0.291 93 | 0.106 102 | 0.483 78 | 0.258 100 | 0.920 86 | 0.258 97 | 0.025 100 | 0.231 99 | 0.325 91 | 0.480 89 | 0.560 92 | 0.463 89 | 0.725 87 | 0.666 94 | 0.231 102 |
Dario Rethage, Johanna Wald, Jürgen Sturm, Nassir Navab, Federico Tombari: Fully-Convolutional Point Networks for Large-Scale Point Clouds. ECCV 2018 | ||||||||||||||||||||||
DGCNN_reproduce | ![]() | 0.446 94 | 0.474 98 | 0.623 90 | 0.463 96 | 0.366 94 | 0.651 92 | 0.310 88 | 0.389 95 | 0.349 97 | 0.330 92 | 0.937 70 | 0.271 95 | 0.126 92 | 0.285 93 | 0.224 96 | 0.350 99 | 0.577 89 | 0.445 93 | 0.625 95 | 0.723 86 | 0.394 94 |
Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma, Michael M. Bronstein, Justin M. Solomon: Dynamic Graph CNN for Learning on Point Clouds. TOG 2019 | ||||||||||||||||||||||
PNET2 | 0.442 95 | 0.548 92 | 0.548 97 | 0.597 88 | 0.363 95 | 0.628 97 | 0.300 90 | 0.292 97 | 0.374 94 | 0.307 94 | 0.881 95 | 0.268 96 | 0.186 76 | 0.238 97 | 0.204 99 | 0.407 95 | 0.506 100 | 0.449 91 | 0.667 93 | 0.620 97 | 0.462 92 | |
SurfaceConvPF | 0.442 95 | 0.505 94 | 0.622 91 | 0.380 101 | 0.342 97 | 0.654 91 | 0.227 99 | 0.397 94 | 0.367 95 | 0.276 97 | 0.924 83 | 0.240 98 | 0.198 72 | 0.359 89 | 0.262 93 | 0.366 96 | 0.581 88 | 0.435 95 | 0.640 94 | 0.668 93 | 0.398 93 | |
Hao Pan, Shilin Liu, Yang Liu, Xin Tong: Convolutional Neural Networks on 3D Surfaces Using Parallel Frames. | ||||||||||||||||||||||
Tangent Convolutions | ![]() | 0.438 97 | 0.437 100 | 0.646 86 | 0.474 95 | 0.369 93 | 0.645 93 | 0.353 84 | 0.258 99 | 0.282 101 | 0.279 96 | 0.918 88 | 0.298 92 | 0.147 91 | 0.283 94 | 0.294 92 | 0.487 87 | 0.562 91 | 0.427 96 | 0.619 96 | 0.633 96 | 0.352 97 |
Maxim Tatarchenko, Jaesik Park, Vladlen Koltun, Qian-Yi Zhou: Tangent convolutions for dense prediction in 3d. CVPR 2018 | ||||||||||||||||||||||
3DWSSS | 0.425 98 | 0.525 93 | 0.647 85 | 0.522 92 | 0.324 98 | 0.488 102 | 0.077 103 | 0.712 62 | 0.353 96 | 0.401 84 | 0.636 103 | 0.281 94 | 0.176 79 | 0.340 90 | 0.565 72 | 0.175 103 | 0.551 93 | 0.398 98 | 0.370 103 | 0.602 98 | 0.361 96 | |
SPLAT Net | ![]() | 0.393 99 | 0.472 99 | 0.511 99 | 0.606 85 | 0.311 100 | 0.656 90 | 0.245 97 | 0.405 93 | 0.328 99 | 0.197 101 | 0.927 82 | 0.227 100 | 0.000 103 | 0.001 104 | 0.249 94 | 0.271 102 | 0.510 97 | 0.383 100 | 0.593 98 | 0.699 90 | 0.267 100 |
Hang Su, Varun Jampani, Deqing Sun, Subhransu Maji, Evangelos Kalogerakis, Ming-Hsuan Yang, Jan Kautz: SPLATNet: Sparse Lattice Networks for Point Cloud Processing. CVPR 2018 | ||||||||||||||||||||||
ScanNet+FTSDF | 0.383 100 | 0.297 102 | 0.491 100 | 0.432 99 | 0.358 96 | 0.612 98 | 0.274 94 | 0.116 101 | 0.411 92 | 0.265 98 | 0.904 92 | 0.229 99 | 0.079 97 | 0.250 95 | 0.185 100 | 0.320 100 | 0.510 97 | 0.385 99 | 0.548 99 | 0.597 101 | 0.394 94 | |
PointNet++ | ![]() | 0.339 101 | 0.584 88 | 0.478 101 | 0.458 97 | 0.256 102 | 0.360 103 | 0.250 95 | 0.247 100 | 0.278 102 | 0.261 99 | 0.677 102 | 0.183 101 | 0.117 93 | 0.212 101 | 0.145 102 | 0.364 97 | 0.346 103 | 0.232 103 | 0.548 99 | 0.523 102 | 0.252 101 |
Charles R. Qi, Li Yi, Hao Su, Leonidas J. Guibas: PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. NeurIPS 2017 | ||||||||||||||||||||||
SSC-UNet | ![]() | 0.308 102 | 0.353 101 | 0.290 103 | 0.278 103 | 0.166 103 | 0.553 100 | 0.169 102 | 0.286 98 | 0.147 103 | 0.148 103 | 0.908 90 | 0.182 102 | 0.064 98 | 0.023 103 | 0.018 104 | 0.354 98 | 0.363 101 | 0.345 101 | 0.546 101 | 0.685 91 | 0.278 99 |
ScanNet | ![]() | 0.306 103 | 0.203 103 | 0.366 102 | 0.501 93 | 0.311 100 | 0.524 101 | 0.211 101 | 0.002 104 | 0.342 98 | 0.189 102 | 0.786 100 | 0.145 103 | 0.102 94 | 0.245 96 | 0.152 101 | 0.318 101 | 0.348 102 | 0.300 102 | 0.460 102 | 0.437 103 | 0.182 103 |
Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, Matthias Nießner: ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes. CVPR'17 | ||||||||||||||||||||||
ERROR | 0.054 104 | 0.000 104 | 0.041 104 | 0.172 104 | 0.030 104 | 0.062 104 | 0.001 104 | 0.035 103 | 0.004 104 | 0.051 104 | 0.143 104 | 0.019 104 | 0.003 102 | 0.041 102 | 0.050 103 | 0.003 104 | 0.054 104 | 0.018 104 | 0.005 104 | 0.264 104 | 0.082 104 | |