3D Semantic Label Benchmark
The 3D semantic labeling task involves predicting semantic labels for the vertices of a 3D scan mesh.
Evaluation and metrics

Our evaluation ranks all methods according to the PASCAL VOC intersection-over-union metric (IoU): IoU = TP/(TP+FP+FN), where TP, FP, and FN are the numbers of true positive, false positive, and false negative predictions, respectively. Predicted labels are evaluated per vertex over the respective 3D scan mesh; for 3D approaches that operate on other representations such as grids or points, the predicted labels should first be mapped onto the mesh vertices (e.g., a helper for mapping grid predictions to mesh vertices is provided with the evaluation code).
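The per-vertex protocol above can be sketched in a few lines of NumPy. This is a minimal illustration, not the official evaluation helpers; the function names `map_to_vertices` and `per_class_iou` are hypothetical:

```python
import numpy as np

def map_to_vertices(point_xyz, point_labels, vertex_xyz):
    """Transfer labels from an arbitrary point set (e.g. voxel-grid centres)
    onto mesh vertices via brute-force nearest neighbour."""
    d2 = ((vertex_xyz[:, None, :] - point_xyz[None, :, :]) ** 2).sum(axis=-1)
    return point_labels[d2.argmin(axis=1)]

def per_class_iou(pred, gt, num_classes):
    """PASCAL VOC IoU = TP / (TP + FP + FN), computed per class over the
    mesh vertices; classes absent from both pred and gt stay NaN."""
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        tp = np.sum((pred == c) & (gt == c))
        fp = np.sum((pred == c) & (gt != c))
        fn = np.sum((pred != c) & (gt == c))
        denom = tp + fp + fn
        if denom > 0:
            ious[c] = tp / denom
    return ious  # average IoU = np.nanmean(ious)
```

In practice a real submission would use the benchmark's own grid-to-vertex helper, and a KD-tree lookup instead of the O(N·M) distance matrix above.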
This table lists the benchmark results for the 3D semantic label scenario. In each cell, the first number is the method's IoU for that class (or the average over all classes) and the second is its rank in that column among all submissions.
Method | Info | avg iou | bathtub | bed | bookshelf | cabinet | chair | counter | curtain | desk | door | floor | otherfurniture | picture | refrigerator | shower curtain | sink | sofa | table | toilet | wall | window |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Mix3D | ![]() | 0.781 1 | 0.964 1 | 0.855 1 | 0.843 10 | 0.781 1 | 0.858 7 | 0.575 3 | 0.831 18 | 0.685 7 | 0.714 1 | 0.979 1 | 0.594 3 | 0.310 16 | 0.801 1 | 0.892 8 | 0.841 2 | 0.819 3 | 0.723 3 | 0.940 7 | 0.887 1 | 0.725 12 |
Alexey Nekrasov, Jonas Schult, Or Litany, Bastian Leibe, Francis Engelmann: Mix3D: Out-of-Context Data Augmentation for 3D Scenes. 3DV 2021 (Oral)
PointConvFormer | 0.749 7 | 0.793 31 | 0.790 23 | 0.807 24 | 0.750 10 | 0.856 8 | 0.524 16 | 0.881 6 | 0.588 39 | 0.642 14 | 0.977 4 | 0.591 5 | 0.274 32 | 0.781 2 | 0.929 1 | 0.804 3 | 0.796 14 | 0.642 22 | 0.947 3 | 0.885 2 | 0.715 17 | |
Wenxuan Wu, Qi Shan, Li Fuxin: PointConvFormer: Revenge of the Point-based Convolution.
OccuSeg+Semantic | 0.764 2 | 0.758 45 | 0.796 19 | 0.839 12 | 0.746 11 | 0.907 1 | 0.562 5 | 0.850 12 | 0.680 9 | 0.672 5 | 0.978 2 | 0.610 1 | 0.335 8 | 0.777 4 | 0.819 32 | 0.847 1 | 0.830 1 | 0.691 8 | 0.972 1 | 0.885 2 | 0.727 10 | |
MSP | 0.748 9 | 0.623 76 | 0.804 15 | 0.859 3 | 0.745 12 | 0.824 32 | 0.501 23 | 0.912 2 | 0.690 6 | 0.685 3 | 0.956 13 | 0.567 11 | 0.320 13 | 0.768 7 | 0.918 3 | 0.720 21 | 0.802 9 | 0.676 11 | 0.921 17 | 0.881 4 | 0.779 1 | |
EQ-Net | 0.743 14 | 0.620 77 | 0.799 18 | 0.849 5 | 0.730 16 | 0.822 34 | 0.493 29 | 0.897 4 | 0.664 12 | 0.681 4 | 0.955 17 | 0.562 13 | 0.378 1 | 0.760 11 | 0.903 4 | 0.738 15 | 0.801 10 | 0.673 14 | 0.907 24 | 0.877 5 | 0.745 3 | |
Zetong Yang*, Li Jiang*, Yanan Sun, Bernt Schiele, Jiaya Jia: A Unified Query-based Paradigm for Point Cloud Understanding. CVPR 2022
IPCA | 0.731 19 | 0.890 8 | 0.837 3 | 0.864 2 | 0.726 18 | 0.873 2 | 0.530 15 | 0.824 22 | 0.489 70 | 0.647 8 | 0.978 2 | 0.609 2 | 0.336 7 | 0.624 36 | 0.733 45 | 0.758 10 | 0.776 25 | 0.570 52 | 0.949 2 | 0.877 5 | 0.728 8 | |
PointTransformerV2 | 0.752 5 | 0.742 52 | 0.809 13 | 0.872 1 | 0.758 5 | 0.860 6 | 0.552 7 | 0.891 5 | 0.610 28 | 0.687 2 | 0.960 7 | 0.559 14 | 0.304 19 | 0.766 8 | 0.926 2 | 0.767 7 | 0.797 13 | 0.644 21 | 0.942 5 | 0.876 7 | 0.722 14 | |
Xiaoyang Wu, Yixing Lao, Li Jiang, Xihui Liu, Hengshuang Zhao: Point Transformer V2: Grouped Vector Attention and Partition-based Pooling. NeurIPS 2022
O-CNN | ![]() | 0.762 4 | 0.924 2 | 0.823 4 | 0.844 9 | 0.770 2 | 0.852 10 | 0.577 2 | 0.847 14 | 0.711 1 | 0.640 15 | 0.958 9 | 0.592 4 | 0.217 57 | 0.762 10 | 0.888 9 | 0.758 10 | 0.813 6 | 0.726 1 | 0.932 14 | 0.868 8 | 0.744 4 |
Peng-Shuai Wang, Yang Liu, Yu-Xiao Guo, Chun-Yu Sun, Xin Tong: O-CNN: Octree-based Convolutional Neural Networks for 3D Shape Analysis. SIGGRAPH 2017
CU-Hybrid Net | 0.764 2 | 0.924 2 | 0.819 7 | 0.840 11 | 0.757 6 | 0.853 9 | 0.580 1 | 0.848 13 | 0.709 2 | 0.643 11 | 0.958 9 | 0.587 7 | 0.295 22 | 0.753 14 | 0.884 12 | 0.758 10 | 0.815 5 | 0.725 2 | 0.927 16 | 0.867 9 | 0.743 5 | |
Virtual MVFusion | 0.746 11 | 0.771 39 | 0.819 7 | 0.848 6 | 0.702 25 | 0.865 5 | 0.397 68 | 0.899 3 | 0.699 3 | 0.664 6 | 0.948 39 | 0.588 6 | 0.330 9 | 0.746 17 | 0.851 24 | 0.764 8 | 0.796 14 | 0.704 6 | 0.935 10 | 0.866 10 | 0.728 8 | |
Abhijit Kundu, Xiaoqi Yin, Alireza Fathi, David Ross, Brian Brewington, Thomas Funkhouser, Caroline Pantofaru: Virtual Multi-view Fusion for 3D Semantic Segmentation. ECCV 2020
SparseConvNet | 0.725 20 | 0.647 73 | 0.821 5 | 0.846 7 | 0.721 20 | 0.869 3 | 0.533 12 | 0.754 41 | 0.603 34 | 0.614 22 | 0.955 17 | 0.572 10 | 0.325 11 | 0.710 21 | 0.870 13 | 0.724 19 | 0.823 2 | 0.628 28 | 0.934 11 | 0.865 11 | 0.683 25 | |
PointTransformer++ | 0.725 20 | 0.727 59 | 0.811 12 | 0.819 16 | 0.765 4 | 0.841 16 | 0.502 22 | 0.814 27 | 0.621 24 | 0.623 19 | 0.955 17 | 0.556 15 | 0.284 28 | 0.620 37 | 0.866 14 | 0.781 5 | 0.757 39 | 0.648 19 | 0.932 14 | 0.862 12 | 0.709 18 | |
LRPNet | 0.742 15 | 0.816 26 | 0.806 14 | 0.807 24 | 0.752 8 | 0.828 28 | 0.575 3 | 0.839 17 | 0.699 3 | 0.637 16 | 0.954 22 | 0.520 26 | 0.320 13 | 0.755 13 | 0.834 28 | 0.760 9 | 0.772 27 | 0.676 11 | 0.915 21 | 0.862 12 | 0.717 15 | |
StratifiedFormer | ![]() | 0.747 10 | 0.901 7 | 0.803 16 | 0.845 8 | 0.757 6 | 0.846 14 | 0.512 19 | 0.825 21 | 0.696 5 | 0.645 9 | 0.956 13 | 0.576 9 | 0.262 43 | 0.744 18 | 0.861 16 | 0.742 14 | 0.770 30 | 0.705 5 | 0.899 31 | 0.860 14 | 0.734 6 |
Xin Lai*, Jianhui Liu*, Li Jiang, Liwei Wang, Hengshuang Zhao, Shu Liu, Xiaojuan Qi, Jiaya Jia: Stratified Transformer for 3D Point Cloud Segmentation. CVPR 2022
VMNet | ![]() | 0.746 11 | 0.870 12 | 0.838 2 | 0.858 4 | 0.729 17 | 0.850 12 | 0.501 23 | 0.874 7 | 0.587 40 | 0.658 7 | 0.956 13 | 0.564 12 | 0.299 20 | 0.765 9 | 0.900 5 | 0.716 24 | 0.812 7 | 0.631 27 | 0.939 8 | 0.858 15 | 0.709 18 |
Zeyu Hu, Xuyang Bai, Jiaxiang Shang, Runze Zhang, Jiayu Dong, Xin Wang, Guangyuan Sun, Hongbo Fu, Chiew-Lan Tai: VMNet: Voxel-Mesh Network for Geodesic-Aware 3D Semantic Segmentation. ICCV 2021 (Oral)
Retro-FPN | 0.744 13 | 0.842 19 | 0.800 17 | 0.767 40 | 0.740 13 | 0.836 22 | 0.541 10 | 0.914 1 | 0.672 11 | 0.626 18 | 0.958 9 | 0.552 17 | 0.272 34 | 0.777 4 | 0.886 11 | 0.696 31 | 0.801 10 | 0.674 13 | 0.941 6 | 0.858 15 | 0.717 15 | |
BPNet | ![]() | 0.749 7 | 0.909 4 | 0.818 9 | 0.811 21 | 0.752 8 | 0.839 18 | 0.485 31 | 0.842 15 | 0.673 10 | 0.644 10 | 0.957 12 | 0.528 24 | 0.305 18 | 0.773 6 | 0.859 17 | 0.788 4 | 0.818 4 | 0.693 7 | 0.916 20 | 0.856 17 | 0.723 13 |
Wenbo Hu, Hengshuang Zhao, Li Jiang, Jiaya Jia, Tien-Tsin Wong: Bidirectional Projection Network for Cross Dimension Scene Understanding. CVPR 2021 (Oral)
DMF-Net | 0.752 5 | 0.906 6 | 0.793 22 | 0.802 28 | 0.689 27 | 0.825 30 | 0.556 6 | 0.867 8 | 0.681 8 | 0.602 29 | 0.960 7 | 0.555 16 | 0.365 3 | 0.779 3 | 0.859 17 | 0.747 13 | 0.795 17 | 0.717 4 | 0.917 19 | 0.856 17 | 0.764 2 | |
SAT | 0.742 15 | 0.860 14 | 0.765 34 | 0.819 16 | 0.769 3 | 0.848 13 | 0.533 12 | 0.829 19 | 0.663 13 | 0.631 17 | 0.955 17 | 0.586 8 | 0.274 32 | 0.753 14 | 0.896 6 | 0.729 16 | 0.760 36 | 0.666 16 | 0.921 17 | 0.855 19 | 0.733 7 | |
One-Thing-One-Click | 0.693 30 | 0.743 51 | 0.794 21 | 0.655 70 | 0.684 29 | 0.822 34 | 0.497 27 | 0.719 51 | 0.622 23 | 0.617 20 | 0.977 4 | 0.447 52 | 0.339 6 | 0.750 16 | 0.664 60 | 0.703 29 | 0.790 20 | 0.596 42 | 0.946 4 | 0.855 19 | 0.647 35 | |
Zhengzhe Liu, Xiaojuan Qi, Chi-Wing Fu: One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation. CVPR 2021
MinkowskiNet | ![]() | 0.736 18 | 0.859 15 | 0.818 9 | 0.832 13 | 0.709 22 | 0.840 17 | 0.521 18 | 0.853 11 | 0.660 15 | 0.643 11 | 0.951 30 | 0.544 18 | 0.286 27 | 0.731 19 | 0.893 7 | 0.675 38 | 0.772 27 | 0.683 9 | 0.874 49 | 0.852 21 | 0.727 10 |
C. Choy, J. Gwak, S. Savarese: 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. CVPR 2019
MatchingNet | 0.724 22 | 0.812 28 | 0.812 11 | 0.810 22 | 0.735 15 | 0.834 23 | 0.495 28 | 0.860 10 | 0.572 46 | 0.602 29 | 0.954 22 | 0.512 28 | 0.280 29 | 0.757 12 | 0.845 26 | 0.725 18 | 0.780 23 | 0.606 38 | 0.937 9 | 0.851 22 | 0.700 21 | |
LargeKernel3D | 0.739 17 | 0.909 4 | 0.820 6 | 0.806 26 | 0.740 13 | 0.852 10 | 0.545 9 | 0.826 20 | 0.594 38 | 0.643 11 | 0.955 17 | 0.541 19 | 0.263 42 | 0.723 20 | 0.858 19 | 0.775 6 | 0.767 31 | 0.678 10 | 0.933 12 | 0.848 23 | 0.694 23 | |
PicassoNet-II | ![]() | 0.696 29 | 0.704 63 | 0.790 23 | 0.787 32 | 0.709 22 | 0.837 20 | 0.459 40 | 0.815 25 | 0.543 55 | 0.615 21 | 0.956 13 | 0.529 22 | 0.250 46 | 0.551 57 | 0.790 37 | 0.703 29 | 0.799 12 | 0.619 33 | 0.908 23 | 0.848 23 | 0.700 21 |
Huan Lei, Naveed Akhtar, Mubarak Shah, and Ajmal Mian: Geometric Feature Learning for 3D Meshes.
One Thing One Click | 0.701 27 | 0.825 24 | 0.796 19 | 0.723 47 | 0.716 21 | 0.832 24 | 0.433 57 | 0.816 23 | 0.634 20 | 0.609 24 | 0.969 6 | 0.418 65 | 0.344 5 | 0.559 52 | 0.833 29 | 0.715 25 | 0.808 8 | 0.560 56 | 0.902 28 | 0.847 25 | 0.680 26 | |
contrastBoundary | ![]() | 0.705 25 | 0.769 42 | 0.775 30 | 0.809 23 | 0.687 28 | 0.820 37 | 0.439 55 | 0.812 28 | 0.661 14 | 0.591 35 | 0.945 48 | 0.515 27 | 0.171 75 | 0.633 33 | 0.856 20 | 0.720 21 | 0.796 14 | 0.668 15 | 0.889 38 | 0.847 25 | 0.689 24 |
Liyao Tang, Yibing Zhan, Zhe Chen, Baosheng Yu, Dacheng Tao: Contrastive Boundary Learning for Point Cloud Segmentation. CVPR 2022
INS-Conv-semantic | 0.717 23 | 0.751 48 | 0.759 37 | 0.812 20 | 0.704 24 | 0.868 4 | 0.537 11 | 0.842 15 | 0.609 30 | 0.608 25 | 0.953 25 | 0.534 20 | 0.293 23 | 0.616 38 | 0.864 15 | 0.719 23 | 0.793 18 | 0.640 23 | 0.933 12 | 0.845 27 | 0.663 30 | |
PointMetaBase | 0.714 24 | 0.835 20 | 0.785 25 | 0.821 14 | 0.684 29 | 0.846 14 | 0.531 14 | 0.865 9 | 0.614 25 | 0.596 32 | 0.953 25 | 0.500 31 | 0.246 49 | 0.674 22 | 0.888 9 | 0.692 32 | 0.764 33 | 0.624 29 | 0.849 63 | 0.844 28 | 0.675 27 | |
SimConv | 0.410 92 | 0.000 97 | 0.782 27 | 0.772 37 | 0.722 19 | 0.838 19 | 0.407 64 | 0.000 98 | 0.000 98 | 0.595 33 | 0.947 41 | 0.000 98 | 0.270 37 | 0.000 98 | 0.000 98 | 0.000 98 | 0.786 21 | 0.621 32 | 0.000 98 | 0.841 29 | 0.621 43 | |
VI-PointConv | 0.676 38 | 0.770 41 | 0.754 41 | 0.783 35 | 0.621 45 | 0.814 44 | 0.552 7 | 0.758 39 | 0.571 48 | 0.557 40 | 0.954 22 | 0.529 22 | 0.268 40 | 0.530 61 | 0.682 55 | 0.675 38 | 0.719 51 | 0.603 39 | 0.888 39 | 0.833 30 | 0.665 29 | |
Xingyi Li, Wenxuan Wu, Xiaoli Z. Fern, Li Fuxin: The Devils in the Point Clouds: Studying the Robustness of Point Cloud Convolutions.
VACNN++ | 0.684 34 | 0.728 58 | 0.757 40 | 0.776 36 | 0.690 26 | 0.804 51 | 0.464 38 | 0.816 23 | 0.577 45 | 0.587 36 | 0.945 48 | 0.508 30 | 0.276 31 | 0.671 23 | 0.710 50 | 0.663 43 | 0.750 42 | 0.589 47 | 0.881 43 | 0.832 31 | 0.653 33 | |
JSENet | ![]() | 0.699 28 | 0.881 11 | 0.762 35 | 0.821 14 | 0.667 32 | 0.800 53 | 0.522 17 | 0.792 33 | 0.613 26 | 0.607 26 | 0.935 67 | 0.492 33 | 0.205 62 | 0.576 46 | 0.853 22 | 0.691 33 | 0.758 38 | 0.652 18 | 0.872 52 | 0.828 32 | 0.649 34 |
Zeyu Hu, Mingmin Zhen, Xuyang Bai, Hongbo Fu, Chiew-Lan Tai: JSENet: Joint Semantic Segmentation and Edge Detection Network for 3D Point Clouds. ECCV 2020
Superpoint Network | 0.683 36 | 0.851 17 | 0.728 54 | 0.800 30 | 0.653 35 | 0.806 49 | 0.468 35 | 0.804 29 | 0.572 46 | 0.602 29 | 0.946 45 | 0.453 49 | 0.239 52 | 0.519 63 | 0.822 30 | 0.689 36 | 0.762 35 | 0.595 44 | 0.895 34 | 0.827 33 | 0.630 41 | |
PointContrast_LA_SEM | 0.683 36 | 0.757 46 | 0.784 26 | 0.786 33 | 0.639 41 | 0.824 32 | 0.408 62 | 0.775 35 | 0.604 33 | 0.541 44 | 0.934 71 | 0.532 21 | 0.269 38 | 0.552 55 | 0.777 38 | 0.645 53 | 0.793 18 | 0.640 23 | 0.913 22 | 0.824 34 | 0.671 28 | |
RFCR | 0.702 26 | 0.889 9 | 0.745 46 | 0.813 19 | 0.672 31 | 0.818 41 | 0.493 29 | 0.815 25 | 0.623 22 | 0.610 23 | 0.947 41 | 0.470 40 | 0.249 48 | 0.594 41 | 0.848 25 | 0.705 28 | 0.779 24 | 0.646 20 | 0.892 36 | 0.823 35 | 0.611 45 | |
Jingyu Gong, Jiachen Xu, Xin Tan, Haichuan Song, Yanyun Qu, Yuan Xie, Lizhuang Ma: Omni-Supervised Point Cloud Segmentation via Gradual Receptive Field Component Reasoning. CVPR 2021
ROSMRF3D | 0.673 39 | 0.789 32 | 0.748 43 | 0.763 42 | 0.635 43 | 0.814 44 | 0.407 64 | 0.747 43 | 0.581 44 | 0.573 37 | 0.950 34 | 0.484 34 | 0.271 36 | 0.607 39 | 0.754 41 | 0.649 48 | 0.774 26 | 0.596 42 | 0.883 41 | 0.823 35 | 0.606 49 | |
Feature-Geometry Net | ![]() | 0.685 33 | 0.866 13 | 0.748 43 | 0.819 16 | 0.645 39 | 0.794 56 | 0.450 45 | 0.802 31 | 0.587 40 | 0.604 27 | 0.945 48 | 0.464 42 | 0.201 65 | 0.554 54 | 0.840 27 | 0.723 20 | 0.732 48 | 0.602 40 | 0.907 24 | 0.822 37 | 0.603 52 |
Feature_GeometricNet | ![]() | 0.690 31 | 0.884 10 | 0.754 41 | 0.795 31 | 0.647 37 | 0.818 41 | 0.422 59 | 0.802 31 | 0.612 27 | 0.604 27 | 0.945 48 | 0.462 43 | 0.189 70 | 0.563 51 | 0.853 22 | 0.726 17 | 0.765 32 | 0.632 26 | 0.904 26 | 0.821 38 | 0.606 49 |
Kangcheng Liu, Ben M. Chen: arXiv preprint, https://arxiv.org/abs/2012.09439
KP-FCNN | 0.684 34 | 0.847 18 | 0.758 39 | 0.784 34 | 0.647 37 | 0.814 44 | 0.473 33 | 0.772 36 | 0.605 32 | 0.594 34 | 0.935 67 | 0.450 50 | 0.181 73 | 0.587 42 | 0.805 35 | 0.690 34 | 0.785 22 | 0.614 34 | 0.882 42 | 0.819 39 | 0.632 40 | |
H. Thomas, C. Qi, J. Deschaud, B. Marcotegui, F. Goulette, L. Guibas: KPConv: Flexible and Deformable Convolution for Point Clouds. ICCV 2019
joint point-based | ![]() | 0.634 56 | 0.614 78 | 0.778 29 | 0.667 67 | 0.633 44 | 0.825 30 | 0.420 60 | 0.804 29 | 0.467 75 | 0.561 39 | 0.951 30 | 0.494 32 | 0.291 24 | 0.566 49 | 0.458 75 | 0.579 72 | 0.764 33 | 0.559 58 | 0.838 65 | 0.814 40 | 0.598 54 |
Hung-Yueh Chiang, Yen-Liang Lin, Yueh-Cheng Liu, Winston H. Hsu: A Unified Point-Based Framework for 3D Segmentation. 3DV 2019
FusionNet | 0.688 32 | 0.704 63 | 0.741 50 | 0.754 44 | 0.656 33 | 0.829 26 | 0.501 23 | 0.741 46 | 0.609 30 | 0.548 42 | 0.950 34 | 0.522 25 | 0.371 2 | 0.633 33 | 0.756 40 | 0.715 25 | 0.771 29 | 0.623 30 | 0.861 59 | 0.814 40 | 0.658 31 | |
Feihu Zhang, Jin Fang, Benjamin Wah, Philip Torr: Deep FusionNet for Point Cloud Semantic Segmentation. ECCV 2020
PointConv | ![]() | 0.666 41 | 0.781 34 | 0.759 37 | 0.699 55 | 0.644 40 | 0.822 34 | 0.475 32 | 0.779 34 | 0.564 51 | 0.504 60 | 0.953 25 | 0.428 59 | 0.203 64 | 0.586 44 | 0.754 41 | 0.661 44 | 0.753 40 | 0.588 48 | 0.902 28 | 0.813 42 | 0.642 36 |
Wenxuan Wu, Zhongang Qi, Li Fuxin: PointConv: Deep Convolutional Networks on 3D Point Clouds. CVPR 2019
PPCNN++ | ![]() | 0.663 43 | 0.746 49 | 0.708 57 | 0.722 48 | 0.638 42 | 0.820 37 | 0.451 42 | 0.566 77 | 0.599 36 | 0.541 44 | 0.950 34 | 0.510 29 | 0.313 15 | 0.648 28 | 0.819 32 | 0.616 61 | 0.682 66 | 0.590 46 | 0.869 55 | 0.810 43 | 0.656 32 |
Pyunghwan Ahn, Juyoung Yang, Eojindl Yi, Chanho Lee, Junmo Kim: Projection-based Point Convolution for Efficient Point Cloud Segmentation. IEEE Access
HPEIN | 0.618 65 | 0.729 57 | 0.668 74 | 0.647 73 | 0.597 52 | 0.766 69 | 0.414 61 | 0.680 60 | 0.520 60 | 0.525 51 | 0.946 45 | 0.432 55 | 0.215 58 | 0.493 70 | 0.599 64 | 0.638 54 | 0.617 81 | 0.570 52 | 0.897 32 | 0.806 44 | 0.605 51 | |
Li Jiang, Hengshuang Zhao, Shu Liu, Xiaoyong Shen, Chi-Wing Fu, Jiaya Jia: Hierarchical Point-Edge Interaction Network for Point Cloud Semantic Segmentation. ICCV 2019
PointASNL | ![]() | 0.666 41 | 0.703 65 | 0.781 28 | 0.751 46 | 0.655 34 | 0.830 25 | 0.471 34 | 0.769 37 | 0.474 73 | 0.537 46 | 0.951 30 | 0.475 38 | 0.279 30 | 0.635 31 | 0.698 54 | 0.675 38 | 0.751 41 | 0.553 61 | 0.816 70 | 0.806 44 | 0.703 20 |
Xu Yan, Chaoda Zheng, Zhen Li, Sheng Wang, Shuguang Cui: PointASNL: Robust Point Clouds Processing using Nonlocal Neural Networks with Adaptive Sampling. CVPR 2020
DCM-Net | 0.658 44 | 0.778 35 | 0.702 60 | 0.806 26 | 0.619 46 | 0.813 47 | 0.468 35 | 0.693 59 | 0.494 66 | 0.524 52 | 0.941 59 | 0.449 51 | 0.298 21 | 0.510 65 | 0.821 31 | 0.675 38 | 0.727 50 | 0.568 54 | 0.826 68 | 0.803 46 | 0.637 38 | |
Jonas Schult*, Francis Engelmann*, Theodora Kontogianni, Bastian Leibe: DualConvMesh-Net: Joint Geodesic and Euclidean Convolutions on 3D Meshes. CVPR 2020 (Oral)
FPConv | ![]() | 0.639 51 | 0.785 33 | 0.760 36 | 0.713 53 | 0.603 49 | 0.798 54 | 0.392 70 | 0.534 82 | 0.603 34 | 0.524 52 | 0.948 39 | 0.457 45 | 0.250 46 | 0.538 59 | 0.723 48 | 0.598 66 | 0.696 61 | 0.614 34 | 0.872 52 | 0.799 47 | 0.567 64 |
Yiqun Lin, Zizheng Yan, Haibin Huang, Dong Du, Ligang Liu, Shuguang Cui, Xiaoguang Han: FPConv: Learning Local Flattening for Point Convolution. CVPR 2020
SConv | 0.636 54 | 0.830 22 | 0.697 64 | 0.752 45 | 0.572 62 | 0.780 64 | 0.445 49 | 0.716 52 | 0.529 58 | 0.530 49 | 0.951 30 | 0.446 53 | 0.170 76 | 0.507 67 | 0.666 59 | 0.636 55 | 0.682 66 | 0.541 67 | 0.886 40 | 0.799 47 | 0.594 56 | |
APCF-Net | 0.631 58 | 0.742 52 | 0.687 73 | 0.672 63 | 0.557 66 | 0.792 59 | 0.408 62 | 0.665 64 | 0.545 54 | 0.508 57 | 0.952 29 | 0.428 59 | 0.186 71 | 0.634 32 | 0.702 52 | 0.620 58 | 0.706 57 | 0.555 60 | 0.873 50 | 0.798 49 | 0.581 58 | |
Haojia Lin: Adaptive Pyramid Context Fusion for Point Cloud Perception. GRSL
DenSeR | 0.628 62 | 0.800 29 | 0.625 83 | 0.719 50 | 0.545 68 | 0.806 49 | 0.445 49 | 0.597 72 | 0.448 79 | 0.519 55 | 0.938 63 | 0.481 35 | 0.328 10 | 0.489 71 | 0.499 74 | 0.657 46 | 0.759 37 | 0.592 45 | 0.881 43 | 0.797 50 | 0.634 39 | |
FusionAwareConv | 0.630 61 | 0.604 80 | 0.741 50 | 0.766 41 | 0.590 54 | 0.747 75 | 0.501 23 | 0.734 48 | 0.503 65 | 0.527 50 | 0.919 81 | 0.454 47 | 0.323 12 | 0.550 58 | 0.420 79 | 0.678 37 | 0.688 64 | 0.544 64 | 0.896 33 | 0.795 51 | 0.627 42 | |
Jiazhao Zhang, Chenyang Zhu, Lintao Zheng, Kai Xu: Fusion-Aware Point Convolution for Online Semantic 3D Scene Segmentation. CVPR 2020
SALANet | 0.670 40 | 0.816 26 | 0.770 32 | 0.768 39 | 0.652 36 | 0.807 48 | 0.451 42 | 0.747 43 | 0.659 16 | 0.545 43 | 0.924 77 | 0.473 39 | 0.149 85 | 0.571 48 | 0.811 34 | 0.635 56 | 0.746 43 | 0.623 30 | 0.892 36 | 0.794 52 | 0.570 62 | |
SegGroup_sem | ![]() | 0.627 63 | 0.818 25 | 0.747 45 | 0.701 54 | 0.602 50 | 0.764 70 | 0.385 74 | 0.629 69 | 0.490 68 | 0.508 57 | 0.931 74 | 0.409 67 | 0.201 65 | 0.564 50 | 0.725 47 | 0.618 59 | 0.692 62 | 0.539 68 | 0.873 50 | 0.794 52 | 0.548 71 |
An Tao, Yueqi Duan, Yi Wei, Jiwen Lu, Jie Zhou: SegGroup: Seg-Level Supervision for 3D Instance and Semantic Segmentation. TIP 2022
SAFNet-seg | ![]() | 0.654 46 | 0.752 47 | 0.734 52 | 0.664 68 | 0.583 58 | 0.815 43 | 0.399 67 | 0.754 41 | 0.639 18 | 0.535 48 | 0.942 57 | 0.470 40 | 0.309 17 | 0.665 24 | 0.539 67 | 0.650 47 | 0.708 56 | 0.635 25 | 0.857 61 | 0.793 54 | 0.642 36 |
Linqing Zhao, Jiwen Lu, Jie Zhou: Similarity-Aware Fusion Network for 3D Semantic Segmentation. IROS 2021
PointMRNet | 0.640 50 | 0.717 62 | 0.701 61 | 0.692 58 | 0.576 60 | 0.801 52 | 0.467 37 | 0.716 52 | 0.563 52 | 0.459 69 | 0.953 25 | 0.429 58 | 0.169 77 | 0.581 45 | 0.854 21 | 0.605 62 | 0.710 53 | 0.550 62 | 0.894 35 | 0.793 54 | 0.575 60 | |
PD-Net | 0.638 52 | 0.797 30 | 0.769 33 | 0.641 76 | 0.590 54 | 0.820 37 | 0.461 39 | 0.537 81 | 0.637 19 | 0.536 47 | 0.947 41 | 0.388 72 | 0.206 61 | 0.656 25 | 0.668 58 | 0.647 51 | 0.732 48 | 0.585 49 | 0.868 56 | 0.793 54 | 0.473 85 | |
MVPNet | ![]() | 0.641 48 | 0.831 21 | 0.715 55 | 0.671 65 | 0.590 54 | 0.781 62 | 0.394 69 | 0.679 61 | 0.642 17 | 0.553 41 | 0.937 64 | 0.462 43 | 0.256 44 | 0.649 27 | 0.406 80 | 0.626 57 | 0.691 63 | 0.666 16 | 0.877 45 | 0.792 57 | 0.608 48 |
Maximilian Jaritz, Jiayuan Gu, Hao Su: Multi-view PointNet for 3D Scene Understanding. GMDL Workshop, ICCV 2019
Supervoxel-CNN | 0.635 55 | 0.656 71 | 0.711 56 | 0.719 50 | 0.613 47 | 0.757 73 | 0.444 52 | 0.765 38 | 0.534 57 | 0.566 38 | 0.928 75 | 0.478 37 | 0.272 34 | 0.636 30 | 0.531 69 | 0.664 42 | 0.645 76 | 0.508 74 | 0.864 58 | 0.792 57 | 0.611 45 | |
RandLA-Net | ![]() | 0.645 47 | 0.778 35 | 0.731 53 | 0.699 55 | 0.577 59 | 0.829 26 | 0.446 47 | 0.736 47 | 0.477 72 | 0.523 54 | 0.945 48 | 0.454 47 | 0.269 38 | 0.484 72 | 0.749 44 | 0.618 59 | 0.738 44 | 0.599 41 | 0.827 67 | 0.792 57 | 0.621 43 |
PointMTL | 0.632 57 | 0.731 56 | 0.688 71 | 0.675 62 | 0.591 53 | 0.784 61 | 0.444 52 | 0.565 78 | 0.610 28 | 0.492 61 | 0.949 37 | 0.456 46 | 0.254 45 | 0.587 42 | 0.706 51 | 0.599 65 | 0.665 72 | 0.612 37 | 0.868 56 | 0.791 60 | 0.579 59 | |
PointConv-SFPN | 0.641 48 | 0.776 37 | 0.703 59 | 0.721 49 | 0.557 66 | 0.826 29 | 0.451 42 | 0.672 63 | 0.563 52 | 0.483 63 | 0.943 56 | 0.425 62 | 0.162 80 | 0.644 29 | 0.726 46 | 0.659 45 | 0.709 55 | 0.572 51 | 0.875 47 | 0.786 61 | 0.559 67 | |
SIConv | 0.625 64 | 0.830 22 | 0.694 66 | 0.757 43 | 0.563 64 | 0.772 68 | 0.448 46 | 0.647 67 | 0.520 60 | 0.509 56 | 0.949 37 | 0.431 57 | 0.191 69 | 0.496 69 | 0.614 63 | 0.647 51 | 0.672 70 | 0.535 70 | 0.876 46 | 0.783 62 | 0.571 61 | |
DVVNet | 0.562 76 | 0.648 72 | 0.700 62 | 0.770 38 | 0.586 57 | 0.687 82 | 0.333 81 | 0.650 65 | 0.514 63 | 0.475 66 | 0.906 85 | 0.359 76 | 0.223 55 | 0.340 83 | 0.442 78 | 0.422 87 | 0.668 71 | 0.501 75 | 0.708 82 | 0.779 63 | 0.534 74 | |
ROSMRF | 0.580 72 | 0.772 38 | 0.707 58 | 0.681 61 | 0.563 64 | 0.764 70 | 0.362 77 | 0.515 83 | 0.465 76 | 0.465 68 | 0.936 66 | 0.427 61 | 0.207 60 | 0.438 75 | 0.577 65 | 0.536 76 | 0.675 69 | 0.486 79 | 0.723 81 | 0.779 63 | 0.524 76 | |
PointSPNet | 0.637 53 | 0.734 55 | 0.692 68 | 0.714 52 | 0.576 60 | 0.797 55 | 0.446 47 | 0.743 45 | 0.598 37 | 0.437 74 | 0.942 57 | 0.403 68 | 0.150 84 | 0.626 35 | 0.800 36 | 0.649 48 | 0.697 60 | 0.557 59 | 0.846 64 | 0.777 65 | 0.563 65 | |
HPGCNN | 0.656 45 | 0.698 66 | 0.743 48 | 0.650 71 | 0.564 63 | 0.820 37 | 0.505 21 | 0.758 39 | 0.631 21 | 0.479 64 | 0.945 48 | 0.480 36 | 0.226 53 | 0.572 47 | 0.774 39 | 0.690 34 | 0.735 46 | 0.614 34 | 0.853 62 | 0.776 66 | 0.597 55 | |
Jisheng Dang, Qingyong Hu, Yulan Guo, Jun Yang: HPGCNN.
SPH3D-GCN | ![]() | 0.610 66 | 0.858 16 | 0.772 31 | 0.489 88 | 0.532 69 | 0.792 59 | 0.404 66 | 0.643 68 | 0.570 49 | 0.507 59 | 0.935 67 | 0.414 66 | 0.046 94 | 0.510 65 | 0.702 52 | 0.602 64 | 0.705 58 | 0.549 63 | 0.859 60 | 0.773 67 | 0.534 74 |
Huan Lei, Naveed Akhtar, and Ajmal Mian: Spherical Kernel for Efficient Graph Convolution on 3D Point Clouds. TPAMI 2020
PointNet2-SFPN | 0.631 58 | 0.771 39 | 0.692 68 | 0.672 63 | 0.524 70 | 0.837 20 | 0.440 54 | 0.706 57 | 0.538 56 | 0.446 71 | 0.944 54 | 0.421 64 | 0.219 56 | 0.552 55 | 0.751 43 | 0.591 68 | 0.737 45 | 0.543 66 | 0.901 30 | 0.768 68 | 0.557 68 | |
AttAN | 0.609 67 | 0.760 44 | 0.667 75 | 0.649 72 | 0.521 71 | 0.793 57 | 0.457 41 | 0.648 66 | 0.528 59 | 0.434 76 | 0.947 41 | 0.401 69 | 0.153 83 | 0.454 74 | 0.721 49 | 0.648 50 | 0.717 52 | 0.536 69 | 0.904 26 | 0.765 69 | 0.485 81 | |
Gege Zhang, Qinghua Ma, Licheng Jiao, Fang Liu, Qigong Sun: AttAN: Attention Adversarial Networks for 3D Point Cloud Semantic Segmentation. IJCAI 2020
CCRFNet | 0.589 71 | 0.766 43 | 0.659 78 | 0.683 60 | 0.470 79 | 0.740 77 | 0.387 73 | 0.620 71 | 0.490 68 | 0.476 65 | 0.922 79 | 0.355 78 | 0.245 50 | 0.511 64 | 0.511 72 | 0.571 73 | 0.643 77 | 0.493 78 | 0.872 52 | 0.762 70 | 0.600 53 | |
wsss-transformer | 0.600 68 | 0.634 74 | 0.743 48 | 0.697 57 | 0.601 51 | 0.781 62 | 0.437 56 | 0.585 75 | 0.493 67 | 0.446 71 | 0.933 72 | 0.394 70 | 0.011 96 | 0.654 26 | 0.661 61 | 0.603 63 | 0.733 47 | 0.526 71 | 0.832 66 | 0.761 71 | 0.480 82 | |
Pointnet++ & Feature | ![]() | 0.557 77 | 0.735 54 | 0.661 77 | 0.686 59 | 0.491 75 | 0.744 76 | 0.392 70 | 0.539 80 | 0.451 78 | 0.375 82 | 0.946 45 | 0.376 74 | 0.205 62 | 0.403 80 | 0.356 83 | 0.553 75 | 0.643 77 | 0.497 76 | 0.824 69 | 0.756 72 | 0.515 77 |
DPC | 0.592 70 | 0.720 60 | 0.700 62 | 0.602 81 | 0.480 76 | 0.762 72 | 0.380 75 | 0.713 55 | 0.585 43 | 0.437 74 | 0.940 61 | 0.369 75 | 0.288 25 | 0.434 77 | 0.509 73 | 0.590 70 | 0.639 79 | 0.567 55 | 0.772 75 | 0.755 73 | 0.592 57 | |
Francis Engelmann, Theodora Kontogianni, Bastian Leibe: Dilated Point Convolutions: On the Receptive Field Size of Point Convolutions on 3D Point Clouds. ICRA 2020
PCNN | 0.498 83 | 0.559 83 | 0.644 81 | 0.560 85 | 0.420 84 | 0.711 80 | 0.229 92 | 0.414 85 | 0.436 81 | 0.352 85 | 0.941 59 | 0.324 82 | 0.155 82 | 0.238 90 | 0.387 82 | 0.493 79 | 0.529 90 | 0.509 72 | 0.813 71 | 0.751 74 | 0.504 79 | |
LAP-D | 0.594 69 | 0.720 60 | 0.692 68 | 0.637 77 | 0.456 80 | 0.773 67 | 0.391 72 | 0.730 49 | 0.587 40 | 0.445 73 | 0.940 61 | 0.381 73 | 0.288 25 | 0.434 77 | 0.453 77 | 0.591 68 | 0.649 74 | 0.581 50 | 0.777 74 | 0.749 75 | 0.610 47 | |
3DSM_DMMF | 0.631 58 | 0.626 75 | 0.745 46 | 0.801 29 | 0.607 48 | 0.751 74 | 0.506 20 | 0.729 50 | 0.565 50 | 0.491 62 | 0.866 91 | 0.434 54 | 0.197 68 | 0.595 40 | 0.630 62 | 0.709 27 | 0.705 58 | 0.560 56 | 0.875 47 | 0.740 76 | 0.491 80 | |
GMLPs | 0.538 78 | 0.495 88 | 0.693 67 | 0.647 73 | 0.471 78 | 0.793 57 | 0.300 84 | 0.477 84 | 0.505 64 | 0.358 83 | 0.903 87 | 0.327 81 | 0.081 91 | 0.472 73 | 0.529 70 | 0.448 85 | 0.710 53 | 0.509 72 | 0.746 77 | 0.737 77 | 0.554 70 | |
SD-DETR | 0.576 73 | 0.746 49 | 0.609 87 | 0.445 92 | 0.517 72 | 0.643 88 | 0.366 76 | 0.714 54 | 0.456 77 | 0.468 67 | 0.870 90 | 0.432 55 | 0.264 41 | 0.558 53 | 0.674 56 | 0.586 71 | 0.688 64 | 0.482 80 | 0.739 79 | 0.733 78 | 0.537 73 | |
3DMV, FTSDF | 0.501 82 | 0.558 84 | 0.608 88 | 0.424 94 | 0.478 77 | 0.690 81 | 0.246 90 | 0.586 74 | 0.468 74 | 0.450 70 | 0.911 83 | 0.394 70 | 0.160 81 | 0.438 75 | 0.212 90 | 0.432 86 | 0.541 89 | 0.475 81 | 0.742 78 | 0.727 79 | 0.477 83 | |
DGCNN_reproduce | ![]() | 0.446 87 | 0.474 91 | 0.623 84 | 0.463 90 | 0.366 88 | 0.651 86 | 0.310 82 | 0.389 88 | 0.349 90 | 0.330 86 | 0.937 64 | 0.271 88 | 0.126 87 | 0.285 86 | 0.224 89 | 0.350 92 | 0.577 83 | 0.445 87 | 0.625 88 | 0.723 80 | 0.394 88 |
Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma, Michael M. Bronstein, Justin M. Solomon: Dynamic Graph CNN for Learning on Point Clouds. TOG 2019
subcloud_weak | 0.516 80 | 0.676 68 | 0.591 90 | 0.609 78 | 0.442 81 | 0.774 66 | 0.335 80 | 0.597 72 | 0.422 84 | 0.357 84 | 0.932 73 | 0.341 80 | 0.094 90 | 0.298 85 | 0.528 71 | 0.473 83 | 0.676 68 | 0.495 77 | 0.602 90 | 0.721 81 | 0.349 92 | |
PointCNN with RGB | ![]() | 0.458 85 | 0.577 82 | 0.611 86 | 0.356 96 | 0.321 93 | 0.715 79 | 0.299 86 | 0.376 89 | 0.328 92 | 0.319 87 | 0.944 54 | 0.285 86 | 0.164 79 | 0.216 93 | 0.229 88 | 0.484 81 | 0.545 88 | 0.456 84 | 0.755 76 | 0.709 82 | 0.475 84 |
Yangyan Li, Rui Bu, Mingchao Sun, Baoquan Chen: PointCNN. NeurIPS 2018
Online SegFusion | 0.515 81 | 0.607 79 | 0.644 81 | 0.579 83 | 0.434 82 | 0.630 90 | 0.353 78 | 0.628 70 | 0.440 80 | 0.410 77 | 0.762 95 | 0.307 83 | 0.167 78 | 0.520 62 | 0.403 81 | 0.516 77 | 0.565 84 | 0.447 86 | 0.678 85 | 0.701 83 | 0.514 78 | |
Davide Menini, Suryansh Kumar, Martin R. Oswald, Erik Sandstroem, Cristian Sminchisescu, Luc van Gool: A Real-Time Learning Framework for Joint 3D Reconstruction and Semantic Segmentation. Robotics and Automation Letters submission
SPLAT Net | ![]() | 0.393 93 | 0.472 92 | 0.511 93 | 0.606 79 | 0.311 94 | 0.656 84 | 0.245 91 | 0.405 86 | 0.328 92 | 0.197 95 | 0.927 76 | 0.227 93 | 0.000 98 | 0.001 97 | 0.249 87 | 0.271 95 | 0.510 91 | 0.383 94 | 0.593 91 | 0.699 84 | 0.267 94 |
Hang Su, Varun Jampani, Deqing Sun, Subhransu Maji, Evangelos Kalogerakis, Ming-Hsuan Yang, Jan Kautz: SPLATNet: Sparse Lattice Networks for Point Cloud Processing. CVPR 2018
SSC-UNet | ![]() | 0.308 96 | 0.353 94 | 0.290 97 | 0.278 97 | 0.166 97 | 0.553 94 | 0.169 96 | 0.286 91 | 0.147 96 | 0.148 97 | 0.908 84 | 0.182 95 | 0.064 93 | 0.023 96 | 0.018 97 | 0.354 91 | 0.363 95 | 0.345 95 | 0.546 94 | 0.685 85 | 0.278 93 |
TextureNet | ![]() | 0.566 75 | 0.672 70 | 0.664 76 | 0.671 65 | 0.494 74 | 0.719 78 | 0.445 49 | 0.678 62 | 0.411 85 | 0.396 79 | 0.935 67 | 0.356 77 | 0.225 54 | 0.412 79 | 0.535 68 | 0.565 74 | 0.636 80 | 0.464 82 | 0.794 73 | 0.680 86 | 0.568 63 |
Jingwei Huang, Haotian Zhang, Li Yi, Thomas Funkhouser, Matthias Niessner, Leonidas Guibas: TextureNet: Consistent Local Parametrizations for Learning from High-Resolution Signals on Meshes. CVPR 2019
SurfaceConvPF | 0.442 88 | 0.505 87 | 0.622 85 | 0.380 95 | 0.342 91 | 0.654 85 | 0.227 93 | 0.397 87 | 0.367 88 | 0.276 91 | 0.924 77 | 0.240 91 | 0.198 67 | 0.359 82 | 0.262 86 | 0.366 89 | 0.581 82 | 0.435 89 | 0.640 87 | 0.668 87 | 0.398 87 | |
Hao Pan, Shilin Liu, Yang Liu, Xin Tong: Convolutional Neural Networks on 3D Surfaces Using Parallel Frames.
FCPN | ![]() | 0.447 86 | 0.679 67 | 0.604 89 | 0.578 84 | 0.380 86 | 0.682 83 | 0.291 87 | 0.106 95 | 0.483 71 | 0.258 94 | 0.920 80 | 0.258 90 | 0.025 95 | 0.231 92 | 0.325 84 | 0.480 82 | 0.560 86 | 0.463 83 | 0.725 80 | 0.666 88 | 0.231 96 |
Dario Rethage, Johanna Wald, Jürgen Sturm, Nassir Navab, Federico Tombari: Fully-Convolutional Point Networks for Large-Scale Point Clouds. ECCV 2018
SQN_0.1% | 0.569 74 | 0.676 68 | 0.696 65 | 0.657 69 | 0.497 73 | 0.779 65 | 0.424 58 | 0.548 79 | 0.515 62 | 0.376 81 | 0.902 88 | 0.422 63 | 0.357 4 | 0.379 81 | 0.456 76 | 0.596 67 | 0.659 73 | 0.544 64 | 0.685 84 | 0.665 89 | 0.556 69 | |
Tangent Convolutions | ![]() | 0.438 90 | 0.437 93 | 0.646 80 | 0.474 89 | 0.369 87 | 0.645 87 | 0.353 78 | 0.258 92 | 0.282 94 | 0.279 90 | 0.918 82 | 0.298 85 | 0.147 86 | 0.283 87 | 0.294 85 | 0.487 80 | 0.562 85 | 0.427 90 | 0.619 89 | 0.633 90 | 0.352 91 |
Maxim Tatarchenko, Jaesik Park, Vladlen Koltun, Qian-Yi Zhou: Tangent Convolutions for Dense Prediction in 3D. CVPR 2018
PNET2 | 0.442 88 | 0.548 85 | 0.548 91 | 0.597 82 | 0.363 89 | 0.628 91 | 0.300 84 | 0.292 90 | 0.374 87 | 0.307 88 | 0.881 89 | 0.268 89 | 0.186 71 | 0.238 90 | 0.204 92 | 0.407 88 | 0.506 94 | 0.449 85 | 0.667 86 | 0.620 91 | 0.462 86 | |
3DWSSS | 0.425 91 | 0.525 86 | 0.647 79 | 0.522 86 | 0.324 92 | 0.488 96 | 0.077 97 | 0.712 56 | 0.353 89 | 0.401 78 | 0.636 97 | 0.281 87 | 0.176 74 | 0.340 83 | 0.565 66 | 0.175 96 | 0.551 87 | 0.398 92 | 0.370 96 | 0.602 92 | 0.361 90 | |
PanopticFusion-label | 0.529 79 | 0.491 89 | 0.688 71 | 0.604 80 | 0.386 85 | 0.632 89 | 0.225 94 | 0.705 58 | 0.434 82 | 0.293 89 | 0.815 92 | 0.348 79 | 0.241 51 | 0.499 68 | 0.669 57 | 0.507 78 | 0.649 74 | 0.442 88 | 0.796 72 | 0.602 92 | 0.561 66 | |
Gaku Narita, Takashi Seno, Tomoya Ishikawa, Yohsuke Kaji: PanopticFusion: Online Volumetric Semantic Mapping at the Level of Stuff and Things. IROS 2019
3DMV | 0.484 84 | 0.484 90 | 0.538 92 | 0.643 75 | 0.424 83 | 0.606 93 | 0.310 82 | 0.574 76 | 0.433 83 | 0.378 80 | 0.796 93 | 0.301 84 | 0.214 59 | 0.537 60 | 0.208 91 | 0.472 84 | 0.507 93 | 0.413 91 | 0.693 83 | 0.602 92 | 0.539 72 | |
Angela Dai, Matthias Niessner: 3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation. ECCV 2018
ScanNet+FTSDF | 0.383 94 | 0.297 95 | 0.491 94 | 0.432 93 | 0.358 90 | 0.612 92 | 0.274 88 | 0.116 94 | 0.411 85 | 0.265 92 | 0.904 86 | 0.229 92 | 0.079 92 | 0.250 88 | 0.185 93 | 0.320 93 | 0.510 91 | 0.385 93 | 0.548 92 | 0.597 95 | 0.394 88 | |
PointNet++ | ![]() | 0.339 95 | 0.584 81 | 0.478 95 | 0.458 91 | 0.256 96 | 0.360 97 | 0.250 89 | 0.247 93 | 0.278 95 | 0.261 93 | 0.677 96 | 0.183 94 | 0.117 88 | 0.212 94 | 0.145 95 | 0.364 90 | 0.346 97 | 0.232 97 | 0.548 92 | 0.523 96 | 0.252 95 |
Charles R. Qi, Li Yi, Hao Su, Leonidas J. Guibas: PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. NeurIPS 2017
ScanNet | ![]() | 0.306 97 | 0.203 96 | 0.366 96 | 0.501 87 | 0.311 94 | 0.524 95 | 0.211 95 | 0.002 97 | 0.342 91 | 0.189 96 | 0.786 94 | 0.145 96 | 0.102 89 | 0.245 89 | 0.152 94 | 0.318 94 | 0.348 96 | 0.300 96 | 0.460 95 | 0.437 97 | 0.182 97 |
Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, Matthias Nießner: ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes. CVPR 2017
ERROR | 0.054 98 | 0.000 97 | 0.041 98 | 0.172 98 | 0.030 98 | 0.062 98 | 0.001 98 | 0.035 96 | 0.004 97 | 0.051 98 | 0.143 98 | 0.019 97 | 0.003 97 | 0.041 95 | 0.050 96 | 0.003 97 | 0.054 98 | 0.018 98 | 0.005 97 | 0.264 98 | 0.082 98 | |