3D Semantic Label Benchmark
This table lists the benchmark results for the 3D semantic label scenario. Each cell gives the per-class IoU score, followed by the method's rank for that column.
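The avg IoU column is the mean of the 20 per-class IoU scores. As a rough, unofficial sketch of how per-class IoU and its mean can be computed from point-wise labels (the function name, signature, and ignore-label convention below are illustrative, not the benchmark's evaluation code):

```python
import numpy as np

def mean_iou(pred, gt, num_classes=20, ignore_label=-1):
    """Per-class IoU and the mean over classes that occur in pred or gt.

    pred, gt: integer label arrays of equal shape (one label per point).
    Points whose ground-truth label equals `ignore_label` are excluded;
    classes absent from both pred and gt are skipped in the average.
    """
    valid = gt != ignore_label
    pred, gt = pred[valid], gt[valid]
    ious = {}
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))  # points labeled c in both
        union = np.sum((pred == c) | (gt == c))  # points labeled c in either
        if union > 0:
            ious[c] = inter / union
    return ious, float(np.mean(list(ious.values())))
```

For example, `mean_iou(np.array([0, 0, 1, 1]), np.array([0, 1, 1, 1]))` yields IoUs of 0.5 for class 0 and 2/3 for class 1, so the mean IoU is 7/12.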
| Method | avg IoU | bathtub | bed | bookshelf | cabinet | chair | counter | curtain | desk | door | floor | otherfurniture | picture | refrigerator | shower curtain | sink | sofa | table | toilet | wall | window |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Mix3D | 0.781 (1) | 0.964 (1) | 0.855 (1) | 0.843 (8) | 0.781 (1) | 0.858 (6) | 0.575 (2) | 0.831 (11) | 0.685 (4) | 0.714 (1) | 0.979 (1) | 0.594 (3) | 0.310 (13) | 0.801 (1) | 0.892 (4) | 0.841 (2) | 0.819 (3) | 0.723 (2) | 0.940 (4) | 0.887 (1) | 0.725 (8) |
| *Alexey Nekrasov, Jonas Schult, Or Litany, Bastian Leibe, Francis Engelmann: Mix3D: Out-of-Context Data Augmentation for 3D Scenes. 3DV 2021 (Oral)* |
| EQ-Net | 0.743 (8) | 0.620 (67) | 0.799 (11) | 0.849 (3) | 0.730 (7) | 0.822 (22) | 0.493 (18) | 0.897 (2) | 0.664 (7) | 0.681 (2) | 0.955 (11) | 0.562 (9) | 0.378 (1) | 0.760 (6) | 0.903 (1) | 0.738 (8) | 0.801 (9) | 0.673 (8) | 0.907 (14) | 0.877 (3) | 0.745 (1) |
| *Zetong Yang\*, Li Jiang\*, Yanan Sun, Bernt Schiele, Jiaya Jia: A Unified Query-based Paradigm for Point Cloud Understanding. CVPR 2022* |
| OccuSeg+Semantic | 0.764 (2) | 0.758 (38) | 0.796 (12) | 0.839 (9) | 0.746 (5) | 0.907 (1) | 0.562 (3) | 0.850 (6) | 0.680 (5) | 0.672 (3) | 0.978 (2) | 0.610 (1) | 0.335 (7) | 0.777 (2) | 0.819 (22) | 0.847 (1) | 0.830 (1) | 0.691 (6) | 0.972 (1) | 0.885 (2) | 0.727 (6) |
| Virtual MVFusion | 0.746 (6) | 0.771 (32) | 0.819 (6) | 0.848 (4) | 0.702 (15) | 0.865 (5) | 0.397 (58) | 0.899 (1) | 0.699 (2) | 0.664 (4) | 0.948 (29) | 0.588 (5) | 0.330 (8) | 0.746 (9) | 0.851 (15) | 0.764 (4) | 0.796 (11) | 0.704 (4) | 0.935 (7) | 0.866 (6) | 0.728 (4) |
| *Abhijit Kundu, Xiaoqi Yin, Alireza Fathi, David Ross, Brian Brewington, Thomas Funkhouser, Caroline Pantofaru: Virtual Multi-view Fusion for 3D Semantic Segmentation. ECCV 2020* |
| VMNet | 0.746 (6) | 0.870 (10) | 0.838 (2) | 0.858 (2) | 0.729 (8) | 0.850 (8) | 0.501 (13) | 0.874 (3) | 0.587 (30) | 0.658 (5) | 0.956 (8) | 0.564 (8) | 0.299 (16) | 0.765 (4) | 0.900 (2) | 0.716 (16) | 0.812 (6) | 0.631 (18) | 0.939 (5) | 0.858 (9) | 0.709 (10) |
| *Zeyu Hu, Xuyang Bai, Jiaxiang Shang, Runze Zhang, Jiayu Dong, Xin Wang, Guangyuan Sun, Hongbo Fu, Chiew-Lan Tai: VMNet: Voxel-Mesh Network for Geodesic-Aware 3D Semantic Segmentation. ICCV 2021 (Oral)* |
| IPCA | 0.731 (10) | 0.890 (5) | 0.837 (3) | 0.864 (1) | 0.726 (9) | 0.873 (2) | 0.530 (7) | 0.824 (13) | 0.489 (61) | 0.647 (6) | 0.978 (2) | 0.609 (2) | 0.336 (6) | 0.624 (25) | 0.733 (34) | 0.758 (5) | 0.776 (19) | 0.570 (41) | 0.949 (2) | 0.877 (3) | 0.728 (4) |
| StratifiedFormer | 0.747 (5) | 0.901 (4) | 0.803 (10) | 0.845 (6) | 0.757 (3) | 0.846 (9) | 0.512 (10) | 0.825 (12) | 0.696 (3) | 0.645 (7) | 0.956 (8) | 0.576 (6) | 0.262 (31) | 0.744 (10) | 0.861 (8) | 0.742 (7) | 0.770 (23) | 0.705 (3) | 0.899 (22) | 0.860 (8) | 0.734 (3) |
| *Xin Lai\*, Jianhui Liu\*, Li Jiang, Liwei Wang, Hengshuang Zhao, Shu Liu, Xiaojuan Qi, Jiaya Jia: Stratified Transformer for 3D Point Cloud Segmentation. CVPR 2022* |
| BPNet | 0.749 (4) | 0.909 (3) | 0.818 (7) | 0.811 (14) | 0.752 (4) | 0.839 (11) | 0.485 (20) | 0.842 (9) | 0.673 (6) | 0.644 (8) | 0.957 (7) | 0.528 (15) | 0.305 (15) | 0.773 (3) | 0.859 (9) | 0.788 (3) | 0.818 (4) | 0.693 (5) | 0.916 (11) | 0.856 (10) | 0.723 (9) |
| *Wenbo Hu, Hengshuang Zhao, Li Jiang, Jiaya Jia, Tien-Tsin Wong: Bidirectional Projection Network for Cross Dimension Scene Understanding. CVPR 2021 (Oral)* |
| MinkowskiNet | 0.736 (9) | 0.859 (12) | 0.818 (7) | 0.832 (10) | 0.709 (12) | 0.840 (10) | 0.521 (9) | 0.853 (5) | 0.660 (9) | 0.643 (9) | 0.951 (19) | 0.544 (10) | 0.286 (22) | 0.731 (11) | 0.893 (3) | 0.675 (28) | 0.772 (21) | 0.683 (7) | 0.874 (42) | 0.852 (12) | 0.727 (6) |
| *C. Choy, J. Gwak, S. Savarese: 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. CVPR 2019* |
| O-CNN | 0.762 (3) | 0.924 (2) | 0.823 (4) | 0.844 (7) | 0.770 (2) | 0.852 (7) | 0.577 (1) | 0.847 (7) | 0.711 (1) | 0.640 (10) | 0.958 (6) | 0.592 (4) | 0.217 (45) | 0.762 (5) | 0.888 (5) | 0.758 (5) | 0.813 (5) | 0.726 (1) | 0.932 (10) | 0.868 (5) | 0.744 (2) |
| *Peng-Shuai Wang, Yang Liu, Yu-Xiao Guo, Chun-Yu Sun, Xin Tong: O-CNN: Octree-based Convolutional Neural Networks for 3D Shape Analysis. SIGGRAPH 2017* |
| One-Thing-One-Click | 0.693 (19) | 0.743 (42) | 0.794 (14) | 0.655 (61) | 0.684 (18) | 0.822 (22) | 0.497 (16) | 0.719 (41) | 0.622 (17) | 0.617 (11) | 0.977 (4) | 0.447 (43) | 0.339 (5) | 0.750 (8) | 0.664 (49) | 0.703 (21) | 0.790 (15) | 0.596 (30) | 0.946 (3) | 0.855 (11) | 0.647 (25) |
| *Zhengzhe Liu, Xiaojuan Qi, Chi-Wing Fu: One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation. CVPR 2021* |
| PicassoNet-II | 0.696 (18) | 0.704 (53) | 0.790 (15) | 0.787 (23) | 0.709 (12) | 0.837 (12) | 0.459 (30) | 0.815 (16) | 0.543 (45) | 0.615 (12) | 0.956 (8) | 0.529 (13) | 0.250 (34) | 0.551 (47) | 0.790 (26) | 0.703 (21) | 0.799 (10) | 0.619 (22) | 0.908 (13) | 0.848 (14) | 0.700 (12) |
| *Huan Lei, Naveed Akhtar, Mubarak Shah, and Ajmal Mian: Geometric Feature Learning for 3D Meshes.* |
| SparseConvNet | 0.725 (11) | 0.647 (63) | 0.821 (5) | 0.846 (5) | 0.721 (10) | 0.869 (3) | 0.533 (6) | 0.754 (31) | 0.603 (26) | 0.614 (13) | 0.955 (11) | 0.572 (7) | 0.325 (10) | 0.710 (12) | 0.870 (6) | 0.724 (13) | 0.823 (2) | 0.628 (19) | 0.934 (8) | 0.865 (7) | 0.683 (15) |
| RFCR | 0.702 (15) | 0.889 (6) | 0.745 (35) | 0.813 (12) | 0.672 (20) | 0.818 (28) | 0.493 (18) | 0.815 (16) | 0.623 (16) | 0.610 (14) | 0.947 (32) | 0.470 (29) | 0.249 (36) | 0.594 (30) | 0.848 (16) | 0.705 (20) | 0.779 (18) | 0.646 (12) | 0.892 (28) | 0.823 (24) | 0.611 (35) |
| *Jingyu Gong, Jiachen Xu, Xin Tan, Haichuan Song, Yanyun Qu, Yuan Xie, Lizhuang Ma: Omni-Supervised Point Cloud Segmentation via Gradual Receptive Field Component Reasoning. CVPR 2021* |
| One Thing One Click | 0.701 (16) | 0.825 (19) | 0.796 (12) | 0.723 (37) | 0.716 (11) | 0.832 (15) | 0.433 (45) | 0.816 (14) | 0.634 (14) | 0.609 (15) | 0.969 (5) | 0.418 (55) | 0.344 (4) | 0.559 (44) | 0.833 (19) | 0.715 (17) | 0.808 (7) | 0.560 (45) | 0.902 (19) | 0.847 (15) | 0.680 (16) |
| INS-Conv-semantic | 0.717 (13) | 0.751 (41) | 0.759 (26) | 0.812 (13) | 0.704 (14) | 0.868 (4) | 0.537 (5) | 0.842 (9) | 0.609 (22) | 0.608 (16) | 0.953 (15) | 0.534 (11) | 0.293 (18) | 0.616 (26) | 0.864 (7) | 0.719 (15) | 0.793 (13) | 0.640 (13) | 0.933 (9) | 0.845 (17) | 0.663 (19) |
| JSENet | 0.699 (17) | 0.881 (9) | 0.762 (24) | 0.821 (11) | 0.667 (21) | 0.800 (42) | 0.522 (8) | 0.792 (23) | 0.613 (18) | 0.607 (17) | 0.935 (57) | 0.492 (22) | 0.205 (50) | 0.576 (36) | 0.853 (12) | 0.691 (23) | 0.758 (29) | 0.652 (11) | 0.872 (45) | 0.828 (21) | 0.649 (24) |
| *Zeyu Hu, Mingmin Zhen, Xuyang Bai, Hongbo Fu, Chiew-Lan Tai: JSENet: Joint Semantic Segmentation and Edge Detection Network for 3D Point Clouds. ECCV 2020* |
| Feature-Geometry Net | 0.690 (21) | 0.884 (7) | 0.754 (30) | 0.795 (21) | 0.647 (26) | 0.818 (28) | 0.422 (47) | 0.802 (21) | 0.612 (19) | 0.604 (18) | 0.945 (38) | 0.462 (32) | 0.189 (57) | 0.563 (42) | 0.853 (12) | 0.726 (10) | 0.765 (24) | 0.632 (16) | 0.904 (16) | 0.821 (26) | 0.606 (39) |
| *Kangcheng Liu, Ben M. Chen: https://arxiv.org/abs/2012.09439. arXiv preprint* |
| Feature_GeometricNet | 0.690 (21) | 0.884 (7) | 0.754 (30) | 0.795 (21) | 0.647 (26) | 0.818 (28) | 0.422 (47) | 0.802 (21) | 0.612 (19) | 0.604 (18) | 0.945 (38) | 0.462 (32) | 0.189 (57) | 0.563 (42) | 0.853 (12) | 0.726 (10) | 0.765 (24) | 0.632 (16) | 0.904 (16) | 0.821 (26) | 0.606 (39) |
| *Kangcheng Liu, Ben M. Chen: https://arxiv.org/abs/2012.09439. arXiv preprint* |
| MatchingNet | 0.724 (12) | 0.812 (22) | 0.812 (9) | 0.810 (15) | 0.735 (6) | 0.834 (14) | 0.495 (17) | 0.860 (4) | 0.572 (35) | 0.602 (20) | 0.954 (13) | 0.512 (18) | 0.280 (23) | 0.757 (7) | 0.845 (18) | 0.725 (12) | 0.780 (17) | 0.606 (27) | 0.937 (6) | 0.851 (13) | 0.700 (12) |
| Superpoint Network | 0.683 (26) | 0.851 (14) | 0.728 (44) | 0.800 (20) | 0.653 (24) | 0.806 (38) | 0.468 (25) | 0.804 (19) | 0.572 (35) | 0.602 (20) | 0.946 (35) | 0.453 (39) | 0.239 (39) | 0.519 (53) | 0.822 (20) | 0.689 (26) | 0.762 (27) | 0.595 (32) | 0.895 (25) | 0.827 (22) | 0.630 (31) |
| KP-FCNN | 0.684 (24) | 0.847 (15) | 0.758 (28) | 0.784 (25) | 0.647 (26) | 0.814 (32) | 0.473 (22) | 0.772 (26) | 0.605 (24) | 0.594 (22) | 0.935 (57) | 0.450 (41) | 0.181 (61) | 0.587 (31) | 0.805 (24) | 0.690 (24) | 0.785 (16) | 0.614 (23) | 0.882 (34) | 0.819 (28) | 0.632 (30) |
| *H. Thomas, C. Qi, J. Deschaud, B. Marcotegui, F. Goulette, L. Guibas: KPConv: Flexible and Deformable Convolution for Point Clouds. ICCV 2019* |
| CU-Hybrid Net | 0.693 (19) | 0.596 (71) | 0.789 (16) | 0.803 (18) | 0.677 (19) | 0.800 (42) | 0.469 (24) | 0.846 (8) | 0.554 (43) | 0.591 (23) | 0.948 (29) | 0.500 (20) | 0.316 (12) | 0.609 (27) | 0.847 (17) | 0.732 (9) | 0.808 (7) | 0.593 (33) | 0.894 (26) | 0.839 (18) | 0.652 (22) |
| contrastBoundary | 0.705 (14) | 0.769 (35) | 0.775 (20) | 0.809 (16) | 0.687 (17) | 0.820 (25) | 0.439 (43) | 0.812 (18) | 0.661 (8) | 0.591 (23) | 0.945 (38) | 0.515 (17) | 0.171 (63) | 0.633 (22) | 0.856 (10) | 0.720 (14) | 0.796 (11) | 0.668 (9) | 0.889 (30) | 0.847 (15) | 0.689 (14) |
| *Liyao Tang, Yibing Zhan, Zhe Chen, Baosheng Yu, Dacheng Tao: Contrastive Boundary Learning for Point Cloud Segmentation. CVPR 2022* |
| VACNN++ | 0.684 (24) | 0.728 (48) | 0.757 (29) | 0.776 (27) | 0.690 (16) | 0.804 (40) | 0.464 (28) | 0.816 (14) | 0.577 (34) | 0.587 (25) | 0.945 (38) | 0.508 (19) | 0.276 (25) | 0.671 (13) | 0.710 (39) | 0.663 (33) | 0.750 (32) | 0.589 (35) | 0.881 (36) | 0.832 (20) | 0.653 (21) |
| ROSMRF3D | 0.673 (29) | 0.789 (25) | 0.748 (33) | 0.763 (32) | 0.635 (32) | 0.814 (32) | 0.407 (54) | 0.747 (33) | 0.581 (33) | 0.573 (26) | 0.950 (23) | 0.484 (23) | 0.271 (27) | 0.607 (28) | 0.754 (30) | 0.649 (39) | 0.774 (20) | 0.596 (30) | 0.883 (33) | 0.823 (24) | 0.606 (39) |
| Supervoxel-CNN | 0.635 (45) | 0.656 (61) | 0.711 (46) | 0.719 (39) | 0.613 (36) | 0.757 (64) | 0.444 (40) | 0.765 (28) | 0.534 (47) | 0.566 (27) | 0.928 (65) | 0.478 (26) | 0.272 (26) | 0.636 (19) | 0.531 (59) | 0.664 (32) | 0.645 (65) | 0.508 (64) | 0.864 (50) | 0.792 (47) | 0.611 (35) |
| joint point-based | 0.634 (46) | 0.614 (68) | 0.778 (19) | 0.667 (57) | 0.633 (33) | 0.825 (20) | 0.420 (49) | 0.804 (19) | 0.467 (66) | 0.561 (28) | 0.951 (19) | 0.494 (21) | 0.291 (19) | 0.566 (40) | 0.458 (65) | 0.579 (62) | 0.764 (26) | 0.559 (47) | 0.838 (56) | 0.814 (29) | 0.598 (44) |
| *Hung-Yueh Chiang, Yen-Liang Lin, Yueh-Cheng Liu, Winston H. Hsu: A Unified Point-Based Framework for 3D Segmentation. 3DV 2019* |
| VI-PointConv | 0.676 (28) | 0.770 (34) | 0.754 (30) | 0.783 (26) | 0.621 (34) | 0.814 (32) | 0.552 (4) | 0.758 (29) | 0.571 (37) | 0.557 (29) | 0.954 (13) | 0.529 (13) | 0.268 (30) | 0.530 (51) | 0.682 (44) | 0.675 (28) | 0.719 (40) | 0.603 (28) | 0.888 (31) | 0.833 (19) | 0.665 (18) |
| *Xingyi Li, Wenxuan Wu, Xiaoli Z. Fern, Li Fuxin: The Devils in the Point Clouds: Studying the Robustness of Point Cloud Convolutions.* |
| MVPNet | 0.641 (37) | 0.831 (16) | 0.715 (45) | 0.671 (55) | 0.590 (43) | 0.781 (52) | 0.394 (59) | 0.679 (51) | 0.642 (11) | 0.553 (30) | 0.937 (55) | 0.462 (32) | 0.256 (32) | 0.649 (17) | 0.406 (71) | 0.626 (48) | 0.691 (53) | 0.666 (10) | 0.877 (38) | 0.792 (47) | 0.608 (38) |
| *Maximilian Jaritz, Jiayuan Gu, Hao Su: Multi-view PointNet for 3D Scene Understanding. GMDL Workshop, ICCV 2019* |
| FusionNet | 0.688 (23) | 0.704 (53) | 0.741 (39) | 0.754 (34) | 0.656 (22) | 0.829 (17) | 0.501 (13) | 0.741 (36) | 0.609 (22) | 0.548 (31) | 0.950 (23) | 0.522 (16) | 0.371 (2) | 0.633 (22) | 0.756 (29) | 0.715 (17) | 0.771 (22) | 0.623 (20) | 0.861 (51) | 0.814 (29) | 0.658 (20) |
| *Feihu Zhang, Jin Fang, Benjamin Wah, Philip Torr: Deep FusionNet for Point Cloud Semantic Segmentation. ECCV 2020* |
| SALANet | 0.670 (30) | 0.816 (21) | 0.770 (22) | 0.768 (30) | 0.652 (25) | 0.807 (37) | 0.451 (32) | 0.747 (33) | 0.659 (10) | 0.545 (32) | 0.924 (67) | 0.473 (28) | 0.149 (73) | 0.571 (38) | 0.811 (23) | 0.635 (47) | 0.746 (33) | 0.623 (20) | 0.892 (28) | 0.794 (42) | 0.570 (52) |
| PointContrast_LA_SEM | 0.683 (26) | 0.757 (39) | 0.784 (17) | 0.786 (24) | 0.639 (30) | 0.824 (21) | 0.408 (52) | 0.775 (25) | 0.604 (25) | 0.541 (33) | 0.934 (61) | 0.532 (12) | 0.269 (28) | 0.552 (45) | 0.777 (27) | 0.645 (44) | 0.793 (13) | 0.640 (13) | 0.913 (12) | 0.824 (23) | 0.671 (17) |
| PointASNL | 0.666 (31) | 0.703 (55) | 0.781 (18) | 0.751 (36) | 0.655 (23) | 0.830 (16) | 0.471 (23) | 0.769 (27) | 0.474 (64) | 0.537 (34) | 0.951 (19) | 0.475 (27) | 0.279 (24) | 0.635 (20) | 0.698 (43) | 0.675 (28) | 0.751 (31) | 0.553 (50) | 0.816 (62) | 0.806 (33) | 0.703 (11) |
| *Xu Yan, Chaoda Zheng, Zhen Li, Sheng Wang, Shuguang Cui: PointASNL: Robust Point Clouds Processing using Nonlocal Neural Networks with Adaptive Sampling. CVPR 2020* |
| PD-Net | 0.638 (41) | 0.797 (24) | 0.769 (23) | 0.641 (67) | 0.590 (43) | 0.820 (25) | 0.461 (29) | 0.537 (72) | 0.637 (13) | 0.536 (35) | 0.947 (32) | 0.388 (62) | 0.206 (49) | 0.656 (15) | 0.668 (47) | 0.647 (42) | 0.732 (38) | 0.585 (37) | 0.868 (48) | 0.793 (44) | 0.473 (74) |
| SAFNet-seg | 0.654 (35) | 0.752 (40) | 0.734 (41) | 0.664 (58) | 0.583 (47) | 0.815 (31) | 0.399 (57) | 0.754 (31) | 0.639 (12) | 0.535 (36) | 0.942 (47) | 0.470 (29) | 0.309 (14) | 0.665 (14) | 0.539 (57) | 0.650 (38) | 0.708 (46) | 0.635 (15) | 0.857 (53) | 0.793 (44) | 0.642 (26) |
| *Linqing Zhao, Jiwen Lu, Jie Zhou: Similarity-Aware Fusion Network for 3D Semantic Segmentation. IROS 2021* |
| PPCNN++ | 0.636 (43) | 0.724 (49) | 0.697 (53) | 0.672 (52) | 0.636 (31) | 0.775 (56) | 0.403 (56) | 0.582 (67) | 0.588 (29) | 0.533 (37) | 0.949 (25) | 0.453 (39) | 0.218 (44) | 0.571 (38) | 0.676 (45) | 0.663 (33) | 0.635 (70) | 0.580 (39) | 0.906 (15) | 0.808 (32) | 0.650 (23) |
| *Pyunghwan Ahn, Juyoung Yang, Eojindl Yi, Chanho Lee, Junmo Kim: Projection-based Point Convolution for Efficient Point Cloud Segmentation. IEEE Access* |
| SConv | 0.636 (43) | 0.830 (17) | 0.697 (53) | 0.752 (35) | 0.572 (52) | 0.780 (54) | 0.445 (37) | 0.716 (42) | 0.529 (48) | 0.530 (38) | 0.951 (19) | 0.446 (44) | 0.170 (64) | 0.507 (57) | 0.666 (48) | 0.636 (46) | 0.682 (55) | 0.541 (57) | 0.886 (32) | 0.799 (36) | 0.594 (46) |
| FusionAwareConv | 0.630 (52) | 0.604 (70) | 0.741 (39) | 0.766 (31) | 0.590 (43) | 0.747 (66) | 0.501 (13) | 0.734 (38) | 0.503 (55) | 0.527 (39) | 0.919 (71) | 0.454 (37) | 0.323 (11) | 0.550 (48) | 0.420 (70) | 0.678 (27) | 0.688 (54) | 0.544 (53) | 0.896 (24) | 0.795 (40) | 0.627 (32) |
| *Jiazhao Zhang, Chenyang Zhu, Lintao Zheng, Kai Xu: Fusion-Aware Point Convolution for Online Semantic 3D Scene Segmentation. CVPR 2020* |
| HPEIN | 0.618 (56) | 0.729 (47) | 0.668 (64) | 0.647 (64) | 0.597 (41) | 0.766 (60) | 0.414 (50) | 0.680 (50) | 0.520 (50) | 0.525 (40) | 0.946 (35) | 0.432 (46) | 0.215 (46) | 0.493 (60) | 0.599 (54) | 0.638 (45) | 0.617 (71) | 0.570 (41) | 0.897 (23) | 0.806 (33) | 0.605 (42) |
| *Li Jiang, Hengshuang Zhao, Shu Liu, Xiaoyong Shen, Chi-Wing Fu, Jiaya Jia: Hierarchical Point-Edge Interaction Network for Point Cloud Semantic Segmentation. ICCV 2019* |
| DCM-Net | 0.658 (33) | 0.778 (28) | 0.702 (49) | 0.806 (17) | 0.619 (35) | 0.813 (35) | 0.468 (25) | 0.693 (48) | 0.494 (57) | 0.524 (41) | 0.941 (49) | 0.449 (42) | 0.298 (17) | 0.510 (55) | 0.821 (21) | 0.675 (28) | 0.727 (39) | 0.568 (43) | 0.826 (59) | 0.803 (35) | 0.637 (28) |
| *Jonas Schult\*, Francis Engelmann\*, Theodora Kontogianni, Bastian Leibe: DualConvMesh-Net: Joint Geodesic and Euclidean Convolutions on 3D Meshes. CVPR 2020 (Oral)* |
| FPConv | 0.639 (40) | 0.785 (26) | 0.760 (25) | 0.713 (42) | 0.603 (38) | 0.798 (45) | 0.392 (60) | 0.534 (73) | 0.603 (26) | 0.524 (41) | 0.948 (29) | 0.457 (35) | 0.250 (34) | 0.538 (49) | 0.723 (37) | 0.598 (57) | 0.696 (51) | 0.614 (23) | 0.872 (45) | 0.799 (36) | 0.567 (54) |
| *Yiqun Lin, Zizheng Yan, Haibin Huang, Dong Du, Ligang Liu, Shuguang Cui, Xiaoguang Han: FPConv: Learning Local Flattening for Point Convolution. CVPR 2020* |
| RandLA-Net | 0.645 (36) | 0.778 (28) | 0.731 (42) | 0.699 (44) | 0.577 (48) | 0.829 (17) | 0.446 (35) | 0.736 (37) | 0.477 (63) | 0.523 (43) | 0.945 (38) | 0.454 (37) | 0.269 (28) | 0.484 (62) | 0.749 (33) | 0.618 (51) | 0.738 (34) | 0.599 (29) | 0.827 (58) | 0.792 (47) | 0.621 (33) |
| DenSeR | 0.628 (53) | 0.800 (23) | 0.625 (74) | 0.719 (39) | 0.545 (58) | 0.806 (38) | 0.445 (37) | 0.597 (62) | 0.448 (70) | 0.519 (44) | 0.938 (54) | 0.481 (24) | 0.328 (9) | 0.489 (61) | 0.499 (64) | 0.657 (37) | 0.759 (28) | 0.592 (34) | 0.881 (36) | 0.797 (39) | 0.634 (29) |
| SIConv | 0.625 (55) | 0.830 (17) | 0.694 (56) | 0.757 (33) | 0.563 (54) | 0.772 (59) | 0.448 (34) | 0.647 (57) | 0.520 (50) | 0.509 (45) | 0.949 (25) | 0.431 (47) | 0.191 (56) | 0.496 (59) | 0.614 (53) | 0.647 (42) | 0.672 (59) | 0.535 (60) | 0.876 (39) | 0.783 (52) | 0.571 (51) |
| SegGroup_sem | 0.627 (54) | 0.818 (20) | 0.747 (34) | 0.701 (43) | 0.602 (39) | 0.764 (61) | 0.385 (65) | 0.629 (59) | 0.490 (59) | 0.508 (46) | 0.931 (64) | 0.409 (57) | 0.201 (53) | 0.564 (41) | 0.725 (36) | 0.618 (51) | 0.692 (52) | 0.539 (58) | 0.873 (43) | 0.794 (42) | 0.548 (61) |
| *An Tao, Yueqi Duan, Yi Wei, Jiwen Lu, Jie Zhou: SegGroup: Seg-Level Supervision for 3D Instance and Semantic Segmentation.* |
| APCF-Net | 0.631 (49) | 0.742 (43) | 0.687 (63) | 0.672 (52) | 0.557 (56) | 0.792 (49) | 0.408 (52) | 0.665 (54) | 0.545 (44) | 0.508 (46) | 0.952 (18) | 0.428 (49) | 0.186 (59) | 0.634 (21) | 0.702 (41) | 0.620 (49) | 0.706 (47) | 0.555 (49) | 0.873 (43) | 0.798 (38) | 0.581 (48) |
| *Haojia Lin: Adaptive Pyramid Context Fusion for Point Cloud Perception. GRSL* |
| SPH3D-GCN | 0.610 (57) | 0.858 (13) | 0.772 (21) | 0.489 (79) | 0.532 (59) | 0.792 (49) | 0.404 (55) | 0.643 (58) | 0.570 (38) | 0.507 (48) | 0.935 (57) | 0.414 (56) | 0.046 (83) | 0.510 (55) | 0.702 (41) | 0.602 (55) | 0.705 (48) | 0.549 (52) | 0.859 (52) | 0.773 (57) | 0.534 (63) |
| *Huan Lei, Naveed Akhtar, and Ajmal Mian: Spherical Kernel for Efficient Graph Convolution on 3D Point Clouds. TPAMI 2020* |
| PointConv | 0.666 (31) | 0.781 (27) | 0.759 (26) | 0.699 (44) | 0.644 (29) | 0.822 (22) | 0.475 (21) | 0.779 (24) | 0.564 (40) | 0.504 (49) | 0.953 (15) | 0.428 (49) | 0.203 (52) | 0.586 (33) | 0.754 (30) | 0.661 (35) | 0.753 (30) | 0.588 (36) | 0.902 (19) | 0.813 (31) | 0.642 (26) |
| *Wenxuan Wu, Zhongang Qi, Li Fuxin: PointConv: Deep Convolutional Networks on 3D Point Clouds. CVPR 2019* |
| PointMTL | 0.632 (48) | 0.731 (46) | 0.688 (61) | 0.675 (51) | 0.591 (42) | 0.784 (51) | 0.444 (40) | 0.565 (69) | 0.610 (21) | 0.492 (50) | 0.949 (25) | 0.456 (36) | 0.254 (33) | 0.587 (31) | 0.706 (40) | 0.599 (56) | 0.665 (61) | 0.612 (26) | 0.868 (48) | 0.791 (50) | 0.579 (49) |
| MCCNN | 0.633 (47) | 0.866 (11) | 0.731 (42) | 0.771 (28) | 0.576 (49) | 0.809 (36) | 0.410 (51) | 0.684 (49) | 0.497 (56) | 0.491 (51) | 0.949 (25) | 0.466 (31) | 0.105 (77) | 0.581 (34) | 0.646 (51) | 0.620 (49) | 0.680 (56) | 0.542 (56) | 0.817 (61) | 0.795 (40) | 0.618 (34) |
| *P. Hermosilla, T. Ritschel, P.P. Vazquez, A. Vinacua, T. Ropinski: Monte Carlo Convolution for Learning on Non-Uniformly Sampled Point Clouds. SIGGRAPH Asia 2018* |
| 3DSM_DMMF | 0.631 (49) | 0.626 (66) | 0.745 (35) | 0.801 (19) | 0.607 (37) | 0.751 (65) | 0.506 (11) | 0.729 (40) | 0.565 (39) | 0.491 (51) | 0.866 (80) | 0.434 (45) | 0.197 (55) | 0.595 (29) | 0.630 (52) | 0.709 (19) | 0.705 (48) | 0.560 (45) | 0.875 (40) | 0.740 (66) | 0.491 (69) |
| PointConv-SFPN | 0.641 (37) | 0.776 (30) | 0.703 (48) | 0.721 (38) | 0.557 (56) | 0.826 (19) | 0.451 (32) | 0.672 (53) | 0.563 (41) | 0.483 (53) | 0.943 (46) | 0.425 (52) | 0.162 (68) | 0.644 (18) | 0.726 (35) | 0.659 (36) | 0.709 (45) | 0.572 (40) | 0.875 (40) | 0.786 (51) | 0.559 (57) |
| HPGCNN | 0.656 (34) | 0.698 (56) | 0.743 (37) | 0.650 (62) | 0.564 (53) | 0.820 (25) | 0.505 (12) | 0.758 (29) | 0.631 (15) | 0.479 (54) | 0.945 (38) | 0.480 (25) | 0.226 (40) | 0.572 (37) | 0.774 (28) | 0.690 (24) | 0.735 (36) | 0.614 (23) | 0.853 (54) | 0.776 (56) | 0.597 (45) |
| *Jisheng Dang, Qingyong Hu, Yulan Guo, Jun Yang: HPGCNN.* |
| CCRFNet | 0.589 (62) | 0.766 (36) | 0.659 (68) | 0.683 (49) | 0.470 (68) | 0.740 (68) | 0.387 (64) | 0.620 (61) | 0.490 (59) | 0.476 (55) | 0.922 (69) | 0.355 (69) | 0.245 (37) | 0.511 (54) | 0.511 (62) | 0.571 (63) | 0.643 (66) | 0.493 (68) | 0.872 (45) | 0.762 (60) | 0.600 (43) |
| DVVNet | 0.562 (66) | 0.648 (62) | 0.700 (51) | 0.770 (29) | 0.586 (46) | 0.687 (73) | 0.333 (71) | 0.650 (55) | 0.514 (53) | 0.475 (56) | 0.906 (75) | 0.359 (67) | 0.223 (42) | 0.340 (74) | 0.442 (69) | 0.422 (78) | 0.668 (60) | 0.501 (65) | 0.708 (73) | 0.779 (53) | 0.534 (63) |
| ROSMRF | 0.580 (63) | 0.772 (31) | 0.707 (47) | 0.681 (50) | 0.563 (54) | 0.764 (61) | 0.362 (67) | 0.515 (74) | 0.465 (67) | 0.465 (57) | 0.936 (56) | 0.427 (51) | 0.207 (48) | 0.438 (65) | 0.577 (55) | 0.536 (67) | 0.675 (58) | 0.486 (69) | 0.723 (72) | 0.779 (53) | 0.524 (65) |
| PointMRNet | 0.640 (39) | 0.717 (52) | 0.701 (50) | 0.692 (47) | 0.576 (49) | 0.801 (41) | 0.467 (27) | 0.716 (42) | 0.563 (41) | 0.459 (58) | 0.953 (15) | 0.429 (48) | 0.169 (65) | 0.581 (34) | 0.854 (11) | 0.605 (53) | 0.710 (43) | 0.550 (51) | 0.894 (26) | 0.793 (44) | 0.575 (50) |
| 3DMV, FTSDF | 0.501 (73) | 0.558 (75) | 0.608 (77) | 0.424 (83) | 0.478 (66) | 0.690 (72) | 0.246 (79) | 0.586 (65) | 0.468 (65) | 0.450 (59) | 0.911 (73) | 0.394 (60) | 0.160 (69) | 0.438 (65) | 0.212 (80) | 0.432 (77) | 0.541 (78) | 0.475 (71) | 0.742 (70) | 0.727 (68) | 0.477 (72) |
| wsss-transformer | 0.600 (59) | 0.634 (64) | 0.743 (37) | 0.697 (46) | 0.601 (40) | 0.781 (52) | 0.437 (44) | 0.585 (66) | 0.493 (58) | 0.446 (60) | 0.933 (62) | 0.394 (60) | 0.011 (85) | 0.654 (16) | 0.661 (50) | 0.603 (54) | 0.733 (37) | 0.526 (61) | 0.832 (57) | 0.761 (61) | 0.480 (71) |
| PointNet2-SFPN | 0.631 (49) | 0.771 (32) | 0.692 (58) | 0.672 (52) | 0.524 (60) | 0.837 (12) | 0.440 (42) | 0.706 (46) | 0.538 (46) | 0.446 (60) | 0.944 (44) | 0.421 (54) | 0.219 (43) | 0.552 (45) | 0.751 (32) | 0.591 (59) | 0.737 (35) | 0.543 (55) | 0.901 (21) | 0.768 (58) | 0.557 (58) |
| LAP-D | 0.594 (60) | 0.720 (50) | 0.692 (58) | 0.637 (68) | 0.456 (69) | 0.773 (58) | 0.391 (62) | 0.730 (39) | 0.587 (30) | 0.445 (62) | 0.940 (51) | 0.381 (63) | 0.288 (20) | 0.434 (67) | 0.453 (67) | 0.591 (59) | 0.649 (63) | 0.581 (38) | 0.777 (66) | 0.749 (65) | 0.610 (37) |
| DPC | 0.592 (61) | 0.720 (50) | 0.700 (51) | 0.602 (72) | 0.480 (65) | 0.762 (63) | 0.380 (66) | 0.713 (44) | 0.585 (32) | 0.437 (63) | 0.940 (51) | 0.369 (65) | 0.288 (20) | 0.434 (67) | 0.509 (63) | 0.590 (61) | 0.639 (68) | 0.567 (44) | 0.772 (67) | 0.755 (63) | 0.592 (47) |
| *Francis Engelmann, Theodora Kontogianni, Bastian Leibe: Dilated Point Convolutions: On the Receptive Field Size of Point Convolutions on 3D Point Clouds. ICRA 2020* |
| PointSPNet | 0.637 (42) | 0.734 (45) | 0.692 (58) | 0.714 (41) | 0.576 (49) | 0.797 (46) | 0.446 (35) | 0.743 (35) | 0.598 (28) | 0.437 (63) | 0.942 (47) | 0.403 (58) | 0.150 (72) | 0.626 (24) | 0.800 (25) | 0.649 (39) | 0.697 (50) | 0.557 (48) | 0.846 (55) | 0.777 (55) | 0.563 (55) |
| AttAN | 0.609 (58) | 0.760 (37) | 0.667 (65) | 0.649 (63) | 0.521 (61) | 0.793 (47) | 0.457 (31) | 0.648 (56) | 0.528 (49) | 0.434 (65) | 0.947 (32) | 0.401 (59) | 0.153 (71) | 0.454 (64) | 0.721 (38) | 0.648 (41) | 0.717 (41) | 0.536 (59) | 0.904 (16) | 0.765 (59) | 0.485 (70) |
| *Gege Zhang, Qinghua Ma, Licheng Jiao, Fang Liu, and Qigong Sun: AttAN: Attention Adversarial Networks for 3D Point Cloud Semantic Segmentation. IJCAI 2020* |
| Online SegFusion | 0.515 (72) | 0.607 (69) | 0.644 (72) | 0.579 (74) | 0.434 (71) | 0.630 (79) | 0.353 (68) | 0.628 (60) | 0.440 (71) | 0.410 (66) | 0.762 (84) | 0.307 (74) | 0.167 (66) | 0.520 (52) | 0.403 (72) | 0.516 (68) | 0.565 (73) | 0.447 (76) | 0.678 (76) | 0.701 (72) | 0.514 (67) |
| *Davide Menini, Suryansh Kumar, Martin R. Oswald, Erik Sandstroem, Cristian Sminchisescu, Luc van Gool: A Real-Time Learning Framework for Joint 3D Reconstruction and Semantic Segmentation. Robotics and Automation Letters Submission* |
| 3DWSSS | 0.425 (81) | 0.525 (77) | 0.647 (70) | 0.522 (77) | 0.324 (81) | 0.488 (85) | 0.077 (86) | 0.712 (45) | 0.353 (80) | 0.401 (67) | 0.636 (86) | 0.281 (78) | 0.176 (62) | 0.340 (74) | 0.565 (56) | 0.175 (86) | 0.551 (76) | 0.398 (81) | 0.370 (86) | 0.602 (81) | 0.361 (79) |
| TextureNet | 0.566 (65) | 0.672 (60) | 0.664 (66) | 0.671 (55) | 0.494 (63) | 0.719 (69) | 0.445 (37) | 0.678 (52) | 0.411 (76) | 0.396 (68) | 0.935 (57) | 0.356 (68) | 0.225 (41) | 0.412 (69) | 0.535 (58) | 0.565 (64) | 0.636 (69) | 0.464 (72) | 0.794 (65) | 0.680 (75) | 0.568 (53) |
| *Jingwei Huang, Haotian Zhang, Li Yi, Thomas Funkhouser, Matthias Niessner, Leonidas Guibas: TextureNet: Consistent Local Parametrizations for Learning from High-Resolution Signals on Meshes. CVPR* |
| 3DMV | 0.484 (75) | 0.484 (81) | 0.538 (81) | 0.643 (66) | 0.424 (73) | 0.606 (82) | 0.310 (72) | 0.574 (68) | 0.433 (74) | 0.378 (69) | 0.796 (82) | 0.301 (75) | 0.214 (47) | 0.537 (50) | 0.208 (81) | 0.472 (75) | 0.507 (82) | 0.413 (80) | 0.693 (74) | 0.602 (81) | 0.539 (62) |
| *Angela Dai, Matthias Niessner: 3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation. ECCV'18* |
| SQN_0.1% | 0.569 (64) | 0.676 (58) | 0.696 (55) | 0.657 (60) | 0.497 (62) | 0.779 (55) | 0.424 (46) | 0.548 (70) | 0.515 (52) | 0.376 (70) | 0.902 (78) | 0.422 (53) | 0.357 (3) | 0.379 (71) | 0.456 (66) | 0.596 (58) | 0.659 (62) | 0.544 (53) | 0.685 (75) | 0.665 (78) | 0.556 (59) |
| Pointnet++ & Feature | 0.557 (67) | 0.735 (44) | 0.661 (67) | 0.686 (48) | 0.491 (64) | 0.744 (67) | 0.392 (60) | 0.539 (71) | 0.451 (69) | 0.375 (71) | 0.946 (35) | 0.376 (64) | 0.205 (50) | 0.403 (70) | 0.356 (74) | 0.553 (66) | 0.643 (66) | 0.497 (66) | 0.824 (60) | 0.756 (62) | 0.515 (66) |
| PointMRNet-lite | 0.553 (68) | 0.633 (65) | 0.648 (69) | 0.659 (59) | 0.430 (72) | 0.800 (42) | 0.390 (63) | 0.592 (64) | 0.454 (68) | 0.371 (72) | 0.939 (53) | 0.368 (66) | 0.136 (75) | 0.368 (72) | 0.448 (68) | 0.560 (65) | 0.715 (42) | 0.486 (69) | 0.882 (34) | 0.720 (70) | 0.462 (75) |
| GMLPs | 0.538 (69) | 0.495 (79) | 0.693 (57) | 0.647 (64) | 0.471 (67) | 0.793 (47) | 0.300 (73) | 0.477 (75) | 0.505 (54) | 0.358 (73) | 0.903 (77) | 0.327 (72) | 0.081 (80) | 0.472 (63) | 0.529 (60) | 0.448 (76) | 0.710 (43) | 0.509 (62) | 0.746 (69) | 0.737 (67) | 0.554 (60) |
| subcloud_weak | 0.516 (71) | 0.676 (58) | 0.591 (79) | 0.609 (69) | 0.442 (70) | 0.774 (57) | 0.335 (70) | 0.597 (62) | 0.422 (75) | 0.357 (74) | 0.932 (63) | 0.341 (71) | 0.094 (79) | 0.298 (76) | 0.528 (61) | 0.473 (74) | 0.676 (57) | 0.495 (67) | 0.602 (80) | 0.721 (69) | 0.349 (81) |
| PCNN | 0.498 (74) | 0.559 (74) | 0.644 (72) | 0.560 (76) | 0.420 (74) | 0.711 (71) | 0.229 (81) | 0.414 (76) | 0.436 (72) | 0.352 (75) | 0.941 (49) | 0.324 (73) | 0.155 (70) | 0.238 (80) | 0.387 (73) | 0.493 (70) | 0.529 (79) | 0.509 (62) | 0.813 (63) | 0.751 (64) | 0.504 (68) |
| PointCNN with RGB | 0.458 (76) | 0.577 (73) | 0.611 (76) | 0.356 (85) | 0.321 (82) | 0.715 (70) | 0.299 (75) | 0.376 (79) | 0.328 (82) | 0.319 (76) | 0.944 (44) | 0.285 (77) | 0.164 (67) | 0.216 (83) | 0.229 (79) | 0.484 (72) | 0.545 (77) | 0.456 (74) | 0.755 (68) | 0.709 (71) | 0.475 (73) |
| *Yangyan Li, Rui Bu, Mingchao Sun, Baoquan Chen: PointCNN. NeurIPS 2018* |
| PNET2 | 0.442 (78) | 0.548 (76) | 0.548 (80) | 0.597 (73) | 0.363 (78) | 0.628 (80) | 0.300 (73) | 0.292 (80) | 0.374 (78) | 0.307 (77) | 0.881 (79) | 0.268 (79) | 0.186 (59) | 0.238 (80) | 0.204 (82) | 0.407 (79) | 0.506 (83) | 0.449 (75) | 0.667 (77) | 0.620 (80) | 0.462 (75) |
| PanopticFusion-label | 0.529 (70) | 0.491 (80) | 0.688 (61) | 0.604 (71) | 0.386 (75) | 0.632 (78) | 0.225 (83) | 0.705 (47) | 0.434 (73) | 0.293 (78) | 0.815 (81) | 0.348 (70) | 0.241 (38) | 0.499 (58) | 0.669 (46) | 0.507 (69) | 0.649 (63) | 0.442 (77) | 0.796 (64) | 0.602 (81) | 0.561 (56) |
| *Gaku Narita, Takashi Seno, Tomoya Ishikawa, Yohsuke Kaji: PanopticFusion: Online Volumetric Semantic Mapping at the Level of Stuff and Things. IROS 2019 (to appear)* |
| Tangent Convolutions | 0.438 (80) | 0.437 (83) | 0.646 (71) | 0.474 (80) | 0.369 (77) | 0.645 (77) | 0.353 (68) | 0.258 (82) | 0.282 (84) | 0.279 (79) | 0.918 (72) | 0.298 (76) | 0.147 (74) | 0.283 (77) | 0.294 (76) | 0.487 (71) | 0.562 (74) | 0.427 (79) | 0.619 (79) | 0.633 (79) | 0.352 (80) |
| *Maxim Tatarchenko, Jaesik Park, Vladlen Koltun, Qian-Yi Zhou: Tangent Convolutions for Dense Prediction in 3D. CVPR 2018* |
| SurfaceConvPF | 0.442 (78) | 0.505 (78) | 0.622 (75) | 0.380 (84) | 0.342 (80) | 0.654 (76) | 0.227 (82) | 0.397 (78) | 0.367 (79) | 0.276 (80) | 0.924 (67) | 0.240 (81) | 0.198 (54) | 0.359 (73) | 0.262 (77) | 0.366 (80) | 0.581 (72) | 0.435 (78) | 0.640 (78) | 0.668 (76) | 0.398 (77) |
| *Hao Pan, Shilin Liu, Yang Liu, Xin Tong: Convolutional Neural Networks on 3D Surfaces Using Parallel Frames.* |
| ScanNet+FTSDF | 0.383 (83) | 0.297 (85) | 0.491 (83) | 0.432 (82) | 0.358 (79) | 0.612 (81) | 0.274 (77) | 0.116 (84) | 0.411 (76) | 0.265 (81) | 0.904 (76) | 0.229 (82) | 0.079 (81) | 0.250 (78) | 0.185 (83) | 0.320 (83) | 0.510 (80) | 0.385 (82) | 0.548 (82) | 0.597 (84) | 0.394 (78) |
| PointNet++ | 0.339 (84) | 0.584 (72) | 0.478 (84) | 0.458 (81) | 0.256 (85) | 0.360 (86) | 0.250 (78) | 0.247 (83) | 0.278 (85) | 0.261 (82) | 0.677 (85) | 0.183 (84) | 0.117 (76) | 0.212 (84) | 0.145 (85) | 0.364 (81) | 0.346 (86) | 0.232 (86) | 0.548 (82) | 0.523 (85) | 0.252 (84) |
| *Charles R. Qi, Li Yi, Hao Su, Leonidas J. Guibas: PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space.* |
| FCPN | 0.447 (77) | 0.679 (57) | 0.604 (78) | 0.578 (75) | 0.380 (76) | 0.682 (74) | 0.291 (76) | 0.106 (85) | 0.483 (62) | 0.258 (83) | 0.920 (70) | 0.258 (80) | 0.025 (84) | 0.231 (82) | 0.325 (75) | 0.480 (73) | 0.560 (75) | 0.463 (73) | 0.725 (71) | 0.666 (77) | 0.231 (85) |
| *Dario Rethage, Johanna Wald, Jürgen Sturm, Nassir Navab, Federico Tombari: Fully-Convolutional Point Networks for Large-Scale Point Clouds. ECCV 2018* |
| SPLAT Net | 0.393 (82) | 0.472 (82) | 0.511 (82) | 0.606 (70) | 0.311 (83) | 0.656 (75) | 0.245 (80) | 0.405 (77) | 0.328 (82) | 0.197 (84) | 0.927 (66) | 0.227 (83) | 0.000 (87) | 0.001 (87) | 0.249 (78) | 0.271 (85) | 0.510 (80) | 0.383 (83) | 0.593 (81) | 0.699 (73) | 0.267 (83) |
| *Hang Su, Varun Jampani, Deqing Sun, Subhransu Maji, Evangelos Kalogerakis, Ming-Hsuan Yang, Jan Kautz: SPLATNet: Sparse Lattice Networks for Point Cloud Processing. CVPR 2018* |
| ScanNet | 0.306 (86) | 0.203 (86) | 0.366 (85) | 0.501 (78) | 0.311 (83) | 0.524 (84) | 0.211 (84) | 0.002 (87) | 0.342 (81) | 0.189 (85) | 0.786 (83) | 0.145 (86) | 0.102 (78) | 0.245 (79) | 0.152 (84) | 0.318 (84) | 0.348 (85) | 0.300 (85) | 0.460 (85) | 0.437 (86) | 0.182 (86) |
| *Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, Matthias Nießner: ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes. CVPR'17* |
| SSC-UNet | 0.308 (85) | 0.353 (84) | 0.290 (86) | 0.278 (86) | 0.166 (86) | 0.553 (83) | 0.169 (85) | 0.286 (81) | 0.147 (86) | 0.148 (86) | 0.908 (74) | 0.182 (85) | 0.064 (82) | 0.023 (86) | 0.018 (87) | 0.354 (82) | 0.363 (84) | 0.345 (84) | 0.546 (84) | 0.685 (74) | 0.278 (82) |
| ERROR | 0.054 (87) | 0.000 (87) | 0.041 (87) | 0.172 (87) | 0.030 (87) | 0.062 (87) | 0.001 (87) | 0.035 (86) | 0.004 (87) | 0.051 (87) | 0.143 (87) | 0.019 (87) | 0.003 (86) | 0.041 (85) | 0.050 (86) | 0.003 (87) | 0.054 (87) | 0.018 (87) | 0.005 (87) | 0.264 (87) | 0.082 (87) |