ScanRefer Benchmark
This table lists the results on the ScanRefer Localization Benchmark. Each cell shows the accuracy followed by the method's rank for that metric in parentheses.
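acc@kIoU is the fraction of test descriptions for which the predicted 3D bounding box overlaps the ground-truth box with an intersection-over-union (IoU) of at least k (the thresholds reported here are 0.25 and 0.5). The "Unique" columns cover descriptions whose target is the only object of its class in the scene, "Multiple" covers descriptions with same-class distractors, and "Overall" aggregates both. The sketch below illustrates how such a metric can be computed for axis-aligned boxes in a (center, size) encoding; the function names and box format are illustrative assumptions, not the official evaluation code.

```python
import numpy as np

def aabb_iou(box_a, box_b):
    """IoU of two axis-aligned 3D boxes given as (cx, cy, cz, dx, dy, dz)."""
    a_min, a_max = box_a[:3] - box_a[3:] / 2, box_a[:3] + box_a[3:] / 2
    b_min, b_max = box_b[:3] - box_b[3:] / 2, box_b[:3] + box_b[3:] / 2
    # Per-axis overlap, clamped at zero when the boxes do not intersect.
    overlap = np.clip(np.minimum(a_max, b_max) - np.maximum(a_min, b_min), 0, None)
    inter = overlap.prod()
    union = box_a[3:].prod() + box_b[3:].prod() - inter
    return inter / union if union > 0 else 0.0

def acc_at_iou(pred_boxes, gt_boxes, threshold):
    """Fraction of predictions whose IoU with the ground-truth box reaches `threshold`."""
    ious = [aabb_iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]
    return float(np.mean([iou >= threshold for iou in ious]))

# Toy example: two descriptions, one well-localized prediction and one miss.
pred = np.array([[1.0, 1.0, 1.0, 2.0, 2.0, 2.0],
                 [4.0, 4.0, 4.0, 1.0, 1.0, 1.0]])
gt   = np.array([[1.1, 0.9, 1.0, 2.0, 2.0, 2.0],
                 [6.0, 6.0, 6.0, 1.0, 1.0, 1.0]])
print(acc_at_iou(pred, gt, 0.25), acc_at_iou(pred, gt, 0.50))  # 0.5 0.5
```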
Method | Unique acc@0.25IoU | Unique acc@0.5IoU | Multiple acc@0.25IoU | Multiple acc@0.5IoU | Overall acc@0.25IoU | Overall acc@0.5IoU |
---|---|---|---|---|---|---|
Chat-Scene | 0.8887 (1) | 0.8005 (1) | 0.5421 (1) | 0.4861 (1) | 0.6198 (1) | 0.5566 (1) |
ConcreteNet | 0.8607 (2) | 0.7923 (2) | 0.4746 (7) | 0.4091 (2) | 0.5612 (6) | 0.4950 (2) |
cus3d | 0.8384 (4) | 0.7073 (6) | 0.4908 (5) | 0.4000 (3) | 0.5688 (4) | 0.4689 (3) |
D-LISA | 0.8195 (6) | 0.6900 (8) | 0.4975 (3) | 0.3967 (5) | 0.5697 (3) | 0.4625 (4) |
M3DRef-test | 0.7865 (14) | 0.6793 (14) | 0.4963 (4) | 0.3977 (4) | 0.5614 (5) | 0.4608 (5) |
pointclip | 0.8211 (5) | 0.7082 (5) | 0.4803 (6) | 0.3884 (6) | 0.5567 (7) | 0.4601 (6) |
M3DRef-SCLIP | 0.7997 (9) | 0.7123 (3) | 0.4708 (8) | 0.3805 (9) | 0.5445 (8) | 0.4549 (7) |
M3DRef-CLIP | 0.7980 (10) | 0.7085 (4) | 0.4692 (9) | 0.3807 (8) | 0.5433 (9) | 0.4545 (8) |
CORE-3DVG | 0.8557 (3) | 0.6867 (9) | 0.5275 (2) | 0.3850 (7) | 0.6011 (2) | 0.4527 (9) |
3DInsVG | 0.8170 (7) | 0.6925 (7) | 0.4582 (11) | 0.3617 (10) | 0.5386 (10) | 0.4359 (10) |
RG-SAN | 0.7964 (11) | 0.6785 (15) | 0.4591 (10) | 0.3600 (11) | 0.5348 (12) | 0.4314 (11) |
HAM | 0.7799 (19) | 0.6373 (20) | 0.4148 (21) | 0.3324 (12) | 0.4967 (21) | 0.4007 (12) |
CSA-M3LM | 0.8137 (8) | 0.6241 (21) | 0.4544 (12) | 0.3317 (13) | 0.5349 (11) | 0.3972 (13) |
D3Net | 0.7923 (13) | 0.6843 (10) | 0.3905 (25) | 0.3074 (19) | 0.4806 (24) | 0.3919 (14) |
ContraRefer | 0.7832 (17) | 0.6801 (13) | 0.3850 (26) | 0.2947 (21) | 0.4743 (25) | 0.3811 (15) |
Clip | 0.7733 (24) | 0.6810 (12) | 0.3619 (35) | 0.2919 (26) | 0.4542 (30) | 0.3791 (16) |
Clip-pre | 0.7766 (22) | 0.6843 (10) | 0.3617 (37) | 0.2904 (27) | 0.4547 (29) | 0.3787 (17) |
3DJCG(Grounding) | 0.7675 (27) | 0.6059 (23) | 0.4389 (16) | 0.3117 (17) | 0.5126 (16) | 0.3776 (18) |
GALA-Grounder + 2D | 0.7947 (12) | 0.5713 (30) | 0.4525 (14) | 0.3202 (14) | 0.5292 (13) | 0.3765 (19) |
GALA-Grounder | 0.7824 (18) | 0.5796 (28) | 0.4391 (15) | 0.3131 (15) | 0.5161 (15) | 0.3728 (20) |
3DVG-Trans + | 0.7733 (24) | 0.5787 (29) | 0.4370 (17) | 0.3102 (18) | 0.5124 (17) | 0.3704 (21) |
bo3d-1 | 0.7469 (33) | 0.5606 (33) | 0.4539 (13) | 0.3124 (16) | 0.5196 (14) | 0.3680 (22) |
Se2d | 0.7799 (19) | 0.6628 (17) | 0.3636 (33) | 0.2823 (29) | 0.4569 (27) | 0.3677 (23) |
secg | 0.7288 (35) | 0.6175 (22) | 0.3696 (32) | 0.2933 (23) | 0.4501 (33) | 0.3660 (24) |
SAF | 0.6348 (42) | 0.5647 (32) | 0.3726 (30) | 0.3009 (20) | 0.4314 (36) | 0.3601 (25) |
FE-3DGQA | 0.7857 (15) | 0.5862 (27) | 0.4317 (18) | 0.2935 (22) | 0.5111 (18) | 0.3592 (26) |
D3Net - Pretrained | 0.7659 (28) | 0.6579 (18) | 0.3619 (35) | 0.2726 (30) | 0.4525 (32) | 0.3590 (27) |
HGT | 0.7692 (26) | 0.5886 (26) | 0.4141 (22) | 0.2924 (25) | 0.4937 (22) | 0.3588 (28) |
InstanceRefer | 0.7782 (21) | 0.6669 (16) | 0.3457 (40) | 0.2688 (31) | 0.4427 (35) | 0.3580 (29) |
3DVG-Transformer | 0.7576 (29) | 0.5515 (34) | 0.4224 (20) | 0.2933 (23) | 0.4976 (20) | 0.3512 (30) |
SAVG | 0.7758 (23) | 0.5664 (31) | 0.4236 (19) | 0.2826 (28) | 0.5026 (19) | 0.3462 (31) |
PointGroup_MCAN | 0.7510 (30) | 0.6397 (19) | 0.3271 (42) | 0.2535 (33) | 0.4222 (38) | 0.3401 (32) |
TransformerVG | 0.7502 (31) | 0.5977 (24) | 0.3712 (31) | 0.2628 (32) | 0.4562 (28) | 0.3379 (33) |
TGNN | 0.6834 (39) | 0.5894 (25) | 0.3312 (41) | 0.2526 (34) | 0.4102 (41) | 0.3281 (34) |
BEAUTY-DETR | 0.7848 (16) | 0.5499 (35) | 0.3934 (24) | 0.2480 (35) | 0.4811 (23) | 0.3157 (35) |
grounding | 0.7298 (34) | 0.5458 (36) | 0.3822 (28) | 0.2421 (37) | 0.4538 (31) | 0.3046 (36) |
henet | 0.7110 (36) | 0.5180 (38) | 0.3936 (23) | 0.2472 (36) | 0.4590 (26) | 0.3030 (37) |
SRGA | 0.7494 (32) | 0.5128 (39) | 0.3631 (34) | 0.2218 (38) | 0.4497 (34) | 0.2871 (38) |
SR-GAB | 0.7016 (37) | 0.5202 (37) | 0.3233 (44) | 0.1959 (41) | 0.4081 (42) | 0.2686 (39) |
SPANet | 0.5614 (46) | 0.4641 (41) | 0.2800 (48) | 0.2071 (40) | 0.3431 (49) | 0.2647 (40) |
ScanRefer | 0.6859 (38) | 0.4353 (42) | 0.3488 (39) | 0.2097 (39) | 0.4244 (37) | 0.2603 (41) |
scanrefer2 | 0.6340 (43) | 0.4353 (42) | 0.3193 (45) | 0.1947 (42) | 0.3898 (44) | 0.2486 (42) |
TransformerRefer | 0.6010 (44) | 0.4658 (40) | 0.2540 (50) | 0.1730 (47) | 0.3318 (50) | 0.2386 (43) |
ScanRefer Baseline | 0.6422 (41) | 0.4196 (44) | 0.3090 (46) | 0.1832 (43) | 0.3837 (45) | 0.2362 (44) |
ScanRefer_vanilla | 0.6488 (40) | 0.4056 (45) | 0.3052 (47) | 0.1782 (45) | 0.3823 (46) | 0.2292 (45) |
pairwisemethod | 0.5779 (45) | 0.3603 (46) | 0.2792 (49) | 0.1746 (46) | 0.3462 (48) | 0.2163 (46) |
bo3d | 0.5400 (47) | 0.1550 (47) | 0.3817 (29) | 0.1785 (44) | 0.4172 (40) | 0.1732 (47) |
Co3d3 | 0.5326 (48) | 0.1369 (48) | 0.3848 (27) | 0.1651 (48) | 0.4179 (39) | 0.1588 (48) |
Co3d2 | 0.5070 (49) | 0.1195 (50) | 0.3569 (38) | 0.1511 (49) | 0.3906 (43) | 0.1440 (49) |
bo3d0 | 0.4823 (50) | 0.1278 (49) | 0.3271 (42) | 0.1394 (50) | 0.3619 (47) | 0.1368 (50) |
Co3d | 0.0000 (51) | 0.0000 (51) | 0.0000 (51) | 0.0000 (51) | 0.0000 (51) | 0.0000 (51) |

References for the published methods above:

- Chat-Scene: Haifeng Huang, Yilun Chen, Zehan Wang, et al.: Chat-Scene: Bridging 3D Scene and Large Language Models with Object Identifiers. NeurIPS 2024
- ConcreteNet: Ozan Unal, Christos Sakaridis, Suman Saha, Luc Van Gool: Four Ways to Improve Verbo-visual Fusion for Dense 3D Visual Grounding. ECCV 2024
- D-LISA: Haomeng Zhang, Chiao-An Yang, Raymond A. Yeh: Multi-Object 3D Grounding with Dynamic Modules and Language-Informed Spatial Attention. NeurIPS 2024
- M3DRef-CLIP: Yiming Zhang, ZeMing Gong, Angel X. Chang: Multi3DRefer: Grounding Text Description to Multiple 3D Objects. ICCV 2023
- HAM: Jiaming Chen, Weixin Luo, Ran Song, Xiaolin Wei, Lin Ma, Wei Zhang: Learning Point-Language Hierarchical Alignment for 3D Visual Grounding
- D3Net and D3Net - Pretrained: Dave Zhenyu Chen, Qirui Wu, Matthias Niessner, Angel X. Chang: D3Net: A Unified Speaker-Listener Architecture for 3D Dense Captioning and Visual Grounding. ECCV 2022
- 3DJCG(Grounding): Daigang Cai, Lichen Zhao, Jing Zhang, Lu Sheng, Dong Xu: 3DJCG: A Unified Framework for Joint Dense Captioning and Visual Grounding on 3D Point Clouds. CVPR 2022 (Oral)
- 3DVG-Trans + and 3DVG-Transformer: Lichen Zhao, Daigang Cai, Lu Sheng, Dong Xu: 3DVG-Transformer: Relation Modeling for Visual Grounding on Point Clouds. ICCV 2021
- InstanceRefer: Zhihao Yuan, Xu Yan, Yinghong Liao, Ruimao Zhang, Zhen Li, Shuguang Cui: InstanceRefer: Cooperative Holistic Understanding for Visual Grounding on Point Clouds through Instance Multi-level Contextual Referring. ICCV 2021
- TGNN: Pin-Hao Huang, Han-Hung Lee, Hwann-Tzong Chen, Tyng-Luh Liu: Text-Guided Graph Neural Network for Referring 3D Instance Segmentation. AAAI 2021
- BEAUTY-DETR: Ayush Jain, Nikolaos Gkanatsios, Ishita Mediratta, Katerina Fragkiadaki: Looking Outside the Box to Ground Language in 3D Scenes
- ScanRefer: Dave Zhenyu Chen, Angel X. Chang, Matthias Nießner: ScanRefer: 3D Object Localization in RGB-D Scans using Natural Language. ECCV 2020