ScanRefer Benchmark
This table lists results on the ScanRefer Localization Benchmark. Accuracy is reported at IoU thresholds of 0.25 and 0.5 (acc@0.25IoU, acc@0.5IoU) for the "unique" subset (scenes containing a single object of the target class), the "multiple" subset (scenes containing same-class distractors), and overall.
Each cell shows the accuracy followed by the method's rank for that metric in parentheses.

| Method | Unique acc@0.25IoU | Unique acc@0.5IoU | Multiple acc@0.25IoU | Multiple acc@0.5IoU | Overall acc@0.25IoU | Overall acc@0.5IoU |
|---|---|---|---|---|---|---|
| ConcreteNet | 0.8607 (1) | 0.7923 (1) | 0.4746 (4) | 0.4091 (1) | 0.5612 (3) | 0.4950 (1) |
| CORE-3DVG | 0.8557 (2) | 0.6867 (7) | 0.5275 (1) | 0.3850 (4) | 0.6011 (1) | 0.4527 (6) |
| cus3d | 0.8384 (3) | 0.7073 (5) | 0.4908 (2) | 0.4000 (2) | 0.5688 (2) | 0.4689 (2) |
| pointclip | 0.8211 (4) | 0.7082 (4) | 0.4803 (3) | 0.3884 (3) | 0.5567 (4) | 0.4601 (3) |
| 3DInsVG | 0.8170 (5) | 0.6925 (6) | 0.4582 (7) | 0.3617 (7) | 0.5386 (7) | 0.4359 (7) |
| CSA-M3LM | 0.8137 (6) | 0.6241 (17) | 0.4544 (8) | 0.3317 (9) | 0.5349 (8) | 0.3972 (9) |
| 3D-REC | 0.7997 (7) | 0.7123 (2) | 0.4708 (5) | 0.3805 (6) | 0.5445 (5) | 0.4549 (4) |
| M3DRef-CLIP | 0.7980 (8) | 0.7085 (3) | 0.4692 (6) | 0.3807 (5) | 0.5433 (6) | 0.4545 (5) |
| D3Net | 0.7923 (9) | 0.6843 (8) | 0.3905 (19) | 0.3074 (13) | 0.4806 (18) | 0.3919 (10) |
| FE-3DGQA | 0.7857 (10) | 0.5862 (23) | 0.4317 (12) | 0.2935 (15) | 0.5111 (12) | 0.3592 (19) |
| BEAUTY-DETR | 0.7848 (11) | 0.5499 (28) | 0.3934 (18) | 0.2480 (28) | 0.4811 (17) | 0.3157 (28) |
| ContraRefer | 0.7832 (12) | 0.6801 (11) | 0.3850 (20) | 0.2947 (14) | 0.4743 (19) | 0.3811 (11) |
| Se2d | 0.7799 (13) | 0.6628 (13) | 0.3636 (26) | 0.2823 (22) | 0.4569 (21) | 0.3677 (17) |
| HAM | 0.7799 (13) | 0.6373 (16) | 0.4148 (15) | 0.3324 (8) | 0.4967 (15) | 0.4007 (8) |
| InstanceRefer | 0.7782 (15) | 0.6669 (12) | 0.3457 (33) | 0.2688 (24) | 0.4427 (29) | 0.3580 (22) |
| Clip-pre | 0.7766 (16) | 0.6843 (8) | 0.3617 (30) | 0.2904 (20) | 0.4547 (23) | 0.3787 (13) |
| SAVG | 0.7758 (17) | 0.5664 (25) | 0.4236 (13) | 0.2826 (21) | 0.5026 (13) | 0.3462 (24) |
| 3DVG-Trans + | 0.7733 (18) | 0.5787 (24) | 0.4370 (11) | 0.3102 (12) | 0.5124 (11) | 0.3704 (15) |
| Clip | 0.7733 (18) | 0.6810 (10) | 0.3619 (28) | 0.2919 (19) | 0.4542 (24) | 0.3791 (12) |
| HGT | 0.7692 (20) | 0.5886 (22) | 0.4141 (16) | 0.2924 (18) | 0.4937 (16) | 0.3588 (21) |
| 3DJCG(Grounding) | 0.7675 (21) | 0.6059 (19) | 0.4389 (10) | 0.3117 (11) | 0.5126 (10) | 0.3776 (14) |
| D3Net - Pretrained | 0.7659 (22) | 0.6579 (14) | 0.3619 (28) | 0.2726 (23) | 0.4525 (26) | 0.3590 (20) |
| 3DVG-Transformer | 0.7576 (23) | 0.5515 (27) | 0.4224 (14) | 0.2933 (16) | 0.4976 (14) | 0.3512 (23) |
| PointGroup_MCAN | 0.7510 (24) | 0.6397 (15) | 0.3271 (35) | 0.2535 (26) | 0.4222 (31) | 0.3401 (25) |
| TransformerVG | 0.7502 (25) | 0.5977 (20) | 0.3712 (24) | 0.2628 (25) | 0.4562 (22) | 0.3379 (26) |
| SRGA | 0.7494 (26) | 0.5128 (32) | 0.3631 (27) | 0.2218 (31) | 0.4497 (28) | 0.2871 (31) |
| bo3d-1 | 0.7469 (27) | 0.5606 (26) | 0.4539 (9) | 0.3124 (10) | 0.5196 (9) | 0.3680 (16) |
| grounding | 0.7298 (28) | 0.5458 (29) | 0.3822 (22) | 0.2421 (30) | 0.4538 (25) | 0.3046 (29) |
| secg | 0.7288 (29) | 0.6175 (18) | 0.3696 (25) | 0.2933 (16) | 0.4501 (27) | 0.3660 (18) |
| henet | 0.7110 (30) | 0.5180 (31) | 0.3936 (17) | 0.2472 (29) | 0.4590 (20) | 0.3030 (30) |
| SR-GAB | 0.7016 (31) | 0.5202 (30) | 0.3233 (37) | 0.1959 (34) | 0.4081 (35) | 0.2686 (32) |
| ScanRefer | 0.6859 (32) | 0.4353 (35) | 0.3488 (32) | 0.2097 (32) | 0.4244 (30) | 0.2603 (34) |
| TGNN | 0.6834 (33) | 0.5894 (21) | 0.3312 (34) | 0.2526 (27) | 0.4102 (34) | 0.3281 (27) |
| ScanRefer_vanilla | 0.6488 (34) | 0.4056 (38) | 0.3052 (40) | 0.1782 (38) | 0.3823 (39) | 0.2292 (38) |
| ScanRefer Baseline | 0.6422 (35) | 0.4196 (37) | 0.3090 (39) | 0.1832 (36) | 0.3837 (38) | 0.2362 (37) |
| scanrefer2 | 0.6340 (36) | 0.4353 (35) | 0.3193 (38) | 0.1947 (35) | 0.3898 (37) | 0.2486 (35) |
| TransformerRefer | 0.6010 (37) | 0.4658 (33) | 0.2540 (43) | 0.1730 (40) | 0.3318 (43) | 0.2386 (36) |
| pairwisemethod | 0.5779 (38) | 0.3603 (39) | 0.2792 (42) | 0.1746 (39) | 0.3462 (41) | 0.2163 (39) |
| SPANet | 0.5614 (39) | 0.4641 (34) | 0.2800 (41) | 0.2071 (33) | 0.3431 (42) | 0.2647 (33) |
| bo3d | 0.5400 (40) | 0.1550 (40) | 0.3817 (23) | 0.1785 (37) | 0.4172 (33) | 0.1732 (40) |
| Co3d3 | 0.5326 (41) | 0.1369 (41) | 0.3848 (21) | 0.1651 (41) | 0.4179 (32) | 0.1588 (41) |
| Co3d2 | 0.5070 (42) | 0.1195 (43) | 0.3569 (31) | 0.1511 (42) | 0.3906 (36) | 0.1440 (42) |
| bo3d0 | 0.4823 (43) | 0.1278 (42) | 0.3271 (35) | 0.1394 (43) | 0.3619 (40) | 0.1368 (43) |
| Co3d | 0.0000 (44) | 0.0000 (44) | 0.0000 (44) | 0.0000 (44) | 0.0000 (44) | 0.0000 (44) |

Published references for the methods above:

- ConcreteNet: Ozan Unal, Christos Sakaridis, Suman Saha, Fisher Yu, Luc Van Gool: Three Ways to Improve Verbo-visual Fusion for Dense 3D Visual Grounding.
- M3DRef-CLIP: Yiming Zhang, ZeMing Gong, Angel X. Chang: Multi3DRefer: Grounding Text Description to Multiple 3D Objects. ICCV 2021
- D3Net / D3Net - Pretrained: Dave Zhenyu Chen, Qirui Wu, Matthias Niessner, Angel X. Chang: D3Net: A Unified Speaker-Listener Architecture for 3D Dense Captioning and Visual Grounding. 17th European Conference on Computer Vision (ECCV), 2022
- BEAUTY-DETR: Ayush Jain, Nikolaos Gkanatsios, Ishita Mediratta, Katerina Fragkiadaki: Looking Outside the Box to Ground Language in 3D Scenes.
- HAM: Jiaming Chen, Weixin Luo, Ran Song, Xiaolin Wei, Lin Ma, Wei Zhang: Learning Point-Language Hierarchical Alignment for 3D Visual Grounding.
- InstanceRefer: Zhihao Yuan, Xu Yan, Yinghong Liao, Ruimao Zhang, Zhen Li, Shuguang Cui: InstanceRefer: Cooperative Holistic Understanding for Visual Grounding on Point Clouds through Instance Multi-level Contextual Referring. ICCV 2021
- 3DVG-Transformer / 3DVG-Trans +: Lichen Zhao, Daigang Cai, Lu Sheng, Dong Xu: 3DVG-Transformer: Relation Modeling for Visual Grounding on Point Clouds. ICCV 2021
- 3DJCG(Grounding): Daigang Cai, Lichen Zhao, Jing Zhang, Lu Sheng, Dong Xu: 3DJCG: A Unified Framework for Joint Dense Captioning and Visual Grounding on 3D Point Clouds. CVPR 2022 (Oral)
- ScanRefer: Dave Zhenyu Chen, Angel X. Chang, Matthias Nießner: ScanRefer: 3D Object Localization in RGB-D Scans using Natural Language. 16th European Conference on Computer Vision (ECCV), 2020
- TGNN: Pin-Hao Huang, Han-Hung Lee, Hwann-Tzong Chen, Tyng-Luh Liu: Text-Guided Graph Neural Network for Referring 3D Instance Segmentation. AAAI 2021
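The acc@kIoU metrics count a localization as correct when the predicted 3D bounding box overlaps the ground-truth box with IoU of at least k (0.25 or 0.5). A minimal sketch of this computation for axis-aligned boxes, assuming boxes given as `(xmin, ymin, zmin, xmax, ymax, zmax)`; the function names `iou_3d` and `acc_at_iou` are illustrative, not part of the official evaluation code:

```python
import numpy as np

def iou_3d(box_a, box_b):
    """IoU of two axis-aligned 3D boxes (xmin, ymin, zmin, xmax, ymax, zmax)."""
    lo = np.maximum(box_a[:3], box_b[:3])         # lower corner of the intersection
    hi = np.minimum(box_a[3:], box_b[3:])         # upper corner of the intersection
    inter = np.prod(np.clip(hi - lo, 0.0, None))  # zero volume if boxes are disjoint
    vol_a = np.prod(box_a[3:] - box_a[:3])
    vol_b = np.prod(box_b[3:] - box_b[:3])
    return inter / (vol_a + vol_b - inter)

def acc_at_iou(pred_boxes, gt_boxes, threshold):
    """Fraction of predictions whose IoU with the paired ground truth meets the threshold."""
    hits = sum(iou_3d(p, g) >= threshold for p, g in zip(pred_boxes, gt_boxes))
    return hits / len(pred_boxes)
```

Note this is a simplified sketch: the official benchmark evaluation handles the exact box parameterization and subset splits itself, so consult the ScanRefer evaluation scripts for the authoritative protocol.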