Scene Type Classification Benchmark
The scene type classification task involves classifying a scan into one of 13 scene types.
Evaluation and metrics
Our evaluation ranks all methods according to recall (TP/(TP+FN)) as well as the PASCAL VOC intersection-over-union metric (IoU = TP/(TP+FP+FN)), where TP, FP, and FN are the numbers of true positive, false positive, and false negative predictions, respectively.
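The two metrics above, and the macro-averaged recall reported in the "avg recall" column, can be sketched as follows. This is a minimal illustration, not the benchmark's evaluation code; the class names and TP/FP/FN counts are hypothetical.

```python
def recall(tp, fn):
    """Recall = TP / (TP + FN); 0.0 when the class has no ground-truth scans."""
    return tp / (tp + fn) if tp + fn else 0.0

def iou(tp, fp, fn):
    """PASCAL VOC IoU = TP / (TP + FP + FN)."""
    return tp / (tp + fp + fn) if tp + fp + fn else 0.0

# Hypothetical per-class counts (TP, FP, FN) for three of the 13 scene types.
counts = {
    "bathroom": (8, 1, 0),
    "bedroom / hotel": (15, 2, 2),
    "office": (12, 3, 4),
}

per_class_recall = {name: recall(tp, fn) for name, (tp, fp, fn) in counts.items()}

# "avg recall" is the unweighted mean of the per-class recalls.
avg_recall = sum(per_class_recall.values()) / len(per_class_recall)
```

Because the average is unweighted, rare scene types (e.g. misc, laundry room) influence a method's ranking as much as frequent ones.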
The table below lists the benchmark results for the scene type classification scenario. The number in parentheses after each score is the method's rank for that column.
| Method | avg recall | apartment | bathroom | bedroom / hotel | bookstore / library | conference room | copy/mail room | hallway | kitchen | laundry room | living room / lounge | misc | office | storage / basement / garage |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LAST-PCL-type | 0.780 (1) | 0.250 (3) | 1.000 (1) | 1.000 (1) | 1.000 (1) | 1.000 (1) | 1.000 (1) | 0.500 (2) | 1.000 (1) | 0.500 (2) | 0.889 (1) | 0.000 (2) | 1.000 (1) | 1.000 (1) |
| multi-task | 0.700 (2) | 0.500 (1) | 1.000 (1) | 0.882 (3) | 0.500 (3) | 1.000 (1) | 1.000 (1) | 0.500 (2) | 1.000 (1) | 1.000 (1) | 0.778 (2) | 0.000 (2) | 0.938 (2) | 0.000 (3) |
| 3DASPP-SCE | 0.691 (3) | 0.500 (1) | 0.938 (3) | 0.824 (4) | 1.000 (1) | 1.000 (1) | 0.500 (3) | 1.000 (1) | 0.857 (3) | 0.500 (2) | 0.556 (4) | 0.000 (2) | 0.812 (3) | 0.500 (2) |
| SE-ResNeXt-SSMA | 0.498 (4) | 0.000 (5) | 0.812 (4) | 0.941 (2) | 0.500 (3) | 0.500 (4) | 0.500 (3) | 0.500 (2) | 0.429 (5) | 0.500 (2) | 0.667 (3) | 0.500 (1) | 0.625 (4) | 0.000 (3) |
| resnet50_scannet | 0.353 (5) | 0.250 (3) | 0.812 (4) | 0.529 (5) | 0.500 (3) | 0.500 (4) | 0.000 (5) | 0.500 (2) | 0.571 (4) | 0.000 (5) | 0.556 (4) | 0.000 (2) | 0.375 (5) | 0.000 (3) |

References:
- LAST-PCL-type: Yanmin Wu, Qiankun Gao, Renrui Zhang, and Jian Zhang: Language-Assisted 3D Scene Understanding. arXiv, 2023.
- multi-task: Shengyu Huang, Mikhail Usvyatsov, Konrad Schindler: Indoor Scene Recognition in 3D. IROS 2020.
- SE-ResNeXt-SSMA: Abhinav Valada, Rohit Mohan, Wolfram Burgard: Self-Supervised Model Adaptation for Multimodal Semantic Segmentation. arXiv.
