Scene Type Classification Benchmark
The scene type classification task involves classifying a scan into one of 13 scene types.

Evaluation and metrics

Our evaluation ranks all methods according to recall (TP / (TP + FN)) as well as the PASCAL VOC intersection-over-union metric (IoU = TP / (TP + FP + FN)), where TP, FP, and FN denote the numbers of true positive, false positive, and false negative predictions, respectively.
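As a minimal sketch of how these metrics are computed from a confusion matrix (a hypothetical helper, not the benchmark's official evaluation code), per-class recall and IoU follow directly from the TP/FP/FN counts above:

```python
import numpy as np

def per_class_metrics(conf):
    """Per-class recall and IoU from a square confusion matrix.

    conf[i, j] = number of scans with ground-truth class i predicted
    as class j. (Illustrative helper; the benchmark's own evaluation
    pipeline may differ in implementation details.)
    """
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)                       # correct predictions per class
    fn = conf.sum(axis=1) - tp               # ground-truth scans missed
    fp = conf.sum(axis=0) - tp               # scans wrongly assigned to the class
    recall = tp / np.maximum(tp + fn, 1)     # TP / (TP + FN)
    iou = tp / np.maximum(tp + fp + fn, 1)   # TP / (TP + FP + FN), PASCAL VOC
    return recall, iou

# Toy 3-class example (made-up counts, not benchmark data)
conf = [[3, 1, 0],
        [0, 2, 1],
        [1, 0, 2]]
recall, iou = per_class_metrics(conf)
avg_iou = iou.mean()  # the "avg iou" column averages per-class IoU
```

The `np.maximum(..., 1)` guard simply avoids division by zero for classes with no ground-truth or predicted scans.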
This table lists the benchmark results for the scene type classification scenario.
| Method | Info | avg iou | apartment | bathroom | bedroom / hotel | bookstore / library | conference room | copy / mail room | hallway | kitchen | laundry room | living room / lounge | misc | office | storage / basement / garage |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| multi-task | Shengyu Huang, Mikhail Usvyatsov, Konrad Schindler: Indoor Scene Recognition in 3D. IROS 2020 | 0.646 (2) | 0.500 (1) | 1.000 (1) | 0.789 (2) | 0.333 (3) | 0.667 (3) | 1.000 (1) | 0.500 (1) | 1.000 (1) | 1.000 (1) | 0.778 (2) | 0.000 (2) | 0.833 (2) | 0.000 (3) |
| LAST-PCL-type | Yanmin Wu, Qiankun Gao, Renrui Zhang, Jian Zhang: Language-Assisted 3D Scene Understanding. arXiv, December 2023 | 0.738 (1) | 0.250 (3) | 1.000 (1) | 0.895 (1) | 1.000 (1) | 1.000 (1) | 1.000 (1) | 0.500 (1) | 1.000 (1) | 0.500 (2) | 0.842 (1) | 0.000 (2) | 0.941 (1) | 0.667 (1) |
| 3DASPP-SCE | | 0.556 (3) | 0.500 (1) | 0.938 (3) | 0.778 (3) | 0.667 (2) | 1.000 (1) | 0.250 (3) | 0.500 (1) | 0.750 (3) | 0.333 (3) | 0.500 (4) | 0.000 (2) | 0.812 (3) | 0.200 (2) |
| SE-ResNeXt-SSMA | Abhinav Valada, Rohit Mohan, Wolfram Burgard: Self-Supervised Model Adaptation for Multimodal Semantic Segmentation. arXiv | 0.355 (4) | 0.000 (5) | 0.684 (4) | 0.696 (4) | 0.200 (5) | 0.500 (4) | 0.200 (4) | 0.500 (1) | 0.429 (4) | 0.200 (4) | 0.545 (3) | 0.111 (1) | 0.556 (4) | 0.000 (3) |
| resnet50_scannet | | 0.231 (5) | 0.200 (4) | 0.481 (5) | 0.346 (5) | 0.250 (4) | 0.250 (5) | 0.000 (5) | 0.500 (1) | 0.333 (5) | 0.000 (5) | 0.357 (5) | 0.000 (2) | 0.286 (5) | 0.000 (3) |

Each cell reports the per-class IoU followed by the method's rank for that class in parentheses.
