The 3D semantic labeling task involves predicting a semantic class label for each vertex of a 3D scan mesh.

Evaluation and metrics

Our evaluation ranks all methods according to the PASCAL VOC intersection-over-union metric (IoU): IoU = TP/(TP+FP+FN), where TP, FP, and FN are the numbers of true positive, false positive, and false negative vertex labels, respectively. Predicted labels are evaluated per vertex over the respective 3D scan mesh; for 3D approaches that operate on other representations such as grids or points, the predicted labels should first be mapped onto the mesh vertices (an example mapping from grid to mesh vertices is provided in the evaluation helpers).
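
To make the evaluation concrete, the following is a minimal Python sketch of per-vertex IoU scoring. It assumes integer class ids per mesh vertex, uses an `ignore_label` value to mark unannotated vertices that are excluded from scoring, and shows a nearest-neighbor lookup as one possible way to transfer point- or voxel-based predictions onto mesh vertices; all function and variable names here are illustrative and are not the official evaluation helpers.

```python
# Minimal sketch of the per-vertex IoU evaluation described above (illustrative,
# not the official benchmark code). Assumes integer class ids per vertex and an
# ignore_label marking unannotated vertices that are excluded from scoring.
import numpy as np
from scipy.spatial import cKDTree


def map_predictions_to_vertices(pred_points, pred_labels, mesh_vertices):
    """Transfer labels predicted on an arbitrary point set (or voxel centers)
    onto mesh vertices via nearest-neighbor lookup (one possible mapping)."""
    tree = cKDTree(pred_points)
    _, nearest = tree.query(mesh_vertices)
    return pred_labels[nearest]


def per_class_iou(gt, pred, num_classes, ignore_label=-1):
    """IoU = TP / (TP + FP + FN), computed per class over mesh vertices."""
    valid = gt != ignore_label
    gt, pred = gt[valid], pred[valid]
    ious = np.full(num_classes, np.nan)  # NaN for classes absent from the scene
    for c in range(num_classes):
        tp = np.sum((gt == c) & (pred == c))
        fp = np.sum((gt != c) & (pred == c))
        fn = np.sum((gt == c) & (pred != c))
        denom = tp + fp + fn
        if denom > 0:
            ious[c] = tp / denom
    return ious


# Usage sketch (hypothetical arrays): map point predictions to vertices, score,
# and average over classes to obtain the mean IoU used for the ranking.
# vertex_pred = map_predictions_to_vertices(points, point_labels, mesh_vertices)
# print(np.nanmean(per_class_iou(gt_labels, vertex_pred, num_classes=20)))
```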

The table below lists the benchmark results for the "3D semantic label with limited reconstructions" scenario.

| Method | avg IoU | bathtub | bed | bookshelf | cabinet | chair | counter | curtain | desk | door | floor | otherfurniture | picture | refrigerator | shower curtain | sink | sofa | table | toilet | wall | window |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| WS3D_LR_Sem | 0.684 | 0.865 | 0.761 | 0.780 | 0.644 | 0.810 | 0.445 | 0.796 | 0.596 | 0.594 | 0.945 | 0.456 | 0.234 | 0.541 | 0.793 | 0.723 | 0.761 | 0.618 | 0.906 | 0.822 | 0.598 |
| CSC_LR_SEM | 0.460 | 0.472 | 0.731 | 0.465 | 0.398 | 0.817 | 0.292 | 0.442 | 0.311 | 0.387 | 0.939 | 0.218 | 0.181 | 0.302 | 0.076 | 0.449 | 0.743 | 0.430 | 0.444 | 0.737 | 0.368 |
| CSG_3DSegNet | 0.480 | 0.521 | 0.715 | 0.562 | 0.389 | 0.693 | 0.307 | 0.157 | 0.501 | 0.321 | 0.927 | 0.219 | 0.074 | 0.329 | 0.485 | 0.504 | 0.596 | 0.458 | 0.715 | 0.714 | 0.418 |
| Scratch_LR_SEM | 0.401 | 0.240 | 0.674 | 0.095 | 0.347 | 0.763 | 0.271 | 0.204 | 0.449 | 0.406 | 0.936 | 0.220 | 0.127 | 0.199 | 0.004 | 0.348 | 0.665 | 0.477 | 0.493 | 0.730 | 0.366 |
| PointContrast_LR_SEM | 0.438 | 0.517 | 0.659 | 0.251 | 0.332 | 0.783 | 0.244 | 0.408 | 0.411 | 0.409 | 0.935 | 0.206 | 0.119 | 0.200 | 0.048 | 0.355 | 0.682 | 0.414 | 0.647 | 0.743 | 0.391 |
| NWSYY | 0.517 | 0.725 | 0.619 | 0.396 | 0.455 | 0.766 | 0.327 | 0.570 | 0.477 | 0.427 | 0.943 | 0.288 | 0.220 | 0.274 | 0.135 | 0.471 | 0.697 | 0.504 | 0.714 | 0.767 | 0.566 |
| Viewpoint_BN_LR_AIR | 0.452 | 0.587 | 0.569 | 0.172 | 0.391 | 0.769 | 0.290 | 0.512 | 0.501 | 0.373 | 0.935 | 0.251 | 0.173 | 0.201 | 0.003 | 0.352 | 0.619 | 0.454 | 0.783 | 0.719 | 0.390 |
| DE-3DLearner LR | 0.508 | 0.824 | 0.530 | 0.314 | 0.479 | 0.746 | 0.334 | 0.490 | 0.508 | 0.477 | 0.950 | 0.269 | 0.221 | 0.324 | 0.029 | 0.421 | 0.626 | 0.490 | 0.727 | 0.782 | 0.620 |

Published methods:
- WS3D_LR_Sem: Kangcheng Liu. WS3D: Weakly Supervised 3D Scene Segmentation with Region-Level Boundary Awareness and Instance Discrimination. European Conference on Computer Vision (ECCV), 2022.
- DE-3DLearner LR: Ping-Chung Yu, Cheng Sun, Min Sun. Data Efficient 3D Learner via Knowledge Transferred from 2D Model. European Conference on Computer Vision (ECCV), 2022.