The set of instance classes is the subset of the 100 semantic classes that contains only the countable object classes.
Predicted labels are evaluated per vertex on the vertices of the 5%-decimated 3D scan mesh (mesh_aligned_0.05.ply); 3D approaches that operate on other representations, such as voxel grids or point clouds, should map their predicted labels onto the mesh vertices.
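A common way to perform this mapping is nearest-neighbour label transfer: each mesh vertex inherits the label of the closest predicted point. The sketch below illustrates this with plain NumPy; the function and array names are illustrative, not part of the benchmark's tooling.

```python
import numpy as np

def map_labels_to_vertices(pred_points, pred_labels, mesh_vertices):
    """Assign each mesh vertex the label of its nearest predicted point.

    pred_points:   (P, 3) coordinates of the method's predictions
    pred_labels:   (P,)   per-point labels
    mesh_vertices: (V, 3) vertices of the evaluation mesh
    """
    # Squared distance between every vertex and every predicted point.
    d2 = ((mesh_vertices[:, None, :] - pred_points[None, :, :]) ** 2).sum(-1)
    # Index of the closest predicted point for each vertex.
    return pred_labels[d2.argmin(axis=1)]

pred_points = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
pred_labels = np.array([3, 7])
mesh_vertices = np.array([[0.1, 0.0, 0.0], [0.9, 1.1, 1.0]])
print(map_labels_to_vertices(pred_points, pred_labels, mesh_vertices))  # [3 7]
```

The brute-force distance matrix is fine for illustration; for full scans a spatial index (e.g. a k-d tree) avoids the O(V·P) memory cost.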
Our evaluation ranks all methods by average precision per class. We report the mean average precision at overlap threshold 0.25 (AP 25%), at overlap threshold 0.5 (AP 50%), and averaged over the overlap thresholds [0.5:0.95:0.05] (AP). Note that multiple predictions of the same ground-truth instance are penalized as false positives.
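The duplicate-penalty rule means each ground-truth instance can be claimed by at most one prediction; any further prediction overlapping the same instance counts as a false positive. A minimal sketch of this greedy matching (IoU computed over boolean vertex masks; names are illustrative, not the benchmark's evaluation script):

```python
import numpy as np

def match_predictions(pred_masks, gt_masks, iou_thresh):
    """Greedily match predictions to ground-truth instances.

    pred_masks: list of boolean vertex masks, sorted by confidence (descending)
    gt_masks:   list of boolean vertex masks, one per ground-truth instance
    Returns a list of True (true positive) / False (false positive) flags.
    """
    claimed = set()  # ground-truth instances already matched
    flags = []
    for pm in pred_masks:
        best_iou, best_gt = 0.0, None
        for gi, gm in enumerate(gt_masks):
            inter = np.logical_and(pm, gm).sum()
            union = np.logical_or(pm, gm).sum()
            iou = inter / union if union else 0.0
            if iou > best_iou:
                best_iou, best_gt = iou, gi
        if best_iou >= iou_thresh and best_gt not in claimed:
            claimed.add(best_gt)
            flags.append(True)
        else:
            # Below threshold, or a duplicate of an already-claimed instance.
            flags.append(False)
    return flags
```

Average precision is then computed from these true/false-positive flags as the area under the precision-recall curve, and averaged over the overlap thresholds listed above.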
Evaluation excludes anonymized vertices; the list of these vertices is given in mesh_aligned_0.05_mask.txt.
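In practice this means the masked vertices are dropped from both predictions and ground truth before scoring. A short sketch, assuming the mask file holds one vertex index per line (a format assumption, not documented here):

```python
import numpy as np

def filter_evaluated(per_vertex_labels, masked_indices):
    """Remove anonymized vertices from a per-vertex label array."""
    keep = np.ones(len(per_vertex_labels), dtype=bool)
    keep[list(masked_indices)] = False  # drop anonymized vertices
    return per_vertex_labels[keep]

labels = np.array([1, 2, 3, 4, 5])
print(filter_evaluated(labels, [1, 3]))  # [1 3 5]
```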
Methods | MEAN | BACKPACK | BAG | BASKET | BED | BINDER | BLANKET | BLINDS | BOOK | BOOKSHELF | BOTTLE | BOWL | BOX | BUCKET | CABINET | CEILING LAMP | CHAIR | CLOCK | COAT HANGER | COMPUTER TOWER | CONTAINER | CRATE | CUP | CURTAIN | CUSHION | CUTTING BOARD | DOOR | EXHAUST FAN | FILE FOLDER | HEADPHONES | HEATER | JACKET | JAR | KETTLE | KEYBOARD | KITCHEN CABINET | LAPTOP | LIGHT SWITCH | MARKER | MICROWAVE | MONITOR | MOUSE | OFFICE CHAIR | PAINTING | PAN | PAPER BAG | PAPER TOWEL | PICTURE | PILLOW | PLANT | PLANT POT | POSTER | POT | POWER STRIP | PRINTER | RACK | REFRIGERATOR | SHELF | SHOE RACK | SHOES | SINK | SLIPPERS | SMOKE DETECTOR | SOAP DISPENSER | SOCKET | SOFA | SPEAKER | SPRAY BOTTLE | STAPLER | STORAGE CABINET | SUITCASE | TABLE | TABLE LAMP | TAP | TELEPHONE | TISSUE BOX | TOILET | TOILET BRUSH | TOILET PAPER | TOWEL | TRASH CAN | TV | WHITEBOARD | WHITEBOARD ERASER | WINDOW |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
SoftGroup | 0.297 | 0.629 | 0.068 | 0.000 | 0.280 | 0.000 | 0.532 | 0.493 | 0.030 | 0.447 | 0.148 | 0.031 | 0.133 | 0.071 | 0.383 | 0.736 | 0.798 | 0.500 | 0.000 | 0.313 | 0.001 | 0.013 | 0.360 | 0.623 | 0.021 | 0.000 | 0.644 | 0.070 | 0.000 | 0.000 | 0.756 | 0.207 | 0.000 | 0.166 | 0.734 | 0.053 | 0.364 | 0.022 | 0.000 | 0.816 | 0.590 | 0.758 | 0.940 | 0.015 | 0.000 | 0.002 | 0.000 | 0.197 | 0.265 | 0.777 | 0.337 | 0.000 | 0.000 | 0.000 | 0.104 | 0.000 | 0.095 | 0.152 | 0.000 | 0.189 | 0.823 | 0.097 | 0.803 | 0.666 | 0.292 | 0.888 | 0.001 | 0.001 | 0.000 | 0.454 | 0.427 | 0.251 | 0.283 | 0.433 | 0.368 | 0.119 | 0.808 | 0.755 | 0.130 | 0.066 | 0.516 | 0.813 | 0.911 | 0.000 | 0.177 |
Thang Vu, Kookhoi Kim, Tung M. Luu, Xuan Thanh Nguyen, Chang D. Yoo. SoftGroup for 3D Instance Segmentation on Point Clouds. CVPR 2022
Open3DIS - only2D | 0.204 | 0.000 | 0.285 | 0.000 | 0.220 | 0.116 | 0.000 | 0.141 | 0.000 | 0.000 | 0.145 | 0.133 | 0.158 | 0.078 | 0.036 | 0.277 | 0.310 | 0.763 | 0.101 | 0.128 | 0.000 | 0.038 | 0.313 | 0.103 | 0.476 | 0.133 | 0.446 | 0.036 | 0.000 | 0.600 | 0.108 | 0.035 | 0.010 | 0.250 | 0.638 | 0.033 | 0.505 | 0.000 | 0.000 | 0.390 | 0.510 | 0.325 | 0.446 | 0.046 | 0.000 | 0.155 | 0.155 | 0.000 | 0.492 | 0.168 | 0.072 | 0.021 | 0.000 | 0.206 | 0.488 | 0.000 | 0.344 | 0.252 | 0.000 | 0.034 | 0.472 | 0.046 | 0.117 | 0.074 | 0.119 | 0.364 | 0.000 | 0.074 | 0.381 | 0.364 | 0.288 | 0.112 | 0.000 | 0.275 | 0.368 | 0.654 | 0.549 | 0.107 | 0.238 | 0.239 | 0.480 | 0.372 | 0.434 | 0.277 | 0.047 |
Phuc Nguyen, Tuan Duc Ngo, Evangelos Kalogerakis, Chuang Gan, Anh Tran, Cuong Pham, Khoi Nguyen. Open3DIS: Open-vocabulary 3D Instance Segmentation with 2D Mask Guidance. CVPR 2024
MAFT | 0.319 | 0.263 | 0.125 | 0.001 | 0.459 | 0.000 | 0.267 | 0.738 | 0.044 | 0.519 | 0.139 | 0.000 | 0.133 | 0.139 | 0.443 | 0.780 | 0.777 | 0.765 | 0.316 | 0.390 | 0.000 | 0.074 | 0.443 | 0.653 | 0.117 | 0.033 | 0.725 | 0.379 | 0.000 | 0.000 | 0.756 | 0.393 | 0.000 | 0.049 | 0.760 | 0.320 | 0.088 | 0.010 | 0.000 | 0.647 | 0.867 | 0.458 | 0.937 | 0.022 | 0.000 | 0.142 | 0.000 | 0.085 | 0.436 | 0.721 | 0.245 | 0.060 | 0.000 | 0.088 | 0.142 | 0.000 | 0.121 | 0.305 | 0.041 | 0.137 | 0.669 | 0.140 | 0.755 | 0.606 | 0.116 | 0.988 | 0.189 | 0.000 | 0.000 | 0.458 | 0.307 | 0.434 | 0.253 | 0.366 | 0.461 | 0.119 | 0.710 | 0.326 | 0.050 | 0.161 | 0.699 | 0.988 | 0.815 | 0.441 | 0.185 |
Mask-Attention-Free Transformer for 3D Instance Segmentation.
SGIFormer | 0.410 | 0.529 | 0.156 | 0.000 | 0.553 | 0.000 | 0.419 | 0.708 | 0.020 | 0.543 | 0.246 | 0.125 | 0.231 | 0.324 | 0.580 | 0.780 | 0.865 | 0.381 | 0.166 | 0.544 | 0.000 | 0.002 | 0.472 | 0.640 | 0.028 | 0.000 | 0.869 | 0.481 | 0.000 | 0.200 | 0.845 | 0.253 | 0.000 | 0.577 | 0.858 | 0.221 | 0.712 | 0.408 | 0.000 | 0.899 | 0.886 | 0.822 | 0.912 | 0.002 | 0.000 | 0.000 | 0.175 | 0.049 | 0.325 | 0.777 | 0.369 | 0.009 | 0.000 | 0.265 | 0.238 | 0.200 | 0.862 | 0.207 | 0.166 | 0.259 | 1.000 | 0.064 | 0.865 | 0.666 | 0.111 | 0.965 | 0.000 | 0.010 | 0.033 | 0.559 | 0.314 | 0.645 | 0.541 | 0.842 | 0.785 | 0.142 | 0.909 | 0.938 | 0.138 | 0.210 | 0.607 | 0.960 | 0.943 | 0.548 | 0.502 |
Lei Yao, Yi Wang, Moyun Liu, Lap-Pui Chau. SGIFormer: Semantic-guided and Geometric-enhanced Interleaving Transformer for 3D Instance Segmentation. Under Review
SceneMamba | 0.377 | 0.572 | 0.130 | 0.000 | 0.614 | 0.000 | 0.577 | 0.698 | 0.026 | 0.402 | 0.156 | 0.020 | 0.188 | 0.098 | 0.434 | 0.798 | 0.815 | 0.291 | 0.166 | 0.691 | 0.000 | 0.037 | 0.354 | 0.602 | 0.097 | 0.014 | 0.807 | 0.573 | 0.000 | 0.033 | 0.794 | 0.493 | 0.000 | 0.220 | 0.844 | 0.232 | 0.184 | 0.429 | 0.000 | 0.383 | 0.822 | 0.822 | 0.966 | 0.011 | 0.000 | 0.017 | 0.000 | 0.080 | 0.478 | 0.762 | 0.625 | 0.009 | 0.000 | 0.131 | 0.156 | 0.050 | 0.291 | 0.110 | 0.000 | 0.211 | 0.882 | 0.247 | 0.837 | 0.666 | 0.108 | 1.000 | 0.040 | 0.142 | 0.050 | 0.614 | 0.513 | 0.612 | 0.800 | 0.626 | 0.670 | 0.202 | 0.890 | 0.832 | 0.187 | 0.240 | 0.586 | 0.965 | 0.878 | 0.475 | 0.318 |
HAIS | 0.199 | 0.209 | 0.000 | 0.000 | 0.095 | 0.000 | 0.104 | 0.407 | 0.029 | 0.425 | 0.074 | 0.000 | 0.066 | 0.000 | 0.312 | 0.712 | 0.791 | 0.333 | 0.000 | 0.416 | 0.000 | 0.000 | 0.273 | 0.430 | 0.000 | 0.000 | 0.514 | 0.000 | 0.000 | 0.000 | 0.778 | 0.106 | 0.000 | 0.364 | 0.294 | 0.090 | 0.000 | 0.000 | 0.000 | 0.213 | 0.635 | 0.548 | 0.949 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.339 | 0.403 | 0.375 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.013 | 0.000 | 0.170 | 0.861 | 0.000 | 0.882 | 0.000 | 0.056 | 0.666 | 0.000 | 0.000 | 0.000 | 0.165 | 0.230 | 0.242 | 0.000 | 0.000 | 0.217 | 0.000 | 0.644 | 0.000 | 0.000 | 0.080 | 0.376 | 0.777 | 0.902 | 0.000 | 0.178 |
Shaoyu Chen, Jiemin Fang, Qian Zhang, Wenyu Liu, Xinggang Wang. Hierarchical Aggregation for 3D Instance Segmentation. ICCV 2021
PointGroup | 0.146 | 0.000 | 0.000 | 0.000 | 0.137 | 0.000 | 0.000 | 0.626 | 0.012 | 0.480 | 0.021 | 0.000 | 0.029 | 0.000 | 0.222 | 0.721 | 0.790 | 0.000 | 0.000 | 0.288 | 0.000 | 0.000 | 0.279 | 0.334 | 0.000 | 0.000 | 0.460 | 0.000 | 0.000 | 0.000 | 0.722 | 0.179 | 0.000 | 0.000 | 0.225 | 0.033 | 0.000 | 0.000 | 0.000 | 0.031 | 0.566 | 0.490 | 0.906 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.366 | 0.389 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.203 | 0.733 | 0.000 | 0.000 | 0.000 | 0.000 | 0.535 | 0.000 | 0.000 | 0.000 | 0.320 | 0.000 | 0.196 | 0.000 | 0.000 | 0.000 | 0.000 | 0.035 | 0.000 | 0.000 | 0.000 | 0.494 | 0.503 | 0.857 | 0.000 | 0.134 |
Li Jiang, Hengshuang Zhao, Shaoshuai Shi, Shu Liu, Chi-Wing Fu, Jiaya Jia. PointGroup: Dual-Set Point Grouping for 3D Instance Segmentation. CVPR 2020
Please refer to the submission instructions before making a submission.