Submitted by Abhinav Valada.

Submission data

Full name: AdapNet++
Description: Benchmarks the AdapNet++ semantic segmentation architecture trained with HHA-encoded depth images (horizontal disparity, height above ground, and angle with gravity) as input. AdapNet++ is a computationally efficient unimodal semantic segmentation architecture that combines a new encoder with multiscale residual units, an efficient atrous spatial pyramid pooling module (eASPP) that achieves a larger effective receptive field with more than 10x fewer parameters than the standard ASPP, and a strong decoder with a multi-resolution supervision scheme that recovers high-resolution details (see the eASPP sketch after this list).
Publication title: Self-Supervised Model Adaptation for Multimodal Semantic Segmentation
Publication authors: Abhinav Valada, Rohit Mohan, Wolfram Burgard
Publication venue: International Journal of Computer Vision, 2019
Publication URL: https://arxiv.org/abs/1808.03833
Input data types: Uses Geometry, Uses 2D
Programming language(s): Python, TensorFlow
Hardware: Intel Xeon E5 CPU, NVIDIA TITAN X (Pascal)
Website: http://deepscene.cs.uni-freiburg.de
Source code or download URL: https://github.com/DeepSceneSeg/AdapNet-pp
Submission creation date: 2 Jan, 2019
Last edited: 18 Jul, 2019
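
For illustration, the eASPP idea referenced in the description can be sketched as follows. This is a minimal tf.keras sketch, not the authors' implementation (which is available at the repository linked above): the atrous rates (3, 6, 12), the 64/256 filter widths, the ReLU activations, and the helper names easpp_branch/easpp are illustrative assumptions. The core idea is that each branch replaces a single wide atrous convolution with a 1x1 bottleneck followed by two cascaded 3x3 atrous convolutions, which is what yields the larger effective receptive field at a much lower parameter count.

```python
import tensorflow as tf

def easpp_branch(x, rate, bottleneck_filters=64, out_filters=256):
    # 1x1 reduction: bottlenecking the channels is what keeps the
    # parameter count far below a standard ASPP branch.
    y = tf.keras.layers.Conv2D(bottleneck_filters, 1, padding="same",
                               activation="relu")(x)
    # Two cascaded 3x3 atrous convolutions: cascading two small atrous
    # kernels enlarges the effective receptive field of the branch.
    for _ in range(2):
        y = tf.keras.layers.Conv2D(bottleneck_filters, 3, padding="same",
                                   dilation_rate=rate, activation="relu")(y)
    # 1x1 expansion back to the branch output width.
    return tf.keras.layers.Conv2D(out_filters, 1, padding="same",
                                  activation="relu")(y)

def easpp(x, rates=(3, 6, 12), out_filters=256):
    # A plain 1x1 branch plus one cascaded-atrous bottleneck per rate.
    branches = [tf.keras.layers.Conv2D(out_filters, 1, padding="same",
                                       activation="relu")(x)]
    branches += [easpp_branch(x, r, out_filters=out_filters) for r in rates]
    # Image-level context: global average pool, 1x1 conv, and bilinear
    # upsampling back to the (static) spatial size of the feature map.
    pooled = tf.keras.layers.GlobalAveragePooling2D(keepdims=True)(x)  # TF >= 2.6
    pooled = tf.keras.layers.Conv2D(out_filters, 1, activation="relu")(pooled)
    pooled = tf.keras.layers.UpSampling2D(size=(x.shape[1], x.shape[2]),
                                          interpolation="bilinear")(pooled)
    branches.append(pooled)
    # Concatenate all branches and fuse them with a final 1x1 convolution.
    y = tf.keras.layers.Concatenate()(branches)
    return tf.keras.layers.Conv2D(out_filters, 1, padding="same",
                                  activation="relu")(y)

# Example: a hypothetical 24x48x2048 encoder output at 1/16 resolution.
inputs = tf.keras.Input(shape=(24, 48, 2048))
model = tf.keras.Model(inputs, easpp(inputs))
```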

2D semantic label results

Info: copyleft
avg iou: 0.503
bathtub: 0.613
bed: 0.722
bookshelf: 0.418
cabinet: 0.358
chair: 0.337
counter: 0.370
curtain: 0.479
desk: 0.443
door: 0.368
floor: 0.907
otherfurniture: 0.207
picture: 0.213
refrigerator: 0.464
shower curtain: 0.525
sink: 0.618
sofa: 0.657
table: 0.450
toilet: 0.788
wall: 0.721
window: 0.408
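
Note: avg iou is the arithmetic mean of the 20 per-class IoU scores above (the scores sum to 10.066, and 10.066 / 20 ≈ 0.503).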