Submitted by Abhinav Valada.

Submission data

Full name: AdapNet++
Description: Benchmarking the AdapNet++ semantic segmentation architecture, trained with HHA-encoded depth images as input to the network. AdapNet++ is a computationally efficient unimodal semantic segmentation architecture. It combines a new encoder with multiscale residual units, an efficient atrous spatial pyramid pooling (eASPP) module that achieves a larger effective receptive field with more than 10x fewer parameters than the standard ASPP, and a strong decoder with a multi-resolution supervision scheme that recovers high-resolution details.
Publication title: Self-Supervised Model Adaptation for Multimodal Semantic Segmentation
Publication authors: Abhinav Valada, Rohit Mohan, Wolfram Burgard
Publication venue: International Journal of Computer Vision, 2019
Publication URL:
Input data types: Uses Geometry, Uses 2D
Programming language(s): Python, TensorFlow
Hardware: Intel Xeon E5 CPU, NVIDIA TITAN X (Pascal)
Source code or download URL:
Submission creation date: 2 Jan, 2019
Last edited: 18 Jul, 2019
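The parameter saving claimed for eASPP in the description can be illustrated with a rough back-of-the-envelope count: a standard ASPP branch applies a large atrous convolution directly to a high-dimensional feature map, while an eASPP-style branch squeezes the channels through a bottleneck and cascades small atrous convolutions. The channel sizes below (2048-d input, 64-channel bottleneck, 256 output filters, two cascaded 3x3 convolutions) are illustrative assumptions for this sketch, not the exact published configuration.

```python
# Rough parameter-count comparison between a standard ASPP branch and a
# cascaded-bottleneck (eASPP-style) branch. All channel sizes here are
# illustrative assumptions, not the published AdapNet++ configuration.

def standard_aspp_branch(c_in=2048, c_out=256, k=3):
    # One k x k atrous convolution applied directly to the c_in-dim input.
    return k * k * c_in * c_out

def easpp_branch(c_in=2048, c_out=256, c_mid=64, k=3, n_cascade=2):
    # 1x1 reduction to a narrow bottleneck, a cascade of small k x k
    # atrous convolutions, then a 1x1 expansion back to c_out channels.
    reduce = c_in * c_mid
    cascade = n_cascade * (k * k * c_mid * c_mid)
    expand = c_mid * c_out
    return reduce + cascade + expand

standard = standard_aspp_branch()   # 4,718,592 parameters
efficient = easpp_branch()          # 221,184 parameters
print(f"reduction: {standard / efficient:.1f}x")  # prints "reduction: 21.3x"
```

Even with these toy numbers the bottlenecked branch comes in well over 10x smaller, consistent with the >10x figure quoted in the description; cascading two 3x3 atrous convolutions also enlarges the effective receptive field relative to a single convolution at the same dilation rate.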

2D semantic label results

Table columns: avg iou, bathtub, bed, bookshelf, cabinet, chair, counter, curtain, desk, door, floor, otherfurniture, picture, refrigerator, shower curtain, sink, sofa, table, toilet, wall, window