Full name | AdapNet++ |
Description | Benchmarks the AdapNet++ semantic segmentation architecture trained with HHA-encoded depth images as the network input. AdapNet++ is a computationally efficient unimodal semantic segmentation architecture. It incorporates a new encoder with multiscale residual units and an efficient atrous spatial pyramid pooling (eASPP) module that achieves a larger effective receptive field with more than 10x fewer parameters than the standard ASPP (a sketch of an eASPP-style block follows this table), complemented by a strong decoder with a multi-resolution supervision scheme that recovers high-resolution details. |
Publication title | Self-Supervised Model Adaptation for Multimodal Semantic Segmentation |
Publication authors | Abhinav Valada, Rohit Mohan, Wolfram Burgard |
Publication venue | International Journal of Computer Vision, 2019 |
Publication URL | https://arxiv.org/abs/1808.03833 |
Input Data Types | Uses Geometry, Uses 2D |
Programming language(s) | Python, TensorFlow |
Hardware | Intel Xeon E5 CPU, NVIDIA TITAN X (Pascal) |
Website | http://deepscene.cs.uni-freiburg.de |
Source code or download URL | https://github.com/DeepSceneSeg/AdapNet-pp |
Submission creation date | 2 Jan 2019 |
Last edited | 18 Jul 2019 |
Last uploaded | 15 Jan 2019 |
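
The description above attributes eASPP's 10x parameter reduction to replacing each wide 3x3 atrous convolution of standard ASPP with a bottlenecked pair of cascaded atrous convolutions. The sketch below illustrates that idea, assuming TensorFlow 2.x / Keras; the channel widths, atrous rates, and bottleneck size are illustrative assumptions rather than the released model's exact values, and batch normalization is omitted for brevity. The official implementation is at https://github.com/DeepSceneSeg/AdapNet-pp.

# Minimal sketch of an eASPP-style module (illustrative, not the official code).
import tensorflow as tf
from tensorflow.keras import layers


def easpp_branch(x, filters, rate, bottleneck=64):
    """One eASPP-style branch: a 1x1 bottleneck followed by two cascaded
    3x3 atrous convolutions, then a 1x1 expansion. Cascading two atrous
    convolutions on reduced channels widens the effective receptive field
    with far fewer parameters than one wide 3x3 atrous convolution."""
    y = layers.Conv2D(bottleneck, 1, padding='same', activation='relu')(x)
    y = layers.Conv2D(bottleneck, 3, dilation_rate=rate, padding='same',
                      activation='relu')(y)
    y = layers.Conv2D(bottleneck, 3, dilation_rate=rate, padding='same',
                      activation='relu')(y)
    return layers.Conv2D(filters, 1, padding='same', activation='relu')(y)


def easpp(x, filters=256, rates=(3, 6, 12)):
    """eASPP-style pooling: a 1x1 branch, cascaded-atrous branches, and a
    global image-pooling branch, concatenated and fused by a 1x1 conv."""
    branches = [layers.Conv2D(filters, 1, padding='same',
                              activation='relu')(x)]
    branches += [easpp_branch(x, filters, r) for r in rates]

    # Image-pooling branch: global context broadcast back over the map.
    h, w = x.shape[1], x.shape[2]
    pooled = layers.GlobalAveragePooling2D(keepdims=True)(x)
    pooled = layers.Conv2D(filters, 1, activation='relu')(pooled)
    branches.append(layers.UpSampling2D(size=(h, w),
                                        interpolation='bilinear')(pooled))

    y = layers.Concatenate()(branches)
    return layers.Conv2D(filters, 1, padding='same', activation='relu')(y)


# Example: apply the block to a 48x48 encoder feature map with 512 channels.
inputs = tf.keras.Input(shape=(48, 48, 512))
model = tf.keras.Model(inputs, easpp(inputs))
model.summary()

With these assumed sizes, a standard ASPP branch (3x3 conv, 512 -> 256 channels) would need roughly 1.2M weights, while the bottlenecked cascade above uses roughly 120K, consistent with the order-of-magnitude reduction claimed in the description.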