Submitted anonymously.

Submission data

Full name: Deep Multimodal Networks with Residual Fusion Blocks
Description: Conventional RGB-D semantic segmentation methods adopt a two-stream fusion structure that uses two modality-specific encoders to extract features from the RGB and depth data. However, they do not fully exploit the interdependencies of the encoders. We propose a novel bottom-up interactive fusion structure and a residual fusion block to formulate the interdependencies of the two encoders (a minimal sketch of such a block follows the submission details below).
https://arxiv.org/abs/1907.00135
Input Data Types: Uses Color, Uses Geometry, Uses 2D
Programming language(s): TensorFlow
Hardware: 1080 Ti
Submission creation date: 3 Jun, 2019
Last edited: 4 Jul, 2019
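
The sketch below is an illustrative, minimal residual fusion block in TensorFlow/Keras, loosely following the idea stated in the description: modality-specific RGB and depth features are fused and the fused signal is fed back to each stream through a residual connection. The layer choices, channel counts, and names here are assumptions for illustration only, not the published RFBNet definition (see the arXiv link above for the actual architecture).

```python
# Hedged sketch only: generic residual fusion of RGB and depth feature maps.
# Exact operations in the paper's residual fusion block may differ.
import tensorflow as tf
from tensorflow.keras import layers


def residual_fusion_block(rgb_feat, depth_feat, channels, name="rfb"):
    """Fuse two modality-specific feature maps and return residually
    updated RGB and depth features plus the fused representation."""
    # Project the concatenated features to a shared fusion representation.
    fused = layers.Concatenate(name=f"{name}_concat")([rgb_feat, depth_feat])
    fused = layers.Conv2D(channels, 1, padding="same", name=f"{name}_proj")(fused)
    fused = layers.BatchNormalization(name=f"{name}_bn")(fused)
    fused = layers.ReLU(name=f"{name}_relu")(fused)

    # Residual update: each encoder stream receives the fused signal.
    rgb_out = layers.Add(name=f"{name}_rgb_add")([rgb_feat, fused])
    depth_out = layers.Add(name=f"{name}_depth_add")([depth_feat, fused])
    return rgb_out, depth_out, fused


# Minimal usage example with dummy feature maps (shapes are arbitrary).
rgb = tf.keras.Input(shape=(60, 80, 256), name="rgb_features")
depth = tf.keras.Input(shape=(60, 80, 256), name="depth_features")
rgb_next, depth_next, fused = residual_fusion_block(rgb, depth, channels=256)
model = tf.keras.Model([rgb, depth], [rgb_next, depth_next, fused])
model.summary()
```

In this sketch the fused features update both encoder streams, which is one way to model the interdependency between the two encoders that the description refers to; the paper's bottom-up interactive fusion structure applies such blocks at multiple encoder stages.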

2D semantic label results

IoU per class:
avg IoU: 0.592
bathtub: 0.616
bed: 0.758
bookshelf: 0.659
cabinet: 0.581
chair: 0.330
counter: 0.469
curtain: 0.655
desk: 0.543
door: 0.524
floor: 0.924
otherfurniture: 0.355
picture: 0.336
refrigerator: 0.572
shower curtain: 0.479
sink: 0.671
sofa: 0.648
table: 0.480
toilet: 0.814
wall: 0.814
window: 0.614