Full name | FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks |
Description | In this paper, we advance the concept of end-to-end learning of optical flow and make it work really well. The large improvements in quality and speed are caused by three major contributions: first, we focus on the training data and show that the schedule of presenting data during training is very important. Second, we develop a stacked architecture that includes warping of the second image with intermediate optical flow. Third, we elaborate on small displacements by introducing a sub-network specializing on small motions. FlowNet 2.0 is only marginally slower than the original FlowNet but decreases the estimation error by more than 50%. It performs on par with state-of-the-art methods, while running at interactive frame rates. Moreover, we present faster variants that allow optical flow computation at up to 140fps with accuracy matching the original FlowNet. |
Publication title | FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks |
Publication authors | Eddy Ilg, Nikolaus Mayer, Tonmoy Saikia, Margret Keuper, Alexey Dosovitskiy, Thomas Brox |
Publication venue | CVPR 2017 |
Publication URL | https://arxiv.org/pdf/1612.01925 |
Input Data Types | Uses Color (pairs of RGB frames; no depth input) |
Programming language(s) | Python |
Hardware | NVIDIA GeForce GTX 1080 Ti |
Website | https://lmb.informatik.uni-freiburg.de/Publications/2017/IMSKDB17/ |
Source code or download URL | https://github.com/NVIDIA/flownet2-pytorch |
Submission creation date | 17 Mar, 2020 |
Last edited | 17 Mar, 2020 |
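The description mentions warping the second image with the intermediate optical flow inside the stacked architecture. As a rough illustration of what such a backward-warping step does, here is a minimal NumPy sketch; the function name is hypothetical, and nearest-neighbor sampling is used for brevity where FlowNet 2.0 uses differentiable bilinear warping.

```python
import numpy as np

def backward_warp(img2, flow):
    """Warp the second image toward the first using an estimated flow field.

    img2: (H, W) or (H, W, C) array.
    flow: (H, W, 2) array of (dx, dy) displacements mapping each pixel of
          image 1 to its corresponding location in image 2.
    Illustrative sketch only: nearest-neighbor sampling, border clamping.
    """
    H, W = flow.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    # Sample image 2 at the locations the flow points to, clamped to bounds.
    sx = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    sy = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    return img2[sy, sx]
```

After this warp, the residual motion between image 1 and the warped image 2 is small, which is what lets the next network in the stack refine the flow estimate.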