Autonomous driving is one of those applications where an acute awareness of distances matters. Current systems for extracting dense depth maps often combine camera and Lidar input. Lidar sensors are by no means an obvious choice, given their size and, above all, their price. To mitigate these issues, there is growing interest in replacing high-density Lidars with cheaper ones that yield only sparse depth maps. The latter can then be enhanced to a higher resolution with the help of the camera input. The first part of the presentation focuses on our recent work to do exactly that: at a given time frame, combine sparse Lidar data with images in order to generate dense depth maps. In the second part, we discuss how the inclusion of time can be used to generate better depth maps than those obtained when each time instance is handled in isolation. In the near future, we will endeavor to combine both strands.
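As a toy illustration of the depth-completion problem discussed above (not the method presented in the talk), the sketch below densifies a sparse Lidar depth map with a simple nearest-neighbor fill; a real camera-guided system would additionally use the image to decide how to interpolate between measurements. The function name, grid size, and depth values are invented for illustration.

```python
import numpy as np

def densify_nearest(sparse_depth, valid_mask):
    """Fill every pixel with the depth of its nearest valid Lidar return.

    sparse_depth: (H, W) array, meaningful only where valid_mask is True.
    valid_mask:   (H, W) boolean array marking pixels with a measurement.
    """
    h, w = sparse_depth.shape
    ys, xs = np.nonzero(valid_mask)            # coordinates of known depths
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    # Squared distance from every pixel to every valid measurement.
    d2 = (grid_y[..., None] - ys) ** 2 + (grid_x[..., None] - xs) ** 2
    nearest = d2.argmin(axis=-1)               # index of closest measurement
    return sparse_depth[ys[nearest], xs[nearest]]

# Toy example: a 4x4 scene with only two Lidar returns.
depth = np.zeros((4, 4))
mask = np.zeros((4, 4), dtype=bool)
depth[0, 0], mask[0, 0] = 2.0, True
depth[3, 3], mask[3, 3] = 8.0, True
dense = densify_nearest(depth, mask)           # every pixel now holds a depth
```

Nearest-neighbor fill ignores the image entirely, which is exactly why camera guidance helps: image edges tell the system where depth discontinuities are likely, so interpolation can stop at object boundaries.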
Workshop IV: Deep Geometric Learning of Big Data and Applications