Title: Fusion of stereo and monocular depth estimates in a self-supervised learning context
Author: Martins, Diogo Tomás Cardoso Rézio (TU Delft Aerospace Engineering)
Contributor: de Croon, G.C.H.E. (mentor)
Degree granting institution: Delft University of Technology
Programme: Aerospace Engineering | Control & Simulation
Date: 2017-10-11

Abstract: We study how autonomous robots can better estimate distances by fusing depth estimates from stereo vision with those of a convolutional neural network (CNN) that processes a single still image. The main contribution is a novel fusion method that preserves high-confidence stereo estimates while leveraging the CNN estimates in low-confidence regions. The main concern with such a fusion scheme is that the CNN may perform well on the training set but degrade significantly in the operational environment. Therefore, we also show that the performance of the monocular estimator in the operational environment improves if stereo vision provides supervised targets in a self-supervised learning (SSL) fashion. The merging framework is implemented on board a Parrot SLAMDunk and tested in real-world scenarios, providing more reliable depth maps for use in autonomous navigation.

To reference this document use: http://resolver.tudelft.nl/uuid:faf5d4fb-5785-4d27-9d52-0b09214f3a6a
Part of collection: Student theses
Document type: master thesis
Rights: © 2017 Diogo Tomás Cardoso Rézio Martins
Files: MsThesis.pdf (PDF, 10.47 MB)
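The abstract's fusion rule, keeping high-confidence stereo estimates and falling back to the monocular CNN estimate elsewhere, can be sketched as follows. This is a minimal illustrative sketch, not the thesis implementation: the array names, the confidence threshold, and the median-based scale alignment (single-image CNN depth is only defined up to scale) are all assumptions for the sake of the example.

```python
import numpy as np

def fuse_depth(stereo_depth, stereo_conf, mono_depth, conf_threshold=0.7):
    """Fuse per-pixel stereo and monocular depth maps.

    Keeps stereo depth where its confidence exceeds the threshold and
    uses the scale-aligned monocular estimate in the remaining pixels.
    All names and the threshold value are illustrative assumptions.
    """
    stereo_depth = np.asarray(stereo_depth, dtype=float)
    stereo_conf = np.asarray(stereo_conf, dtype=float)
    mono_depth = np.asarray(mono_depth, dtype=float)

    reliable = stereo_conf >= conf_threshold
    # Align the monocular scale to stereo using the reliable pixels,
    # since a single-image CNN predicts depth only up to a scale factor.
    if reliable.any():
        scale = np.median(stereo_depth[reliable]) / np.median(mono_depth[reliable])
    else:
        scale = 1.0
    return np.where(reliable, stereo_depth, scale * mono_depth)
```

In a self-supervised learning setting, the same reliable stereo pixels could additionally serve as training targets for the monocular CNN in the operational environment, which is the second idea the abstract describes.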