Joint Semantic Segmentation and Depth Estimation with Deep Convolutional Networks

Date

2016-12-19

Citation of Original Publication

Arsalan Mousavian, et al., "Joint Semantic Segmentation and Depth Estimation with Deep Convolutional Networks," 2016 Fourth International Conference on 3D Vision (3DV), doi: 10.1109/3DV.2016.69

Rights

This item is likely protected under Title 17 of the U.S. Copyright Law. Unless covered by a Creative Commons license, contact the copyright holder or the author for uses protected by Copyright Law.
© 2016 IEEE

Abstract

Multi-scale deep CNNs have been used successfully for problems that map each pixel to a label, such as depth estimation and semantic segmentation. It has also been shown that such architectures are reusable and can serve multiple tasks; these networks are typically trained independently for each task by varying the output layer(s) and the training objective. In this work we present a new model for simultaneous depth estimation and semantic segmentation from a single RGB image. Our approach demonstrates the feasibility of training parts of the model for each task and then fine-tuning the full, combined model on both tasks simultaneously using a single loss function. Furthermore, we couple the deep CNN with a fully connected CRF, which captures the contextual relationships and interactions between the semantic and depth cues, improving the accuracy of the final results. The proposed model is trained and evaluated on the NYUDepth V2 dataset [23], outperforming the state-of-the-art methods on semantic segmentation and achieving comparable results on the task of depth estimation.
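
To make the joint-training idea concrete, the following is a minimal PyTorch sketch, not the authors' code: a shared convolutional backbone feeds two task-specific heads (per-pixel class scores and depth), and a single combined loss updates the whole model. The layer sizes, the cross-entropy plus L1 loss form, and the weight lambda_depth are illustrative assumptions; the paper's actual multi-scale architecture and CRF coupling are not reproduced here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class JointSegDepthNet(nn.Module):
    def __init__(self, num_classes=40):
        super().__init__()
        # Shared feature extractor (placeholder for the paper's multi-scale CNN).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Task-specific output layers: per-pixel class scores and per-pixel depth.
        self.seg_head = nn.Conv2d(128, num_classes, 1)
        self.depth_head = nn.Conv2d(128, 1, 1)

    def forward(self, x):
        feats = self.backbone(x)
        return self.seg_head(feats), self.depth_head(feats)

def joint_loss(seg_logits, depth_pred, seg_target, depth_target, lambda_depth=0.5):
    # Single objective combining both tasks, so gradients from segmentation
    # and depth jointly update the shared backbone during fine-tuning.
    seg_loss = F.cross_entropy(seg_logits, seg_target)
    depth_loss = F.l1_loss(depth_pred.squeeze(1), depth_target)
    return seg_loss + lambda_depth * depth_loss

# Usage sketch: pretrain the parts separately, then fine-tune the combined
# model on both tasks with the single joint loss.
model = JointSegDepthNet()
rgb = torch.randn(2, 3, 64, 64)                # RGB input batch
seg_gt = torch.randint(0, 40, (2, 64, 64))     # per-pixel semantic labels
depth_gt = torch.rand(2, 64, 64)               # per-pixel depth targets
seg_logits, depth_pred = model(rgb)
loss = joint_loss(seg_logits, depth_pred, seg_gt, depth_gt)
loss.backward()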