Pirsiavash, Hamed
Esmaeilkhanian, Maryam
2021-01-29
2019-01-01
http://hdl.handle.net/11603/20901

Transfer Learning by Optical Flow

One of the biggest challenges in deep learning research is the need for large amounts of annotated data. Self-supervised learning is one way to tackle this issue: a new pretext task is designed so that features can be learned without any annotated data. These learned features can then be transferred to a new task by fine-tuning the pretrained model; this process of carrying features learned on unannotated data over to a new task is called transfer learning. The objective of this study is to learn low-level features through a novel self-supervised task, with the hypothesis that the features learned from the self-supervised task improve object classification under supervised learning. In addition, the overall model is significantly less complex when a representation is first learned in a deep network and the resulting knowledge is transferred to the second task. Compared to some existing self-supervised methods, the transfer-learning method described in this study achieves superior accuracy on object classification on the PASCAL VOC 2007 dataset.
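The pretrain-then-transfer workflow the abstract describes can be sketched in miniature. This is a hypothetical, pure-Python stand-in (all names are illustrative, not from the thesis): a tiny "encoder" is first trained on a label-free pretext task, then its learned weights are reused under a fresh task-specific head for fine-tuning.

```python
# Minimal sketch of self-supervised pretraining followed by transfer.
# Hypothetical stand-in for a deep network; not the thesis's actual model.
import random

class Encoder:
    """Shared feature extractor: a single elementwise linear layer."""
    def __init__(self, dim):
        self.weights = [random.uniform(-1.0, 1.0) for _ in range(dim)]

    def features(self, x):
        return [w * xi for w, xi in zip(self.weights, x)]

def pretrain_on_pretext_task(encoder, data, lr=0.01, epochs=5):
    """Stand-in for self-supervised pretraining: gradient steps that nudge
    the features toward reconstructing the input (no labels needed)."""
    for _ in range(epochs):
        for x in data:
            feats = encoder.features(x)
            for i, (f, xi) in enumerate(zip(feats, x)):
                # d/dw of (f - xi)^2 where f = w * xi
                encoder.weights[i] -= lr * 2.0 * (f - xi) * xi
    return encoder

def transfer(encoder):
    """Fine-tuning setup: reuse the pretrained encoder as-is and attach a
    fresh, randomly initialized (here zeroed) task-specific head."""
    head = [0.0 for _ in encoder.weights]
    return encoder, head

random.seed(0)
data = [[random.uniform(0.0, 1.0) for _ in range(4)] for _ in range(8)]
enc = pretrain_on_pretext_task(Encoder(4), data)
pretrained = list(enc.weights)
enc, head = transfer(enc)
assert enc.weights == pretrained  # learned features carry over unchanged
```

Only the mechanics are shown: unlabeled data drives the pretext objective, and the downstream task inherits the encoder's weights rather than learning them from scratch, which is the source of the complexity reduction the abstract mentions.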