Pirsiavash, Hamed
Tejankar, Ajinkya Baban
2021-09-01
2021-09-01
2020-01-20
12265
http://hdl.handle.net/11603/22847

Domain adaptation is an important problem with many practical applications. The goal is to adapt a model trained on one domain (source) to another domain (target) with scarce or no annotation. We observe that the unlabeled target datasets of popular domain adaptation benchmarks do not contain any categories apart from the testing categories. We believe this introduces a bias that does not exist in many practical applications. We note that this bias can easily be reduced by adding images from non-testing categories to the datasets. On these modified benchmarks, state-of-the-art domain adaptation methods show a large drop in performance, raising concerns about their practical applicability. Further, we show that a simple two-stage method, involving a self-supervised rotation-prediction task followed by knowledge distillation, is a competitive baseline.

application/pdf
Curation Bias in Domain Adaptation
Text
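
The abstract only names the two stages of the baseline (self-supervised rotation prediction on the unlabeled target data, then knowledge distillation from a source-trained teacher); the exact architectures, losses, and schedules are not given here. The following is a minimal PyTorch sketch of that general recipe, under the assumption of a ResNet-style student with an .fc classifier head and standard (image, label) data loaders; all function and variable names are illustrative, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def rotate_batch(x):
    # Rotate each image by a random multiple of 90 degrees; the rotation index is the pretext label.
    labels = torch.randint(0, 4, (x.size(0),), device=x.device)
    rotated = torch.stack([torch.rot90(img, k=int(k), dims=(1, 2)) for img, k in zip(x, labels)])
    return rotated, labels

def train_rotation(student, target_loader, epochs=1, lr=1e-3, device="cpu"):
    # Stage 1: adapt the feature extractor to the target domain via rotation prediction (labels unused).
    head = nn.Linear(student.fc.in_features, 4).to(device)      # 4 rotation classes
    backbone = nn.Sequential(*list(student.children())[:-1])    # drop the classifier head
    opt = torch.optim.SGD(list(backbone.parameters()) + list(head.parameters()), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, _ in target_loader:
            x, rot_y = rotate_batch(x.to(device))
            feats = backbone(x).flatten(1)
            loss = F.cross_entropy(head(feats), rot_y)
            opt.zero_grad(); loss.backward(); opt.step()

def distill(student, teacher, target_loader, epochs=1, lr=1e-3, T=4.0, device="cpu"):
    # Stage 2: distill a source-trained teacher into the rotation-adapted student on target images.
    teacher.eval()
    opt = torch.optim.SGD(student.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, _ in target_loader:
            x = x.to(device)
            with torch.no_grad():
                t_logits = teacher(x)
            s_logits = student(x)
            # KL divergence between temperature-softened teacher and student predictions.
            loss = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                            F.softmax(t_logits / T, dim=1),
                            reduction="batchmean") * (T * T)
            opt.zero_grad(); loss.backward(); opt.step()

Usage would follow the two stages in order: first train_rotation(student, target_loader) on the unlabeled target set, then distill(student, teacher, target_loader) with a teacher already trained on the labeled source domain.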