Is Multi-Task Learning an Upper Bound for Continual Learning?

Date

2023-03-05

Citation of Original Publication

Wu, Zihao, Huy Tran, Hamed Pirsiavash, and Soheil Kolouri. “Is Multi-Task Learning an Upper Bound for Continual Learning?” In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1–5, 2023. https://doi.org/10.1109/ICASSP49357.2023.10095984.

Rights

© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Abstract

Continual learning and multi-task learning are commonly used machine learning techniques for learning from multiple tasks. However, the existing literature treats multi-task learning as a reasonable performance upper bound for various continual learning algorithms, without rigorous justification. Moreover, in a multi-task setting, a small subset of tasks may behave adversarially, degrading overall learning performance. Continual learning approaches, on the other hand, can avoid the negative impact of such adversarial tasks and maintain performance on the remaining tasks, resulting in better performance than multi-task learning. This paper introduces a novel continual self-supervised learning approach, where each task involves learning an invariant representation for a specific class of data augmentations. We demonstrate that this approach results in naturally contradicting tasks and that, in this setting, continual learning often outperforms multi-task learning on benchmark datasets, including MNIST, CIFAR-10, and CIFAR-100.
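To make the setup concrete, the sketch below illustrates the general idea of treating each augmentation family as its own self-supervised task and visiting the tasks sequentially. This is a hypothetical illustration, not the authors' implementation: the encoder architecture, the choice of augmentation families, and the simple negative-cosine invariance objective are all assumptions made for the example.

```python
# Hypothetical sketch (not the paper's code): continual self-supervised learning
# where each "task" learns a representation invariant to one augmentation family.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import transforms

# One augmentation family per task (illustrative choices, not from the paper).
TASK_AUGMENTATIONS = [
    transforms.RandomResizedCrop(32, scale=(0.2, 1.0)),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.RandomGrayscale(p=1.0),
]


class Encoder(nn.Module):
    """Small convolutional encoder producing an embedding vector."""

    def __init__(self, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return self.net(x)


def invariance_loss(encoder, images, augment):
    """Pull the embeddings of an image and its augmented view together
    (a simple negative-cosine objective; a stand-in for the paper's loss)."""
    z1 = F.normalize(encoder(images), dim=1)
    z2 = F.normalize(encoder(augment(images)), dim=1)
    return -(z1 * z2).sum(dim=1).mean()


def train_continually(encoder, loader_fn, epochs_per_task=1, lr=1e-3):
    """Visit tasks sequentially; each task sees only its own augmentation family."""
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    for task_id, augment in enumerate(TASK_AUGMENTATIONS):
        for _ in range(epochs_per_task):
            for images, _ in loader_fn():  # loader_fn() returns a fresh DataLoader
                loss = invariance_loss(encoder, images, augment)
                opt.zero_grad()
                loss.backward()
                opt.step()
        # A real continual learner would add a forgetting-mitigation step here
        # (e.g., regularization or replay); it is omitted in this sketch.
```

In the corresponding multi-task baseline, all augmentation families would be applied jointly on every batch; because invariance to one family can conflict with invariance to another, the tasks naturally contradict each other, which is the setting in which the paper finds continual learning can outperform multi-task learning.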