Is Multi-Task Learning an Upper Bound for Continual Learning?

dc.contributor.author: Wu, Zihao
dc.contributor.author: Tran, Huy
dc.contributor.author: Pirsiavash, Hamed
dc.contributor.author: Kolouri, Soheil
dc.date.accessioned: 2023-11-09T21:19:42Z
dc.date.available: 2023-11-09T21:19:42Z
dc.date.issued: 2023-03-05
dc.description: ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); Rhodes Island, Greece; 04-10 June 2023
dc.description.abstract: Continual learning and multi-task learning are commonly used machine learning techniques for learning from multiple tasks. However, existing literature assumes multi-task learning as a reasonable performance upper bound for various continual learning algorithms, without rigorous justification. Additionally, in a multi-task setting, a small subset of tasks may behave as adversarial tasks, negatively impacting overall learning performance. On the other hand, continual learning approaches can avoid the negative impact of adversarial tasks and maintain performance on the remaining tasks, resulting in better performance than multi-task learning. This paper introduces a novel continual self-supervised learning approach, where each task involves learning an invariant representation for a specific class of data augmentations. We demonstrate that this approach results in naturally contradicting tasks and that, in this setting, continual learning often outperforms multi-task learning on benchmark datasets, including MNIST, CIFAR-10, and CIFAR-100.
dc.description.sponsorship: This work was supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR00112190135.
dc.description.uri: https://ieeexplore.ieee.org/abstract/document/10095984
dc.format.extent: 5 pages
dc.genre: conference papers and proceedings
dc.genre: preprints
dc.identifier: doi:10.13016/m2dgj3-lxgk
dc.identifier.citation: Wu, Zihao, Huy Tran, Hamed Pirsiavash, and Soheil Kolouri. “Is Multi-Task Learning an Upper Bound for Continual Learning?” In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1–5, 2023. https://doi.org/10.1109/ICASSP49357.2023.10095984.
dc.identifier.uri: https://doi.org/10.1109/ICASSP49357.2023.10095984
dc.identifier.uri: http://hdl.handle.net/11603/30663
dc.language.iso: en_US
dc.publisher: IEEE
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.title: Is Multi-Task Learning an Upper Bound for Continual Learning?
dc.type: Text

Files

Original bundle

Name: 2210.14797.pdf
Size: 295.13 KB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.56 KB
Description: Item-specific license agreed upon to submission