Rep-MTL: Unleashing the Power of Representation-level Task Saliency for Multi-Task Learning

Zedong Wang1, Siyuan Li2, Dan Xu1
1The Hong Kong University of Science and Technology, 2Zhejiang University

✨ ICCV 2025 Highlight ✨

[Figure: Rep-MTL Overview]

Comparison between Rep-MTL and existing multi-task optimizers. (a) Both loss scaling and gradient manipulation methods focus on optimizer-centric strategies to address conflicts. (b) Rep-MTL instead leverages task saliency in the shared representation space to facilitate cross-task sharing while preserving task-specific signals via regularization, without modifying either the optimizers or the model architectures.

Abstract

Despite the promise of Multi-Task Learning in leveraging complementary knowledge across tasks, existing multi-task optimization (MTO) techniques remain fixated on resolving conflicts via optimizer-centric loss scaling and gradient manipulation strategies, yet fail to deliver consistent gains. In this paper, we argue that the shared representation space, where task interactions naturally occur, offers rich information and potential for operations complementary to existing methods, especially for explicitly facilitating inter-task complementarity, which is rarely explored in MTO. This intuition leads to Rep-MTL, which exploits representation-level task saliency to quantify interactions between task-specific optimization and shared representation learning. By steering these saliencies through entropy-based penalization and sample-wise cross-task alignment, Rep-MTL aims to mitigate negative transfer by maintaining the effective training of individual tasks instead of relying on pure conflict resolution, while explicitly promoting complementary information sharing. Experiments are conducted on four challenging MTL benchmarks covering both task-shift and domain-shift scenarios. The results show that Rep-MTL, even paired with basic equal weighting, achieves competitive performance gains with favorable efficiency. Beyond standard metrics, Power Law (PL) exponent analysis demonstrates Rep-MTL's efficacy in balancing task-specific learning and cross-task sharing.

Method

Rep-MTL leverages representation-level task saliency to quantify the interactions between task-specific optimization and shared representation learning. The method employs entropy-based penalization and sample-wise cross-task alignment to steer these saliencies, effectively mitigating negative transfer while promoting complementary information sharing across tasks.

[Figure: Rep-MTL Method]

Overview of Rep-MTL: A representation-level regularization method for multi-task learning that introduces task saliency-based objectives to encourage cross-task feature sharing and mitigate negative transfer.
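For intuition only, below is a minimal PyTorch sketch of a representation-level regularizer in this spirit; it is not the paper's released implementation. The saliency definition (per-sample gradient magnitude of each task loss w.r.t. the shared features), the exact entropy and alignment terms, and all names (task_saliency, rep_regularizer, lambda_ent, lambda_align) are illustrative assumptions.

import torch
import torch.nn.functional as F


def task_saliency(task_losses, shared_repr):
    # Per-task saliency: magnitude of each task loss's gradient w.r.t. the
    # shared representation, kept per sample (B samples, D flattened features).
    saliencies = []
    for loss in task_losses:
        # retain_graph: the same representation is differentiated once per task;
        # create_graph: the regularizer itself stays differentiable, so its
        # gradient flows back into the shared backbone.
        grad = torch.autograd.grad(
            loss, shared_repr, retain_graph=True, create_graph=True
        )[0]
        saliencies.append(grad.abs().flatten(1))        # (B, D)
    return torch.stack(saliencies, dim=0)               # (T, B, D)


def rep_regularizer(saliency, lambda_ent=0.1, lambda_align=0.1):
    # Entropy-style penalty on each task's saliency distribution (keeping
    # task-specific training signals concentrated) plus a sample-wise
    # cross-task alignment reward (encouraging complementary sharing).
    T, B, D = saliency.shape

    # Normalize saliency over feature dimensions to a distribution,
    # then penalize its entropy.
    p = saliency / (saliency.sum(dim=-1, keepdim=True) + 1e-8)
    entropy = -(p * (p + 1e-8).log()).sum(dim=-1).mean()

    # Cosine similarity of saliency patterns between task pairs on the same
    # sample; reward the off-diagonal (cross-task) similarity.
    normed = F.normalize(saliency, dim=-1)              # (T, B, D)
    sim = torch.einsum('ibd,jbd->ijb', normed, normed)  # (T, T, B)
    off_diag = ~torch.eye(T, dtype=torch.bool, device=sim.device)
    align = sim[off_diag].mean() if T > 1 else saliency.new_zeros(())

    return lambda_ent * entropy - lambda_align * align


# How it would plug into a standard MTL training step (names assumed):
#   feats  = backbone(x)                                 # shared representation
#   losses = [head(feats, y) for head, y in zip(heads, targets)]
#   sal    = task_saliency(losses, feats)
#   loss   = sum(losses) + rep_regularizer(sal)
#   loss.backward()

The exact sign conventions and weighting of the two terms in the paper may differ; the sketch only illustrates the overall idea of regularizing saliency in the shared representation space rather than manipulating optimizer-side gradients.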

BibTeX

@inproceedings{iccv2025repmtl,
  title={Rep-MTL: Unleashing the Power of Representation-level Task Saliency for Multi-Task Learning},
  author={Wang, Zedong and Li, Siyuan and Xu, Dan},
  booktitle={IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2025}
}