The paper addresses the problem of multi-task learning (MTL) with partially labeled data. Most existing MTL methods require fully labeled datasets, which can be prohibitively expensive and impractical to obtain, especially for dense prediction tasks where every pixel of every image must be annotated for every task.
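To make the setting concrete, below is a minimal sketch of how supervision is typically computed under partial labeling: each image carries ground truth for only a subset of tasks, so per-task losses are simply skipped where labels are missing. The function name `masked_multitask_loss` and the dictionary-based interface are illustrative assumptions, not from the paper.

```python
import torch

def masked_multitask_loss(preds: dict, labels: dict, loss_fns: dict) -> torch.Tensor:
    """Average per-task losses, skipping tasks whose labels are missing.

    preds:    {task_name: prediction tensor}
    labels:   {task_name: label tensor} -- may cover only some tasks
    loss_fns: {task_name: loss function}, e.g. cross-entropy or L1
    """
    total, n_terms = 0.0, 0
    for task, pred in preds.items():
        if task in labels:  # supervise only where a label exists
            total = total + loss_fns[task](pred, labels[task])
            n_terms += 1
    # Assumes at least one task is labeled per batch; otherwise returns 0.0.
    return total / max(n_terms, 1)
```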
The authors propose a new approach called Joint-Task Regularization (JTR) to address this issue. JTR encodes the predictions and labels for all tasks into a single joint-task latent space and regularizes the encoded features with a distance loss in that space. This lets information flow across tasks during regularization, exploiting cross-task relationships. Moreover, because all tasks share one joint space, JTR's cost scales linearly with the number of tasks, unlike previous pair-wise task regularization methods, which scale quadratically.
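The sketch below illustrates this core idea under stated assumptions: a single shared encoder (here called `joint_encoder`, a name chosen for illustration) maps the channel-wise concatenation of all task outputs into one latent space, and an L2 distance between the encoded predictions and the encoded labels serves as the regularizer. One simple way to handle missing labels, used here, is to substitute the model's own detached prediction; the paper's exact encoder architecture, distance function, and missing-label handling may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JTRLoss(nn.Module):
    """Hypothetical sketch of a joint-task regularizer, not the official code."""

    def __init__(self, in_channels: int, latent_dim: int = 128):
        super().__init__()
        # One shared encoder over the concatenation of ALL task maps, so the
        # regularization cost grows linearly (not pairwise/quadratically)
        # with the number of tasks.
        self.joint_encoder = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, latent_dim, kernel_size=3, padding=1),
        )

    def forward(self, preds: dict, labels: dict) -> torch.Tensor:
        tasks = sorted(preds)  # fixed task order for concatenation
        pred_stack = torch.cat([preds[t] for t in tasks], dim=1)
        # Assumption: where a label is missing, the detached prediction
        # stands in so every image still contributes to the joint code.
        label_stack = torch.cat(
            [labels.get(t, preds[t].detach()) for t in tasks], dim=1
        )
        z_pred = self.joint_encoder(pred_stack)
        z_label = self.joint_encoder(label_stack)
        # Distance loss in the joint-task latent space.
        return F.mse_loss(z_pred, z_label)
```

Because there is only one encoding pass over the concatenated tasks per branch, adding a task adds input channels rather than encoder pairs, which is where the linear scaling comes from.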
The authors extensively benchmark JTR on variations of three popular MTL datasets (NYU-v2, Cityscapes, and Taskonomy) under a range of partial-labeling scenarios. JTR outperforms existing methods across these benchmarks, demonstrating its effectiveness for data-efficient multi-task learning.
Source: Kento Nishi et al., arxiv.org, 04-03-2024, https://arxiv.org/pdf/2404.01976.pdf