Peter Stone's Selected Publications



Conflict-Averse Gradient Descent for Multi-task Learning

Conflict-Averse Gradient Descent for Multi-task Learning.
Bo Liu, Xingchao Liu, Xiaojie Jin, Peter Stone, and Qiang Liu.
In Conference on Neural Information Processing Systems (NeurIPS), December 2021.

Download

[PDF] (9.7MB)

Abstract

The goal of multi-task learning is to enable more efficient learning than single-task learning by sharing model structures across a diverse set of tasks. A standard multi-task learning objective is to minimize the average loss across all tasks. While straightforward, using this objective often results in much worse final performance on each task than learning them independently. A major challenge in optimizing a multi-task model is conflicting gradients, where gradients of different task objectives are not well aligned, so that following the average gradient direction can be detrimental to specific tasks' performance. Previous work has proposed several heuristics to manipulate the task gradients to mitigate this problem, but most of them lack convergence guarantees and/or could converge to any Pareto-stationary point. In this paper, we introduce Conflict-Averse Gradient descent (CAGrad), which minimizes the average loss function while leveraging the worst local improvement of individual tasks to regularize the algorithm trajectory. CAGrad balances the objectives automatically and still provably converges to a minimum of the average loss. It includes regular gradient descent (GD) and the multiple gradient descent algorithm (MGDA) from the multi-objective optimization (MOO) literature as special cases. On a series of challenging multi-task supervised learning and reinforcement learning tasks, CAGrad achieves improved performance over prior state-of-the-art multi-objective gradient manipulation methods.
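The update direction described in the abstract can be sketched in a few lines of NumPy. This is an illustrative toy implementation, not the authors' released code: per the paper's formulation, CAGrad solves a dual problem over the simplex, min_w g_w·g_0 + c‖g_0‖‖g_w‖ where g_0 is the average gradient and g_w is a convex combination of task gradients, then takes the step d = g_0 + (c‖g_0‖/‖g_w‖) g_w. The function name `cagrad_update` and the use of `scipy.optimize.minimize` for the dual are choices made for this sketch.

```python
import numpy as np
from scipy.optimize import minimize

def cagrad_update(grads, c=0.5):
    """Sketch of the CAGrad update direction from per-task gradients.

    grads: (K, D) array, one flattened gradient per task.
    c: radius of the trust region around g0, as a fraction of ||g0||.
    """
    K = grads.shape[0]
    g0 = grads.mean(axis=0)              # average gradient over tasks
    g0_norm = np.linalg.norm(g0)

    # Dual objective over the probability simplex:
    # F(w) = g_w . g0 + c ||g0|| ||g_w||, with g_w = sum_i w_i g_i.
    def dual(w):
        gw = w @ grads
        return gw @ g0 + c * g0_norm * np.linalg.norm(gw)

    w0 = np.full(K, 1.0 / K)
    res = minimize(dual, w0,
                   bounds=[(0.0, 1.0)] * K,
                   constraints={"type": "eq",
                                "fun": lambda w: w.sum() - 1.0})
    gw = res.x @ grads
    gw_norm = np.linalg.norm(gw)
    if gw_norm < 1e-12:                  # degenerate case: fall back to GD
        return g0
    # Final step: average gradient plus rescaled conflict-averse term.
    return g0 + (c * g0_norm / gw_norm) * gw
```

Because d = g_0 is always feasible, the resulting direction never has a smaller worst-case local improvement min_i g_i·d than plain gradient descent on the average loss; with c = 0 the sketch reduces to GD, and as c grows it approaches MGDA-like behavior, matching the special cases noted in the abstract.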

BibTeX Entry

@InProceedings{NeurIPS2021-Liu,
  author = {Bo Liu and Xingchao Liu and Xiaojie Jin and Peter Stone and Qiang Liu},
  title = {Conflict-Averse Gradient Descent for Multi-task Learning},
  booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
  location = {Virtual Only},
  month = {December},
  year = {2021},
  abstract = {
  The goal of multi-task learning is to enable more efficient learning
  than single-task learning by sharing model structures for a diverse
  set of tasks. A standard multi-task learning objective is to
  minimize the average loss across all tasks. While straightforward,
  using this objective often results in much worse final performance
  for each task than learning them independently. A major challenge in
  optimizing a multi-task model is the conflicting gradients, where
  gradients of different task objectives are not well aligned so that
  following the average gradient direction can be detrimental to
  specific tasks' performance. Previous work has proposed several
  heuristics to manipulate the task gradients for mitigating this
  problem. But most of them lack convergence guarantees and/or could
  converge to any Pareto-stationary point. In this paper, we introduce
  Conflict-Averse Gradient descent (CAGrad) which minimizes the
  average loss function, while leveraging the worst local improvement
  of individual tasks to regularize the algorithm trajectory. CAGrad
  balances the objectives automatically and still provably converges
  to a minimum over the average loss. It includes the regular gradient
  descent (GD) and the multiple gradient descent algorithm (MGDA) in
  the multi-objective optimization (MOO) literature as special
  cases. On a series of challenging multi-task supervised learning and
  reinforcement learning tasks, CAGrad achieves improved performance
  over prior state-of-the-art multi-objective gradient manipulation
  methods.
  },
}

Generated by bib2html.pl (written by Patrick Riley) on Thu Dec 09, 2021 19:29:29