While leveraging additional training data is well established to improve
adversarial robustness, it incurs the unavoidable cost of data collection and
the heavy computation required to train models. To mitigate these costs, we propose Guided
Adversarial Training (GAT), a novel adversarial training technique that
exploits auxiliary tasks given a limited amount of training data. Our approach
extends single-task models into multi-task models during the min-max
optimization of adversarial training, and drives the loss optimization with a
regularization of the gradient curvature across multiple tasks.
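At a high level, the resulting objective can be sketched as a multi-task min-max problem. The sketch below uses illustrative notation rather than the exact formulation: $f_\theta$ denotes the main-task predictor, $g_\theta^{k}$ the auxiliary heads sharing its encoder, $\lambda_k$ the task weights, and $\gamma\, R(\cdot)$ the gradient-curvature regularizer:
\[
\min_{\theta} \; \mathbb{E}_{(x,\, y,\, \{y_k\})} \Bigg[ \max_{\|\delta\| \le \epsilon} \Big( \mathcal{L}_{\mathrm{main}}\big(f_\theta(x+\delta),\, y\big) + \sum_{k} \lambda_k\, \mathcal{L}_k\big(g_\theta^{k}(x+\delta),\, y_k\big) \Big) + \gamma\, R\big(\{\nabla_x \mathcal{L}_k\}_k\big) \Bigg]
\]
The inner maximization crafts a perturbation $\delta$ against the joint multi-task loss, while the outer minimization trains the shared model under a curvature penalty that couples the tasks' gradients.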
GAT leverages two types of auxiliary tasks: self-supervised tasks, where the labels are
generated automatically, and domain-knowledge tasks, where human experts
provide additional labels. Experimentally, GAT increases the robust AUC on the
CheXpert medical imaging dataset from 50% to 83%. On CIFAR-10, GAT
outperforms eight state-of-the-art adversarial training methods and achieves
56.21% robust accuracy with ResNet-50. Overall, we demonstrate that guided multi-task
learning is an actionable and promising avenue for pushing the boundaries
of model robustness.