
Privileged learning via a multi-task distilled approach

  • Mario Martínez-García
  • Jon Vadillo
  • Marco Pedersoli
  • Iñaki Inza
  • Jose A. Lozano

Research output: Contribution to journal › Journal article › peer-review

Abstract

The learning using privileged information (LUPI) paradigm leverages relevant features that are available at training time but not at deployment. In this paper, we propose a multi-task privileged framework that combines two types of tasks. First, the privileged-prediction task uses regular features (available in both training and deployment) to predict the privileged information, serving as an intermediate step that guides the learning process. Second, the main learning objective, the target task, uses the predicted privileged information together with the regular features to make the final target prediction. Furthermore, knowledge distillation techniques are incorporated into the target task to enhance the transfer of privileged knowledge. Experimental results show improvements over state-of-the-art approaches on tabular datasets and image-based problems. Additionally, we analyze the causes of misclassification and refine the proposed multi-task privileged learning framework to reduce errors.
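The two tasks described above can be illustrated with a minimal sketch. This is not the authors' implementation: the dimensions, the linear heads, and the temperature are hypothetical, and the teacher logits stand in for a model trained with the true privileged features. It only shows the data flow — regular features predict privileged information (task 1), the concatenation feeds the target head (task 2), and a temperature-scaled distillation term transfers privileged knowledge.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, not taken from the paper
n, d_reg, d_priv, n_classes = 8, 5, 3, 2

X_reg = rng.normal(size=(n, d_reg))   # regular features (train + deployment)

# Task 1: privileged-prediction head, regular features -> privileged estimate
W_priv = rng.normal(size=(d_reg, d_priv))
priv_hat = X_reg @ W_priv             # predicted privileged information

# Task 2: target head, [regular, predicted privileged] -> target logits
W_tgt = rng.normal(size=(d_reg + d_priv, n_classes))
logits = np.concatenate([X_reg, priv_hat], axis=1) @ W_tgt

def softmax(z, T=1.0):
    """Numerically stable softmax at temperature T."""
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Stand-in teacher logits (a real teacher would see the true privileged info)
teacher_logits = rng.normal(size=(n, n_classes))
T = 2.0

# Distillation term: mean KL(teacher || student) at temperature T
p_t, p_s = softmax(teacher_logits, T), softmax(logits, T)
distill_loss = float(np.mean(np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=1)))
```

At deployment only `X_reg` is needed: the privileged estimate `priv_hat` is produced by the model itself, which is what lets the framework exploit privileged information without requiring it at test time.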

Original language: English
Article number: 113389
Journal: Pattern Recognition
Volume: 178
DOIs
Publication status: Published - Oct 2026

Keywords

  • Convolutional neural networks
  • Knowledge distillation
  • Learning using privileged information (LUPI)
  • Multi-task learning
  • Neural networks

