Abstract
The learning using privileged information (LUPI) paradigm leverages, during training, relevant features that are unavailable at deployment time. In this paper, we propose a multi-task privileged framework that combines two types of tasks. First, the privileged-prediction task uses regular features (available in both training and deployment) to predict the privileged information, acting as an intermediate step that guides the learning process. Second, the main learning objective, the target task, uses the predicted privileged information together with the regular features to make the final target prediction. Furthermore, knowledge distillation techniques are included within the target task to enhance the transfer of privileged knowledge. Experimental results show improvements on tabular datasets and image-related problems compared to state-of-the-art approaches. Additionally, we analyze the causes of misclassification and refine the proposed multi-task privileged learning to reduce errors.
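The abstract describes two coupled objectives (predicting privileged information from regular features, then using that prediction for the target task) plus a distillation term. The NumPy sketch below is illustrative only: the linear models, dimensions, loss weights `lam`/`beta`, temperature `T`, and the fixed "teacher" are assumptions for the sketch, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumed, not from the paper)
n, d_reg, d_priv, n_cls = 64, 10, 4, 3

X = rng.normal(size=(n, d_reg))        # regular features (train + deploy)
X_star = rng.normal(size=(n, d_priv))  # privileged features (train only)
y = rng.integers(0, n_cls, size=n)     # class labels

# Linear stand-ins for the two task heads
W_priv = rng.normal(scale=0.1, size=(d_reg, d_priv))         # privileged-prediction task
W_tgt = rng.normal(scale=0.1, size=(d_reg + d_priv, n_cls))  # target task

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def forward(X):
    X_star_hat = X @ W_priv                       # step 1: predict privileged info
    logits = np.hstack([X, X_star_hat]) @ W_tgt   # step 2: regular + predicted privileged
    return X_star_hat, logits

# Hypothetical teacher that saw the true privileged features; its soft
# predictions drive the distillation term of the target task.
W_teacher = rng.normal(scale=0.1, size=(d_reg + d_priv, n_cls))
teacher_probs = softmax(np.hstack([X, X_star]) @ W_teacher, T=2.0)

def multitask_loss(X, X_star, y, lam=0.5, beta=0.5, T=2.0):
    X_star_hat, logits = forward(X)
    # Privileged-prediction task: regression onto the privileged features
    priv_loss = np.mean((X_star_hat - X_star) ** 2)
    # Target task: cross-entropy on the hard labels
    probs = softmax(logits)
    ce = -np.mean(np.log(probs[np.arange(len(y)), y] + 1e-12))
    # Distillation: cross-entropy against the teacher's soft targets
    soft = softmax(logits, T=T)
    kd = -np.mean(np.sum(teacher_probs * np.log(soft + 1e-12), axis=1))
    return ce + lam * priv_loss + beta * kd

loss = multitask_loss(X, X_star, y)
```

In an actual implementation the linear maps would be trained networks and the weights `lam` and `beta` tuned, but the composition of the loss mirrors the three ingredients named in the abstract.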
| Original language | English |
|---|---|
| Article number | 113389 |
| Journal | Pattern Recognition |
| Volume | 178 |
| DOIs | |
| Publication status | Published - Oct 2026 |
Keywords
- Convolutional neural networks
- Knowledge distillation
- Learning using privileged information (LUPI)
- Multi-task learning
- Neural networks
Fingerprint
Research topics for 'Privileged learning via a multi-task distilled approach' are generated from the title and abstract of the publication.