Abstract
The learning using privileged information paradigm leverages relevant features that are unavailable at deployment time for model training. In this paper, we propose a multi-task privileged framework that combines two types of tasks. First, the privileged-prediction task uses regular features (available in both training and deployment) to predict the privileged information, serving as an intermediate step that guides the learning process. Second, the main learning objective, the target task, uses the predicted privileged information together with the regular features to make the final target prediction. Furthermore, knowledge distillation techniques are included within the target task to enhance the transfer of privileged knowledge. Experimental results show improvements on tabular datasets and image-related problems compared to state-of-the-art approaches. Additionally, we analyze misclassification causes and refine the proposed multi-task privileged learning to reduce errors.
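The two-stage pipeline described above can be sketched as follows. This is a minimal, purely illustrative numpy toy, not the paper's actual architecture: the linear privileged-prediction model, the logistic target model, the distillation weight `lam`, and the synthetic data are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: regular features X, privileged features Xp (training time only),
# and binary labels y that depend on the privileged features.
n, d, dp = 200, 5, 3
X = rng.normal(size=(n, d))
W_true = rng.normal(size=(d, dp))
Xp = X @ W_true + 0.1 * rng.normal(size=(n, dp))
y = (Xp.sum(axis=1) > 0).astype(float)

# Privileged-prediction task: learn X -> Xp by least squares.
W_priv, *_ = np.linalg.lstsq(X, Xp, rcond=None)
Xp_hat = X @ W_priv  # predicted privileged information

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(F, targets, lr=0.5, steps=500):
    # Plain gradient descent on the logistic loss; works for soft targets too.
    w = np.zeros(F.shape[1])
    for _ in range(steps):
        p = sigmoid(F @ w)
        w -= lr * F.T @ (p - targets) / len(targets)
    return w

# Teacher: trained with the true privileged features (training time only).
F_teacher = np.hstack([X, Xp])
w_teacher = fit_logreg(F_teacher, y)
soft = sigmoid(F_teacher @ w_teacher)  # teacher's soft predictions

# Target task: the student uses X plus the *predicted* privileged features,
# trained on a distillation mix of hard labels and teacher soft labels.
lam = 0.5  # assumed mixing weight
F_student = np.hstack([X, Xp_hat])
w_student = fit_logreg(F_student, lam * y + (1 - lam) * soft)

# At deployment only X is needed: Xp_hat is produced from X on the fly.
acc = ((sigmoid(F_student @ w_student) > 0.5) == y).mean()
print("student training accuracy:", round(acc, 2))
```

The key design point the sketch mirrors is that privileged features never appear at inference: the student consumes only the regular features and their learned privileged reconstruction, while the teacher's soft labels inject privileged knowledge during training.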
| Original language | English |
|---|---|
| Article number | 113389 |
| Journal | Pattern Recognition |
| Volume | 178 |
| DOIs | |
| Status | Published - Oct. 2026 |
Fingerprint
These are the main thematic terms associated with « Privileged learning via a multi-task distilled approach ». The labels are generated from the title and abstract of the publication; together they form a unique fingerprint.