
Progressive Multi-Source Domain Adaptation for Personalized Facial Expression Recognition

Research output: Contribution to journal › Journal Article › peer-review

Abstract

Personalized facial expression recognition (FER) involves adapting a machine learning model using samples from labeled source and unlabeled target domains. Given the challenge of recognizing subtle expressions with considerable interpersonal variability, state-of-the-art unsupervised domain adaptation (UDA) methods focus on the multi-source UDA (MSDA) setting, where each domain corresponds to a specific subject, to improve model accuracy and robustness. However, when adapting to a specific target, the diverse nature of multiple source domains translates to a large shift between source and target data. State-of-the-art MSDA methods for FER address this domain shift by considering all the sources when adapting to the target representations. Nevertheless, adapting to a target subject presents significant challenges due to large distributional differences between source and target domains, often resulting in negative transfer. In addition, integrating all sources simultaneously increases computational costs and causes misalignment with the target. To address these issues, we propose a progressive MSDA approach that gradually introduces information from subjects (source domains) based on their similarity to the target subject. This ensures that only the sources most relevant to the target are selected, which helps avoid the negative transfer caused by dissimilar sources. During adaptation, the source domains are introduced in a curriculum manner: we first exploit the closest sources to reduce the distribution shift with the target and then move towards the furthest, while only considering the most relevant sources based on a predetermined threshold. Furthermore, to mitigate catastrophic forgetting caused by the incremental introduction of source subjects, we implement a density-based memory mechanism that preserves the most relevant historical source samples for adaptation.
Our extensive experiments show the effectiveness of our proposed method on challenging FER datasets: BioVid, UNBC-McMaster, Aff-Wild2, and BAH. Further, performance is evaluated in a cross-dataset setting (UNBC-McMaster→BioVid), showing the importance of gradually adapting to source subjects.
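The abstract describes three mechanisms: ranking source subjects by similarity to the target, thresholding out dissimilar sources during curriculum adaptation, and a density-based memory of historical source samples. The sketch below is an illustrative approximation of these ideas, not the authors' implementation: the function names, the mean-feature Euclidean distance as the similarity measure, the threshold value, and the k-NN density estimate are all assumptions made for this example.

```python
import numpy as np

def rank_sources_by_similarity(target_feats, source_feats_list):
    """Rank source subjects by the distance between their mean feature
    vector and the target's mean (smaller distance = more similar).
    Returns curriculum order (closest first) and the distances."""
    t_mean = target_feats.mean(axis=0)
    dists = np.array([np.linalg.norm(s.mean(axis=0) - t_mean)
                      for s in source_feats_list])
    return np.argsort(dists), dists

def select_relevant_sources(order, dists, threshold):
    """Keep only sources whose distance to the target falls below a
    (hypothetical) relevance threshold, in curriculum order."""
    return [i for i in order if dists[i] < threshold]

def density_memory(feats, k=3, budget=5):
    """Retain the `budget` samples with the highest local density,
    here estimated as the inverse mean distance to the k nearest
    neighbours, as a stand-in for the paper's memory mechanism."""
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # ignore self-distance
    knn = np.sort(d, axis=1)[:, :k]        # k nearest neighbours
    density = 1.0 / (knn.mean(axis=1) + 1e-8)
    keep = np.argsort(density)[::-1][:budget]
    return feats[keep]

# Toy demo: target cluster near the origin, three source subjects at
# increasing distances; the middle one should be ranked furthest.
rng = np.random.default_rng(0)
target = rng.normal(0.0, 0.1, size=(20, 4))
sources = [rng.normal(c, 0.1, size=(20, 4)) for c in (0.2, 5.0, 0.5)]

order, dists = rank_sources_by_similarity(target, sources)
selected = select_relevant_sources(order, dists, threshold=3.0)
memory = density_memory(sources[order[0]], k=3, budget=5)
```

In this toy run the closest subject is introduced first, the distant subject (mean 5.0) is excluded by the threshold, and the memory keeps only the five densest samples of the first source.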

Original language: English
Pages (from-to): 575-586
Number of pages: 12
Journal: IEEE Transactions on Affective Computing
Volume: 17
Issue number: 1
DOIs
Publication status: Published - 2026

Keywords

  • Facial expression recognition
  • gradual domain adaptation
  • multi-source domain adaptation
  • pain estimation
  • unsupervised domain adaptation

