Disentangled Source-Free Personalization for Facial Expression Recognition with Neutral Target Data

Research output: Chapter in a book, report, or conference proceedings › Contribution to a collective work linked to a colloquium or conference › Peer-reviewed

1 Citation (Scopus)

Abstract

Facial Expression Recognition (FER) from videos is a crucial task in various application areas, such as human-computer interaction and health diagnosis and monitoring (e.g., assessing pain and depression). Beyond the challenges of recognizing subtle emotional or health states, the effectiveness of deep FER models is often hindered by the considerable inter-subject variability in expressions. Source-free (unsupervised) domain adaptation (SFDA) methods may be employed to adapt a pre-trained source model using only unlabeled target domain data, thereby avoiding data privacy, storage, and transmission issues. Typically, SFDA methods adapt to a target domain dataset corresponding to an entire population and assume it includes data from all recognition classes. However, collecting such comprehensive target data can be difficult or even impossible for FER in healthcare applications. In many real-world scenarios, it may be feasible to collect a short neutral control video (which displays only neutral expressions) from target subjects before deployment. These videos can be used to adapt a model to better handle the variability of expressions among subjects. This paper introduces the Disentangled SFDA (DSFDA) method to address the challenge posed by adapting models with missing target expression data. DSFDA leverages data from a neutral target control video for end-to-end generation and adaptation of target data with missing non-neutral data. Our method learns to disentangle features related to expressions and identity while generating the missing non-neutral expression data for the target subject, thereby enhancing model accuracy. Additionally, our self-supervision strategy improves model adaptation by reconstructing target images that maintain the same identity and source expression. 
Experimental results on the challenging BioVid, UNBC-McMaster and StressID datasets indicate that our DSFDA approach can outperform state-of-the-art adaptation methods. Code: https://github.com/MasoumehSharafi/DSFDA/
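The abstract describes generating missing non-neutral target data by disentangling identity and expression features: the target subject's identity code is recombined with a source expression code. The sketch below is a purely illustrative toy (not the authors' implementation); the feature split, dimensions, and function names are all hypothetical, standing in for the learned encoders and decoder of the actual model.

```python
# Illustrative toy of the disentangle-and-recombine idea (NOT the
# authors' DSFDA implementation). All shapes and names are hypothetical:
# the first half of each feature vector stands in for an identity code,
# the second half for an expression code.
import numpy as np

FEAT_DIM = 8  # toy feature size

def encode(x):
    """Split a feature vector into (identity, expression) halves."""
    return x[:FEAT_DIM // 2], x[FEAT_DIM // 2:]

def decode(identity, expression):
    """Recombine identity and expression codes into one feature vector."""
    return np.concatenate([identity, expression])

rng = np.random.default_rng(0)
source = rng.normal(size=FEAT_DIM)  # source sample (non-neutral expression)
target = rng.normal(size=FEAT_DIM)  # neutral target sample (identity of interest)

tgt_id, _tgt_expr = encode(target)
_src_id, src_expr = encode(source)

# "Generate" a non-neutral target sample: target identity + source expression.
generated = decode(tgt_id, src_expr)
```

In the real method the encoders and decoder are learned networks operating on images, and a self-supervised reconstruction loss keeps the generated image consistent with the target identity and the source expression; the toy above only shows the code-swapping structure.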

Original language: English
Title: 2025 IEEE 19th International Conference on Automatic Face and Gesture Recognition, FG 2025
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (electronic): 9798331553418
DOIs
Status: Published - 2025
Event: 19th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2025 - Tampa, United States
Duration: 26 May 2025 – 30 May 2025

Publication series

Name: 2025 IEEE 19th International Conference on Automatic Face and Gesture Recognition, FG 2025

Conference

Conference: 19th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2025
Country/Territory: United States
City: Tampa
Period: 26/05/25 – 30/05/25
