Disentangled Source-Free Personalization for Facial Expression Recognition with Neutral Target Data

Research output: Contribution to Book/Report types › Contribution to conference proceedings › peer-review

1 Citation (Scopus)

Abstract

Facial Expression Recognition (FER) from videos is a crucial task in various application areas, such as human-computer interaction and health diagnosis and monitoring (e.g., assessing pain and depression). Beyond the challenges of recognizing subtle emotional or health states, the effectiveness of deep FER models is often hindered by the considerable inter-subject variability in expressions. Source-free (unsupervised) domain adaptation (SFDA) methods may be employed to adapt a pre-trained source model using only unlabeled target domain data, thereby avoiding data privacy, storage, and transmission issues. Typically, SFDA methods adapt to a target domain dataset corresponding to an entire population and assume it includes data from all recognition classes. However, collecting such comprehensive target data can be difficult or even impossible for FER in healthcare applications. In many real-world scenarios, it may be feasible to collect a short neutral control video (which displays only neutral expressions) from target subjects before deployment. These videos can be used to adapt a model to better handle the variability of expressions among subjects. This paper introduces the Disentangled SFDA (DSFDA) method to address the challenge posed by adapting models with missing target expression data. DSFDA leverages data from a neutral target control video for end-to-end generation and adaptation of target data with missing non-neutral data. Our method learns to disentangle features related to expressions and identity while generating the missing non-neutral expression data for the target subject, thereby enhancing model accuracy. Additionally, our self-supervision strategy improves model adaptation by reconstructing target images that maintain the same identity and source expression. 
Experimental results[1] on the challenging BioVid, UNBC-McMaster and StressID datasets indicate that our DSFDA approach can outperform state-of-the-art adaptation methods.

[1] https://github.com/MasoumehSharafi/DSFDA/
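The abstract's core idea, separating identity features from expression features so that a target subject's neutral video can be recombined with source expressions, can be illustrated with a toy sketch. This is not the authors' architecture: the linear encoders/decoder, the dimensions, and the function names below are all hypothetical stand-ins used only to show the self-reconstruction and cross-generation steps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: flattened face image, identity and
# expression feature sizes (placeholders, not from the paper).
D_IMG, D_ID, D_EXPR = 64, 8, 4

# Linear stand-ins for the identity encoder, expression encoder, decoder.
W_id = rng.standard_normal((D_ID, D_IMG)) * 0.1
W_expr = rng.standard_normal((D_EXPR, D_IMG)) * 0.1
W_dec = rng.standard_normal((D_IMG, D_ID + D_EXPR)) * 0.1

def encode(x):
    """Split an image into (identity, expression) feature vectors."""
    return W_id @ x, W_expr @ x

def decode(f_id, f_expr):
    """Reconstruct an image from an (identity, expression) pair."""
    return W_dec @ np.concatenate([f_id, f_expr])

def dsfda_style_step(x_src, x_tgt_neutral):
    """Two terms in the spirit of the paper's self-supervision:
    (1) self-reconstruct the neutral target frame;
    (2) combine target identity with a source expression to generate
        a pseudo non-neutral target image (no ground truth here)."""
    id_src, expr_src = encode(x_src)
    id_tgt, expr_tgt = encode(x_tgt_neutral)
    l_self = np.mean((decode(id_tgt, expr_tgt) - x_tgt_neutral) ** 2)
    x_gen = decode(id_tgt, expr_src)  # target identity + source expression
    return l_self, x_gen

x_src = rng.standard_normal(D_IMG)          # a source-domain frame
x_tgt = rng.standard_normal(D_IMG)          # a neutral target frame
l_self, x_gen = dsfda_style_step(x_src, x_tgt)
```

In the actual method these would be learned deep networks trained end-to-end; the sketch only makes the feature-swapping mechanics concrete.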

Original language: English
Title of host publication: 2025 IEEE 19th International Conference on Automatic Face and Gesture Recognition, FG 2025
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9798331553418
DOIs
Publication status: Published - 2025
Event: 19th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2025 - Tampa, United States
Duration: 26 May 2025 - 30 May 2025

Publication series

Name: 2025 IEEE 19th International Conference on Automatic Face and Gesture Recognition, FG 2025

Conference

Conference: 19th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2025
Country/Territory: United States
City: Tampa
Period: 26/05/25 - 30/05/25

