Abstract

Transformer-based tabular foundation models have recently demonstrated promising in-context learning (ICL) performance on structured data, emerging as competitive alternatives to gradient-boosted trees. However, the fairness implications of this new paradigm remain largely unexplored. We present the first investigation of fairness in tabular ICL, evaluating three recently proposed foundation models—TabPFNv2, TabICL, and TabDPT—on multiple benchmark datasets. To mitigate biases, we explore three pre-processing fairness-enhancing methods: correlation removal (decorrelating input features from the sensitive attribute), group-balanced sample selection (ensuring equal representation of protected groups in context examples), and uncertainty-based sample selection (prioritizing context examples with high sensitive-attribute prediction uncertainty). Our experiments show that the uncertainty-based strategy consistently improves group fairness metrics (e.g., demographic parity, equalized odds, and equal opportunity) with minimal impact on predictive accuracy. We release our code to facilitate reproducibility (https://github.com/patrikken/Fair-TabICL).
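As an illustration of the context-selection strategies described in the abstract, the sketch below shows one possible way to implement group-balanced and uncertainty-based selection of in-context examples. The auxiliary sensitive-attribute classifier, the entropy-based uncertainty score, and the function names are assumptions made for exposition; they are not taken from the paper's released code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def group_balanced_context(X, y, s, n_context, seed=0):
    """Pick context examples with equal representation of each sensitive group s.
    Assumes every group has at least n_context // n_groups members."""
    rng = np.random.default_rng(seed)
    groups = np.unique(s)
    per_group = n_context // len(groups)
    idx = np.concatenate([
        rng.choice(np.where(s == g)[0], size=per_group, replace=False)
        for g in groups
    ])
    return X[idx], y[idx]

def uncertainty_based_context(X, y, s, n_context):
    """Pick context examples whose sensitive attribute is hardest to predict
    from the features, scored by the predictive entropy of an auxiliary
    classifier (an illustrative choice of uncertainty measure)."""
    aux = LogisticRegression(max_iter=1000).fit(X, s)
    proba = aux.predict_proba(X)
    entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)
    idx = np.argsort(-entropy)[:n_context]  # most uncertain examples first
    return X[idx], y[idx]

# The selected (X_ctx, y_ctx) would then be supplied as the in-context
# training set to a tabular foundation model via its fit/predict interface.
```

In both cases the selection acts purely as a pre-processing step on the context set, so it can be combined with any of the evaluated foundation models without modifying their weights or inference procedure.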

Original language: English
Journal: Transactions on Machine Learning Research
Volume: 2026
Publication status: Published - 2026
