Behaviour Discovery and Attribution for Explainable Reinforcement Learning

Research output: Contribution to journal › Article published in a peer-reviewed journal › Peer-reviewed

Abstract

Building trust in reinforcement learning (RL) agents requires understanding why they make certain decisions, especially in high-stakes applications such as robotics, healthcare, and finance. Existing explainability methods often focus on single states or entire trajectories, either providing only local, step-wise insights or attributing decisions to coarse, episode-level summaries. Both approaches miss the recurring strategies and temporally extended patterns that actually drive agent behavior across multiple decisions. We address this gap by proposing a fully offline, reward-free framework for behavior discovery and segmentation, enabling the attribution of actions to meaningful, interpretable behavior segments that capture recurring patterns appearing across multiple trajectories. Our method identifies coherent behavior clusters from state-action sequences and attributes individual actions to these clusters, yielding fine-grained, behavior-centric explanations. Evaluations on four diverse offline RL environments show that our approach discovers meaningful behaviors and outperforms trajectory-level baselines in fidelity, human preference, and cluster coherence. Our code is publicly available.
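To make the abstract's pipeline concrete, the sketch below illustrates the general idea of clustering windowed state-action sequences and attributing each action to a behavior cluster. It is only a minimal stand-in: the window-based featurization, k-means clustering, and the function names (`segment_windows`, `discover_behaviours`) are assumptions for illustration, not the method described in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_windows(states, actions, window=5):
    """Slice one trajectory of (state, action) pairs into overlapping
    fixed-length windows and flatten each window into a feature vector."""
    feats = []
    for t in range(len(states) - window + 1):
        s = np.asarray(states[t:t + window]).ravel()
        a = np.asarray(actions[t:t + window]).ravel()
        feats.append(np.concatenate([s, a]))
    return np.stack(feats)

def discover_behaviours(trajectories, n_behaviours=4, window=5, seed=0):
    """Cluster windowed state-action features pooled across trajectories,
    then attribute each window (and its actions) to a behavior cluster."""
    all_feats = np.concatenate(
        [segment_windows(s, a, window) for s, a in trajectories]
    )
    km = KMeans(n_clusters=n_behaviours, random_state=seed, n_init=10).fit(all_feats)
    # Attribution: the cluster label of a window serves as a behavior-level
    # explanation for the actions taken within that window.
    labels = [km.predict(segment_windows(s, a, window)) for s, a in trajectories]
    return km, labels

# Toy usage on random offline data (2-D states, 1-D continuous actions).
rng = np.random.default_rng(0)
trajs = [(rng.normal(size=(30, 2)), rng.normal(size=(30, 1))) for _ in range(5)]
model, behaviour_labels = discover_behaviours(trajs)
print(behaviour_labels[0][:10])  # behavior id attributed to each early window
```

Because the procedure uses only logged state-action data and no reward signal, it matches the offline, reward-free setting the abstract describes, even though the actual discovery and attribution algorithms in the paper may differ.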

Original language: English
Journal: Transactions on Machine Learning Research
Volume: 2025-September
Status: Published - 2025
Externally published: Yes

Fingerprint

These are the main terms and phrases associated with "Behaviour Discovery and Attribution for Explainable Reinforcement Learning". These topic labels are generated from the title and abstract of the publication. Together they form a unique fingerprint.

Cite this