Can Out-of-Distribution Evaluations Uncover Reliance on Shortcuts? A Case Study in Question Answering

Authors

ŠTEFÁNIK Michal, MICKUS Timothee, SPIEGEL Michal, KADLČÍK Marek, KUCHAŘ Josef

Year of publication 2025
Type Article in Proceedings
Conference Findings of the Association for Computational Linguistics: EMNLP 2025
MU Faculty or unit

Faculty of Informatics

Citation
web https://aclanthology.org/2025.findings-emnlp.1232/
Keywords generalization; robustness; evaluation; question answering; language models; natural language processing; machine learning
Description A majority of recent work in AI assesses models' generalization capabilities through the lens of performance on out-of-distribution (OOD) datasets. Despite their practicality, such evaluations build upon a strong assumption: that OOD evaluations can capture and reflect possible failures in a real-world deployment. In this work, we challenge this assumption and confront the results obtained from OOD evaluations with a set of specific failure modes documented in existing question-answering (QA) models, referred to as a reliance on spurious features or prediction shortcuts. We find that the different datasets used for OOD evaluations in QA provide estimates of models' robustness to shortcuts of vastly different quality, with some largely underperforming even a simple in-distribution evaluation. We partially attribute this to the observation that spurious shortcuts are shared across ID and OOD datasets, but also find cases where a dataset's quality for training and its quality for evaluation are largely disconnected. Our work underlines the limitations of commonly used OOD-based evaluations of generalization, and provides methodology and recommendations for evaluating generalization within and beyond QA more robustly.
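
The sketch below is not from the paper; it only illustrates the kind of comparison the description refers to: measuring a QA model's accuracy on an in-distribution split, an OOD dataset, and a set of examples constructed so that a known prediction shortcut fails, and then checking whether the OOD drop actually tracks the shortcut-induced drop. The model name, the toy splits, and the shortcut (answering with the first person mentioned) are placeholders chosen for illustration.

```python
# Illustrative sketch (assumptions: Hugging Face transformers QA pipeline,
# toy placeholder splits standing in for real ID, OOD and shortcut-probing data).
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

def exact_match(pred: str, gold: str) -> bool:
    return pred.strip().lower() == gold.strip().lower()

def accuracy(examples) -> float:
    """Exact-match accuracy of the QA pipeline on {question, context, answer} dicts."""
    hits = 0
    for ex in examples:
        pred = qa(question=ex["question"], context=ex["context"])["answer"]
        hits += exact_match(pred, ex["answer"])
    return hits / len(examples)

# Toy placeholder splits; in practice these would be full QA datasets.
id_dev = [
    {"question": "Who wrote Hamlet?",
     "context": "Hamlet is a tragedy written by William Shakespeare.",
     "answer": "William Shakespeare"},
]
ood_set = [
    {"question": "What material is the Eiffel Tower made of?",
     "context": "The Eiffel Tower is a wrought-iron lattice tower in Paris.",
     "answer": "wrought-iron"},
]
anti_shortcut = [
    # Constructed so a hypothetical shortcut ("answer with the first
    # person mentioned in the context") gives the wrong answer.
    {"question": "Who received the award?",
     "context": "Alice opened the ceremony, and the award went to Bob.",
     "answer": "Bob"},
]

id_acc = accuracy(id_dev)
ood_acc = accuracy(ood_set)
anti_shortcut_acc = accuracy(anti_shortcut)

print(f"ID accuracy:            {id_acc:.3f}")
print(f"OOD accuracy:           {ood_acc:.3f}")
print(f"Anti-shortcut accuracy: {anti_shortcut_acc:.3f}")
# If the ID-to-OOD drop does not track the drop on anti-shortcut examples,
# the OOD evaluation is a poor proxy for robustness to that shortcut.
```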
Related projects:
