Negation: A Pink Elephant in the Large Language Models' Room?
Authors | |
---|---|
Year of publication | 2025 |
Type | Article in Proceedings |
Conference | Findings of the Association for Computational Linguistics: EMNLP 2025 |
MU Faculty or unit | |
Citation | |
Web | Link to the preprint of the accepted paper |
Keywords | textual entailment; robustness; scaling; evaluation; natural language inference; multilingualism; multilingual corpora; lexical semantic change; negation |
Description | Negations are key to determining sentence meaning, making them essential for logical reasoning. Despite their importance, negations pose a substantial challenge for large language models (LLMs) and remain underexplored. We constructed and published two new textual entailment datasets, NoFEVER-ML and NoSNLI-ML, in four languages (English, Czech, German, and Ukrainian), with examples differing in negation. These datasets allow investigation of the root causes of the negation problem and its exemplification: how popular LLM properties and language affect the models' ability to handle negation correctly. Contrary to previous work, we show that increasing model size may improve the models' ability to handle negations. Furthermore, we find that both reasoning accuracy and robustness to negation are language-dependent, and that the length and explicitness of the premise affect robustness. Accuracy is higher in a projective language with fixed word order, such as English, than in non-projective ones, such as German or Czech. Our entailment datasets pave the way for further research on explaining and exemplifying the negation problem, minimizing LLM hallucinations, and improving LLM reasoning in multilingual settings. |
Related projects | |