Pre-trained Language Models Learn Remarkably Accurate Representations of Numbers

Warning

This publication does not fall under the Institute of Computer Science but under the Faculty of Informatics. The official page of the publication is on the muni.cz website.
Authors

KADLČÍK Marek, ŠTEFÁNIK Michal, MICKUS Timothee, SPIEGEL Michal, KUCHAŘ Josef

Year of publication 2025
Type Article in proceedings
Conference Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
MU Faculty / Department

Faculty of Informatics

Citation
www https://aclanthology.org/2025.emnlp-main.1356/
Keywords robustness; model editing; interpretability; probing; language models
Description Pretrained language models (LMs) are prone to arithmetic errors. Existing work showed limited success in probing numeric values from models’ representations, indicating that these errors can be attributed to the inherent unreliability of distributionally learned embeddings in representing exact quantities. However, we observe that previous probing methods are inadequate for the emergent structure of learned number embeddings with sinusoidal patterns. In response, we propose a novel probing technique that decodes numeric values from input embeddings with near-perfect accuracy across a range of open-source LMs. This proves that, after pre-training alone, LMs represent numbers with remarkable precision. Finally, we find that the embeddings’ precision, as judged by our probe’s accuracy, explains a large portion of LMs’ errors in elementary arithmetic, and show that aligning the embeddings with the pattern discovered by our probe can mitigate these errors.
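
As an illustration of the probing idea described above, the sketch below shows one way numeric values could be decoded from embeddings that carry sinusoidal structure: a linear least-squares probe is fit from embeddings to sin/cos features of the value, and a value is read back out by nearest-feature matching. This is a minimal sketch, not the paper's exact probe: the synthetic embeddings, the choice of periods, and the least-squares fit are all assumptions made to keep the example self-contained. In practice one would probe the input-embedding rows of number tokens taken from an open-source LM.

```python
# Illustrative sketch (not the paper's exact probe): decode integer values from
# embeddings by fitting a linear map onto sinusoidal features of the value,
# then reading a value back out by nearest-feature matching.
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic stand-in for number-token embeddings --------------------------
# Real usage would take the input embeddings of number tokens from a pretrained
# LM; here we fabricate embeddings with sinusoidal structure plus noise so the
# sketch runs on its own.
values = np.arange(1000)                      # the numbers 0..999
periods = np.array([10.0, 100.0, 1000.0])     # assumed characteristic periods
d_model = 64

def sin_features(v, periods):
    """Sin/cos features of each value at several periods, shape (len(v), 2*len(periods))."""
    angles = 2 * np.pi * v[:, None] / periods[None, :]
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)

feats = sin_features(values, periods)                        # (1000, 6)
mixing = rng.normal(size=(feats.shape[1], d_model))          # hidden linear mixing
embeddings = feats @ mixing + 0.05 * rng.normal(size=(len(values), d_model))

# --- Fit the probe on a train split -------------------------------------------
train = rng.permutation(len(values))[:800]
test = np.setdiff1d(np.arange(len(values)), train)

# Least-squares linear probe: embedding -> sinusoidal features of the value.
W, *_ = np.linalg.lstsq(embeddings[train], feats[train], rcond=None)

# --- Decode: pick the candidate value whose features best match ---------------
candidate_feats = sin_features(values, periods)              # features of all candidates

def decode(emb):
    pred = emb @ W                                           # predicted sin/cos features
    dists = np.linalg.norm(candidate_feats - pred, axis=1)
    return values[np.argmin(dists)]

acc = np.mean([decode(embeddings[i]) == values[i] for i in test])
print(f"probe accuracy on held-out numbers: {acc:.3f}")
```

Matching the predicted features against candidate values, rather than regressing the value directly, sidesteps the ambiguity of inverting periodic features; the exact decoding scheme used in the paper may differ.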
Related projects:

