Methods for Estimating and Improving Robustness of Language Models

Authors

ŠTEFÁNIK Michal

Year of publication: 2022
Type: Article in Proceedings
Conference: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop
MU Faculty or unit

Faculty of Informatics

Citation
Web: https://aclanthology.org/2022.naacl-srw.6/
DOI: http://dx.doi.org/10.18653/v1/2022.naacl-srw.6
Keywords: natural language processing; transformers; robustness; generalization
Description: Despite their outstanding performance, large language models (LLMs) suffer from notorious flaws related to their preference for shallow textual relations over the full semantic complexity of the problem. This proposal investigates a common denominator of this problem: their weak ability to generalise outside of the training domain. We survey diverse research directions that provide estimates of model generalisation ability, and find that incorporating some of these measures into training objectives leads to enhanced distributional robustness of neural models. Based on these findings, we present future research directions for enhancing the robustness of LLMs.
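The abstract's central claim, that measures of generalisation can be folded into a training objective to improve distributional robustness, can be illustrated with a short sketch. The snippet below is a minimal, simplified variant of a Group-DRO-style objective (one well-known technique in this family), which upweights the worst-performing training domain at each step. It is not the paper's method, and all module and data names in it are hypothetical placeholders.

```python
# Minimal sketch of a distributionally robust training objective
# (simplified Group-DRO-style weighting; not the paper's method).
import torch
import torch.nn as nn

def group_dro_loss(logits, labels, domains, n_domains, temperature=0.1):
    """Softmax-weighted worst-group cross-entropy over training domains."""
    per_example = nn.functional.cross_entropy(logits, labels, reduction="none")
    group_losses = torch.stack([
        per_example[domains == d].mean() if (domains == d).any()
        else per_example.new_zeros(())          # skip domains absent from batch
        for d in range(n_domains)
    ])
    # Domains with higher current loss receive exponentially larger weight;
    # the weights are detached so gradients flow only through the losses.
    weights = torch.softmax(group_losses.detach() / temperature, dim=0)
    return (weights * group_losses).sum()

# Hypothetical usage inside an otherwise standard training step:
# loss = group_dro_loss(model(batch.x), batch.y, batch.domain, n_domains=4)
# loss.backward(); optimizer.step()
```

The design choice here is that the robustness "measure" (per-domain loss disparity) directly reweights the objective, so training pressure shifts toward the domains where the model generalises worst.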
