Lex Rosetta: Transfer of Predictive Models Across Languages, Jurisdictions, and Legal Domains

Authors

ŠAVELKA Jaromír, WESTERMANN Hannes, BENYEKHLEF Karim, ALEXANDER Charlotte S., GRANT Jayla C., AMARILES David Restrepo, EL HAMDANI Rajaa, MEEUS Sébastien, TROUSSEL Aurore, ARASZKIEWICZ Michal, ASHLEY Kevin D., ASHLEY Alexandra, BRANTING Karl, FALDUTI Mattia, GRABMAIR Matthias, HARAŠTA Jakub, NOVOTNÁ Tereza, TIPPETT Elizabeth, JOHNSON Shiwanni

Year of publication: 2021
Type: Article in conference proceedings
Conference: Eighteenth International Conference on Artificial Intelligence and Law: Proceedings of the Conference
Faculty / MU workplace: Faculty of Law

DOI: http://dx.doi.org/10.1145/3462757.3466149
Keywords: multi-lingual sentence embeddings; transfer learning; domain adaptation; adjudicatory decisions; document segmentation; annotation
Description: In this paper, we examine the use of multi-lingual sentence embeddings to transfer predictive models for functional segmentation of adjudicatory decisions across jurisdictions, legal systems (common law and civil law), languages, and domains (i.e., contexts). Mechanisms for utilizing linguistic resources outside of their original context have significant potential benefits in AI & Law, because differences between legal systems, languages, or traditions often block wider adoption of research outcomes. We analyze the use of Language-Agnostic SEntence Representations (LASER) in sequence-labeling models based on Gated Recurrent Units (GRUs) that are transferable across languages. To investigate transfer between different contexts, we developed an annotation scheme for functional segmentation of adjudicatory decisions. We found that models generalize beyond the contexts on which they were trained (e.g., a model trained on administrative decisions from the US can be applied to criminal-law decisions from Italy). Further, we found that training the models on multiple contexts increases robustness and improves overall performance when evaluating on previously unseen contexts. Finally, we found that pooling the training data from all the contexts enhances the models' in-context performance.
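To make the described pipeline concrete, below is a minimal sketch (not the authors' released code) of the kind of model the description implies: a bi-directional GRU that labels each sentence of a decision with a functional type, operating on pre-computed 1024-dimensional LASER sentence embeddings. The class name, hidden size, and the four-type label set are illustrative assumptions, and random vectors stand in for real LASER embeddings so the snippet runs without any model downloads.

```python
# Minimal sketch (assumptions, not the authors' implementation): a GRU
# sequence labeler over language-agnostic sentence embeddings, in the
# spirit of the LASER + GRU setup described above. In a real pipeline the
# 1024-dim embeddings would come from a multi-lingual encoder such as
# LASER (e.g., via the `laserembeddings` package:
# Laser().embed_sentences(sentences, lang="it")).
import torch
import torch.nn as nn

EMB_DIM = 1024   # LASER sentence embeddings are 1024-dimensional
NUM_TYPES = 4    # assumed label set, e.g. Heading / Background / Analysis / Outcome


class SentenceTagger(nn.Module):
    """Bi-directional GRU that assigns a functional type to each sentence."""

    def __init__(self, emb_dim=EMB_DIM, hidden=128, num_types=NUM_TYPES):
        super().__init__()
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_types)

    def forward(self, sent_embs):             # (batch, num_sentences, emb_dim)
        hidden_states, _ = self.gru(sent_embs)
        return self.out(hidden_states)        # (batch, num_sentences, num_types)


# One "document" of 12 sentences with stand-in embeddings and labels.
doc = torch.randn(1, 12, EMB_DIM)
labels = torch.randint(0, NUM_TYPES, (1, 12))

model = SentenceTagger()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step over the stand-in document.
optimizer.zero_grad()
logits = model(doc)
loss = loss_fn(logits.reshape(-1, NUM_TYPES), labels.reshape(-1))
loss.backward()
optimizer.step()
print("one training step, loss =", float(loss))
```

Because LASER maps sentences from many languages into a shared embedding space, a tagger trained this way on, say, US administrative decisions can in principle be applied unchanged to Italian criminal-law decisions: only the input language passed to the upstream sentence encoder changes, not the tagger itself.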