Do Language Models Make Human-like Predictions about the Coreferents of Italian Anaphoric Zero Pronouns?

Abstract: Some languages allow arguments to be omitted in certain contexts. Yet human language comprehenders reliably infer the intended referents of these zero pronouns, in part because they construct expectations about which referents are more likely. We ask whether Neural Language Models also extract the same expectations. We test whether 12 contemporary language models display expectations that reflect human behavior when exposed to sentences with zero pronouns from five behavioral experiments conducted in Italian by Carminati (2005). We find that three models - XGLM 2.9B, 4.5B, and 7.5B - capture the human behavior from all the experiments, with others successfully modeling some of the results. This result suggests that human expectations about coreference can be derived from exposure to language, and also indicates features of language models that allow them to better reflect human behavior.

Anthology ID: ling-1.1
Volume: Proceedings of the 29th International Conference on Computational Linguistics
Month: October
Year: 2022
Address: Gyeongju, Republic of Korea
Venue: COLING
Publisher: International Committee on Computational Linguistics
Pages: 1–14
Bibkey: michaelov-bergen-2022-language

Cite (ACL): James A. Michaelov and Benjamin Bergen. 2022. Do Language Models Make Human-like Predictions about the Coreferents of Italian Anaphoric Zero Pronouns?. In Proceedings of the 29th International Conference on Computational Linguistics, pages 1–14, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal): Do Language Models Make Human-like Predictions about the Coreferents of Italian Anaphoric Zero Pronouns? (Michaelov & Bergen, COLING 2022)
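A minimal sketch of the kind of comparison such an evaluation rests on, using toy numbers (the paper's actual stimuli, models, and scores are not reproduced here; the probabilities below are hypothetical): compute the surprisal (negative log probability) a model assigns to each candidate continuation of a zero-pronoun sentence, and treat the lower-surprisal continuation as the model's preferred coreferent. Carminati (2005) reported that Italian null pronouns are preferentially resolved to the subject antecedent, so a model mirrors the human pattern when the subject-coreferent continuation gets the lower surprisal.

```python
import math

def surprisal(p: float) -> float:
    """Surprisal in bits: -log2 of the probability a model assigns."""
    return -math.log2(p)

# Hypothetical probabilities a language model might assign to two
# continuations of an Italian sentence containing a zero (null) pronoun:
# one where the omitted pronoun corefers with the subject antecedent,
# one where it corefers with the object antecedent.
p_subject_coreferent = 0.08  # toy value, not from the paper
p_object_coreferent = 0.03   # toy value, not from the paper

s_subj = surprisal(p_subject_coreferent)
s_obj = surprisal(p_object_coreferent)

# The model "prefers" the lower-surprisal continuation; a subject
# preference for null pronouns matches the human behavior reported
# by Carminati (2005).
preferred = "subject" if s_subj < s_obj else "object"
print(f"surprisal(subject)={s_subj:.2f} bits, surprisal(object)={s_obj:.2f} bits")
print(f"model prefers the {preferred} antecedent")
```

In practice the probabilities would come from a causal language model scoring each continuation token by token; the comparison logic stays the same.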