Please use this identifier to cite or link to this item: http://hdl.handle.net/20.500.11861/8704
Title: Reanalyzing L2 preposition learning with Bayesian mixed effects and a pretrained language model
Authors: Prange, Jakob 
Wong, Man Ho Ivy
Issue Date: 2023
Publisher: Association for Computational Linguistics
Source: Prange, Jakob & Wong, Man Ho Ivy (2023). Reanalyzing L2 preposition learning with Bayesian mixed effects and a pretrained language model. In Rogers, Anna, Boyd-Graber, Jordan & Okazaki, Naoaki (Eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023, Toronto, Canada (pp. 12722-12736). Association for Computational Linguistics.
Conference: 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023 
Abstract: We use both Bayesian and neural models to dissect a data set of Chinese learners’ pre- and post-interventional responses to two tests measuring their understanding of English prepositions. The results mostly replicate previous findings from frequentist analyses and newly reveal crucial interactions between student ability, task type, and stimulus sentence. Given the sparsity of the data as well as high diversity among learners, the Bayesian method proves most useful; but we also see potential in using language model probabilities as predictors of grammaticality and learnability.
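Note: The abstract mentions using language model probabilities as predictors of grammaticality and learnability but does not specify the model or scoring procedure in this record. The following is a minimal illustrative sketch, assuming a pretrained GPT-2 model scored via HuggingFace transformers and using mean per-token log-probability as a grammaticality proxy; the function name sentence_logprob and the example stimulus sentences are hypothetical, not taken from the paper.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Assumed setup: GPT-2 as an off-the-shelf pretrained LM (not necessarily the
# model used in the paper).
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence: str) -> float:
    """Return the mean per-token log-probability of `sentence` under the LM."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the average
        # cross-entropy loss over the predicted tokens.
        out = model(**enc, labels=enc["input_ids"])
    return -out.loss.item()  # higher value = more probable under the LM

# Compare a conventional vs. a non-target-like preposition choice
# (hypothetical stimuli); the higher-scoring sentence would be treated as
# the more grammatical/learnable option in such an analysis.
print(sentence_logprob("She arrived at the station."))
print(sentence_logprob("She arrived to the station."))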
Type: Conference Paper
URI: http://hdl.handle.net/20.500.11861/8704
DOI: 10.18653/v1/2023.acl-long.712
Appears in Collections: English Language & Literature - Publication

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.