Trial-to-trial effects have been found in a number of studies, indicating that processing a stimulus influences responses in subsequent trials. A special case is priming effects, which have been successfully modelled with error-driven learning (Marsolek, 2008), implying that participants are continuously learning during experiments. This study investigates whether trial-to-trial learning can be detected in an unprimed lexical decision experiment. We used the Discriminative Lexicon Model (DLM; Baayen et al., 2019), a model of the mental lexicon with meaning representations from distributional semantics, which models error-driven incremental learning with the Widrow-Hoff rule. We used data from the British Lexicon Project (BLP; Keuleers et al., 2012) and simulated the lexical decision experiment with the DLM on a trial-by-trial basis for each subject individually. Reaction times were then predicted with Generalised Additive Models (GAMs), using measures derived from the DLM simulations as predictors. We extracted measures from two simulations per subject (one with learning updates between trials and one without) and used them as input to two GAMs. The learning-based models showed a better fit than the non-learning ones for the majority of subjects. Our measures also provide insights into lexical processing and individual differences. This demonstrates the potential of the DLM to model behavioural data and leads to the conclusion that trial-to-trial learning can indeed be detected in unprimed lexical decision. Our results support the possibility that our lexical knowledge is subject to continuous changes.
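The error-driven incremental learning referred to above is the Widrow-Hoff (least-mean-squares) delta rule. As an illustration only, not the study's actual implementation, a minimal sketch of such a trial-by-trial update of a linear cue-to-semantics mapping could look as follows; the matrix shapes, learning rate, and random inputs are hypothetical.

```python
import numpy as np

def widrow_hoff_update(W, cue, target, eta=0.01):
    """One Widrow-Hoff (delta-rule) update of a linear cue-to-semantics mapping.

    W      : (n_cues, n_dims) weight matrix from form cues to semantic dimensions
    cue    : (n_cues,) cue vector for the word presented on the current trial
    target : (n_dims,) semantic target vector for that word
    eta    : learning rate (hypothetical value)
    """
    prediction = cue @ W                    # semantic vector predicted from the cues
    error = target - prediction             # prediction error on this trial
    return W + eta * np.outer(cue, error)   # error-driven weight adjustment

# Illustrative comparison of the two simulation regimes: one mapping is updated
# after every trial, the other stays fixed (no between-trial learning).
rng = np.random.default_rng(0)
n_cues, n_dims, n_trials = 50, 20, 200
W_learning = rng.normal(scale=0.1, size=(n_cues, n_dims))
W_static = W_learning.copy()

for _ in range(n_trials):
    cue = (rng.random(n_cues) < 0.1).astype(float)  # hypothetical cue vector
    target = rng.normal(size=n_dims)                # hypothetical semantic vector
    W_learning = widrow_hoff_update(W_learning, cue, target)
    # W_static is left unchanged, mimicking the non-learning simulation.
```

In a setup of this kind, per-trial measures (e.g. the size of the prediction error) could be read off from both mappings and entered as predictors of reaction times, which is the spirit of the GAM comparison described above.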