We discuss schedules of reinforcement in theoretical and practical terms, pointing to practical limitations on implementing those schedules and to the advantages of computational simulation. In this paper, we present an R script named Beak, built to simulate rates of behavior interacting with schedules of reinforcement. Using Beak, we simulated data that allow an assessment of different reinforcement feedback functions (RFF). This was done with unparalleled precision, since simulations provide huge samples of data and, more importantly, simulated behavior is not changed by the reinforcement it produces; behavior can therefore be varied systematically. We compared different RFF for random-interval (RI) schedules, using as criteria meaning, precision, parsimony, and generality. Our results indicate that the best feedback function for the RI schedule is the one published by Baum (1981). We also propose that the model used by Killeen (1975) is a viable feedback function for the RDRL schedule. We argue that Beak paves the way for a deeper understanding of schedules of reinforcement, addressing still-open questions about their quantitative features, and that such simulations can guide future experiments that use schedules as theoretical and methodological tools.
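As a rough illustration of the kind of simulation the abstract describes, the sketch below is a minimal R example of an RI schedule interacting with a fixed response rate; it is our own assumption of the mechanism (a reinforcer armed with constant probability per time bin, collected by the next response), not Beak's actual code, and the function and parameter names (simulate_ri, response_rate, ri_mean) are hypothetical.

```r
# Minimal sketch (not Beak itself): an RI schedule sampled in 1-s bins.
# A reinforcer is armed with probability bin/ri_mean per bin; the first
# response after arming collects it.
simulate_ri <- function(response_rate, ri_mean = 30, duration = 36000, bin = 1) {
  n_bins <- duration / bin
  p_arm  <- bin / ri_mean            # probability of arming per bin (RI mean in s)
  p_resp <- response_rate * bin / 60 # response_rate given in responses per minute
  armed  <- FALSE
  reinforcers <- 0
  for (t in seq_len(n_bins)) {
    if (!armed && runif(1) < p_arm) armed <- TRUE  # schedule sets up a reinforcer
    if (armed && runif(1) < p_resp) {              # a response collects it
      reinforcers <- reinforcers + 1
      armed <- FALSE
    }
  }
  reinforcers / (duration / 60)      # obtained reinforcers per minute
}

# Obtained reinforcement rate as a function of response rate (responses/min):
sapply(c(1, 5, 20, 60), simulate_ri)
```

Because the simulated response rate is set by the experimenter rather than shaped by the obtained reinforcers, sweeps like the one above can map an empirical feedback function point by point, which is the comparison the paper carries out for candidate RFF.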