In this paper, we present an R script named Beak, built to simulate rates of behavior interacting with schedules of reinforcement. Using Beak, we simulated data that allow an assessment of different reinforcement feedback functions (RFFs). This assessment was made with unparalleled precision, since simulations provide very large samples of data and, more importantly, simulated behavior is not changed by the reinforcement it produces; it can therefore be varied systematically. We compared different RFFs for random-interval (RI) schedules, using as criteria: meaning, precision, parsimony, and generality. Our results indicate that the best feedback function for the RI schedule is the one published by Baum (1981). We also propose that the model used by Killeen (1975) is a viable feedback function for the RDRL schedule. We argue that Beak paves the way for a greater understanding of schedules of reinforcement, addressing still-open questions about their quantitative features. Such simulations could also guide future experiments that use schedules as theoretical and methodological tools.
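To make the kind of simulation described above concrete, the following is a minimal R sketch of a random-interval schedule interacting with a fixed, schedule-independent response rate. It is not Beak's actual code; the function name, parameters, and values are illustrative assumptions. An RI t schedule is approximated in discrete time by arming reinforcement with probability 1/t per time unit and delivering it to the first response after arming.

```r
## Minimal sketch (not Beak's implementation): an RI schedule in discrete time.
## The simulated "organism" responds with a fixed probability per time unit,
## so its behavior is unaffected by the reinforcement it produces.
simulate_ri <- function(mean_interval = 30,   # RI value t (time units per reinforcer)
                        response_rate = 0.5,  # probability of a response per time unit
                        session_length = 10000,
                        seed = 1) {
  set.seed(seed)
  armed <- FALSE          # is a reinforcer currently set up?
  reinforcers <- 0
  responses <- 0
  for (tick in seq_len(session_length)) {
    # the schedule arms a reinforcer with constant probability each tick
    if (!armed && runif(1) < 1 / mean_interval) armed <- TRUE
    # simulated behavior: respond with a fixed probability, regardless of outcomes
    if (runif(1) < response_rate) {
      responses <- responses + 1
      if (armed) {          # first response after arming collects the reinforcer
        reinforcers <- reinforcers + 1
        armed <- FALSE
      }
    }
  }
  c(response_rate = responses / session_length,
    reinforcement_rate = reinforcers / session_length)
}

# Obtained reinforcement rate at several programmed response rates; output of
# this kind can be compared against candidate feedback functions for RI.
sapply(c(0.05, 0.2, 0.8), function(b) simulate_ri(response_rate = b))
```

Because the programmed response rate is set by the experimenter rather than shaped by its consequences, sweeping it over a wide range yields the response-rate/reinforcement-rate pairs against which candidate feedback functions can be evaluated.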