Solving intelligence tests, and in particular numerical sequences, has long been of interest in the evaluation of AI systems. We present a new computational model, KitBit, that uses a reduced set of algorithms and their combinations to build a predictive model that finds the underlying pattern in numerical sequences, such as those included in IQ tests and others of much greater complexity. We present the fundamentals of the model and its application to several cases. First, the system is tested on a set of number series used in IQ tests, collected from various sources. Next, our model is successfully applied to the sequences used to evaluate models reported in the literature. In both cases, the system solves these types of problems in less than a second using standard computing power. Finally, KitBit's algorithms are applied for the first time to the complete set of sequences in the well-known OEIS database. We find a pattern, in the form of a list of algorithms, and predict the next terms in the largest number of series reported to date. These results demonstrate the potential of KitBit to solve complex problems that can be represented numerically.
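The abstract does not specify KitBit's actual algorithms, but as a minimal illustration of what "finding the underlying pattern in a numerical sequence" can mean, the sketch below uses one classical building block: repeated finite differencing. If some level of the difference table becomes constant, the sequence is polynomial and the next term can be extrapolated. This is a hypothetical, simplified example, not the method described in the paper, and it only covers polynomial patterns.

```python
def predict_next(seq, max_depth=5):
    """Illustrative sketch (not KitBit's method): predict the next
    term of a sequence by repeated finite differencing.

    Builds a difference table level by level; if a level becomes
    constant, the sequence is polynomial and can be extrapolated
    by summing the last entry of each level back up the table."""
    table = [list(seq)]
    for _ in range(max_depth):
        prev = table[-1]
        if len(prev) < 2:
            break
        diffs = [b - a for a, b in zip(prev, prev[1:])]
        table.append(diffs)
        if len(set(diffs)) == 1:  # constant level found: polynomial pattern
            break
    # Extrapolate upward: each level's next value is its last value
    # plus the next value of the level below it.
    nxt = table[-1][-1]
    for level in reversed(table[:-1]):
        nxt = level[-1] + nxt
    return nxt


print(predict_next([1, 4, 9, 16]))  # squares -> 25
print(predict_next([2, 4, 6, 8]))   # arithmetic -> 10
```

A real solver in this spirit would combine many such elementary transformations (differences, ratios, interleavings, and so on) and search over their compositions until one reproduces the observed terms.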