This work is driven by a practical question: the correction of Artificial Intelligence (AI) errors. Systematic retraining of a large AI system is rarely feasible. To solve this problem, special external devices, correctors, are developed. They should provide a quick, non-iterative fix without modification of the legacy AI system. A common universal part of an AI corrector is a classifier that separates undesired and erroneous behaviour from normal operation. Training such classifiers is a grand challenge at the heart of one- and few-shot learning methods. The effectiveness of one- and few-shot methods rests on either significant dimensionality reduction or the blessing-of-dimensionality effects. Stochastic separability is a blessing-of-dimensionality phenomenon that enables one- and few-shot error correction: in high-dimensional datasets, under broad assumptions, each point can be separated from the rest of the set by a simple and robust linear discriminant. A hierarchical structure of the data universe is introduced, in which each data cluster has a granular internal structure, and so on. New stochastic separation theorems for data distributions with fine-grained structure are formulated and proved. Separation theorems in infinite-dimensional limits are proven under assumptions of compact embedding of patterns into the data space. New multi-correctors of AI systems are presented and illustrated with examples of predicting errors and learning new classes of objects by a deep convolutional neural network.
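The separability claim above can be checked numerically. The sketch below (an illustration only, not the paper's construction; the sample sizes, the uniform-ball distribution, and the specific discriminant are assumptions chosen for the demo) draws points i.i.d. from the uniform distribution in the unit ball and, for each point x, tests whether the hyperplane with normal x and threshold ⟨x, x⟩ separates x from all other points:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 200, 1000  # ambient dimension, number of sample points (assumed for the demo)

# Sample n points i.i.d. uniformly in the unit d-ball:
# isotropic Gaussian direction scaled by radius U**(1/d).
g = rng.standard_normal((n, d))
directions = g / np.linalg.norm(g, axis=1, keepdims=True)
radii = rng.uniform(size=(n, 1)) ** (1.0 / d)
X = directions * radii

# The linear functional y -> <x, y> with threshold <x, x> separates x
# from the rest iff <x, y> < <x, x> for every other point y.
G = X @ X.T                # Gram matrix of pairwise inner products
diag = np.diag(G).copy()   # <x_i, x_i> for each point
np.fill_diagonal(G, -np.inf)
separable = G.max(axis=1) < diag
print(f"fraction of points separable by this discriminant: {separable.mean():.3f}")
```

For moderate dimension (here d = 200) the printed fraction is at or very near 1, matching the qualitative statement that in high dimension each point is linearly separable from the rest of the set with high probability.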