In standard generative deep learning models, such as autoencoders or GANs, the size of the parameter set scales with the complexity of the generated data distribution. A significant challenge is deploying such resource-hungry deep learning models on devices with limited memory without incurring system upgrade costs. To address this, we propose a novel framework called generative optimization networks (GON) that is similar to GANs, but does not use a generator, significantly reducing its memory footprint. GONs use a single discriminator network and run optimization in the input space to generate new data samples, achieving an effective compromise between training time and memory consumption. GONs are best suited for data generation problems in limited-memory settings. Here we illustrate their use for anomaly detection on memory-constrained edge devices, where anomalies arise from attacks or intrusion events. Specifically, we use a GON to compute a reconstruction-based anomaly score for input time-series windows. Experiments on a Raspberry-Pi testbed with two existing datasets and a new suite of datasets show that our framework gives up to 32% higher detection F1 scores and 58% lower memory consumption, with only 5% higher training overhead compared to the state-of-the-art.
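To make the core mechanism concrete, below is a minimal sketch of generator-free, input-space generation for anomaly scoring, assuming PyTorch. The `Discriminator` architecture, the choice to initialize the input-space search at the observed window, and hyperparameters such as `steps` and `lr` are illustrative assumptions for this sketch, not the paper's implementation.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Illustrative MLP that scores how 'real' a flattened
    time-series window looks (architecture is an assumption)."""
    def __init__(self, window_size: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(window_size, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

def reconstruct(disc: Discriminator, x: torch.Tensor,
                steps: int = 50, lr: float = 0.01) -> torch.Tensor:
    """Generate a sample by gradient ascent in the input space:
    nudge a copy of the window until the discriminator scores it
    as highly 'real'. This replaces a generator network."""
    # Initializing at the observed window is our assumption.
    z = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        (-disc(z)).sum().backward()  # maximize D(z)
        opt.step()
    return z.detach()

def anomaly_score(disc: Discriminator, x: torch.Tensor) -> float:
    """Reconstruction-based score: windows close to the learned
    'normal' distribution need little correction, so the distance
    between input and reconstruction is small."""
    return torch.norm(x - reconstruct(disc, x)).item()

# Usage on a single window of length 32 (shapes are illustrative):
disc = Discriminator(window_size=32)
x = torch.randn(1, 32)
print(anomaly_score(disc, x))
```

Because only the discriminator's parameters are stored, the memory footprint is roughly half that of a comparable GAN; the trade-off is extra computation at generation time, since each sample requires an inner optimization loop.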