Detecting out-of-distribution (OOD) data is critical to building reliable machine learning systems in the open world. Among existing OOD detection methods, ReAct is well known for its simplicity and efficiency, and it is supported by solid theoretical analysis: it enlarges the gap between ID and OOD data by clipping abnormally high activation values. But is this operation optimal? Is there a theoretically better way to widen the separation between ID and OOD samples? Driven by these questions, we treat the optimal modification of the activation function as a problem of functional extrema and propose the Variational Rectified Activations (VRA) method. To make our method easy to use in practice, we further propose several VRA variants. To verify its effectiveness, we conduct experiments on multiple benchmark datasets. Experimental results demonstrate that our method outperforms existing state-of-the-art approaches. Meanwhile, our method is easy to implement and requires neither additional OOD data nor a fine-tuning process: OOD detection is realized in a single forward pass.
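To make the contrast concrete, the ReAct operation described above (truncating abnormally high activations) can be sketched alongside a VRA-style piecewise rectification. This is a minimal illustrative sketch, not the paper's implementation: the thresholds `c`, `alpha`, and `beta` are placeholder values, and the exact functional form of VRA here (suppressing low activations to zero while capping high ones) is our reading of the method, stated as an assumption.

```python
import numpy as np

def react(z, c=1.0):
    """ReAct-style rectification: clip activations above threshold c.
    c=1.0 is an illustrative value, not the paper's setting."""
    return np.minimum(z, c)

def vra(z, alpha=0.5, beta=1.0):
    """Assumed VRA-style piecewise rectification: zero out activations
    below alpha and cap those above beta. alpha/beta are placeholder
    thresholds chosen for illustration only."""
    return np.where(z < alpha, 0.0, np.minimum(z, beta))

# Toy penultimate-layer activations
z = np.array([-0.2, 0.3, 0.8, 2.5])
print(react(z))  # -> [-0.2  0.3  0.8  1. ]
print(vra(z))    # -> [0.  0.  0.8 1. ]
```

Both operate on the penultimate-layer features in a single forward pass; the OOD score (e.g., an energy score on the rectified features) is then computed as usual, which is why no extra OOD data or fine-tuning is needed.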