In recent years, deep neural networks have achieved great success in machine learning and pattern recognition. The architecture size of a neural network contributes significantly to its success. In this study, we optimize the size-selection process by investigating different search algorithms for finding the neural network architecture size that yields the highest accuracy. We apply binary search to a well-defined search space of binary classification networks and compare the results to those of linear search. We also propose how to relax some of the assumptions about the dataset so that our solution generalizes to any binary classification problem. We report a 100-fold running-time improvement over naive linear search when we apply the binary search method to our datasets to find the best architecture candidate. By quickly finding the optimal architecture size for any binary classification problem, we hope that our research contributes to the discovery of intelligent algorithms for optimizing architecture size selection in machine learning.
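To make the idea concrete, the following is a minimal sketch of binary search over architecture size, not the paper's exact procedure. It assumes a single hidden layer, that validation accuracy is roughly non-decreasing in width over the searched range, and a fixed target accuracy; the helper names (train_and_evaluate, binary_search_width), the choice of scikit-learn's MLPClassifier as the trainer, and the stopping criterion are all illustrative assumptions.

```python
from sklearn.neural_network import MLPClassifier


def train_and_evaluate(hidden_units, X_train, y_train, X_val, y_val):
    """Train a one-hidden-layer binary classifier of the given width and
    return its validation accuracy. Illustrative trainer; any framework works."""
    model = MLPClassifier(hidden_layer_sizes=(hidden_units,), max_iter=200)
    model.fit(X_train, y_train)
    return model.score(X_val, y_val)


def binary_search_width(lo, hi, target_acc, data):
    """Binary search for the smallest hidden-layer width whose validation
    accuracy reaches target_acc, assuming accuracy is non-decreasing in width.
    Each probe trains one network, so the number of trainings is O(log(hi - lo))
    instead of the O(hi - lo) trainings required by a linear sweep."""
    X_train, y_train, X_val, y_val = data
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        acc = train_and_evaluate(mid, X_train, y_train, X_val, y_val)
        if acc >= target_acc:
            best = mid       # mid is large enough; try a smaller width
            hi = mid - 1
        else:
            lo = mid + 1     # mid is too small; search larger widths
    return best
```

Under this monotonicity assumption, the logarithmic number of candidate trainings relative to an exhaustive sweep is what a running-time improvement of the kind reported above would come from.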