Improving the performance of deep networks depends significantly on their optimizers. With existing optimizers, precise and efficient recognition of gradient trends remains a challenge: they predominantly adopt techniques based on the first-order exponential moving average (EMA), which introduces noticeable lag that impedes real-time tracking of gradient trends and consequently yields sub-optimal performance. To overcome this limitation, we introduce a novel optimizer called fast-adaptive moment estimation (FAME). Inspired by the triple exponential moving average (TEMA) used in the financial domain, FAME leverages higher-order TEMA to improve the precision with which gradient trends are identified. In FAME, TEMA plays a central role in the learning process, actively shaping the optimization dynamics; this differs from its conventional passive role as a technical indicator in financial contexts. By introducing TEMA into the optimization process, FAME identifies gradient trends with higher accuracy and less lag, thereby offering smoother and more consistent responses to gradient fluctuations than the conventional first-order EMA. To study the effectiveness of our novel FAME optimizer, we conducted comprehensive experiments on six diverse computer-vision benchmarks and tasks spanning detection, classification, and semantic comprehension. We integrated FAME into 15 learning architectures and compared its performance with that of six popular optimizers. The results clearly show that FAME is more robust and accurate and provides superior performance stability by minimizing noise (i.e., trend fluctuations). Notably, FAME reaches higher accuracy in remarkably fewer training epochs than its counterparts, clearly indicating its value for optimizing deep networks in computer-vision tasks.
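To make the core idea concrete, below is a minimal sketch of a TEMA-style gradient moment, using the standard definition from financial technical analysis (TEMA = 3·EMA1 − 3·EMA2 + EMA3, where EMA2 smooths EMA1 and EMA3 smooths EMA2). This is an illustrative assumption, not the authors' implementation: the class name `TemaMoment`, the parameter `beta`, and the zero initialization (Adam-style bias correction is omitted for brevity) are all hypothetical choices made for the sketch.

```python
import numpy as np

def ema_update(prev, x, beta):
    """Standard first-order EMA step, as used in optimizers such as Adam."""
    return beta * prev + (1.0 - beta) * x

class TemaMoment:
    """Sketch of a TEMA-based gradient trend estimate.

    TEMA = 3*EMA1 - 3*EMA2 + EMA3, where EMA2 smooths EMA1 and EMA3
    smooths EMA2. The linear combination cancels much of the lag of a
    single EMA while retaining its smoothing effect.
    """

    def __init__(self, beta=0.9):
        self.beta = beta
        self.ema1 = self.ema2 = self.ema3 = None

    def update(self, grad):
        if self.ema1 is None:
            # Initialize all smoothing levels at zero (bias correction omitted).
            self.ema1 = np.zeros_like(grad)
            self.ema2 = np.zeros_like(grad)
            self.ema3 = np.zeros_like(grad)
        self.ema1 = ema_update(self.ema1, grad, self.beta)       # smooth the raw gradient
        self.ema2 = ema_update(self.ema2, self.ema1, self.beta)  # smooth the first EMA
        self.ema3 = ema_update(self.ema3, self.ema2, self.beta)  # smooth the second EMA
        return 3.0 * self.ema1 - 3.0 * self.ema2 + self.ema3     # lag-reduced trend estimate
```

As a quick sanity check, feeding this estimator a gradient stream whose sign flips abruptly shows the TEMA output converging to the new level in fewer steps than a plain EMA with the same `beta`, which is the reduced-lag behavior the abstract attributes to FAME.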