Stochastic Gradient Descent (SGD) is the foundation of modern deep learning optimization, but it becomes increasingly inefficient when training convolutional neural networks (CNNs) on high-resolution data. This paper introduces Multiscale Stochastic Gradient Descent (Multiscale-SGD), a novel optimization approach that exploits coarse-to-fine training strategies to estimate the gradient at a fraction of the cost, improving the computational efficiency of SGD-type methods while preserving model accuracy. We derive theoretical criteria for Multiscale-SGD to be effective and show that, while standard convolutions can be used, they may be suboptimal for noisy data. This leads us to introduce a new class of learnable, scale-independent Mesh-Free Convolutions (MFCs) that ensure consistent gradient behavior across resolutions, making them well suited for multiscale training. Through extensive empirical validation, we demonstrate that, in practice, (i) Multiscale-SGD can be used to train a variety of architectures on a range of tasks, and (ii) when the noise is not significant, standard convolutions also benefit from our multiscale training framework. Our results establish a new paradigm for the efficient training of deep networks, enabling practical scalability in high-resolution and multiscale learning tasks.
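To make the coarse-to-fine idea concrete, the following is a minimal PyTorch sketch under our own assumptions, not the paper's implementation: the toy model, the epoch-based resolution schedule, and the use of average pooling as the downsampling operator are all placeholders. It only illustrates the general principle that gradients computed on downsampled inputs are cheaper to obtain and can drive most of the early training, with full-resolution steps reserved for the end.

```python
# Hypothetical sketch of coarse-to-fine gradient estimation (not the authors' code).
# Early epochs compute gradients on downsampled inputs, which is cheaper for CNNs
# whose cost grows with spatial resolution; later epochs use full resolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(  # any resolution-agnostic CNN (global-pooling head)
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

def multiscale_step(x, y, scale):
    """One SGD step using a gradient estimated at a coarser resolution.

    scale = 1 uses full resolution; scale = 2 or 4 downsamples the input
    by that factor before the forward/backward pass.
    """
    if scale > 1:
        x = F.avg_pool2d(x, kernel_size=scale)  # coarse-grid surrogate input
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()  # gradient estimated on the coarse grid
    opt.step()
    return loss.item()

# Illustrative coarse-to-fine schedule: mostly coarse early, full resolution late.
for epoch in range(30):
    scale = 4 if epoch < 15 else (2 if epoch < 25 else 1)
    x = torch.randn(8, 3, 256, 256)   # stand-in for a high-resolution batch
    y = torch.randint(0, 10, (8,))
    multiscale_step(x, y, scale)
```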