This paper presents a formalized analysis of the sigmoid function and a fully mechanized proof of the Universal Approximation Theorem (UAT) in Isabelle/HOL, a higher-order logic theorem prover. The sigmoid function plays a fundamental role in neural networks, yet its formal properties, such as differentiability, higher-order derivatives, and limit behavior, have not previously been comprehensively mechanized in a proof assistant. We present a rigorous formalization of the sigmoid function, proving its monotonicity and smoothness and deriving its higher-order derivatives. We provide a constructive proof of the UAT, demonstrating that neural networks with sigmoidal activation functions can approximate any continuous function on a compact interval to arbitrary accuracy. Our work identifies and addresses gaps in Isabelle/HOL's formal proof libraries and introduces simpler methods for reasoning about the limits of real functions. By exploiting theorem proving for AI verification, our work enhances trust in neural networks and contributes to the broader goal of verified and trustworthy machine learning.
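For orientation, the standard definition of the sigmoid and the single-variable form of the approximation statement discussed above can be written as follows; this is the classical (Cybenko-style) formulation, and the notation (weights $w_i$, biases $b_i$, output coefficients $\alpha_i$) is illustrative rather than the paper's Isabelle/HOL notation.
\[
  \sigma(x) \;=\; \frac{1}{1 + e^{-x}}, \qquad
  \sigma'(x) \;=\; \sigma(x)\bigl(1 - \sigma(x)\bigr),
\]
and for every continuous $f$ on a compact interval $[a,b]$ and every $\varepsilon > 0$ there exist $N \in \mathbb{N}$ and parameters $\alpha_i, w_i, b_i \in \mathbb{R}$ such that
\[
  \sup_{x \in [a,b]} \Bigl|\, f(x) \;-\; \sum_{i=1}^{N} \alpha_i\, \sigma(w_i x + b_i) \Bigr| \;<\; \varepsilon .
\]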