We study nonlinear Bayesian inverse problems, arising from semilinear partial differential equations (PDEs), that can be transformed into linear Bayesian inverse problems. This allows us to extend early stopping for the Ensemble Kalman-Bucy Filter (EnKBF) to such linearisable nonlinear problems as a way to tune the prior distribution. Using the linearisation method introduced in \cite{koers2024}, we transform the nonlinear problem into a linear one, apply early stopping based on the discrepancy principle, and then pull the resulting posterior back to a posterior for the original parameter of interest. Following \cite{tienstra2025}, we show that this approach yields adaptive posterior contraction rates and frequentist coverage guarantees under mild conditions on the prior covariance operator. It follows immediately that Tikhonov regularisation coupled with the discrepancy principle contracts at the same rate. The proposed method thus provides a data-driven way to tune Gaussian priors via early stopping that is both computationally efficient and statistically near optimal for nonlinear problems. Lastly, we demonstrate our results theoretically and numerically on a classical benchmark problem, the time-independent Schr\"odinger equation.
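As a schematic illustration (a sketch under generic assumptions, not the precise stopping rule of \cite{koers2024} or \cite{tienstra2025}): once the problem has been linearised to an observation model $Y = A f + \varepsilon$, with forward operator $A$ and (estimated) noise level $\delta$, the discrepancy principle stops the EnKBF, run in an artificial time $t$, at
\[
  T_\delta \;=\; \inf\bigl\{\, t \ge 0 \;:\; \|Y - A \widehat{f}_t\| \le \kappa\,\delta \,\bigr\}, \qquad \kappa > 1,
\]
where $\widehat{f}_t$ denotes the ensemble mean at time $t$; the early-stopped estimate $\widehat{f}_{T_\delta}$ is then pulled back through the linearising transformation to recover the original parameter.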