We present an optimization-based framework to construct confidence intervals for functionals in constrained inverse problems, ensuring valid one-at-a-time frequentist coverage guarantees. Our approach builds upon the so-called strict bounds intervals, pioneered by Burrus (1965) and Rust and Burrus (1972), which directly incorporate any side information about the parameters during inference without introducing external biases. Notably, this family of methods allows for uncertainty quantification in ill-posed inverse problems without the need to select a regularizing prior. By tying our proposed intervals to the inversion of a constrained likelihood ratio test, we translate interval coverage guarantees into type-I error control and characterize the resulting intervals via solutions of optimization problems. Along the way, we refute the Burrus conjecture, which posited that, for possibly rank-deficient linear Gaussian models with positivity constraints, a correction based on the quantile of the chi-squared distribution with one degree of freedom suffices to shorten intervals while maintaining frequentist coverage guarantees. Our framework provides a novel approach to analyzing the conjecture and constructing a counterexample via a stochastic dominance argument, which we also use to disprove a general form of the conjecture. We illustrate our framework with several numerical examples and provide directions for extending the Rust-Burrus method to non-linear, non-Gaussian settings with general constraints.