We investigate the high-probability estimation of discrete distributions from an i.i.d. sample under $\chi^2$-divergence loss. Although the minimax risk in expectation is well understood, its high-probability counterpart remains largely unexplored. We provide sharp upper and lower bounds for the classical Laplace estimator, showing that it achieves optimal performance among estimators that do not rely on the confidence level. We further characterize the minimax high-probability risk for any estimator and demonstrate that it can be attained through a simple smoothing strategy. Our analysis highlights an intrinsic separation between asymptotic and non-asymptotic guarantees, with the latter suffering from an unavoidable overhead. This work sharpens existing guarantees and advances the theoretical understanding of divergence-based estimation.
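To make the objects in the abstract concrete, here is a minimal Python sketch of the classical Laplace (add-one) estimator, $\hat p_i = (N_i + 1)/(n + k)$, together with the $\chi^2$-divergence loss under the common convention $\chi^2(p \,\|\, \hat p) = \sum_i (p_i - \hat p_i)^2 / \hat p_i$ (well defined here because add-one smoothing keeps every estimated mass strictly positive). The function names, the toy distribution, and the sample size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def laplace_estimator(counts: np.ndarray) -> np.ndarray:
    """Add-one (Laplace) smoothing: p_hat_i = (N_i + 1) / (n + k)."""
    n, k = counts.sum(), len(counts)
    return (counts + 1) / (n + k)

def chi2_divergence(p: np.ndarray, q: np.ndarray) -> float:
    """chi^2(p || q) = sum_i (p_i - q_i)^2 / q_i; requires q_i > 0."""
    return float(np.sum((p - q) ** 2 / q))

# Illustrative usage: estimate a distribution on k = 5 symbols from n = 100 draws.
rng = np.random.default_rng(0)
p = np.array([0.4, 0.3, 0.15, 0.1, 0.05])       # hypothetical true distribution
sample = rng.choice(len(p), size=100, p=p)       # i.i.d. sample
counts = np.bincount(sample, minlength=len(p))   # symbol counts N_i
p_hat = laplace_estimator(counts)
print(chi2_divergence(p, p_hat))                 # realized chi^2 loss
```

Note that `laplace_estimator` never assigns zero probability to any symbol, which is what makes the $\chi^2$ loss finite on every sample; the confidence-dependent smoothing strategy mentioned in the abstract would adjust the added pseudocount as a function of the target confidence level.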