We introduce the \textit{almost goodness-of-fit} test, a procedure to assess whether a (parametric) model provides a good representation of the probability distribution generating the observed sample. Specifically, given a distribution function $F$ and a parametric family $\mathcal{G}=\{ G(\boldsymbol{\theta}) : \boldsymbol{\theta} \in \Theta\}$, we consider the testing problem \[ H_0: \| F - G(\boldsymbol{\theta}_F) \|_p \geq \epsilon \quad \text{vs} \quad H_1: \| F - G(\boldsymbol{\theta}_F) \|_p < \epsilon, \] where $\epsilon>0$ is a margin of error and $G(\boldsymbol{\theta}_F)$ denotes a representative of $F$ within the parametric class. The approximate model is determined via an M-estimator of the parameters.
%The objective is the approximate validation of a distribution or an entire parametric family up to a pre-specified threshold value.
The methodology also quantifies the percentage improvement of the proposed model relative to a non-informative (constant) benchmark. The test statistic is the $\mathrm{L}^p$-distance between the empirical distribution function and that of the estimated model. We present two consistent, easy-to-implement, and flexible bootstrap schemes to carry out the test. The performance of the proposal is illustrated through simulation studies and real-data applications.
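% The following Python sketch is illustrative only. It is not the paper's procedure: the
% Gaussian family, the choice $p=2$, the Lebesgue integrating measure, the MLE as
% M-estimator, the margin eps = 0.10, and the bootstrap calibration below are all
% assumptions made for the example; the paper's two bootstrap schemes are not reproduced.
\begin{verbatim}
import numpy as np
from scipy import stats

def lp_distance(sample, cdf, p=2, grid_size=2000):
    """Approximate || F_n - G ||_p on a grid covering the data (Lebesgue measure)."""
    sample = np.sort(sample)
    pad = 3 * sample.std()
    x = np.linspace(sample[0] - pad, sample[-1] + pad, grid_size)
    ecdf = np.searchsorted(sample, x, side="right") / sample.size
    dx = x[1] - x[0]
    return (np.sum(np.abs(ecdf - cdf(x)) ** p) * dx) ** (1.0 / p)

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=2.0, size=300)

# M-estimation step: here simply the Gaussian MLE (an illustrative choice).
mu_hat, sigma_hat = stats.norm.fit(data)
stat = lp_distance(data, lambda x: stats.norm.cdf(x, mu_hat, sigma_hat))

# Hypothetical parametric-bootstrap calibration (a placeholder, not the paper's
# schemes): resample from the fitted model, refit, recompute the statistic.
eps, B, alpha = 0.10, 500, 0.05
boot = np.empty(B)
for b in range(B):
    xb = stats.norm.rvs(mu_hat, sigma_hat, size=data.size, random_state=rng)
    mb, sb = stats.norm.fit(xb)
    boot[b] = lp_distance(xb, lambda x: stats.norm.cdf(x, mb, sb))

# Rough rule: reject H0 (distance >= eps) when an upper confidence bound for the
# distance, built from the recentered bootstrap distribution, falls below eps.
upper_bound = stat + np.quantile(boot - boot.mean(), 1 - alpha)
print(f"statistic = {stat:.4f}, upper bound = {upper_bound:.4f}, "
      f"reject H0: {upper_bound < eps}")
\end{verbatim}
% A small upper bound relative to eps indicates that the fitted model is "almost"
% adequate in the L^p sense; the calibration shown here is only a heuristic stand-in.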