We present a result according to which certain functions of covariance matrices are maximized at scalar multiples of the identity matrix. This is used to show that the ordinary least squares (OLS) regression estimate is minimax, within the class of generalized least squares (GLS) estimates, when the maximum is taken over certain classes of error covariance structures and the loss function possesses a natural monotonicity property. We then consider regression models in which the response function is possibly misspecified, and show that OLS is no longer minimax. We argue, however, that the gains from a minimax estimate are often outweighed by the simplicity of OLS. We also investigate the interplay between minimax precision matrices and minimax designs. We find that the design has by far the greater influence on efficiency and that, when the two are combined, OLS is generally at least 'almost' minimax, and often exactly so.
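As a minimal formalization of the setup just described (the class $\mathcal{S}$ of covariance structures and the monotone loss $\mathcal{L}$ are placeholders for objects the abstract refers to but does not define), consider the linear model
\[
  \mathbf{y} = X\boldsymbol{\beta} + \boldsymbol{\varepsilon},
  \qquad \operatorname{Cov}(\boldsymbol{\varepsilon}) = \sigma^{2}\Sigma,
  \quad \Sigma \in \mathcal{S},
\]
with GLS estimate, for a positive definite weight matrix $V$ (OLS is $V = I$),
\[
  \hat{\boldsymbol{\beta}}_{V}
    = \bigl(X^{\top} V^{-1} X\bigr)^{-1} X^{\top} V^{-1} \mathbf{y},
  \qquad
  \operatorname{Cov}\bigl(\hat{\boldsymbol{\beta}}_{V}\bigr)
    = \sigma^{2}\bigl(X^{\top} V^{-1} X\bigr)^{-1}
      X^{\top} V^{-1} \Sigma\, V^{-1} X
      \bigl(X^{\top} V^{-1} X\bigr)^{-1}.
\]
The minimaxity of OLS asserted above then reads
\[
  \sup_{\Sigma \in \mathcal{S}}
    \mathcal{L}\bigl(\operatorname{Cov}(\hat{\boldsymbol{\beta}}_{I})\bigr)
  \;\le\;
  \sup_{\Sigma \in \mathcal{S}}
    \mathcal{L}\bigl(\operatorname{Cov}(\hat{\boldsymbol{\beta}}_{V})\bigr)
  \quad \text{for every admissible } V,
\]
the supremum on the left being attained at a scalar multiple of the identity.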