An important problem in optimization analysis arises when parameters $ \{\theta_j\}_{j=1,\, \dots \,,k} $, determining a function $ y=f(x\given\{\theta_j\}) $, must be estimated from a set of observations $ \{ x_i,y_i\}_{i=1,\, \dots \,,m} $, where the $ \{x_i\} $ are independent variables assumed to be uncertainty-free. It is known that analytical solutions exist when $ y=f(x\given\{\theta_j\}) $ is a linear combination of $ \{\theta_j\}_{j=1,\, \dots \,,k} $. Here it is proposed that the uncertainty of parameters that are not \textit{linearly independent} may be determined from the derivatives $ \tfrac{\partial f(x \given \{\theta_j\})}{\partial \theta_j} $ at an optimum, provided the parameters are \textit{stochastically independent}.
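For context, the standard linearized error propagation (a textbook result stated here for orientation, not the method proposed in this work) relates the parameter uncertainties to these same derivatives. Writing the Jacobian of the model at the optimum $ \hat\theta $ and assuming independent observations with common variance $ \sigma^2 $,

```latex
\[
  J_{ij}
  = \left. \frac{\partial f(x_i \given \{\theta_l\})}{\partial \theta_j} \right|_{\theta = \hat\theta},
  \qquad
  \operatorname{Cov}(\hat\theta) \approx \sigma^{2} \left( J^{\top} J \right)^{-1},
\]
```

whose diagonal entries give the squared uncertainties of the $ \hat\theta_j $. In the linear case this covariance is exact; for nonlinear models it holds only to first order near the optimum, which motivates the treatment developed below.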