The Impact Factor (IF), despite its widespread use, suffers from well-known biases, most notably its sensitivity to journal size and its lack of field normalization. As a consequence of size sensitivity, the range of IF values attainable by a randomly formed journal of $n$ papers narrows sharply with size, as $\sim 1/\sqrt{n}$. The Central Limit Theorem, which underlies this effect, also allows us to correct for it by standardizing citation averages for scale and field in an elegant manner, analogous to calculating the $z$-score in statistics. We thus introduce the $\Phi$ (Phi) index, defined as $\Phi = (f-\mu)\sqrt{n}/\sigma$, where $f$ is a journal's average citation count (akin to the IF), $n$ is the journal's publication count, and $\mu$ and $\sigma$ are the mean and standard deviation of citations per paper in the journal's field. This formulation corrects for disparities in both journal size and field citation practices. Applying the $\Phi$ index to a broad set of journals, we find that it produces rankings that align more closely with the expert community's perception of journal prestige, while elevating high-performing journals from diverse and less-cited fields. The $\Phi$ index thus offers a principled, scale- and field-standardized alternative to current metrics, with direct implications for research evaluation and publishing policy.
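To make the standardization explicit, here is a brief sketch of the reasoning implied above, under the simplifying assumption that the citation counts $c_1, \ldots, c_n$ of a journal's $n$ papers behave like independent draws from the field's citation distribution with mean $\mu$ and variance $\sigma^2$. The journal's citation average is $f = \frac{1}{n}\sum_{i=1}^{n} c_i$, and by the Central Limit Theorem $f$ is approximately normally distributed with mean $\mu$ and standard deviation $\sigma/\sqrt{n}$. The $z$-score of $f$ with respect to this distribution is then
\[
\Phi \;=\; \frac{f-\mu}{\sigma/\sqrt{n}} \;=\; \frac{(f-\mu)\,\sqrt{n}}{\sigma},
\]
which recovers the definition given above and also shows why the range of unstandardized citation averages attainable by a random journal shrinks as $1/\sqrt{n}$.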