arXiv:2401.00997v2 Announce Type: replace
Abstract: The sensitivity of Impact Factors (IFs) to journal size causes systematic bias in IF rankings, in a process akin to {\it stacking the cards}: A random ``journal'' of $n$ papers can attain a range of IF values that decreases rapidly with size, as $\sim 1/\sqrt{n}$. The Central Limit Theorem, which underlies this effect, also allows us to correct it by standardizing citation averages for scale {\it and} subject in a geometrically intuitive manner analogous to calculating the $z$-score. We thus propose the $\Phi$ index, a standardized scale- and subject-independent citation average. The $\Phi$ index passes the ``random sample test'', a simple check for scale and subject independence that we argue ought to be used for every citation indicator. We present $\Phi$ index rankings for 12,173 journals using 2020 Journal Citation Reports data. We show how scale standardization alone affects rankings, demonstrate the additional effect of subject standardization for monodisciplinary journals, and discuss how to treat multidisciplinary journals. $\Phi$ index rankings offer a clear improvement over IF rankings. And because the $\Phi$ index methodology is general, it can also be applied to compare individual researchers, universities, or countries.
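The abstract's exact $\Phi$ index formula is not given here, but the CLT effect it describes can be sketched in a few lines: the citation average of a random "journal" of $n$ papers drawn from a fixed pool fluctuates with spread $\sim \sigma/\sqrt{n}$, and dividing the deviation from the pool mean by that spread (a $z$-score) removes the scale dependence. The citation pool below is synthetic and purely illustrative, not the paper's data.

```python
# Illustrative sketch only: a z-score-style standardization of citation
# averages, analogous to (but not necessarily identical with) the Phi index.
import random
import statistics

random.seed(42)

# Hypothetical citation pool for one subject (synthetic, skewed counts).
pool = [random.expovariate(0.2) for _ in range(200_000)]
mu = statistics.fmean(pool)      # subject-wide mean citations per paper
sigma = statistics.pstdev(pool)  # subject-wide standard deviation

def z_score(journal_citations):
    """Standardize a journal's citation average for scale:
    z = (cbar - mu) / (sigma / sqrt(n)).
    Using subject-specific mu and sigma also standardizes for subject."""
    n = len(journal_citations)
    cbar = statistics.fmean(journal_citations)
    return (cbar - mu) / (sigma / n ** 0.5)

def spread_of_random_journals(n, trials=2_000):
    """Std. dev. of citation averages across random 'journals' of size n."""
    avgs = [statistics.fmean(random.sample(pool, n)) for _ in range(trials)]
    return statistics.pstdev(avgs)

# Per the CLT, the spread shrinks as ~1/sqrt(n): quadrupling n halves it.
s25, s100 = spread_of_random_journals(25), spread_of_random_journals(100)
print(f"spread(n=25)={s25:.3f}  spread(n=100)={s100:.3f}  ratio={s25/s100:.2f}")
```

A ratio near 2 between the $n=25$ and $n=100$ spreads reproduces the $\sim 1/\sqrt{n}$ scaling that the abstract says biases raw IF rankings toward small journals.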
No Creative Commons license