In an article, "Should we ditch impact factors", the problems of using a journal's impact factor to judge the quality of a paper are discussed. A journal's impact factor for a given year is calculated as the total number of citations received that year by the papers it published in the previous two years, divided by the total number of papers it published in those two years.
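A minimal sketch of that arithmetic, with made-up numbers purely for illustration:

```python
def impact_factor(citations_to_recent_papers, papers_last_two_years):
    """Impact factor for year Y: citations received in Y to items published
    in Y-1 and Y-2, divided by the number of items published in Y-1 and Y-2."""
    return citations_to_recent_papers / papers_last_two_years

# e.g. 400 citations this year to the 200 papers the journal published
# over the previous two years gives an impact factor of 2.0
print(impact_factor(400, 200))  # -> 2.0
```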
The logic behind using a journal's impact factor to judge the quality of a paper is the belief that journals with high impact factors publish high-quality papers, while lesser-quality papers end up in 'low impact' journals. This argument is fundamentally flawed because several analyses have shown the 20-80 principle to hold, i.e., only 20% of the papers published in a journal account for 80% of its citations in that year.
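A toy example, with invented citation counts, of how a few highly cited papers can carry a journal's impact factor while the typical paper in it is cited far less:

```python
# Invented citation counts for 10 papers in one journal over the IF window.
citations = [45, 30, 3, 2, 2, 1, 1, 0, 0, 0]  # two papers dominate

impact_factor = sum(citations) / len(citations)
top_20_percent_share = sum(sorted(citations, reverse=True)[:2]) / sum(citations)

print(impact_factor)         # 8.4, yet most papers here have 3 or fewer citations
print(top_20_percent_share)  # ~0.89: the top 20% of papers supply ~90% of citations
```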
The current way of determining the impact factor of a journal can also be manipulated. As an editor of a polymer journal, I know of ways in which editors and publishers manipulate it to reach a higher impact factor. For example, because the citation window is counted from January to December, reviews (which typically receive more citations than other papers) are published in the January issue of the journal, giving them the longest possible time to accumulate citations within that window.
My detailed analysis of publications from IISc showed that a paper from IISc typically received more citations than the average paper published in the same journal. Therefore, the citation count of a paper is a much better parameter than the impact factor of the journal it appears in. If a paper receives 20 citations after being published in a journal with an impact factor of 2, it should be considered better than a paper receiving 10 citations in a journal with an impact factor higher than two. Of course, citation counts should be compared within the same area of research or suitably weighted.
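One simple way to formalise this comparison (a sketch only, not the method used in the IISc analysis) is to divide a paper's citation count by the impact factor of its journal, treating the impact factor as a rough proxy for the average citations per paper there; the impact factor of 4 below is an assumed example of "higher than two":

```python
def relative_citation_score(paper_citations, journal_impact_factor):
    """Citations of a paper normalised by its journal's impact factor."""
    return paper_citations / journal_impact_factor

print(relative_citation_score(20, 2.0))  # 10.0 -- 20 citations in an IF-2 journal
print(relative_citation_score(10, 4.0))  # 2.5  -- 10 citations in an IF-4 journal
```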
The corrected impact factor, of course, is given by this :-)