Saturday, December 12, 2009


As pointed out by Abi, there are two interesting posts on the issue of citations. First, Arunn has a post on Quantifying Research Quality through Article Level Metrics. He mentions that "The beginning of the end for impact factors and journals, a neat online article by Richard Smith [3], explains the newly introduced ALM indices with examples." As Arunn mentions, "Nevertheless citations offer a fair measure of worthiness of an article, as it is based on other peer reviewed research articles. The measure maintains a peer-to-peer credibility. The rest of the ALM mentioned in the above section are even more incredible, if not dubious."

Let me state my own opinions. It is important to realize that impact factors were originally introduced by Garfield to validate the 20:80 rule (Pareto's law) and to help librarians choose appropriate journals for the library. Unfortunately, many librarians in India have not used the impact factor to choose the journals they want to subscribe to but are instead happy signing the big deal, which bundles a lot of useless journals with a single good journal. Therefore, many libraries (including IISc's) have subscribed to nearly 2000 journals but use fewer than 200 of them to publish and cite. See my article in Current Science for more details.
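For readers unfamiliar with the metric itself, Garfield's two-year impact factor is just a simple ratio. A minimal sketch in Python; the journal and its numbers below are purely hypothetical, for illustration only:

```python
def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Two-year impact factor for year Y: citations received in year Y
    to articles published in years Y-1 and Y-2, divided by the number
    of citable items the journal published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 600 citations in 2009 to its 2007-2008
# articles, and 240 citable items published in 2007-2008
print(impact_factor(600, 240))  # 2.5
```

Note that the ratio says nothing about the distribution of citations across individual articles, which is the root of the misuse discussed below.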

The impact factor was never meant to guide where faculty should publish. Faculty and scientists in the field mostly knew what the best journals in their field were, and the impact factor just confirmed this. Garfield himself states, "Impact Factor is not a perfect tool to measure the quality of articles but there is nothing better and it has the advantage of already being in existence and is, therefore, a good technique for scientific evaluation. Experience has shown that in each specialty the best journals are those in which it is most difficult to have an article accepted, and these are the journals that have a high impact factor. These journals existed long before the impact factor was devised. The use of impact factor as a measure of quality is widespread because it fits well with the opinion we have in each field of the best journals in our specialty."

Of course, Web of Science started selling the list of journals with their impact factors for an obscene price, and many scientists, instead of choosing journals based on what they wanted, started choosing journals based on impact factor. Eigenfactor is a good substitute for the impact factor because it is free, provides the cost-effectiveness of each journal, counts citations over a five-year period and gives percentile rankings of each journal within each field. It is worth mentioning again that it is free.

The impact factors of journals from different scientific fields vary considerably. A common misinterpretation of impact factors and citation numbers is reflected in statements such as: "He (she) is an excellent scientist, because he (she) published three papers in a journal with an impact factor above 3, and was cited more than two hundred times." Whoever says this should remember that, in fields such as mathematics, there are almost no journals with impact factors above 3. On the other hand, in the subfield of biochemistry and molecular biology, there are more than forty journals with impact factors above 3. This is why Eigenfactor has introduced percentile rankings of each journal within each field. The variation of the average impact factor of journals within each field is discussed in a recent article by Althouse et al. in the January 2009 issue of J. Amer. Soc. Inform. Sci. Tech.

Again, these are to be used to make decisions on library subscriptions or compare similar institutions and not to compare individuals in different fields.

Sachin Shanbag writes on Quantitative v/s Qualitative Evaluations: Impact Factors and Wine Experts and states, "I think they are a lazy substitute for actually reading a person's research and evaluating its worth individually. You wouldn't necessarily think that the musician who sells the most records, or has the most covers made, is necessarily the best."

There is a recent article on scientometrics and its fallacies in Nature. One point of the article is that an individual's work cannot be judged solely on the basis of the number of publications or citations. But the Ponderer mentions in his blog,
"If something is important to you, you will find a way to measure it". This quote appeals to me, as an analytical person, perhaps overly so. I think that often, when we claim to make a decision subjectively, we are actually doing it quite objectively, but with bias - and claim subjectivity to avoid admitting the bias.

I am sure some bias will still remain, but much of it can be eliminated by agreeing to objective metrics. Maybe you like candidate A because she went to Cornell, just like you, but if candidate B has a superior publication record, as attested by agreed-upon research metrics, can you argue with that? I think I am changing my opinion on the h-index, citations and other cold objective metrics that I used to dismiss as bean-counting. We DO need objective metrics, because as humans we are intrinsically biased.

The use of these quantitative parameters for evaluating an individual is best avoided and, if used, has to be viewed with caution. If used at all, they can serve as positive affirmation (i.e., people with a high h-index are likely to be good), but they should not be used to say that a person with a poor h-index is bad. Just because a positive correlation exists between a high h-index and excellence (e.g., Nobel prizes) does not mean the converse applies. Thus, I completely agree that it is a lazy and clerical attitude to evaluate an individual based only on the number of publications or citations.
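For concreteness, the h-index mentioned above is the largest h such that a person has at least h papers with at least h citations each. A minimal sketch; the citation counts in the example are made up for illustration:

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least
    h papers have at least h citations each."""
    h = 0
    # Rank papers from most to least cited; paper at rank i (1-based)
    # contributes to the h-index only if it has >= i citations.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical record: five papers with these citation counts.
# Three papers have at least 3 citations, so h = 3.
print(h_index([25, 8, 5, 3, 3]))  # 3
```

The example also shows why the metric must be read with the caution urged above: a single highly cited paper (the 25 here) moves the h-index no more than a modestly cited one.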

However, scientometrics is an excellent tool to judge and rank institutions. A large institution (with more than 400 faculty) will have all kinds of researchers: faculty who publish a lot with a small number of citations, faculty who publish very little with a high number of citations, faculty who have both a large number of papers and citations, papers that are poor but get cited a lot, and papers that are good but are rarely cited. A case in point: if you do a scientometric analysis, universities like MIT, Harvard and Caltech will come out in the top 10. They are not in the top 10 because of scientometric analysis; the scientometric analysis only justifies the ranking. Similarly, IISc ranks among the top in every category in India, and this is only confirmed by scientometrics.

Thus, evaluating the country's research productivity on these parameters is very much valid. Research in any field should lead either to publications (that are cited), patents (that are licensed) or useful products for end use. Research that does not lead to any of these may even be accepted from an individual, but not from a large nation that puts 1 to 3% of GDP into research.


Anonymous said...

I think this is a balanced view on impact factors.

Citations: Though you say these should not be used to evaluate individuals, the IITs have a lot of deadwood. Some faculty have not published a paper, guided a student or taught a course in five years. Why shouldn't such criteria be used to evaluate them?

Anonymous said...

Dear Prof. Madras,

A very nice and balanced view about scientometrics. I agree this is not the perfect method but there is none better.

In most cases, a good scientist/researcher can be identified with a complete review of the CV and not just his/her scientometrics. The errors/fallacies associated with scientometrics are minimized for large groups (department, university, country).

Anon 1: I agree there is a lot of deadwood (research-wise) in the IITs. A good peer review system is needed for promotion, with due credit for teaching, as the IITs do a lot of undergraduate teaching. Also, the IITs need to decide for themselves what they want to be: 4-year teaching institutions or research institutions. They need to develop their policies accordingly.


Anonymous said...


I saw a link on the IISc website which mentions a lecturer position.

Does this mean that IISc will also entertain applications from fresh Ph.Ds for the post of lecturer? Can you also throw some light on the advantages and disadvantages of applying for the lecturer position (compared to AP after a postdoc) as a fresh Ph.D, specifically with respect to IISc? Is it a realistic possibility now for a fresh Ph.D to enter IISc as a lecturer?

Anonymous said...

Yes, IISc does recruit lecturers. IISc recruited 5-10 lecturers in the last two years. Promotion to AP comes after three years as a lecturer. You are an independent researcher and can take students, apply for grants, etc.

Anonymous said...

Thanks, Anon. Can fresh Ph.Ds apply for lecturer positions? If so, what are their realistic chances? Prof. Giri, could you please answer my questions?

Anonymous said...

IISc had a drive to recruit lecturers a couple of years ago, but the applications were restricted to SC/ST. Lecturers are normally not taken in any department in the open category. However, engineering departments do take people with 0.5-2 years of postdoc experience as assistant professors.

Anonymous said...

Here is what I found when I searched for the performance of the Phys/Chem/All departments of the following four institutes:

        Phys    Chem    All
IISc     9.95    8.74   7.84
UoHyd   14.63    5.44   9.54
IIT      8.41    5.40   5.26
IACS    10.39    5.44   7.91

Numbers indicate the number of citations per paper.

Sachin Mandavgane said...

Professor: Research in any field should either lead to publications (that are cited) or patents (that are licensed) or useful products for end use

I totally agree with the above statement. I personally would give maximum emphasis on the last point. I did my PhD under the guidance of a Professor (Prof B B Gogte, Retd Head Paint & Oil dept LIT Nagpur) who always discourages to publish in International Journal of reputed of publishing house. Instead he uses to insist on publishing in Indian Journals and that too Industrial magazine. Like Paint India, Chemical Engineering World etc..his argument is , this is the way of having maximum outreach. The research knocks the actual end users’ door. He helped many small and medium scale industries through this approach.
We both always use to fight on issue of impact factor.
Professor: "He (she) is an excellent scientist, because he (she) published three papers in a journal with an impact factor above 3, and was cited more than two hundred times".
is a very common statement. I am suffering a lot on that account.
The present only rider of quality research is IMPACT FACTOR.
Your comments on Prof Gogte’s view please

Anonymous said...

One thing that generally annoys me is that many researchers publish almost identical papers in different journals. One of the "versions" always appears in some low-impact-factor, almost unknown journal. How does one cite such work? Can I trust its scientific information? Do we really need to be inundated with such a large number of small journals, which often carry unnecessary information, with papers published just to pad someone's CV?

I think Sachin's professor is partially right. More than a decade ago, I was in a small/medium-sized research group trying to get funding. That same group today has more money than it has people to do the research, and they actually have to turn down consulting projects due to lack of time/personnel. To make contacts, we started going to all the major national conferences in our field; we attended the talks, talked with the speakers, and "attacked" them for funding during the cocktail parties. At that time, we published our work mainly in conference proceedings and in a technical metallurgical magazine that was read by everybody in the industry. The funding came, we worked hard, and after a few years we got a publication in the best specialty journal in our field. I left the lab (and that field) in 2001, but they have been publishing in the top journals of their area since then.

If you have the funding, please take your students to conferences. Introduce them to people and make them feel comfortable, though they may not be experts in anything "yet", and tell them the best way to talk with people (what to ask and what not to). In the end, society is the one that gains.


Giri@iisc said...

Sachin: I meant that, as a country, one should have all kinds of scientists, doing research that leads to publications (that are cited), patents (that are licensed) or useful products for end use. One is not superior to the others.


Jason said...

The impact factor has become less useful because of systematic attempts to game the system. Something must be done.