Let me state my own opinions. It is important to realize that impact factors were originally introduced by Garfield to validate the 80:20 rule (Pareto's law) and to help librarians choose appropriate journals for their libraries. Unfortunately, many librarians in India have not used the impact factor to choose the journals they want to subscribe to; instead, they are happy signing the "big deal", which bundles many useless journals with a single good one. As a result, many libraries (including IISc) have subscribed to nearly 2000 journals but publish in and cite fewer than 200 of them. See my article in Current Science for more details.
The impact factor was never meant to guide where faculty should publish. Faculty and scientists mostly knew which journals in their field were the best, and the impact factor just confirmed this. Garfield himself states, "Impact Factor is not a perfect tool to measure the quality of articles but there is nothing better and it has the advantage of already being in existence and is, therefore, a good technique for scientific evaluation. Experience has shown that in each specialty the best journals are those in which it is most difficult to have an article accepted, and these are the journals that have a high impact factor. These journals existed long before the impact factor was devised. The use of impact factor as a measure of quality is widespread because it fits well with the opinion we have in each field of the best journals in our specialty."
Of course, Web of Science started selling the list of journals with their impact factors at an obscene price, and many scientists, instead of choosing journals based on what they wanted, started choosing them based on impact factor. The Eigenfactor is a good substitute for the impact factor because it is free, provides the cost-effectiveness of each journal, counts citations over a five-year period and gives percentile rankings of each journal within its field. It is worth mentioning again that it is free.
The impact factors of journals from different scientific fields vary considerably. A common misinterpretation of impact factors and citation numbers is reflected in statements such as: "He (she) is an excellent scientist, because he (she) published three papers in a journal with an impact factor above 3, and was cited more than two hundred times." Whoever says this should remember that, in fields such as mathematics, there are almost no journals with impact factors above 3. On the other hand, in the subfield of biochemistry and molecular biology, there are more than forty journals with impact factors above 3. This is why the Eigenfactor has introduced percentile rankings of each journal within its field. The variation of the average impact factor of journals across fields is documented in a recent article by Althouse et al. in the January 2009 issue of J. Amer. Soc. Inform. Sci. Tech.
Again, these metrics are to be used to make decisions on library subscriptions or to compare similar institutions, not to compare individuals in different fields.
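To make the within-field percentile ranking mentioned above concrete, here is a minimal sketch in Python. The journal names, fields and scores are invented for illustration; this shows only the generic idea of ranking journals within their own field, not the Eigenfactor's actual methodology.

```python
# Illustrative only: compute within-field percentile ranks for journals.
# Journal names, fields and scores are invented; this is not Eigenfactor's
# actual algorithm, just the generic idea of ranking within a field.

from collections import defaultdict

journals = [
    # (name, field, citation-based score)
    ("Journal A", "mathematics", 1.2),
    ("Journal B", "mathematics", 0.8),
    ("Journal C", "mathematics", 2.5),
    ("Journal D", "molecular biology", 9.1),
    ("Journal E", "molecular biology", 3.4),
    ("Journal F", "molecular biology", 6.7),
]

by_field = defaultdict(list)
for name, field, score in journals:
    by_field[field].append((name, score))

for field, entries in by_field.items():
    scores = [s for _, s in entries]
    n = len(scores)
    for name, score in entries:
        # percentile = share of journals in the same field scoring below this one
        percentile = 100.0 * sum(s < score for s in scores) / n
        print(f"{field}: {name} -> {percentile:.0f}th percentile")
```

A journal with a modest raw score can still sit near the top of its own field, which is the point of normalizing within fields rather than comparing raw impact factors across fields.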
Sachin Shanbag, in his post "Quantitative vs. Qualitative Evaluations: Impact Factors and Wine Experts", states, "I think they are a lazy substitute for actually reading a person's research and evaluating its worth individually. You wouldn't necessarily think that the musician who sells the most records, or has the most covers made, is necessarily the best."
"If something is important to you, you will find a way to measure it". This quote appeals to me, as an analytical person, perhaps overly so. I think often when we claim to make a decision subjectively, we are actually doing it quite objectively, but with bias - and claim subjectivity to avoid admitting the bias.
I am sure some bias will still remain, but much of it can be eliminated by agreeing on objective metrics. Maybe you like candidate A because she went to Cornell, just like you, but if candidate B has a superior publication record, as attested by agreed-upon research metrics, can you argue with that? I think I am changing my opinion on the h-index, citations and other cold, objective metrics that I used to dismiss as bean-counting. We DO need objective metrics, because as humans we are intrinsically biased.
Use of these quantitative parameters for evaluating an individual is best avoided and, if used at all, has to be viewed with caution. They can be used for positive affirmation (i.e., people with a high h-index are likely to be good), but they should not be used to say that a person with a poor h-index is bad. Just because a positive correlation exists between a high h-index and excellence (e.g., Nobel prizes) does not mean the converse holds. Thus, I completely agree that it is a lazy and clerical attitude to evaluate an individual based only on the number of publications or citations.
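For readers unfamiliar with how the h-index is actually computed, here is a minimal sketch in Python of the standard definition: the largest h such that the author has h papers with at least h citations each. The citation counts below are invented; the point is that two very different records can compress to the same single number, which is exactly why such metrics should be used with caution for individuals.

```python
# Minimal sketch of the standard h-index definition:
# the largest h such that h papers have at least h citations each.
# The citation counts below are invented, purely for illustration.

def h_index(citations):
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Two very different citation records collapse to the same h-index,
# which is one reason a lone metric says little about an individual.
print(h_index([50, 40, 6, 5, 5, 3]))   # -> 5
print(h_index([9, 8, 7, 6, 5, 1, 1]))  # -> 5
```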
However, scientometrics is an excellent tool to judge and rank institutions. A large institution (with more than 400 faculty) will have all kinds of researchers: faculty who publish a lot with a small number of citations, faculty who publish very little with a high number of citations, faculty who have both a large number of papers and citations, papers that are poor but get cited a lot, and papers that are good but get cited poorly. A case in point: if you do a scientometric analysis, universities like MIT, Harvard and Caltech will come out in the top 10. They are not in the top 10 because of scientometric analysis; the analysis only justifies the ranking. Similarly, IISc ranks among the top in every category in India, and this is only confirmed by scientometrics.
Thus, evaluating a country's research productivity on these parameters is very much valid. Research in any field should lead to publications (that are cited), patents (that are licensed) or useful products for end use. Research that does not lead to any of these may even be acceptable from an individual, but not for a large nation that puts 1 to 3% of its GDP into research.