Thursday, January 7, 2010

Statistics

In the talk at IISc, Prof. Venki Ramakrishnan mentioned:

" “The quality or impact of work is much more important than the number of papers published,” he said while citing the example of double chemistry Nobel winner Fred Sanger who published only around 40 papers in his long career but who transformed molecular biology."

I could not agree more. However, the impact of the work has to be considerable, which is clearly what Venki is implying. It is famously said that what statistics reveal is suggestive but what they conceal is vital. A statistic like "40 papers" hides more than it reveals. I looked up the record of Sanger, whose work led to the creation of the famous Sanger Institute. The impact is measured not by the number of papers but by the number of citations. The Web of Science shows that he has 66 research papers with about 90,000 citations, i.e., an average of nearly 1,400 citations per paper. No Indian scientist can boast of such numbers, either in total citations or in citations per paper. Only Prof. CNR Rao comes close, with around 40,000 citations.
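As a rough sketch of the arithmetic (in Python, using only the figures quoted above; the exact totals depend on the database and the date of the query):

    # back-of-the-envelope check with the Web of Science figures quoted above
    sanger_papers = 66
    sanger_citations = 90_000
    print(sanger_citations / sanger_papers)   # ~1364, i.e. close to 1,400 citations per paper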

Thousands of scientists in India have 40 papers but not even 1,000 citations. If they quote Sanger's example to justify their output and support, I can only quote Winston Churchill, who asked whether we make the mistake of using statistics as a drunkard uses a lamp-post: for mere support rather than for illumination.

7 comments:

Ranga said...

Citations also seem to have inherent flaws. A letter reporting something 'novel' has a greater probability of being cited than a publication that advances an area of research. I would love to read a post comparing different indices of scientific impact.

Vas-Happy - :-)Tree said...

I agree with you. I would like to know your CPP rate. I am a researcher from Spain and a great fan of yours...
You are my inspiration for research

Vas-Happy - :-)Tree said...

It's me again. These days I also don't trust individual citation counts. I normally publish 10-15 papers per year, and I can tell you that for every paper I have published in recent years, I have been forced to cite some articles of the referees. I hope you might have experienced similar problems. I know that in past years one person in India won a citation laureate award in exactly this way. I am sad about where science is going. I don't know how to deal with these types of reviewers. What do you think?

Giri@iisc said...

Dear Vasanth,

As a reviewer, asking the author to cite your papers is unethical. I review about 60 papers a year and I never do this. As an author, you need not comply with such requests and cite the reviewer's articles. The editor knows who the reviewer is and does not expect you to cite him/her, unless, of course, the said article is critical to the paper.

Let us say you review 30 papers per year and ask the authors of all of them to cite two of your papers. That makes 60 citations per year. There is no way you can reach 95,000 citations that way. In India, having more than 3,000 citations is considered good, and even that cannot be achieved by asking the authors of the papers you review to cite you.
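As a rough sketch of that arithmetic (illustrative only, using the round numbers above):

    # coerced citations accumulate far too slowly to matter
    reviews_per_year = 30
    citations_requested_per_review = 2
    coerced_per_year = reviews_per_year * citations_requested_per_review   # 60
    print(95_000 / coerced_per_year)   # over 1,500 years of reviewing to reach Sanger's total
    print(3_000 / coerced_per_year)    # even 3,000 citations would take 50 years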

For inspiration in research, you should look at someone who has achieved great heights in India, like CNR Rao.

Thanks

Giridhar

Anonymous said...

> Thousands of scientists in India have 40 papers
> but not even 1000 citations. If they
> quote Sanger's example to justify their output
> and support....

This reads like a harsh judgement call, which I presume wasn't your intention.

The raw value of the total number of citations makes no sense in isolation, unless we take into account what fields the papers were published in. Statistically, an average molecular biologist who published 40 papers before 1995 would have raked in around 2000 citations by 2009. In comparison, an average mathematician with similar output would have around 200 citations.

It would be unwise for a young researcher in the mathematical sciences (or an allied field) to aspire to the citation levels achieved by their colleagues in laboratory-based sciences. Young researchers need to find role models within their own field. One of my colleagues in number theory (a tenured full Professor and a well-known authority) is the proud author of 19 journal articles that have managed to attract 110 citations at the last count! But I know many early-career academics in chemistry and materials science (5 years after the PhD) with well over 500 citations. Comparing individuals across such disciplines based on citations would be a pointless exercise.

I am not against the use of citation metrics in certain situations, for example, while comparing two academics working in the same field at a similar stage in their careers. But this data is dangerous if used thoughtlessly by search, tenure and promotion committees.

Some interesting data on citation levels within various disciplines can be found here:
http://arxiv.org/pdf/physics/0607224 (see Table I)
The idea of h-index normalization factors for various disciplines has been proposed, but I think that may just open up another can of worms, since it will encourage people to compare academics from two different disciplines using a single scalar index.
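(For anyone unfamiliar with the metric, the raw h-index is easy to compute from a list of per-paper citation counts; below is a minimal Python sketch with made-up, purely illustrative numbers.)

    # raw h-index: the largest h such that h papers have at least h citations each
    def h_index(citations):
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([50, 30, 22, 15, 8, 6, 6, 4, 1, 0]))   # -> 6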

Gengatopathy Acharya

Giri@iisc said...

Dear Sir,

I agree with you. I have written an extensive article comparing citation numbers. Please see:

http://giridharmadras.blogspot.com/2009/12/citations.html

What I meant is that citations indicate the impact of the work in some measure. To claim that Sanger published only 40 papers but won two Nobel prizes, and that therefore 40 papers is good enough for *all* scientists, and to quote Sanger's example as a justification for one's own output, is wrong.

Anonymous said...

> http://giridharmadras.blogspot.com/2009/12/citations.html
>
> What I meant is that citations indicate the impact of the work in some measure. To claim that Sanger published only 40 papers but won two Nobel prizes, and that therefore 40 papers is good enough for *all* scientists, and to quote Sanger's example as a justification for one's own output, is wrong.

Thanks for the link to your very interesting article on this topic.

My primary focus is to ensure that PhD students and early career researchers get the right kind of message. I try to emphasize what they should not be doing by pointing them towards examples of academics who have perfected the art of salami-slicing publications.

In the current cut-throat academic environment, many researchers have unfortunately forgotten that the emphasis should always be on scientific rigor and not on working out how many minimum publishable units (MPUs) one can get out of a piece of work. Some of the experts in the optimal identification of MPUs have also worked out that this strategy helps with their h-index. A friend of mine has published ~60 salami-sliced articles over 10 years or so in respectable journals, with over 200 citations and an h-index of 15. That sounds respectable, but a closer examination reveals that around 170 of those citations are self-citations! He is a bright guy, but somehow he has got it into his head that this is the only strategic way to beat the system, and the strategy got him what he wanted. I am sure there are many such examples (probably not as extreme) around us.

I suppose it finally comes down to the flexibility of one's personal ethics. Given the huge number of journals around, it is not possible to police the practice of salami-slicing any more.

Scientific rigor should always be the focus, but this doesn't necessarily translate into many citations unless the field one is working in is close to its critical mass. In some instances, good papers don't get the citations they deserve simply because the author didn't do a good job of writing the article and making it palatable for a general audience. Targeted marketing in the form of conference and workshop talks is becoming increasingly important in today's world, where many academics have short attention spans and too many papers are being published every week.

Personally, I would always go for a researcher with 40 scientifically rigorous articles over someone with 100 salami-sliced papers, simply because working out what the second person has done in his career would waste far too much of my time. This has nothing to do with Sanger's case, since I think those who have won the Swedish medal don't necessarily provide good benchmarks.

Gengatopathy Acharya