Wednesday, August 19, 2015

Rankings

Shanghai rankings.

No need for words: a comparison of IISc with two Chinese universities. All three universities had similar ranks in 2003, but the rankings have since diverged due to judicious implementation of certain policies.

http://www.shanghairanking.com/World-University-Rankings/Tsinghua-University.html

http://www.shanghairanking.com/World-University-Rankings/Zhejiang-University.html

http://www.shanghairanking.com/World-University-Rankings/Indian-Institute-of-Science.html


31 comments:

iitmsriram said...

Giri, I don't know why you say no need for words; I think there is a need. I sat down with my (IITM) Director and we went over the detailed data and methodology so that we could better understand what is driving this ranking: where we are doing OK and where we may need to improve if we want to fare better in it.

The following points may be noted with respect to the engineering rankings. Only four factors are used: the number of highly cited papers, the total number of papers, the fraction of papers in "top" journals, and funding. The ranking uses only publicly available information, so except for US universities (whose funding data is put out in the NSF annual report), all others get zero on funding. So IISc = IITM = Zhejiang = Tsinghua = 0 on this count; but this does not matter, as the funding score is not used in such cases. For the number of highly cited papers, IISc gets 13 (the benchmark is Stanford = 100), Tsinghua gets 20 and Zhejiang gets 0. This suggests Tsinghua is putting out a few more "star" papers (about 50% more than IISc), but still nowhere near the top universities of the world. If we look at the fraction of papers in top journals, IISc is doing quite well with 83 (the benchmark is UC Santa Barbara at 100; MIT, Stanford, Caltech and Princeton all score in the 90s) and beats both Tsinghua and Zhejiang, at 77 and 79 respectively (and IITM hangs in there with 75, so all four places publish roughly the same fraction of their papers in the top journals, and a good fraction too). So, based on the above factors, one should say IISc is actually doing quite well compared to Tsinghua or Zhejiang. But why the difference in rankings?

The raw / total paper count!

IISc has been publishing at a respectable 2000+ papers a year. Top places such as MIT, Georgia Tech and Caltech publish about 8000 papers a year. Per capita, IISc actually compares quite favourably; so IISc just needs to grow to 4-5 times its current size to start looking like MIT. And this is roughly what Tsinghua and Zhejiang have done to boost their rankings. In fact, if we look through the numbers, it is only the Chinese universities plus some Korean ones, NUS and the like that publish more papers than MIT. Tsinghua tops the list at 12000 papers a year. Maybe I am a fool, but when I look at MIT's 8000 and Tsinghua's 12000, I know someone is gaming the system. Zhejiang's count is about 10000.
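The raw-output comparison above can be sketched in a few lines of Python (a toy illustration only; the annual paper counts are the ballpark figures quoted in this comment, not official data):

```python
# Rough comparison of raw paper output, using the approximate annual
# counts quoted above (the commenter's ballpark figures, not official data).
papers_per_year = {
    "IISc": 2000,
    "MIT": 8000,
    "Zhejiang": 10000,
    "Tsinghua": 12000,
}

# How many times larger each institution's raw output is than IISc's;
# on raw count alone, IISc would need to grow roughly 4x to match MIT.
for name, count in sorted(papers_per_year.items(), key=lambda kv: kv[1]):
    factor = count / papers_per_year["IISc"]
    print(f"{name:>8}: {count:6d} papers/year ({factor:.1f}x IISc)")
```

Running this shows MIT at 4.0x and Tsinghua at 6.0x IISc's output, which is the growth factor being discussed.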

So, Giri, I want to ask you, what is this "judicious implementation of certain policies" that you think has propelled Tsinghua and Zhejiang from IISc level to another level? Keep hiring more people and ask all of them to keep publishing? That is the only thing that Tsinghua and Zhejiang are doing better than IISc. Is that the direction we want to go in?

Anonymous said...

Highly cited researchers are also a criterion. Go to Highly Cited Researchers and do a search:

http://highlycited.com/#china

http://highlycited.com/#india

China has 163; India has 5.

Also, just increasing numbers alone will not work; it is scaled per capita:

http://www.shanghairanking.com/ARWU-Methodology-2014.html

PCP: the weighted scores of the above five indicators divided by the number of full-time equivalent academic staff.

HiCi and PCP together carry 30% weightage; the total number of papers contributes only 20%.
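The ARWU 2014 overall weighting scheme being cited can be written out as a quick sanity check (the weights are from the methodology page linked above; the indicator scores at the end are made-up placeholders for illustration):

```python
# ARWU 2014 overall indicator weights, as listed on the methodology page
# linked above. The indicator scores further down are made-up placeholders.
weights = {
    "Alumni": 0.10,  # alumni winning Nobel Prizes / Fields Medals
    "Award":  0.20,  # staff winning Nobel Prizes / Fields Medals
    "HiCi":   0.20,  # highly cited researchers
    "N&S":    0.20,  # papers in Nature and Science
    "PUB":    0.20,  # papers indexed in citation indices
    "PCP":    0.10,  # per-capita performance (weighted scores / FTE staff)
}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights sum to 100%

# HiCi + PCP carry 30% between them; the raw paper count (PUB) carries 20%.
print(round(weights["HiCi"] + weights["PCP"], 2))  # 0.3
print(weights["PUB"])                              # 0.2

# Toy overall score for a hypothetical institution (indicator scores out of 100).
scores = {"Alumni": 0, "Award": 0, "HiCi": 13, "N&S": 10, "PUB": 40, "PCP": 25}
overall = sum(weights[k] * scores[k] for k in weights)
print(round(overall, 1))  # 15.1
```

This makes the commenter's point concrete: the 70% block (Alumni + Award + HiCi + N&S) dominates, so an institution scoring zero there cannot rank well however large PUB gets.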

S said...

@Anon: You are probably looking at the wrong ranking (ARWU). What iitmsriram is talking about is the more relevant (at least for IITM) engineering rankings, called ARWU-FIELD-ENG on the website, which have a very different methodology (and which is exactly as in the description given by iitmsriram).

iitmsriram said...

Thank you @S. I had also noted in my remarks that "... may be noted wrt engineering rankings". Anon also misses another point: increasing numbers will work, as ONLY the PCP indicator is weighted per capita; none of the other measures are, and PCP counts for only 10%. To add to my comments with a specific example on highly cited papers (which was anon's point): IISc has a score of 13, Tsinghua has 20 and Stanford gets 100. These are not computed per capita, or even as a fraction of total papers. Tsinghua publishes almost twice as many total papers as Stanford but has only one fifth as many highly cited papers, so its proportion of highly cited papers is not even close. For IISc, the total publication count is about one fourth that of Stanford, while the highly cited score is 13 vs 100. As a fraction, IISc is at about half of Stanford's level, which is not bad, I would think (and much better than Tsinghua).
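The fraction arithmetic in the comment above can be reproduced directly (the scores and relative paper volumes are the approximate figures quoted in the comment):

```python
# Highly-cited scores (Stanford benchmarked at 100) and total paper output
# relative to Stanford, using the approximate figures quoted above.
hici_score = {"Stanford": 100, "Tsinghua": 20, "IISc": 13}
papers_vs_stanford = {"Stanford": 1.00, "Tsinghua": 2.00, "IISc": 0.25}

# Normalizing the highly-cited score by publication volume gives each
# institution's highly-cited *fraction* relative to Stanford's.
for name in ("Stanford", "Tsinghua", "IISc"):
    ratio = (hici_score[name] / papers_vs_stanford[name]) / hici_score["Stanford"]
    print(f"{name:>8}: {ratio:.2f} of Stanford's highly-cited fraction")
```

This yields roughly 0.10 for Tsinghua and 0.52 for IISc, matching the comment's "not even close" and "about half" conclusions.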

Anonymous said...

" is the more relevant (at least for IITM) engineering rankings, "

What are the science faculty at IIT Madras doing? Getting paid for doing nothing? Why is only the engineering ranking relevant to an engineering institute?

Rankings are university-based. When Smriti ji says no university is ranked in the top 300, that is correct.

Anonymous said...

Why do you say "with respect to engineering rankings" but then compare overall paper counts? IISc's 2000 papers are not from engineering alone but overall; Tsinghua's 12000 is also overall.

Now let us take the IITs and IISc together. Maybe that will match 12000 papers with 4000 faculty. Now let us compare.

Number of highly cited scientists from the 15 IITs and IISc: ZERO

Number of highly cited scientists in Stanford: 59
Number of highly cited scientists in Caltech: 21

Note that Caltech has approximately the same number of faculty as IISc, i.e., nearly 8% of its faculty are highly cited scientists. The IITs and IISc are at 0 out of 4000.

The same can be said about Nature/Science papers.

Indian Scientist said...

Some of our Indian professors/scientists/students take great pride in showing off a ranking even if it is only first in South Asia!

I was about to give a comparison of Caltech vs IISc, or Janelia Farm vs IISc, but someone has already made the point.

Please focus on doing good science rather than spending energy discussing these ranking/impact factor/h-index type numbers. False pride is more dangerous than underperformance...

Anonymous said...

Implicit in your comment is the idea that the IITs are pure while other universities game the system.

Okay, sir, all IITs put together publish ~12000 papers, the same as Tsinghua. Now make a comparison:

Alumni of an institution winning Nobel Prizes and Fields Medals : ZERO
Staff of an institution winning Nobel Prizes and Fields Medals: ZERO
Highly cited researchers in 21 broad subject categories : ZERO
Papers published in Nature and Science : ZERO

The above carries 70% of the weightage, on which the IITs put together get ZERO. Other universities will get some marks here.

"Keep hiring more people and ask all of them to keep publishing? That is the only thing that Tsinghua and Zhejiang are doing better than IISc"

No, sir, they have very good faculty who are listed as highly cited researchers, and they publish in Nature/Science etc.

Regarding policies, please read their tenure policies, their promotion policies, and their policies on payment for good publications and for publications that receive a good number of citations. See how they reward their faculty for publications, patents, citations, products developed etc.

Compare this with India:

http://rankingwatch.blogspot.in/2015/06/what-indian-physicists-do.html

A vice-chancellor of a university says, "The thing is that you have a job in the university, you have a job for life, you can decide to sleep, still you will get the salary..." That tells you what the life of a professor in India is like.

Well, even Prof. Giridhar does not want tenure or reward policies implemented at IISc, so what can one say!

Ankur Kulkarni said...

Prof Sriram,

It seems you are analyzing the rankings with an agenda to dismiss them. The devil is in the details you are disregarding. If IISc gets 13 for highly cited papers and Tsinghua gets 20, that is not a minor difference. Before you dismiss it in your discussions, why not give some thought to what it would take to achieve it? A 50% increase in *highly cited* work would require significant changes and cannot be treated as trivial or small.

I think the IIXs are disingenuous in their approach to rankings. A serious and sincere analysis would have asked questions like: historically, how has this number looked for IISc/IITM and for Tsinghua? Which areas are we strong in and which need improvement? Having identified weak and strong areas, what can be done to improve further? Which groups are contributing and which are lagging behind, and why? What are the emerging "boom" areas in which we can hope to lead? Etc.

iitmsriram said...

@ankur, thanks for pointing out what appears to be my motive, but it is not so. I do not want to dismiss the ranking; I am trying to understand the underlying metrics better so that we have a better idea of (to kind of borrow from you) where we are doing OK and where we suck. And I started this thread because our host Giri seemed to have some idea about "judicious implementation of certain policies", and that is what I wanted to question, as I don't believe that is where we have an issue.

Using what you point out: yes, Tsinghua has 50% more highly cited papers than IISc, but Tsinghua has about six times the faculty head count and is publishing about six times as many papers. If one needs to publish six times as many papers to get a 50% increase in highly cited papers, I don't believe that is the right kind of model to follow; it is a clear indication that one is pushing for volume with a definite drop in quality. If we could do something to, say, double the count of papers and get a 50% increase in highly cited papers, that would be worthwhile. The point is that Tsinghua may not be the model we want to shoot for; as someone else pointed out, Caltech may be the model we want to shoot for (though I have my own questions about some of the Caltech numbers. Caltech lists its faculty head count as about 300 professorial faculty, whatever that means, and another 1200 or so non-professorial faculty. What do we take as the faculty head count, 300 or 1500? And what do we make of JPL, with about 3000 scientists on its rolls, which appears to be absorbed into Caltech in the numbers?).

One thing appears common to most of the top places: a large count of non-faculty (non-student) researchers, often exceeding the faculty count by a significant margin. However, this appears hard to work out in India. As noted under a different topic on this blog, we set up INSPIRE and other fellowships, but we seem reluctant to host the fellows in our institutions. If each of the IIX would take on 400-500 postdoc/INSPIRE or other such fellows, I think our impact, research environment and productivity would be much better. But this seems to be a hard sell and is not finding enough traction.

Anonymous said...

OK, then combine all the IITs: you will have close to 3000 faculty and 12000 papers.

For the overall ranking, you need the following


Alumni of an institution winning Nobel Prizes and Fields Medals : ZERO
Staff of an institution winning Nobel Prizes and Fields Medals: ZERO
Highly cited researchers in 21 broad subject categories : ZERO
Papers published in Nature and Science : ZERO

The above carries 70% of the weightage, on which the IITs put together get ZERO. Look at how well top universities like Caltech or Stanford fare in these categories; then you can think of performing like them.

These rankings are overall, but other rankings like THE are scaled per capita. Look at where Indian institutes stand in those rankings. For us it does not matter, because zero divided by anything is still zero.

Ankur Kulkarni said...

Again, I feel you are downplaying the key issues. The presence of ancillary researchers is not the only difference. There are vast governance differences you seem to be overlooking, as well as differences in per capita productivity, rewards, recognition, retention policies, funding, salaries... so many differences, which are perhaps more critical. When Prof Giri speaks of policies, these might be some of them.

Also, I don't get the attitude. Honestly, for institutions so far behind in the race, isn't it a bit rich to sit back and discuss which of the top models we would like, as if all you had to do was "add to cart" and your 50% additional highly cited papers would soon be delivered to you?

I don't think we are at the stage where all that is left is to "choose" a model, be it Tsinghua, Caltech, NUS or whatever. Institutional attitudes, and these endless discussions about where we want to go, seem to suggest that we have all the capabilities and only have to make up our minds about what we want to do; after that, the doing and the achieving would be easy. This is not true. On the contrary, we are way behind even in our capability to make the simplest models work.

For example: in my experience, if one spends even 6-9 months not doing research, a kind of mental obesity sets in: one becomes sluggish in thinking, ideas become more superficial, and one loses touch with where the field has moved. When you do decide to implement any model, remember that this is the inertia you will have to fight. For all the time we have idled away philosophizing about which model to choose, we will now have to get onto a high-intensity regime to overcome this obesity and begin performing again.

Does your analysis of what model we would like to follow sincerely include this and other such ground realities?

Anonymous said...

IIT Madras, where 1% of the faculty are Bhatnagar winners, compares itself with Caltech, where 10% of the faculty are Nobel Prize winners. And Bhatnagar winners are nowhere even close to Nobel Prize winners.

The IITs put together get ZERO out of that 70%. Look at what Caltech gets. Compare, and then see whether you can catch up with Caltech in 50 years.

Prof. Giri's point in holding up Chinese universities as a model is that they are similar in salaries, they too started within the last 75 years, and the ranks of IISc and Tsinghua were the same in 2003.

The IITs and IISc have no HiCi (highly cited researchers); Tsinghua has 5, Caltech 21. Now, 5 may not look much different from zero, but India as a country has only 5 while China has around 150. India had higher numbers that have dropped, while China's lower numbers have risen over the last 20 years. Understand that the number of HiCi researchers and N&S papers has increased in China while it has decreased in India.

Take all the faculty count you want. Combine all the IITs in India: you will have more papers and faculty than Caltech, Stanford or Tsinghua, but your rank will be much lower than any of them. That is the fact. Increasing faculty count alone will not help.

Ordinary Person said...

Whenever the discussion of rankings comes up, there are always some people who argue that Indian universities have a different mission/vision/scope and a different funding structure compared to the top universities in the world. I'm not sure this mission statement is precisely defined for the IITs, IISc and IISERs, or whether there is a conscious effort to achieve whatever they aim for.

Would it not be sensible for the IIX themselves to define ranking criteria relevant to them? It may be useful to see which institutions are actually doing well on the parameters that matter to Indians. In addition to teaching and research, one could then include points for factors such as affirmative action, gender equality, TEQIP programs etc.

If our goals are clear and we have proper data, university administrators will have quantifiable goals and it will be easy to modify policy to achieve them.

S said...

@anon: This prizes-and-papers game can be played in the other direction too (which is why it is mostly worthless). For example, the IITK computer science department has two Gödel Prize winners on its faculty (Caltech has 0). In the last five years, groups at MSR India have published far more papers in leading computer science venues in complexity theory than the corresponding groups at Caltech.

These numbers however mean nothing. I have worked at some of the more well known CS groups in India as well as in the US. I saw no real difference in the research culture at these places (except that one of the places in the US had the advantage of having a very large number of students and postdocs). Crucially, none of the good researchers in these places, whether in India or the US, seemed preoccupied with rankings or prize counting.

There are clearly some groups in India, in various fields, that are doing some right things, no matter what metric you look at. Perhaps the first step is to figure out what these groups are doing and what has gone right for them, and why some other groups might not be doing as well. Prize counting can come later.

Anonymous said...

It is not about individual research groups or performance in a narrow area. Nobel Prizes are given for overall achievement, not in a particular area like the Gödel Prize. Ask the man on the street whether he knows about the Nobel or the Gödel.

If there is no real difference between Caltech and IIT Kanpur, why do BTech students from IIT Kanpur go to Caltech for a PhD rather than simply continue at IITK?

It is not about small groups doing well; it is about the number of highly cited scientists and the papers in Nature/Science, which are what the rankings use.

Rankings and prizes do not bother good researchers, but they should bother administrators. Rankings are very important to administrators in the US: ask a top-50 school, and the president will tell you what he is trying to do to move the university into the top 10. Ask a director of an IIT what he is thinking about, and it is the extension of his term.

S said...

Nobel Prizes are not given for overall achievement; they are given for a particular result in a particular field, just like the Gödel Prize. There is no Nobel in computer science, and no Gödel in physics. A man on the street is likely to be even more familiar with the Oscars. "Which prizes the man on the street is familiar with" is a very lousy criterion for judging the performance of academic groups.

Yes, there are groups at IITK and Caltech, in the same field, with which I am very familiar, and between which I would not put an order. Why do you find that so surprising? I certainly know BTech students who stayed back to do a PhD at the IITs and did as well as anyone who went to Caltech or other places. But that is completely out of context here.

"It is not about small groups doing well..it is about the number of highly cited scientists and papers in nature/science which are used in rankings."

That is the problem with rankings. There are many fields where Nature/Science are not even relevant (mathematics, various areas of CS). Rankings that just look at numbers will completely ignore those fields, or shortchange them. The job of a science administrator is much more complicated than that of rankings designed for media splashes: they need to evaluate groups in different fields by the standards relevant to those fields.

"Ask a top 50 school and the president will tell you what he is trying to do so that the university will move to the top 10."

I have never heard this at the places I have been in the US. Yes, there is a lot of emphasis on specific ideas, like trying to get one senior person to nucleate a good group around her/him, and that is a good thing. But I have never heard something as fuzzy as "we need to remain in the top 10". This is probably because there are so many completely disparate rankings in the US: the ones for undergraduate education are often completely out of sync with the ones for research. I know of a leading public university that figures in the top 5 or so of almost all research-based worldwide rankings, but not in the top 10 of any undergraduate ranking in the US.

Indian Scientist said...

I just read somewhere that India produces more engineers (1.5 million), medical graduates (~35,000) and science graduates (12 million) every year than the whole population of Switzerland (8 million). Yet Switzerland is No. 1 in the Global Innovation Index and India is No. 76! Singapore, with 5 million people, is #7. Iceland, with ~325,000 people, is #19.

It is funny that our leadership thinks we are as good as others, quoting whichever numbers are convenient.

Something is seriously wrong with our primary, secondary and higher education. Come out of your shell, or else the shell will become a well and you a frog in it. At this moment, recruitment, hiring and teaching are in shambles.

Open faculty positions in all institutes to foreigners. Bring in exchange teachers from better-performing countries for primary, secondary and higher education.

Vikram said...

Why is it so sacrilegious to suggest that Indian universities, or any specific university, might have goals different from those measured in these 'rankings'? The Indian taxpayer pays for the upkeep of these universities; don't taxpayers have a right to demand that the knowledge and personnel these universities produce be prioritized towards specific goals?

Academic rankings of universities are a bad idea for the same reason JEE/AIEEE ranks are a bad idea for selecting students: they make the parties being ranked gear their efforts towards achieving a high rank rather than reflecting on their own goals. They detach these knowledge centres from their local ecologies and place them in an arbitrary international framework based on questionable metrics.

Also, people here are talking about non-faculty staff hiring at leading universities without reflecting on whether this is a sign of flexibility or an indictment of a system that has produced too many PhDs and not enough real jobs. The situation in this regard is getting desperate in many fields.

We should definitely discuss how our universities are doing. But I feel it would be better to discuss this in light of a question like 'why are there zero expert hydrologists in India?' rather than some ranking somewhere.

Vikram said...

Rankings are devastating for diversity, whether of an incoming undergraduate cohort or of universities worldwide. And entities that lose diversity quickly atrophy and die out.

The focus should really shift to the primary stakeholders (students, faculty and the public, represented by the state), and universities should be assessed on whether they provide a suitable environment and results for these people.

Ankur Kulkarni said...

http://economictimes.indiatimes.com/industry/services/education/in-a-first-two-indian-institutes-make-it-to-worlds-top-200-universities/articleshow/48966237.cms

It seems QS has made some changes to its methodology this year, and this has yielded better rankings (<200, for a change) for IISc and IIT Delhi. This means the academic leadership in India will switch momentarily from its usual mode of denialism and obscurantism to chest-thumping about how their efforts are finally being rightly recognized.

However, note how far behind we are despite this: NUS is now in the top 20! London alone has four universities in the top 40. Once this sinks in, we can go back to saying that rankings mean nothing and the methodology is flawed, and to writing upanishads comparing and critiquing various models.

Anonymous said...

Someone mentioned it is a great beginning!

With all due respect, if 1947-2015 is just the beginning phase, when will we see the rising phase? Several universities (in Singapore, Israel, China, Korea, Japan and Europe) started after Indian independence, yet they are way ahead of us.

Oh wait, I forgot: IISc was established in 1909 and has made it into the top 200. See, in just 106 years we made the first 200. Perhaps in another 100 years we will be in the top 100. Unless changes are made, this is something we can only dream about.

We have 8 universities in the top 1000. Let us go to our directors, the PM and the President, discuss, and celebrate this achievement.

Also, let us discuss how to change the methodology so that 10 Indian institutes land in the top 10 of the world! Maybe come up with a new metric/methodology.

Young people are waiting and are becoming restless with this chest-thumping.

This news should be a matter of national shame.

S said...

@Ankur: Let us look into the QS rankings a little beyond what is published in the newspapers, shall we? (If we are going to use them in any way for policy purposes, we had better know what they are measuring in the first place.)

The first step, if you are concerned with the IITs, is to look not at the general "university" rankings but at the engineering and technology rankings, since most IITs have very small humanities and science departments.

So what do we have here? Here are a few samples

Rank 1, "Score" 96.6: MIT
Rank 4, "Score" 92.5: NUS
Rank 16, "Score" 86.7: Caltech
Rank 29, "Score" 82.9: CMU
Rank 38, "Score" 81.6: UT Austin
Rank 44, "Score" 81.2: IIT Delhi
Rank 52, "Score" 80.1: IIT Bombay
Rank 79, "Score" 76.4: UCSD
Rank 81, "Score" 76.3: Ecole Polytechnique
Rank 95, "Score" 74.6: IIT Kanpur

You might also be interested to know that while all the older IITs figure in the top 100 for engineering, IISc does not. Quite wise, these ranking makers must be.

Going by the absolute faith in rankings displayed by some correspondents here, UCSD and Ecole Polytechnique should start taking immediate steps to catch up with IIT Delhi, and CMU and Caltech need to see what they can do to reach the stratospheric heights of NUS. Indeed, it should perhaps be a matter of "national shame" for France (as one of the commenters above put it).

My argument would, however, be different: in and of themselves, these rankings are meaningless. For example, in the QS rankings the largest chunk of the score is an "Academic Reputation" score, measured by sending a survey to roughly 80,000 people and then normalizing by geographical region. I would be willing to bet this normalization is what deflates the ratings of places like UCSD, Princeton, CMU and Ecole Polytechnique.


An undue, slavish attachment to these silly rankings is the worst thing that could happen to the Indian higher education and research establishment in the current situation. I would much rather go with the idea Prof. Sriram seemed to be suggesting above: look at specific places of comparable size (such as Tsinghua and Caltech) that seem to have "done well" historically, see what policies they followed, and then see which parts of those policies are implementable in our universities today.

Optimizing for the wrong objective (i.e., rankings of questionable methodology) would be the most dangerous thing to do in a country of limited resources like ours.

Ankur Kulkarni said...

@S, thanks for a thoughtful comment.

The numbers you mention are interesting. However, I would not draw the conclusion that we should ignore rankings.

I have said this before on this blog, and I continue to hold the view today: rankings are not the end; they are a means to an end. We must try to get very high rankings simply because they are great PR and give us the standing to demand greater resources (students, funds, faculty etc.) for science. Other universities worry about rankings not because they are meaningful but because they are useful. We do not do this, and we are missing a major trick.

Besides, there is some thorough analysis behind these rankings, even if there are distortions due to normalizations. There is little to disagree with in the categories they measure (except perhaps the internationalization categories). Furthermore, rankings provide us with some great data: there are meaningful sub-metrics we can and should use to inform ourselves of where we stand. I see no sincere effort at doing this (whether using the rankings or otherwise). There is no effort to tell faculty: this is how many papers/students/conferences your peers elsewhere are writing/guiding/attending, and this is where you fall short. On the contrary, all I see is people picking the lowest-hanging fruit to dismiss rankings as BS.

S said...

@Ankur Kulkarni:

I agree that one might use a *sound* ranking as one input to an improvement program. The problem is that, contrary to your claim that "some thorough analysis goes on behind these rankings", exactly the opposite seems to be the case. In the QS world rankings (not the faculty-specific rankings, for which I could not yet find links to the methodology), a 40% share of the score comes from a survey sent to roughly 80,000 respondents, "normalized" by geographical region. This is precisely the kind of thing one should not base institutional policy on. Any methodology that fails even to recognize the engineering groups at IISc (an institution, by the way, with which I have never had any formal connection) surely lacks any semblance of proper analysis.

As for other universities caring about rankings: I doubt many do. I have worked at at least one of the non-Indian places I listed, and one of the Indian ones, at both places in groups recognized in their fields as quite good. As I said above, no one in these groups seemed to care about rankings, yet they did just fine. I would also bet that CMU, Caltech, Princeton and UCSD are hardly going to fret over a lower-than-expected ranking in QS, and I hope IIT Delhi does not indulge in chest-thumping over a higher-than-expected one (I doubt it would).


I agree that some sub-metrics can be used as inputs. The problem is that the objective ones (such as citations per paper) have (a) very little weight in the rankings and (b) are subject to similar distortions. There is no substitute for in-depth, detailed analysis when it comes to assessing the impact, whether social or scientific, of research. It is much easier to go after numerical targets such as "X papers a year" than to understand, and possibly replicate, why a certain group, whether in India or abroad, might have a much better research reputation or impact, sometimes despite publishing fewer papers than others. The latter is the harder but more rewarding way to proceed. The easy "low-hanging" fruit is to try to maximize counts of all sorts without any effort at understanding the broader context.

Anonymous said...

"The problem is that the objective ones (such as citations per paper) "

Citations, publications etc. are not objective. By your logic, since the blog host here is the most cited and most published chemical engineer in India, he must be among the best in India. That would be ridiculous; he would not be listed even in the top 10 chemical engineers at IISc. That is why reputation rankings are important.

Caltech will always be more reputed than NYU, even if NYU has more papers or citations.

S said...

@Anonymous: A dictionary definition of objective is "not influenced by personal feelings, interpretations, or prejudice; based on facts". A citation count is eminently objective: it remains the same no matter who measures it and is not affected by personal feelings. A "reputation", on the other hand, is not. For example, you talked of NYU and Caltech, and in my field it would be very easy to find respectable field, whose "subjective" evaluations of the two departments would differ widely. Whether a criterion is "objective" or "subjective" does not immediately determine whether it is useful for the purpose at hand.


You make somewhat truculent, perhaps even offensive, claims about the quality of the research of Prof. Madras. That is your subjective opinion of his work. I am not a chemical engineer, and without knowing on what grounds you are making such claims, it would be completely unethical and irrational for me to accept your claim at face value. It would be equally unethical or irrational for me to accept a "reputation ranking" which are based on a survey made of some 80000 people and then normalized in some unspecified way.

S said...

"respectable field" should be "respectable people" in the third sentence of my post above.

Ankur Kulkarni said...

@S,

I think we are confusing ranks with rankings. By the former I mean the final summary that comes after all the calculations and data gathering. By the latter, I mean the process of arriving at the former and the inputs that go into this process. I would also not be slavishly attached to the ranks per se. But I do care about the processes that go into determining the rank, since most of them carry meaning relevant to teaching and research excellence. One may disagree about the normalizations, relative weights, sample sizes, accounting, and many such issues. But I find there is little to disagree about in the categories themselves (including reputation -- see below).

The opinion of an individual is subjective, but reputation, which is an overall aggregate, is an objective measure. Of course one needs a large sample size, correct randomization, questionnaires/surveys that are faithful, etc. -- these are minor matters which, if flawed, can be fixed. More importantly, like it or not, reputation is a part of research/teaching life. Students join your college based on reputation; they work with you because of your reputation. Faculty get invitations to give talks, review papers, chair sessions, etc., based on their reputations. Science is an asocial reality, but research and teaching are social phenomena, performed by socially interacting beings. Not appreciating this will lead to a wrong idea of what a university ought to be doing. Note that I am not advocating empty networking and advertising. I am only saying that reputation is a meaningful component for determining ranks.

Secondly, so what if the reputation measure is nonsense? What stops us from emailing those 80,000 survey recipients a dossier of our accomplishments? This would be good for us; I would have thought our universities would have proactively done this even if there were no QS ranking in the picture. Instead, all they have been doing is objecting "on principle" to this survey being used to determine ranks.

Anonymous said...

Anon@September 17, 2015 at 9:31 PM:

I agree that citations and publications cannot be the only criteria. In fact, if they were, there would be no need for committees to select winners of the top awards in the country: the Bhatnagar, the J.C. Bose, etc. The awards would simply go to the person with the best citation record. However, if you look at Prof. Giridhar, he has won all these awards as well. That means the peer community values his contribution significantly.

Indian Scientist said...

To those who were happy with the rankings of Indian univs and were chest-thumping:

Here is another reason to run to the directors of all the univs/IIXs.

Now we have 9 univs in the top 500 and 17 in the top 800.. maybe 50 in the top 2000.

Top in the world is CALTECH and our top is IISc .. see we matched CALTECH in ranking
second is Oxford and ours is IITB
third is Stanford and ours is IITD
fourth is Cambridge and ours is IIT Kharagpur

and the best

fifth is MIT and ours is IITM.. just the 'M' is misplaced and an extra 'I' is added.. but we are equal to MIT.

Pun intended, just in case someone is missing the point.

Stop bothering about ranks and do the hard work, PLEASE. You are wasting taxpayers' money and should feel guilty if you waste more time discussing ranks than science with your directors.