Who’s Number One?

     In recent years, working as a college counsellor, I have encountered an increasing, sometimes frenzied preoccupation with university rankings. Parents and students will deem the education provided by a university in tenth place on a ranking to be orders of magnitude inferior to that offered by the university in the number one spot. Rankings are thus treated as absolute predictors of educational value and career success. It is as if the world of the Dow Jones Industrial Average had descended upon the lofty bastions of higher cognitive function. Few, however, understand what the numbers actually mean or how they are computed.

     Avid consumers of university rankings often misunderstand the importance of impact factor. Impact factor is a technical term referring to the statistical frequency with which a given person’s scholarly publications are cited. Thanks to well-developed software tools, it is now possible to scan the entire corpus of scholarly publications in a year and ascertain how often an individual’s research articles are cited by peers, i.e. fellow researchers at other universities worldwide.

     Impact factor plays a key role in establishing the reputation and ranking of a university. By adding up the impact factors of all researchers in all the departments of a university, ranking agencies can establish a metric of how much that university’s research resonates in other institutions. Research universities such as Oxford, Cambridge, the Ivy League and EPFZ dominate impact factor rankings and for this reason are considered leading institutions in their fields.
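
     To make the arithmetic concrete, the short sketch below (in Python, with invented departments, researchers and citation counts) shows one way such an institution-level score could be aggregated from individual citation tallies. It is a purely hypothetical illustration, not the formula used by any actual ranking agency.

# Purely illustrative sketch: aggregating per-researcher citation counts
# into an institution-level "research impact" score. The data and the
# simple sum/average below are invented; real ranking agencies apply
# their own (often proprietary) weightings.

# Hypothetical citation counts gathered for one year, per researcher.
citations_per_researcher = {
    "Physics": {"Researcher A": 240, "Researcher B": 95},
    "History": {"Researcher C": 30, "Researcher D": 12},
    "Biology": {"Researcher E": 410},
}

def institution_impact(departments):
    """Sum citation counts across every researcher in every department."""
    return sum(
        count
        for researchers in departments.values()
        for count in researchers.values()
    )

def per_researcher_average(departments):
    """Average citations per researcher, a simple size-adjusted variant."""
    counts = [c for dept in departments.values() for c in dept.values()]
    return sum(counts) / len(counts)

print("Total citations:", institution_impact(citations_per_researcher))
print("Average per researcher:", round(per_researcher_average(citations_per_researcher), 1))

     Note that a raw sum of this kind rewards sheer size and research volume; it says nothing about how well any of those researchers teach first-year undergraduates.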

     The problem with impact factor-based rankings is that they say little about the actual quality of undergraduate teaching. Institutions that prioritise research output are likely to do so at the expense of undergraduate teaching. Furthermore, only a fraction of the students striving to enter those brand-name universities care at all about the research being conducted there, and many are even put off by the prospect of doing research themselves.

     The fallacy that a top research university must necessarily be a top undergraduate teaching university can be illustrated with a simple example: a renowned university professor whose research is widely cited will contribute to the prestige of the university that hires him by winning, say, a Nobel Prize. But it is unlikely that a Nobel Prize winner will agree to teach undergraduates who are barely beginning to learn the rudiments of a discipline. It’s the same logic with Serena Williams, who does not really have time to teach me the rudiments of a backhand, or the prima ballerina of the Bolshoi Ballet, who does not have the leisure to be my dance partner at beginners’ salsa lessons: these stars are too busy staying at the top of their game and competing with others to preoccupy themselves with neophytes.

     The sad and often overlooked reality about top-ranking universities is that the top professors who give those universities their prestige are rarely, if ever, the people actually doing the teaching at undergraduate level. Much of that teaching is delegated to adjunct faculty or to doctoral student assistants. As a consequence, undergraduate teaching at the top-ranking universities can often be of lower quality than what one could find in less highly ranked colleges. (In fairness, Oxford and Cambridge do guarantee highly personalised tutoring by high-profile scholars with doctorates, in what is called the tutorial system, which has been the basis of teaching there for centuries.)

     It is for this reason that truly savvy consumers of university education aim for top-ranking universities at the post-graduate level but, for their undergraduate studies, prefer to apply to smaller liberal arts colleges, where institutional priorities are focussed on the quality of teaching. A measure often left out of rankings is student satisfaction. There is a strong case to be made that this metric should weigh more heavily in the minds of undergraduate candidates just beginning their university experience.

     Somewhat perversely, colleges are playing the system, and not always with the best interests of students in mind. For example, colleges which reject a greater number of applicants are considered to be “more selective” and therefore “better”. In an effort to boost their selectivity, many colleges entice vast numbers of candidates to apply, only to turn down most of them and then boast in the rankings that only 5% of applicants were accepted.

     Concerned parents, enticed by a mushrooming industry of educational consultants, are spending vast sums on expensive agents who embellish and inflate their children’s accomplishments, sometimes giving unauthorised help in producing polished personal essays. Some provide SAT cram courses, sometimes at the expense of actual learning and discovery, in order to ensure that clients gain entry into a top-tier school. While some of those students may genuinely be Ivy League candidates, many are simply excelling at jumping through the hoops laid down by the admissions process. What are we aiming for here: knowledge for knowledge’s sake (see my article in the Shanghai Daily) or naked social advancement?

     Powerful economic interests are at stake in the university market, particularly in the United States, where so many of the universities are private. In Europe, by contrast, where most universities are public, there is less pressure to “burnish one’s brand”, on the part of both students and universities.

     A few universities in the United States are trying to move away from these reductionist practices. Some colleges with sterling reputations are asking to be left out of rankings, while making a case for the intangible and unquantifiable aspects of a good education. Some colleges are even ditching the SATs, which play such a tyrannical role in the admissions process. Paradoxically, some of those colleges have the highest success rates in sending graduates to post-graduate programmes at Ivy League schools, but few seem to know or care, simply because those colleges don’t wield the “right” marketing and ranking numbers that sway potential “clients”.

     It’s not my intent to discredit Ivy League institutions. The vast financial endowments these institutions possess to support research, their select and highly motivated student bodies, their productive and highly distinguished faculty, their vast library holdings and their state-of-the-art infrastructure all contribute to an exceptional educational experience (particularly at the post-graduate level). However, contrary to the misinterpreted ranking ‘dogma’, these schools do not have a monopoly on excellent undergraduate education.

     This article serves merely to point out that if we have so much riding on the question of rankings, we should be better informed about how those rankings are generated and about how widely the results of different rankings diverge. If our priority is employability or student satisfaction, we should focus on that rather than on impact factor. In short, we should question the premise that there is one and only one variety of optimal education. Ascertaining educational quality is more nuanced than horse racing or Formula One.

 

Dr. Luis Murillo is a graduate of the Universities of Toronto and Fribourg in Switzerland. He is a professor of Psychology at McDaniel College and has worked for many years as a college counsellor in Europe and Asia.