One of the universities I worked in ran an advertising campaign with the strapline ‘It’s not the letters after your name that matter: it’s the name after your letters’. At the time, it attracted a good deal of criticism from within the university: the campaign reduced higher education to a mere positional good, valuing the degree only relative to the standing of other universities. It was the language of competition and league tables, not the language of the intrinsic worth of higher education.
This sense of relative value in universities is at its purest in business schools. Almost all business schools teach the same thing: the accreditation arrangements see to it that the curriculum for most MBAs is pretty standardised (strategy, operations, human resources, marketing, and so on). But the cost of an MBA varies enormously, from the colossal fees charged at the top American business schools to the much more modest fees in lower-ranked universities. You are not paying for the curriculum, which is pretty well standard, nor for the quality of teaching, which is probably nothing special, but for the opportunity to rub shoulders with other people who can pay hefty fees. You are paying for your alumni group. Given all this noise in the higher education marketplace, wouldn’t it be really good if we actually had a mechanism for making some real comparison of the outcomes of higher education: a systematic and reliable comparator which might tell us that graduates from Harvard were not, actually, all that much better than graduates from Des Moines Community College (or, of course, that they were), or, a more realistic comparison, how the outcomes of the American university system compared with those of the Australian, Canadian or South African systems?
This was part of the background to the OECD AHELO project – the Assessment of Higher Education Learning Outcomes. There were other parts of the background too. As mass higher education becomes entrenched across the world, employers and policy makers have started to ask pretty much the same questions about universities as they ask about school systems: is it worth it? How good is it? How effective is it at preparing young graduates, of whom there are now millions each year, for the labour market, or for adult life? This wasn’t a question which needed to be asked when universities catered only for a tiny elite; it’s pressing now that they operate on a vast scale.
Without a systematic comparison, all we have are commercial league tables, of which there are now a huge number. The Guardian, Sunday Times, and Which? produce league tables of UK universities; USA Today produces league tables of American universities. The Times Higher, QS and Shanghai Jiao Tong are the most widely disseminated global league tables. There are rather more niche league tables: of the most expensive universities for accommodation, of student facilities, of the most gay-friendly universities, and so on. By and large, for all the grumbling, university leaders quite like this plethora: there’s always a league table, somewhere, in which your own university does well, because the metrics are always weighted slightly differently. In some universities, junior administrators labour over the data to produce graphs comparing the university’s performance in different league tables, or to show how, by slightly altering the weighting, a quite different result would appear. One reasonably constant feature, however, is this: English-speaking universities do well internationally, and long-established research-intensive universities do well nationally. The league tables present a recognisable world, and a comforting one: in today’s Times Higher league table, thirty-four UK universities appear in the top 200.
And it was this which killed AHELO, effectively pulled a couple of weeks ago. The intention was to develop a PISA-type test which would be administered in all participating universities to explore higher education learning outcomes and make comparisons, at least at system level. OECD stressed the diagnostic value of such a test in soothing terms: it would be “a ‘low stakes’ voluntary international comparative assessment designed to provide higher education institutions with feedback on the learning outcomes of their students which they can use to foster improvement in student learning outcomes”. But it rapidly became apparent to the universities at the top of current league tables, and to the countries in which those universities were located, that such a survey could only be bad news for them. If the results confirmed that Harvard, Oxford, Cambridge, Stanford, Toronto and so on were secure, there was no gain. If they produced a different picture, the emperor had no clothes. The risk was too great, and it was certainly not “low stakes”. The arguments were mounted slightly differently, of course: American and Canadian university presidents wrote to the OECD to argue that “AHELO fundamentally misconstrues the purpose of learning outcomes, which should be to allow institutions to determine and define what they expect students will achieve and to measure whether they have been successful in doing so”. After eight years of planning, in the face of opposition from the most powerful universities in the world, AHELO collapsed as major university systems withdrew, leaving, said Andreas Schleicher, only Norway, China and Finland as core participants.
But the questions remain. As governments around the world realise that they simply cannot afford to publicly fund mass higher education systems and, like the English and Australian governments, look at combinations of loan and graduate-tax systems, some really difficult questions arise: what does quality higher education look like in the twenty-first century? How much should it cost? What outcomes should we expect? The IOE launches, this month, its new ESRC research centre, the Centre for Global Higher Education, which, over the next five years, will begin to underpin discussion of the role, contribution and nature of universities around the world with hard evidence. And as the OECD withdraws, a different threat emerges. The global accountancy firm Deloitte has announced that it has changed its selection process so that recruiters do not know where candidates went to university, in order, it says, to prevent “unconscious bias”. It’s not the only recruiter to begin to screen out information on academic background. If no-one knows the name after your letters, how much is it worth?