Research assessment is only partly reliable as an indicator of the real quality of the work going on in higher education. It has a dual character. On the one hand, it is rooted in material facts and objective methods: strong research quality and quantity should be, and are, rewarded in the UK Research Excellence Framework (REF), the results of which have just been published.
But the outcome is also shaped by the universities, which select and fashion data for competitive purposes, and by the subject area panels, which define what research is judged to be outstanding on a global scale.
Total research activity can never be fully captured in performance data. Some things, such as citations in top journals, are easier to measure than others, such as the long-term impacts of research on policy and professional practice. Experienced players are best at gaming the system in their own interest.
A very strong overall REF performance signifies a large concentration of outstanding work. It is an unambiguous plus. All the same, precise league table positions in the REF, indicator by indicator, should be taken with a grain of salt.
The impact of impact
In the REF the indicators for ‘impact’, new to the 2014 assessment, are the least objectively grounded and the most vulnerable to manipulation. This is because of the intrinsic difficulty of measuring the changes to society, economy and policy induced by new knowledge, especially in the long term, and because of the kind of crafted ‘impact-related’ data collected during the REF assessment process. A sophisticated industry has already emerged in the manufacture of examples of the relevant ‘evidence’ of impact. Thus the REF assesses simulations of impact, rather than actual impact.
At best, this gets everyone thinking about real connectivity with the users of research, which is one (though only one) of the starting points when producing the impact documentation. At worst, it leads to data that bear as much relation to reality as the statements of output by Russian factories in response to Soviet-era targets. Inevitably, the universities most experienced and adept at managing their response to performance measures of all kinds perform especially well in producing impact documentation. There is also a ‘halo’ effect, of the kind that affects all measures contaminated by prior reputation. Research at, say, Imperial is seen to have impact precisely because it is research from Imperial.
The most meaningful REF indicators are those related to ‘output’ quality, such as the grade point average (GPA) and the proportion of research ranked at 4*, the top mark. These are grounded in considered judgments of real research work by panels with significant expertise. All the same, the output indicators, as standardised measures of comparative quality, are subject to two caveats.
‘Getting better all the time’—or is it?
First, between the 2008 RAE and the 2014 REF there has been a notable inflation of the proportion of UK research outputs judged to be ‘world leading’ (rated 4*) and ‘internationally excellent’ (rated 3*).
In 2008, just 14% of research outputs were judged to be 4* and 37% were judged to be 3*, a total of 51% in the top two categories. In 2014, the proportion of the work judged to be outstanding had somehow jumped to 72%, with 22% judged to be 4* and another 50% judged to be 3*. This phenomenal improvement happened at a time when resources in higher education were constrained by historical standards.
While genuine improvement has no doubt occurred in at least some fields, the scale and speed of this improvement beggar belief. It reflects a combination of factors that generate boosterism. Higher education institutions (HEIs) have a vested interest in maximising their apparent quality; subject area panels have a vested interest in maximising the world-class character of their fields; and UK higher education and its institutions are competing with other nations, especially the United States, for research rankings, doctoral students and offshore income.
The inflation of 4*s and 3*s is a worrying sign of a system in danger of becoming too complacent about its own self-defined excellence. This is not the way to drive long-term improvement in UK research. Less hubris and more hardnosed Chinese-style realism would produce better outcomes. It would be better to rely less on self-regulation, enhance the role of international opinion, and spotlight the areas where improvement is most needed, rather than collapse into boosterism.
The selectivity game
Second, HEIs can readily game the assessment of output quality by being highly selective about whose work they include in the assessment. Including only the best researchers pushes up the GPA and the proportion of research ranked at 4*. HEIs that do this pay a financial price: their apparent volume of research is reduced, and their subsequent funding will fall. Nevertheless, it is good for reputation, which has many long-term spinoffs, including financial benefits.
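The arithmetic of this selectivity trade-off can be sketched with a few lines of code. The staff scores below are purely hypothetical, invented for illustration; they are not drawn from any actual REF submission:

```python
# Hypothetical output scores for one department's staff on the REF 0-4* scale.
# Illustrative only -- not data from any real submission.
full_cohort = [4, 4, 3, 3, 2, 2, 1]

def gpa(scores):
    """Grade point average of the submitted outputs."""
    return sum(scores) / len(scores)

def research_power(scores):
    """Volume-weighted quality: staff submitted multiplied by GPA."""
    return len(scores) * gpa(scores)

# Inclusive entry: every research-active member of staff is submitted.
inclusive_gpa = gpa(full_cohort)                # ~2.71
inclusive_power = research_power(full_cohort)   # 19.0

# Selective entry: only the four strongest researchers are submitted.
selective = sorted(full_cohort, reverse=True)[:4]
selective_gpa = gpa(selective)                  # 3.5 -- GPA rises
selective_power = research_power(selective)     # 14.0 -- volume, and funding, fall
```

The selective entry looks better on the quality league table (GPA up from about 2.71 to 3.5) while the institution's research power, and hence its funding base, shrinks, which is exactly the trade-off described above.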
While some HEIs have chosen to approach the REF on an inclusive basis, others have pursued highly tailored entries designed to maximise average output quality and impact.
With the data from each HEI incomplete as a census of all research activity, and individual HEIs pursuing a variety of strategies, the REF essentially does not compare like with like. This undermines the validity of the REF as a league table of system performance, though everyone treats it that way. The same factor also undermines the value of performance comparisons between the 2008 RAE and the 2014 REF. The trend to greater selectivity, manifest in some but not all HEIs, is no doubt one of the factors that has inflated the incidence of 4*s and 3*s.
REF results in Education and the effect of IOE’s inclusive approach
Both of these tendencies—the inflation of outstanding performance, and the gaming of the system by being highly selective about the research on which the institution is judged—are apparent in the field of Education. In Education the proportion of work judged to be at 4* level doubled in the six years between research assessments, from 11% in 2008 to 22% in 2014. There were also changes in the ordering of institutions by quality of outputs, driven in part by institutions’ gaming strategies.
The UCL Institute of Education (IOE) again submitted by far the largest entry, with 219 fulltime equivalent (FTE) staff, much the same as the 218 in 2008. The IOE took the inclusive approach to research assessment, and in that sense its REF results are a more accurate indicator of real research quality than is the case in some HEIs. In terms of total ‘research power’, the number of staff multiplied by the average assessment of quality (the GPA), the IOE achieved 703 points in the 2014 REF, which was more than four times the level of the number two institution in the field of Education, the Open University (164). Oxford was third at 140, followed by Edinburgh at 128 and King’s College at 124. As in 2008, the IOE is again confirmed as perhaps the world’s most important producer of globally significant research in the field of Education.
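The ‘research power’ metric quoted above is simply FTE staff multiplied by GPA, so the figures can be cross-checked. Note that the implied GPA below is back-calculated from the FTE and power figures in the text, not independently sourced:

```python
# 'Research power' = FTE staff submitted x GPA (grade point average).
# FTE and power figures are the 2014 REF numbers quoted in the text;
# the GPA itself is derived here, not separately reported.
ioe_fte = 219
ioe_power = 703
implied_gpa = ioe_power / ioe_fte   # ~3.21 on the 0-4* scale

# Gap over the second-placed institution in Education (Open University, 164):
ratio = ioe_power / 164             # ~4.3, i.e. more than four times
```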
However, whereas in the 2008 RAE the IOE was ranked equal first in terms of the quality of research outputs, in the 2014 REF it had slipped to equal 11th position. This was not due to any decline in the quality of outputs. In 2014 the proportion of IOE research judged to be at 4* level was 28%, up from 19% in 2008, in line with the trends in the REF overall and in the field of Education. The proportion of work ranked at 3* also rose, from 38% to 40%, and 74% of the IOE’s research was ranked at the maximum possible level for impact. The IOE prepared 23 cases for impact evaluation; the next largest submission in the field of Education included only six cases.
Most of the HEIs that equalled or went past the IOE in 2014 on the basis of average output quality in Education submitted more selective staff lists than those used in 2008. Edinburgh cut its staff input from 85 FTE in 2008 to 40 FTE in 2014, Nottingham from 51 to 25, Birmingham from 47 to 24, Cambridge from 50 to 34, Bristol from 43 to 35, Durham from 31 to 25 and Sheffield from 24 to just 15.
Only Oxford, Exeter and King’s College London slightly increased their staff numbers in Education, though all three remained relatively ‘boutique’ in character, with 20% or less of the IOE staff complement.
Oxford and King’s improved their overall REF performance in many fields of study, lifting their position within the top group of UK HEIs. This indicates either genuine research improvement or more careful vetting of the best four publications per staff member that form the basis of the evaluation of outputs.
However, the largest volume of high quality research, 5.33% of total UK ‘research power’, was generated at the IOE’s parent university, University College London. Like the IOE, UCL takes the inclusive approach to research assessment. UCL’s share of research power rose sharply from its previous level of 3.83% in 2008. Following its mergers with the School of Pharmacy and the IOE, UCL is now the largest fish in the UK pond. Oxford is second at 5.19% and Cambridge third at 4.49%, followed by Edinburgh (3.60%) and Manchester (3.18%).