The government is currently looking long and hard at the school accountability framework. In February, it published a thoughtful consultation document on Key Stage 4 accountability, and a similar document on Key Stage 2 is expected shortly. The headline performance measures for schools have always been the Key Stage 2 expectations for primary schools and the Key Stage 4 measures for secondary schools. The focus on Level 4 performance (at Key Stage 2) and on threshold GCSE performance at grades C and above (at Key Stage 4) has focused minds and energy, whilst at the same time driving some behaviours in schools which mean that resource and effort are concentrated on marginal performance at critical grade boundaries. Nonetheless, the focus on floor targets has been a powerful driver of improved performance, especially in English and Mathematics.
With little fanfare, the government has now published minimum performance standards for ‘Key Stage 5’ – that is, for 16-19 providers. The performance standards are long overdue: there is too much poor and often unviable provision at 16-19, and comparatively little sustained scrutiny of performance across the sector. The government is right to develop common expectations covering schools and colleges, and to try to develop indicators which assess performance in A-levels and other academic and vocational qualifications taken at Level 3. But at the same time that it is consulting intelligently about Key Stage 4 accountability, it appears to have developed indicators which will drive some perverse behaviours at Key Stage 5. The KS5 minimum standard will describe a school sixth form or college as underperforming if its results show that fewer than 40 per cent of students achieve an average point score per entry, in academic or vocational qualifications, equal to the fifth percentile of providers nationally.
The key flaw is simple, but technical. The current KS5 performance tables present two sets of data on institutions’ achievement: an average points score per student, and an average points score per entry. The points score is derived from the national points tariff – 300 points for an A* at A-level, 270 for an A, 240 for a B and so on, with a parallel tariff for approved vocational qualifications. However, the KS5 minimum standard is set as an average points score per entry, not per student. The perverse incentive is easily illustrated: imagine a student predicted to score CCE at A-level. She has an average points score per entry of 190 (570/3). But if the school were to counsel her to drop the subject in which she is predicted an E, her average score per entry rises to 210: the measure has shifted, but the performance of the school or college has not. In this instance, the tactic which is in the best interests of the institution is not clearly in the interests of the student, whose curriculum is narrowed. Of course, this is a single case, but some institutions are managing very small cohorts: almost 600 institutions have cohorts of fewer than 125 students. Given an indicator triggered when fewer than 40 per cent of students reach the points threshold, institutional behaviour of this sort could make a difference.
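The arithmetic above can be sketched in a few lines of code. This is a minimal illustration, not official DfE methodology: it assumes the tariff continues in steps of 30 below a B (so C = 210, D = 180, E = 150), consistent with the CCE worked example in the text.

```python
# Hypothetical A-level points tariff, extrapolated from the text:
# A* = 300, A = 270, B = 240, "and so on" in steps of 30.
TARIFF = {"A*": 300, "A": 270, "B": 240, "C": 210, "D": 180, "E": 150}

def average_points_per_entry(grades):
    """Average points score per entry for one student's grades."""
    points = [TARIFF[g] for g in grades]
    return sum(points) / len(points)

# The student predicted CCE, before and after dropping her weakest subject.
full = average_points_per_entry(["C", "C", "E"])  # 570 / 3 = 190.0
trimmed = average_points_per_entry(["C", "C"])    # 420 / 2 = 210.0
```

Dropping the E entry raises the per-entry measure from 190 to 210, even though nothing about the teaching has changed – the institution's indicator improves while the student's curriculum shrinks.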
The relationship between the average points score per student and the average points score per entry is strong: that is, schools and colleges with a high average points score per student tend also to have a high average score per entry. The graph sets out the relationship based on A-level scores in 2012, with the red line indicating the lowest quintile of institutions. This is partly a consequence of a strongly selective post-16 structure in which some institutions set relatively high entry requirements at GCSE – and note that the DfE KS5 floor target is a norm-referenced measure against the performance of the sector as a whole, rather than a progress measure from 16, for which the data do exist. But the relationship is not absolute, and is weakest in the lowest quintile of performers, again suggesting considerable scope for institutional response to perceived signals in the accountability regime.
Relationship between average A-level score per entry and average A-level score per student, 2012:
It would be relatively easy to replace the planned per entry indicator with a per student indicator. As the graph indicates, this would be neutral for most institutions, but it would send important signals to those institutions that may be at risk of receiving a notice to improve: it is students who matter.
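The difference between the two indicators can be made concrete with a sketch. Extending the same hypothetical tariff as before (steps of 30 below a B), a per-student measure divides total points by the number of students rather than the number of entries, so counselling a student to drop a subject lowers the measure instead of raising it, removing the perverse incentive. The cohort here is an illustrative single student, not real data.

```python
# Hypothetical tariff extrapolated from the text: A* = 300 down in steps of 30.
TARIFF = {"A*": 300, "A": 270, "B": 240, "C": 210, "D": 180, "E": 150}

def per_student(cohort):
    """Average points per STUDENT: total points / number of students."""
    total = sum(TARIFF[g] for grades in cohort for g in grades)
    return total / len(cohort)

def per_entry(cohort):
    """Average points per ENTRY: total points / number of entries."""
    entries = [g for grades in cohort for g in grades]
    return sum(TARIFF[g] for g in entries) / len(entries)

# One-student cohort: CCE, then the same student after dropping the E.
cohort_full = [["C", "C", "E"]]
cohort_trimmed = [["C", "C"]]

# Per entry, dropping the E flatters the institution: 190.0 -> 210.0.
# Per student, it does the opposite: 570.0 -> 420.0.
```

Under the per-student indicator, the institution's score falls when a student's curriculum is narrowed, so the measure points in the same direction as the student's interests.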