A-levels: How controversial algorithm behind moderation row works

Thousands of angry students have seen their A-level results downgraded by a new moderation process introduced after the coronavirus crisis led to exams being cancelled.

Teachers submitted predicted grades, alongside a ranking of students, with pupils then given a final grade calculated by exam boards.

Almost two in five (39.1%) of teachers' grade estimates for pupils in England were adjusted down by one grade or more by the moderation algorithm, which amounts to around 280,000 entries.

Prime Minister Boris Johnson has dismissed accusations the system is disadvantaging pupils from poorer backgrounds, saying the marks are “robust” and “dependable”.

So how exactly were the A-level results worked out? Ashwin Iyengar, a PhD student in pure mathematics at the London School of Geometry and Number Theory, gives Sky News his thoughts on the process.

The document containing the algorithm is written in such a way that, unless you already work for a test centre (or spend a week researching this), it's an uphill battle to understand what's going on.

There are a few inputs which determine your grade.

1. The most important is the grade distribution of your test centre from the previous years, 2017-2019 (for some subjects, such as phase 3-4 GCSEs and certain A-levels, they only use 2019).

2. The second most important is your "rank" within the test centre, which is based on (a) how the test centre ranks you, and (b) your centre assessment grade, which seems to be what your teacher thinks you will achieve, possibly informed by mock exams.

3. Lastly, your previous exam results (for GCSEs they look at key stage 2 assessment scores, and for A-levels they look at GCSE scores). They look at this both for you and for people who took the exam in previous years.

They then seem to take the grade distribution of past students at your test centre (1), and decide your grade based on where your rank (2) falls within that distribution.

So for instance, if you’re halfway down the ranking list, then your grade is roughly whatever the person halfway down the ranking list in previous years obtained.
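That rank-to-grade step can be sketched in a few lines of Python. This is only an illustration of the idea described above, not Ofqual's actual model; the centre data, grade shares, and function names are all made up for the example.

```python
# Simplified sketch of the rank-based standardisation step described
# above. Not Ofqual's actual implementation: the historical shares
# and student names below are illustrative.

def assign_grades(ranked_students, historical_shares):
    """Map this year's ranked students onto the centre's historical
    grade distribution.

    ranked_students: list of students, best-ranked first.
    historical_shares: dict of grade -> share of past entries at the
    centre, ordered from best grade to worst.
    """
    n = len(ranked_students)
    results = {}
    position = 0      # how many students have been assigned so far
    cumulative = 0.0  # running share of the historical distribution
    for grade, share in historical_shares.items():
        cumulative += share
        # Everyone up to this cumulative share of the ranking
        # receives this grade.
        cutoff = round(cumulative * n)
        for student in ranked_students[position:cutoff]:
            results[student] = grade
        position = cutoff
    # Any rounding leftovers get the lowest grade.
    for student in ranked_students[position:]:
        results[student] = grade
    return results

# A centre that historically awarded 20% A, 50% B, 30% C:
past = {"A": 0.2, "B": 0.5, "C": 0.3}
ranking = ["p1", "p2", "p3", "p4", "p5",
           "p6", "p7", "p8", "p9", "p10"]
print(assign_grades(ranking, past))
```

Note that the student halfway down the ranking ("p5") lands on a B here purely because half of past entries at this hypothetical centre scored B or better; their own work this year plays no part in the calculation.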

This definitely perpetuates inequality: the government assumes that you will score low if your centre scored low in the past. And they assume you’ll score highly if your centre scored highly in the past.

But there is one technical wrinkle.

So far I’ve only included input (1) and (2).

However, input (3), your previous scores, also affects the grade distribution.

How does it affect the distribution?

For A-levels, instead of just using your test centre's raw historical grade distribution, they try to estimate what that distribution would have looked like if past cohorts had had GCSE scores similar to yours, and they adjust for the difference.
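One way to picture this adjustment is as a weighted blend between the centre's history and a prediction derived from the cohort's GCSE profile. Ofqual's real model is more involved; the function, inputs, and numbers below are a hypothetical simplification to show the shape of the idea.

```python
# Hedged sketch of the prior-attainment adjustment described above.
# This is NOT the official specification: it simply blends the
# centre's historical grade shares with a (hypothetical) prediction
# based on the current cohort's GCSE results.

def adjusted_distribution(centre_history, prior_attainment_pred, weight):
    """Return a grade distribution blending the centre's historical
    shares with a prediction from this cohort's prior attainment.

    centre_history: dict of grade -> historical share at the centre.
    prior_attainment_pred: dict of grade -> share predicted from the
        cohort's GCSE results (an assumed input for illustration).
    weight: trust placed in the prior-attainment prediction, 0 to 1.
    """
    return {grade: (1 - weight) * centre_history[grade]
                   + weight * prior_attainment_pred[grade]
            for grade in centre_history}

# A centre that historically awarded 20% A, 50% B, 30% C, whose
# current cohort's GCSE results predict a stronger outcome:
history = {"A": 0.20, "B": 0.50, "C": 0.30}
predicted = {"A": 0.35, "B": 0.45, "C": 0.20}
print(adjusted_distribution(history, predicted, 0.5))
```

In a blend like this, if `weight` is small the centre's historical distribution still dominates, and the adjustment changes little in practice.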

What is unclear to me is whether this technical wrinkle has all that much of an effect, which I can’t tell without looking at the data.

My suspicion is that it won’t have too much of an effect, and even if it does have an effect, it’s not making the system much fairer.

It’s just a statistical trick that makes it look like they’re including information about your previous scores, when really the thing that determines your grade is previous years’ students’ scores.

The other thing that is unclear to me is whether they are doing this subject-by-subject, or whether they’re looking at mean (arithmetic average) GCSE score.

The document provided doesn’t make it clear which is the case, unless you already know the lingo.

In summary, people who come from areas where people have scored low are assumed to score low this year, and people who come from areas where people have scored high are assumed to score high this year.

It seems pretty unfair to me.

There are surely better ways to determine people’s futures which don’t perpetuate the systemic inequality already entrenched in the way education works in this country.
