FORECASTING IN EDUCATION


As anyone who has been following the A-level fiasco in the UK will know, the U-turn by the UK Education Secretary, Gavin Williamson, makes it clear that a huge blunder has been made!

But how did we get to this point?

It all started when, due to the coronavirus pandemic, schools across the world were shut down. Coming at a time when final-year exams were being held for school-leaving students in multiple countries, this froze education systems across the globe. Responses varied across nations: while India decided to go ahead with its exams at a later date, the UK opted for a ‘predicted grade’ system. This means that instead of sitting actual exams, students would receive a predicted grade generated by an algorithm provided by the Office of Qualifications and Examinations Regulation (Ofqual). (Ofqual has published a detailed technical report about the algorithm used to determine grades for A-level and GCSE students in England in 2020. The entire 319-page report can be found here: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/909368/6656-1_Awarding_GCSE__AS__A_level__advanced_extension_awards_and_extended_project_qualifications_in_summer_2020_-_interim_report.pdf)

Essentially, in lieu of actual scores, the algorithm took into account factors such as historic grades at each school, the distribution of grades across the country, teachers’ estimated grades and comparative rankings of students, and the correlation of grades across years.
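To make the mechanics concrete, here is a minimal illustrative sketch, not Ofqual’s actual code: the grade scale, function name and numbers are all hypothetical. It shows the core idea of imposing a school’s historical grade distribution on a teacher-supplied rank order of students:

```python
# Illustrative sketch (NOT the actual Ofqual algorithm): assign grades to a
# school's cohort by imposing the school's historical grade distribution on
# the teacher-supplied rank order of students.

def assign_grades(ranked_students, historical_distribution):
    """ranked_students: names ordered best to worst, per teacher ranking.
    historical_distribution: {grade: fraction} from previous years' results."""
    n = len(ranked_students)
    grades = {}
    start = 0
    # Walk the grade ladder from best to worst (hypothetical A*-E scale),
    # handing each grade to its historical share of the cohort.
    for grade in ["A*", "A", "B", "C", "D", "E"]:
        share = historical_distribution.get(grade, 0.0)
        count = round(share * n)
        for student in ranked_students[start:start + count]:
            grades[student] = grade
        start += count
    # Any remainder left over from rounding gets the lowest grade.
    for student in ranked_students[start:]:
        grades[student] = "E"
    return grades

cohort = ["Asha", "Ben", "Chloe", "Dev"]
history = {"A": 0.25, "B": 0.5, "C": 0.25}
print(assign_grades(cohort, history))
```

Note what is missing from the inputs: nothing about any individual’s own trajectory. Swap a strong student into a cohort whose school historically produced no A grades, and this scheme cannot award one.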

What really happened?

When the algorithm assigned predicted grades, the end result was that a whopping 39% of students had their grades downgraded, while 2.2% had their marks upgraded. Interestingly, the students who got upgrades tended to be from private schools with privileged backgrounds, while the downgraded students tended to be from public schools with middle-class or lower-middle-class backgrounds. As a result, there has been a huge uproar among students across the UK, along with allegations of classism and unfairness. Bright students in historically underperforming schools have been the biggest victims of this fiasco, as have schools that had been improving over the years.

The algorithm failed to create a direct connection between an individual’s prior achievement and their predicted grade, which left the futures of all those students who were relying on high grades to get into good colleges and universities in limbo. (For those interested in a deeper analysis of the various factors in the algorithm, I have found this to be a very comprehensive review of the methodology – https://rpubs.com/JeniT/ofqual-algorithm)

“Using the Right Data with the Right Process is the only way to get Right Results”

My perspective is that the methodology suffers from three technical issues: weighting, historical data, and incorrect correlation.

1. Weighting – Ignoring, or giving lower weightage to, variables like teacher recommendations appears to be the biggest flaw in the algorithm design. Considering that teachers know their students best, this is a variable that should have been given a higher weightage.

2. Historical Data – The basis of forecasting is to look at historical data, identify a pattern, and, all other things being unchanged, project that pattern into the future. The keyword here is ‘unchanged’. If a particular student or school has been investing effort in improving on last year’s performance, there is no way to capture this effort, since it is not a variable reflected in previous years’ results. Considering the general observation that everyone puts in maximum effort in the month before the exam, the algorithm totally discounted this key variable. It also sets in stone the particular socio-economic or demographic background of an individual, without making any room for mobility.

3. Incorrect Correlation – Assuming that if a student got a certain grade in Class 6, they will get that grade in Class 12 as well is simply incorrect correlation. But that is exactly what the algorithm did when putting students into buckets based upon their scores in Class 6. A more balanced approach would have been to use the aggregate of Class 11 and 12 marks to create the buckets.
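The weighting concern in point 1 can be sketched numerically. In this hypothetical blend (the scale, weights and numbers are my own illustration, not Ofqual’s method), a teacher’s estimate and the school’s historical average are combined on a numeric scale, and the chosen weight alone decides whether a strong student in a historically weak school keeps a high grade:

```python
# Hypothetical illustration of the weighting issue (not Ofqual's method):
# blend a teacher's estimated grade with the school's historical average
# on a numeric scale where A = 5, B = 4, C = 3, D = 2, E = 1.

def predicted_score(teacher_estimate, school_history_avg, teacher_weight):
    """Weighted blend of the teacher's estimate and the school's history."""
    return (teacher_weight * teacher_estimate
            + (1 - teacher_weight) * school_history_avg)

# A strong student (teacher estimate A = 5) in a school whose historical
# average is a C (3):
low = predicted_score(5, 3, teacher_weight=0.1)   # teacher input barely counts
high = predicted_score(5, 3, teacher_weight=0.8)  # teacher judgment dominates
print(round(low, 1), round(high, 1))  # weight alone moves the result from roughly a C to nearly an A
```

The arithmetic is trivial, which is exactly the point: the student’s own ability never enters the low-weight outcome; the school’s past does.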

Future dependent upon a single variable

This whole episode, however, has highlighted a simple fact: for school students in countries like the UK and India, the final-year grade can make or break their chances of getting into a good college or university. This in turn has a huge lifelong impact on the lives of millions of students! Think about it: the outcome of a messed-up system decides where an individual can graduate from, regardless of their actual performance and potential.

This is a perfect example of the injustice visited upon some of the best and the brightest: https://www.bbc.com/news/av/education-53794490.

The parents and students suffering through such a system already recognize that the current setup does absolutely nothing to help a child identify where their actual potential lies. When the system then downgrades their performance on top of that, based upon some arcane bureaucratic decision-making, it only rubs salt into an open wound!

It is high time we thought about the purpose of an exam: is it really a mechanism to determine human ability, or has it become a tool for managing the disparity between the number of school-leaving students and the college seats available to them?


Nikhil Bimbrahw

Chief Information Officer

Mr. Bimbrahw is presently working as Chief Information Officer at USO, New Delhi. He is helping USO with digital transformation and process automation efforts. He has a wealth of experience with Deloitte and Accenture in the United States, Canada and India.