As we once again hear that the educational equity agenda is going nowhere, the government’s limited understanding of how schools work with evidence grows increasingly frustrating. Its reasoning, to take a sentiment expressed by the Education Endowment Foundation’s Sir Kevan Collins this week, runs like this:
1. Some schools outperform others in closing the attainment gap between disadvantaged and non-disadvantaged children.
2. These successes are replicable.
3. Distil success, inject, repeat.
By pouring money into R&D and providing schools with the right guidance, we can solve all these problems. Sadly, educational success, rather than passing through a Liebig condenser as the government hopes, tends to spread in a manner closer to the dilution process in the fashion industry, best explained by Miranda Priestly of The Devil Wears Prada:
“You go to your closet and you select that lumpy, loose sweater, for instance, because you’re trying to tell the world that you take yourself too seriously to care about what you put on your back, but what you don’t know is that that sweater is not just blue. It’s not turquoise. It’s not lapis. It’s actually cerulean. And you’re also blithely unaware of the fact that in 2002, Oscar de la Renta did a collection of cerulean gowns, and then I think it was Yves Saint Laurent who showed cerulean military jackets, and then cerulean quickly shot up in the collections of eight different designers. And then it filtered down through department stores, and then trickled on down onto some tragic Casual Corner where you no doubt fished it out of some clearance bin.”
In this particular metaphor, Oscar de la Renta is Japanese lesson study, Yves Saint Laurent is everyone doing variations on lesson study internationally, and the cerulean jumper from Casual Corner’s clearance bin is the version of lesson study that the Education Endowment Foundation declared an ineffective tool for CPD. (Read more about this on the IOE blog.) This is the quality of evidence schools are being asked to base decisions on. When you look at the methodology section of these papers, you never see a detailed analysis of how a selected intervention was faithfully reproduced from its original; instead, you get a detailed explanation of the scale of the trial and the calculation of effect sizes. But if what you’re testing came out of Casual Corner, it probably doesn’t matter whether you test it in 1 or 181 settings: the results are meaningless.
Remember when the Education Endowment Foundation’s toolkit suggested teaching assistants were ‘ineffective’ and schools started to lay them off? This week we are learning that the evidence was wrong. Of course, all research endeavours involve errors and adjustments, but in the meantime the education system has lost good professionals from the workforce in the name of ‘evidence-based practice’.
Randomised controlled trials are seductive tools for governments, but if they really want to have an impact they must think more carefully about the quality of the research and the confidence of the conclusions being shared.