Mental sloth and the Edu data revolution

“Out of passions grow opinions; mental sloth lets these rigidify into convictions.” - Nietzsche

The above thinker might be an odd one to turn to for inspiration on an education blog, but he has one great virtue: he believed in nothing uncritically (perhaps other than himself), and in this educational climate that is of great importance. We have reached a stage in education where there is such a critical mass of understanding about high-performing schools and their methods that we tend to apply these unthinkingly across the whole system. Each new initiative must, fashion-style, bear the hallmarks of the top brands. As schools seek to adorn themselves with the must-haves we find, paradoxically, that the road to innovation the market reforms promise in fact leads to a car park of abandoned ideas and stagnating standards.

‘Data-driven’ is one such hallmark. Fear not: whilst I plan to challenge your mental sloth on this one, I am not going to argue that it is abandonment-worthy. I love data. However, as I have written previously, we don’t really have a broad definition of what data actually is in schools. We tend to assume it is anything we can put a number to, but really it goes far beyond that. As a result, ironically, we fail to quantify the true extent of data collection in schools and how it piles up, rather unstrategically, to create an NQT-culling workload problem.

Part of the problem is the sheer level of conviction behind many of our data management exercises. We really must fully assess the minute levels of progress a child makes in six weeks and compare this to a flight path of progress based on a target that, 90% of the time, is utterly inaccurate. Why? ‘How else are we going to do it? We have to know what is happening in the classrooms.’ I both agree with the second statement and think I have an answer to the first question. Here are the principles I propose school leaders should adopt when seeking to tackle workload:

  1. Teacher time is a precious resource; we therefore have a professional obligation as public servants to ensure that we do not waste it.
  2. A data collection exercise is only justifiable if it is a meaningful guide to action.

Here I define ‘data collection’ as any discrete activity that is intended to generate information for operational or evaluative purposes. The phrase ‘meaningful guide to action’ is deliberately vague to allow for context and contestation of meaning.

What would be the implication of this? Well, firstly, data collection would have to become much more intelligent. There would be no justification, for example, for a work review if your other data suggested that no action needed to be taken with that year group or department. If I knew SEN students were underachieving, I would let a department know that those students’ books may well be the subject of a book review to see how they are getting on. (But what if this leads people to focus obsessively on their SEN students? Well…good!)

What about numerical data, like grades? Well, if the only time anyone actually sets the strategic direction of a department - in terms of raising achievement - is once a year, then that is the only time you would need quantitative data on the performance of those children. If you genuinely think that a department needs a strategic rethink every term then fine, collect data on every child every term. Fact is: most people know that isn’t the reality. Most organisations only strategically review their work once a year, and they do it in a meaningful and collaborative way that everyone is on board with. This would be a perfect way to make gained time work for a department, rather than filling it with meaningless accountability tasks.

What about individual mid-year interventions? I would argue this: it normally takes a whole year for a child to catch up if they are underachieving. They need to be assigned to a particular teacher who knows they have a chronic underachiever in their class, with a proper handover from the previous year. I would track and monitor those students three times a year (say, the 10-20% of a given cohort that I was focusing on) and leave the others. I do not send people with 20/20 vision to an eye clinic three times a year; I send them if they complain of headaches. There is no reason to obsessively check up on educationally healthy students.

All this assumes that KS4 attainment is relatively (excuse the phrase) ‘strong and stable’. There will be schools, and perhaps departments within schools, that need a different regime. At any point where external examinations showed drastic underachievement in particular groups of students or areas of practice in the school, a more thorough approach to data collection would be justified. I would not routinely gather data on children who, year on year, are achieving, unless I had a strategic reason to do so.

Finally, on standards: you don’t maintain standards in a school through data collection, and it is not a good reason to check everyone all the time. You maintain high standards in a school through modelling, good support, good training and good leadership. Sure, constant data collection is one tool for setting standards - largely through fear - and for that very reason I would argue it is the weakest tool in the box. It is the easiest for people to manipulate. More on that another day!

 
