I’ve been fairly absent from blogging/Twitter since the summer – an inevitable consequence of taking up a few new roles amidst the discord of new systems and specifications emerging from gov.uk with increasing regularity. But I don’t mean that as a complaint. Much that was there was broken, and much that is replacing it is good. Although life in the present discord is manic and stressful, it is also a time of incredible opportunity to improve on what went before, and to rework many of the systems in teaching that went unquestioned in schools for too long.
This Christmas I’m stopping to reflect on the term gone by, and on our efforts to improve three areas: Assessment, Curriculum, and Teaching & Learning. There are many failures, many ideas that failed to translate from paper to practice, but also a good number of successes to learn from and develop in January.
A Blank Slate
KS3 SATs died years ago. National Curriculum levels officially die in September, but can be ‘disapplied’ this year. With tests and benchmarks gone, there is a blank slate in KS3 assessment. This is phenomenally exciting. Levels saturated schools with problems – they were a set of ‘best fit’ labels, good only for summative assessment, that got put at the heart of systems for formative assessment. No wonder they failed.
At WA we decided to try building a replacement system, trialled in Maths, that could ultimately achieve what termly reporting of NC levels never could. We began with three core design principles:
1) It has to guide teaching and learning (it must answer the question “what should I do tonight to get better at Maths?”).
2) It has to be simple for everyone to understand.
3) It has to prepare students for the rigour of tougher terminal exams and challenging post-16 routes.
Principle 2 led us to an early decision – we wanted a score out of 100. This would be easy for everyone to understand, and by scoring out of 100 rather than out of a small number we are less likely to have critical thresholds where students’ scores are bunched and where disproportionate effort is concentrated. Scoring out of 100, we felt, would always encourage a bit more effort on the margin in a way that a GCSE with eight grades fails to do.
Principle 1 led us to another early decision – we need data on each topic students learn. Without this, the system will descend into level-like ‘best fit’ mayhem, where students receive labels that don’t help them to progress. Yet there’s a tension here between principles 1 and 2. Principle 1, taken alone, would have data on everything, separated at an incredibly granular level. However, this would soon become tricky to understand and would ultimately render the system unused.
For me, Principle 3 ruled out using old SATs papers and past assessment material. These were tied to an old curriculum that did not adequately assess many of the skills we expect of our students. They also left too much of assessment to infrequent high-stakes testing, which does not encourage the work ethic and culture of study we value.
These three principles guided our discussions to the system we have now been running since September.
The Maths curriculum in Years 7 to 9 (featured in the next post) has been broken down into topics – approximately 15 per year. Each of these topics is individually assessed and given a score out of 100. This score is computed from three elements: an in-class quiz, homework results, and an end of term test. Students then get an overall percentage score, averaged from all of the topics they have studied so far. This means that for each student we have an indication of their overall proficiency at Maths, as well as detailed information on their proficiency at each individual topic. This is recorded by students, stored by teachers, and reported to parents six times a year.
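The arithmetic behind this can be sketched in a few lines. The weightings below are purely illustrative assumptions – the post doesn’t specify how the quiz, homework and test results are combined – but the shape of the calculation is as described:

```python
def topic_score(quiz, homework, test, weights=(0.3, 0.2, 0.5)):
    """Combine the three elements into a score out of 100.

    Each input is a percentage (0-100). The weights are a
    hypothetical example, not the department's actual split.
    """
    wq, wh, wt = weights
    return round(wq * quiz + wh * homework + wt * test)

def overall_score(topic_scores):
    """Overall proficiency: the mean of all topic scores so far."""
    return round(sum(topic_scores) / len(topic_scores))

# A student two topics into the year:
scores = [topic_score(80, 90, 70), topic_score(60, 75, 65)]
print(overall_score(scores))
```

The key design point survives even with different weights: every topic keeps its own score, so the overall average never hides *which* topics a student needs to revisit.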
Does it work?
Principle 1: Does it guide teaching and learning?
Lots of strategies have been put in place to make sure that it does. For example, the in-class quiz is designed to be taken after the material in a topic has been covered but before teaching time is over. The results are used to guide reteaching in the following lessons, so that students can then take another quiz on that topic and increase their score. Teachers also produce termly action plans as a result of their data analysis, which highlight the actions needed to support particular students as well as adjustments needed to combat problematic whole-class trends.
Despite this, we haven’t yet developed a culture of assessment scores driving independent study. Our vision is that students know exactly what they have to do each evening to improve at Maths, and I believe that this system will be integral to achieving that. We need a bigger drive to actively develop that culture, rather than expecting it to come organically.
|Extract from the Year 7 assessment record sheet.|
I’m also concerned that assessment at this level has not yet become seen as a core part of teaching and learning. Teachers are dedicated in their collection and recording of data, and have planned some brilliant strategies for extending their students’ progress. But it still just feels like an add-on, something additional to teaching rather than at the heart of it. One of our goals as a department next term must be to embed assessment data further into teaching; not to be content with it assisting from the side.
Principle 2: Is it easy to understand?
Unequivocally yes. Feedback from parents, tutors and students has been resoundingly positive. Each term we report each student’s overall score, as well as their result for each topic studied that term. One question for the future is how to make all past data accessible to parents, as by Year 9 there will be 40+ topics worth of information recorded.
Principle 3: Is it rigorous enough?
By making the decision to produce our own assessments from scratch we allowed ourselves to set the level of rigour. I like to think that if anything we’ve set it too high. We source and write demanding questions to really challenge students, and to prepare them to succeed in the toughest of exams. A particular favourite question of mine was asking Year 8 to find the Lowest Common Multiple of pqr and pq^2, closely rivalled by giving them the famed Reblochon cheese question from a recent GCSE paper.
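The reasoning behind that LCM question is worth spelling out: for monomials, as for prime factorisations, the lowest common multiple takes the highest power of each symbol appearing in either term, so the LCM of pqr and pq^2 is pq^2r. A minimal sketch of that rule, representing each monomial as a map from symbol to exponent:

```python
def monomial_lcm(a, b):
    """LCM of two monomials given as {symbol: exponent} maps.

    e.g. pqr  -> {'p': 1, 'q': 1, 'r': 1}
         pq^2 -> {'p': 1, 'q': 2}
    For each symbol, take the highest power in either term.
    """
    return {s: max(a.get(s, 0), b.get(s, 0)) for s in set(a) | set(b)}

result = monomial_lcm({'p': 1, 'q': 1, 'r': 1}, {'p': 1, 'q': 2})
print(result)  # pq^2r, i.e. exponents {'p': 1, 'q': 2, 'r': 1}
```

The same exponent-wise maximum is exactly how students find the LCM of numbers from their prime factorisations, which is what makes the abstract version such a good stretch question.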
|The Reblochon cheese question – a Year 8 favourite.|
Following the advice of Paul Bambrick-Santoyo (if you haven’t read Leverage Leadership then go to a bookshop now) we made all assessments available when teachers began planning to teach each topic. This has been a great success, and I’ve really seen the Pygmalion effect in action. By transparently raising the bar in our assessments, teachers have raised it in their lessons; and students have relished the challenge.
This assessment system works. It clearly tells students, teachers and parents where each individual is doing well and where they need to improve. Nothing is obscured by a ‘best fit’ label, yet the data is still easy to understand. Freeing ourselves from National Curriculum levels freed us from stale SATs papers and their lack of ambition. Instead we set assessments that challenge students at a higher level – a challenge they have met. The next step is making data and assessment a core part of teaching. Just like NC levels were once a part of every lesson (in an unhelpful labelling way), the results of assessment should be central to planning and delivering each lesson now.