Improving Quality Assurance: Reviewing our Reviews

I recently blogged about some of the things which go wrong with quality assurance in schools and set out some suggestions for how we might attempt to quality assure the curriculum more effectively. This post builds on those by outlining the changes we have made to department reviews in my school, in an attempt to put the principles from the earlier posts into practice.

My school, like many others, has used department reviews for quite a long time (since well before I worked there, so I am not sure when they started). The form they used to take would be familiar to lots of readers: some lesson drop-ins, a student consultation and a work scrutiny to look for good practice and areas for improvement, all of which culminated in a report written by the senior leader with oversight of the department in question. My sense is that they were not particularly onerous for the staff on the receiving end and there was no grade or summative judgement, which were both good things, but I do not think they were especially useful. Some of the reasons for this are as follows:

  • They did not always have a clear focus (e.g. they might look generally at the quality of KS3 lessons);
  • When they did have a clearer focus, it was not necessarily one about which reliable conclusions could be reached from our department review process (e.g. provision for disadvantaged students);
  • The activities undertaken were not necessarily well-matched to the needs of the review (e.g. lesson drop-ins tended to be pretty random, based on whatever happened to be on the timetable at a convenient time for the senior leader in question);
  • While I think it is important to consult students and listen to their views about their experiences in school, they were not necessarily qualified to provide useful feedback about the focus of the review;
  • Those leading the review (normally a non-specialist senior leader) sometimes lacked the expertise to assess what they were looking for, and there was nothing built into the review process to address this problem;
  • The aspects of practice scrutinised in department reviews were not necessarily the best things to look at in order to identify areas for development, because they might simply be symptomatic of something going on (or not going on) elsewhere, as I described in the first post in this series.

As a result of these shortcomings, our department reviews tended to praise readily visible things (e.g. engagement in lessons, marking in books), rather than getting to grips with the underlying issues which might have revealed important information. This in turn provided an incentive for teachers to prioritise those visible things, even though others might have had more impact.

After a lengthy hiatus, during which department reviews were suspended due to Covid, we decided to relaunch them, but to adapt them to help us understand how far our previous work to develop curricular understanding in the school and to review the curriculum was taking root. We could have scrapped them altogether, but we felt that reforming them was a better bet for a number of reasons:

  • We believed they could offer a suitable tool to do the job we wanted to do;
  • Staff were used to them, whereas something new and unfamiliar might have been too much at a stressful time;
  • They provide a means by which our trustees can gain oversight of quality assurance in the school, and this is important in enabling the trust board to fulfil its responsibilities;
  • I was leading this change and I am, by nature, a reformer rather than a revolutionary.

To cut a long story short, the following things now happen in a department review:

  1. The senior leader overseeing the review and the relevant head of department agree on two specific topics/units of study as the foci of the review. These can be drawn from the curriculum for any year group.
  2. The senior leader looks at the department’s documentation (e.g. schemes of work) for the two topics and holds a curriculum conversation with the head of department to discuss their intent in depth, trying to gain an understanding of what the teachers want students to learn and what should be expected in lessons, including what makes the curriculum appropriately ambitious. The trustee with a link to the department is invited to attend.
  3. A range of lessons in which the two topics will be taught is identified across the relevant year groups. The senior leader and head of department (and others if need be) visit them to consider how consistently the curriculum is being implemented as intended and how far all students are able to access it.
  4. Students are selected from the relevant year groups for a consultation, to which the linked trustee is invited. They are asked about what they have learned about the topics in question.
  5. The whole department is invited to a meeting at which everyone takes part in a scrutiny of books/other work for the relevant year groups, considering how far the work suggests that students have grasped what was intended for the two topics. This meeting broadens out into a general department consultation about how the two topics are taught, considering any obstacles to doing so effectively and how teachers find out whether students are gaining the intended understanding.
  6. The senior leader overseeing the review summarises the notes from all parts of the process into a short report.

It does not take a genius to notice that a good deal of what takes place appears very similar to what happened under our old review framework. Furthermore, what we do resembles an Ofsted deep dive in a number of ways, which might leave me open to accusations of hypocrisy, since I have previously criticised attempts by schools to recreate such things. I would argue, however, that appearances are deceptive on both these fronts, because the parallels are merely superficial. The points below outline key ingredients of our new approach to reviews, which I believe make them substantially different, both from what came before and from Ofsted. Furthermore, I would argue that our new reviews are much more in line with the good bets for curriculum QA which I proposed in my previous post in the series.

  • Reviews build on a lot of curricular CPD and review over the past few years, which have aimed to increase understanding of curriculum amongst teachers, heads of department and senior leaders. A good proportion of inset time is still devolved to departments for this purpose.
  • Reviews, which take place biennially for each department, are certainly not the only aspect of our curriculum quality assurance, but part of a much wider process of ongoing review and improvement. For example, more general curriculum conversations are scheduled to take place in routine meetings between heads of department and their line managers.
  • Reviews have a very clear focus, asking only about the ambition of the intent in the topics considered, the consistency of its implementation and its accessibility to students. The narrowing of the focus means we have a much greater chance of drawing valid inferences.
  • Reviews consider how far the department is implementing its own intent for the two topics under consideration and meeting the expectations set out by the head of department in the introductory conversation. Accountability, therefore, starts with the department’s own standards, rather than anything imposed on them.
  • Each review focuses on two selected topics only, making no attempt to extrapolate from them to the wider curriculum. We seek to work in the spirit of Matthew Evans’ superb blog post, advocating mapping rather than weighing as a form of quality assurance. The aim is to map a limited area at a large enough scale to provide useful information to the department concerned and to build our understanding as senior leaders in a meaningful way. In my view, one important measure of the success of a review is whether it reveals something to the head of department which they did not already know.
  • Thoughtful questions about the curriculum are embedded throughout the review process to improve the odds of gathering useful information and of finding out what might be causing any problems. I have written a guide about the questions to ask/consider at each stage, which I have mixed feelings about, but I have made it available (see below) in case it is of use/interest to anyone.
  • Data is triangulated from different sources (the opening conversation, lesson visits, student consultation, work scrutiny and department consultation) by considering the same issues in each part, instead of jumping to conclusions from any single source.
  • The review process is very open (e.g. all department members are invited to take part in the work scrutiny and to give their thoughts in the consultation).
  • Reviews are formative rather than summative. No grades are awarded and the purpose is not to sum up how good we think the curriculum is, but to help departments in their ongoing work of curriculum improvement. The proforma for the report, which I have also made available (see below), is far from perfect, but it is very clear that the purpose is to identify strengths, raise questions for further investigation and make suggestions for follow-up.
  • Reviews are a two-way process and can throw things up for our attention as a senior team as well as for the department in question. After each review we reflect on the report at an SLT meeting and consider whether anything has been raised which requires our attention or action, before it is shared with the committee of trustees. Any concerns which are uncovered are seen as shared problems, rather than sticks with which to beat people.

We are still bedding in our new approach, so it is too soon to know whether the reviews are having the impact we want (and I don’t suppose we will ever be able to attribute causation with certainty), but early signs are promising and feedback from both subject and senior leaders has been positive. I can say with confidence that, after leading one review, I have a far better understanding of the curriculum intent for the topics considered and of the various issues around its implementation than I did before, and I certainly feel I have learned a lot more than I did from any of the previous reviews. The subject leader concerned has also indicated (to me, at least!) that the process provided useful information to her department, which is of value as they move forward with their ongoing work of curriculum development.

I have set up a public folder of documents relating to department reviews, including those mentioned above, to help others understand our process and in case anyone wishes to adapt them for use in their own settings. The folder can be found here.
