I was struck by the recent Teacher Tapp results on pupil progress meetings, which indicated that three quarters of respondents have experience of them and that over half consider them useful. I have sat through many of these meetings, often as the senior leader in charge. I was always somewhat dubious about their value and these days I am highly sceptical.
For the sake of clarity, by ‘pupil progress meetings’ I mean meetings in which teachers are brought together after an assessment (e.g. a mock exam) to review the results of individual students against a yardstick (e.g. a flight path or target grades). Where performance falls short of the yardstick, interventions are agreed in an attempt to bridge the gap.
I agree entirely that it is a good thing for teachers to consider how well students are learning, and a meeting would seem to provide a suitable forum for this to happen. The problems arise, however, when we think about whether our aims are achievable and whether the benefits outweigh the costs.
Can we know whether a student is on track?
Results from a particular assessment tell us how well a student did on that assessment. The score or percentage does not equate to a grade, because the whole course has not yet been covered and the grade will ultimately be awarded for a series of different assessments, taken at a different time by a different cohort of students. Although teachers can draw on all their knowledge of a student to give a rough forecast, we simply cannot know for certain what grade anyone is achieving now or will achieve in the end. As Mark Enser (2019) puts it, ‘just because it is desirable doesn’t mean we can do it.’
The other thing that prevents us from knowing whether students are making good progress is that there is no trustworthy yardstick against which to measure. The statistics on which target grades are based cannot be applied to an individual with any certainty to judge what she should achieve, and flight paths rest on the unreasonable assumption that progress is steady and consistent. I have further qualms about target grades, which I have aired on Twitter, but suffice it to say for the purposes of this post that they do not offer secure grounds for deciding whether a student is underachieving. Matthew Benyohai (2018) gives an excellent account of the futility of this type of progress tracking.
Do meetings help us select effective interventions?
Experience tells me that many of the interventions arising from pupil progress meetings are out-of-class strategies: revision sessions, mentoring, phone calls home and so on. None of these is bad in and of itself, but they do not necessarily address why the student has struggled, which is often to do with misconceptions somewhere in their knowledge. They also generate a great deal of work, leaving teachers less time to reflect on where those misconceptions might lie and how to address them in their lessons.
Even if we do select the best intervention (e.g. a phone call home may be just right for a lazy student), the teachers will almost certainly have known about the problem long before the mock exam. As Robert Slavin (2019) explains, ‘teachers and principals already know a lot more about their students than any test can tell them.’ It would have been far better to ask teachers at an earlier stage about concerns over work ethic and to make contact home then. Attainment in formal assessments tends to be a lagging indicator, as Becky Allen (2018b) points out, and I agree with her that we would do better to plan our interventions in advance and gather data as we go to see who requires them, rather than waiting for a formal assessment.
So do I think that teachers should do nothing after a mock exam? Absolutely not. I want them to reflect very carefully on what the students’ responses reveal about understanding at a granular level and to consider how to address this in the classroom, both immediately and for future cohorts. Adam Boxer (2019) provides an excellent model of how this can be done. Pupil progress meetings, however, tend to act as a hindrance to this kind of reflection rather than a help.
Do meetings have desirable consequences?
As well as the impact on students, we have to consider the impact of pupil progress meetings on teachers. Experience tells me that they cause a lot of stress and make teachers feel blamed. When I was running them I tried hard to handle things sensitively and to listen rather than lecture, but the fact remained that I was expecting teachers to do something extra when students did not get the result we were hoping for. My allegory of a blame culture is silly, but it attempts to make a serious point.
When we make teachers accountable, one consequence is that it tends to distort the assessment: teachers might spend excessive amounts of time preparing students for the mock, or might adjust grade boundaries to err on the side of generosity. As Becky Allen (2018a) explains, the ‘very act of raising the stakes for teachers… [can] threaten our ability to measure progress reliably.’ The edifice of pupil progress meetings thus undermines the foundations on which it is built.
Do something better
Clearly I am not suggesting that discussion of pupil progress should be banned from meetings, but rather that much deeper thought should be devoted to what we really know, to the most useful things to do and to the likely consequences of our actions. In truth I suspect that the underlying motivation for most pupil progress meetings is the fear of inspectors and the desire to have evidence we can point at to show that robust action is being taken to address underachievement. But as Ruth Walker (2019) puts it, ‘the more pointable-at things aren’t always the best things.’ In this case I think it is much more valuable, albeit less visible, to ask teachers to reflect on the valid inferences they can draw from assessments and to plan future lessons accordingly. One way we can create the time for them to do so is by cancelling unnecessary meetings, and I would put pupil progress meetings firmly in that category.
Allen R (2018a) https://rebeccaallen.co.uk/2018/05/23/what-if-we-cannot-measure-pupil-progress/ (accessed 230120)
Allen R (2018b) https://rebeccaallen.co.uk/2018/12/01/poor-attainment-data-often-comes-too-late/ (accessed 230120)
Benyohai M (2018) https://medium.com/@mrbenyohai/the-difference-between-measuring-progress-and-attainment-7269a41cdd8 (accessed 250120)
Boxer A (2019) https://achemicalorthodoxy.wordpress.com/2019/03/26/what-to-do-after-a-mock-assessment-sampling-inferences-and-more/ (accessed 230120)
Enser M (2019) https://teachreal.wordpress.com/2019/02/21/desirable-but-impossible/ (accessed 230120)
Slavin R (2019) https://robertslavinsblog.wordpress.com/2019/04/11/benchmark-assessments-weighing-the-pig-more-often/ (accessed 230120)
Walker R (2019) https://rosalindwalker.wordpress.com/2019/11/08/the-more-pointable-at-things-arent-always-the-best-things/ (accessed 230120)
Image found here: https://pixabay.com/photos/iocenters-conference-room-2673328/ and labelled as available for noncommercial reuse.