How do we apply the lessons from the cycle of planned failure to specific teaching strategies, so that we can work to improve outcomes for our students?
This blog was set up as a way of sharing my thoughts around the writing process for my upcoming Dunlosky "In Action" book, based on his seminal paper Strengthening the Student Toolbox. This is the third and final post in a three-part series looking at The Cycle of Planned Failure (part 1 and part 2).
In the process of writing, I was mindful that a common pitfall in education is to come across an idea and implement it too quickly. There are several reasons for this, not least that, as teachers, if we think something will help our students we want to get it to them as quickly as we can. However, this is exactly the trap the cycle warns us of: without proper analysis, we will likely fall prey to the problem cycle and create new, previously unknown problems.
Take feedback, for example. In the not-so-distant past, feedback became a bit of a "thing" in education. The problem started when it became defined as providing written feedback (as was too often the case across the country). The outcome? Teachers were (and in some cases still are!) spending hours and hours writing personalised feedback with little impact, especially relative to the time and effort put in. How did this come about? Where did it go wrong? Clearly it would be outrageous to suggest that high-quality feedback isn't a crucial part of effective teaching, and so an emphasis should rightly be placed on it and its development. However, as the EEF Guidance Report on Feedback notes, what's crucial is:
"moving beyond choosing between feedback methods, such as written or verbal, towards a renewed focus on the principles of effective feedback."
In focusing on the mode of feedback over the principles of what makes feedback effective, not only was our original problem left unsolved (students having gaps and misconceptions in their learning), but new problems were created (teachers spending time on marking and feedback that could have been better spent elsewhere, such as on planning and developing practice).
The Dunlosky In Action book will contain recommendations and examples of classroom practice for the effective strategies outlined in the Toolbox paper. How can we heed the lessons of the past to ensure that when translating those strategies into our own classrooms, we are not falling prey to the cycle? What do we need to be mindful of to ensure we're applying these ideas with fidelity? If we look at the EEF's Implementation Guidance report, we'll find the start of our answer in the very first recommendation:
"Treat implementation as a process, not an event"
Using the cycle (below) we can see that it's crucial to take the time to identify and analyse the problem, assess possible solutions, and then rigorously test and iterate our chosen solution as we implement it. Let's contextualise this.
We notice that our students have an issue with retention: no matter how hard they try, they just can't remember what they did last week, let alone even further back! So instead of moving on to new learning or extending old learning, we end up having to reteach entire lessons. Students seem to "get it" in the lesson, but it "disappears" after that initial teaching. We see this in student responses during lessons, in students' work on questions covering previously taught content, and in conversations with colleagues who confirm they have taught those topics to those students before. So we've got multiple points of data suggesting there is a problem with retention - a thorough analysis of our problem.
We then start to look for solutions and remember one of the strategies from The Toolbox: Distributed Practice. We dig further into the research on this and learn that forgetting should be expected, and that regular revisiting of key knowledge may solve our retention problem - plans made based on robust research. We adapt our lesson planning to include regular opportunities to revisit previously covered content. We set very specific goals (e.g. students can correctly give the equation for the area of a triangle 3 weeks after first exposure, and then again on their end of term test in a few months' time). We then carry out our plans and adapt them depending on whether our goal has been met.
As a result of our planning, we find that most students can correctly give the equation for the area of a triangle after 3 weeks, but when required to do this on their end of term test a few months later, the majority struggle. By having these shorter- and longer-term goals in place, we are better able to judge the success of our implementation and to check that its effects are durable - monitoring during implementation to assess impact. Having seen the difference in outcomes between the short- and long-term goals, we go back to our planning and adjust the timing of retrieval opportunities to better serve our students. We then start applying the technique to different topics and classes, continuing to monitor its impact and adjust accordingly.
Having thoroughly road-tested it in a number of contexts, with careful monitoring and evaluation throughout, we have incorporated it into our practice and can continue to use it to chip away at the problem of retention.
My hope in sharing these blog posts is that we are able to find the time and space to take a long-term approach to implementing the fantastic techniques discussed by Dunlosky in The Toolbox. That we treat implementation as a process, not as an event. And that, in attempting to continually improve outcomes for our students, we acknowledge the importance of taking a thoughtful and considered approach to adopting changes to our classroom practice, instead of rushing into them.