Taking steps to improve the quality of instruction so that all children progress appropriately towards literacy is not a job for the faint-hearted. Yet my 2019 was filled with stories of courageous schools and teachers making huge strides. The ambition to make my tutoring practice (and others like it) a necessity only for those with extreme literacy difficulties is a little closer to becoming a reality.
Many of the schools that have decided to take the plunge towards more evidence-aligned literacy instruction have also recognised the critical importance of valid assessment. These are the schools with the best chance of succeeding.
I’m therefore offering a suggestion to all teachers and schools who want to raise the quality of their literacy instruction quickly:
Check your assessment battery. If it contains Running Records, you will make slower progress. They are time-consuming and inaccurate.
When striving to change literacy instruction for the better, teachers and leaders can also find themselves opposed. Sometimes the opposition is external: the school implementing change may be surrounded by schools or systems steeped in whole language or balanced literacy philosophies. Sometimes the opposition comes from within the school itself, from leadership, parents or colleagues.
The most effective way of dealing with opposition, of course, is to gather evidence that what you’re saying is true. Assessment data that is reliable, objective and aligned with what is being taught is both ammunition and armour in the battle.
So here’s the part that I try to get my school clients to understand: if they are committed to raising the standard of literacy instruction in their school and showing it, they have to be able to measure progress with validity.
Running Records, or indeed any assessments that attempt to incorporate the three-cueing system into the assessment rubric, are not their friends.
To help challenge the idea of using Running Records, I try to get school leaders to ponder the following points:
- Word-level reading and language comprehension are two separate things. Reading comprehension is a product of both. The two have to be measured individually so that the cause of low reading comprehension is correctly identified. If you ask a child to read a passage out loud and then ask them comprehension questions about that passage, you won’t know how to account for any low scores unless you measure both processes separately. As Dr Heidi Beverine-Curry, The Reading League’s Vice President for Professional Development, says in this video, “Students’ decoding ability and language comprehension are discrepant in the primary years, so why measure it with the same instrument?”
- If teachers are using the tests with fidelity, then they have to mark every error as a meaning, structure or visual error. If that’s what they’re doing, how does the result inform their subsequent teaching? If it’s a so-called meaning error, what does a corrective lesson look like? If it’s a structure error, what are the key teaching points necessary to avoid it? And if it’s ‘visual’, well, that’s not really even a thing, is it? And how would they correct it even if they believed in its existence?
- If they are using a systematic, structured literacy approach that has a stated sequence of grapheme introduction and practice to mastery, then that sequence will not match the texts in the assessment. So why would anyone assess what they aren’t teaching?
- If they aren’t using the tests with fidelity, but are just using them to get a general picture of where their students are, why are they spending time on this at all? Aren’t there much better ways of doing this that don’t require such a large expenditure of time, energy and money?
- The text level that this type of measurement spits out is inaccurate, arbitrary, not replicable and misaligned with the teaching material. It gives false information to parents and students about skills and progress, since it doesn’t measure growth in the key areas of literacy. This is ultimately damaging to the populations most at risk of reading failure. For that reason, wouldn’t the use of Running Records in fact be unethical?
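The first point above is often formalised as the Simple View of Reading (Gough & Tunmer, 1986), which treats reading comprehension as the product, not the sum, of its two components:

```latex
% Simple View of Reading (Gough & Tunmer, 1986)
% RC: reading comprehension, D: decoding (word-level reading),
% LC: language comprehension; each notionally scored from 0 to 1
RC = D \times LC
```

Because the relationship is multiplicative, a near-zero score on either factor drives comprehension towards zero regardless of the other, which is why the two must be measured separately to locate the cause of a low result.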
Questions that arise from these discussions and some answers
- So what should teachers do instead?
- How about using some of the readily available, low-cost or free assessments like the ones from Acadience or Macquarie University? If you want to test listening comprehension, have a set of passages with comprehension questions, read them out to your students, then ask the questions. The Probe Test is one example of a resource that allows you to do this.
- Won’t parents be upset that they don’t get to gauge their children’s reading level?
- Only if you don’t keep parents in the loop. Explain why you’re moving toward even better assessment for a clearer picture. Establish a culture of collaboration with parents and get them to embrace meaningful assessment reporting.
- How do we report on the reading level students are at if we don’t use Running Records to reveal their level?
- That’s kind of like saying, “How do we tell them what colour their aura is if we don’t get a clairvoyant in?” The answer is, it’s not important. Book levels are not a valid measure. More about that here and here.
I’ve put out a survey to all teachers via social media regarding mandated Running Records. The results so far have been eye-opening, and I’ll be following up with an article on the state of play worldwide. We’re seeing such an encouraging shift towards high-quality practice; let’s match it with assessment of a similar calibre.