Taking steps to improve the quality of instruction so that all children progress appropriately towards literacy is not a job for the faint-hearted. Yet my 2019 was filled with stories of courageous schools and teachers making huge strides. The ambition to make my tutoring practice (and others like it) a necessity only for those with extreme literacy difficulties is a little closer to becoming a reality.
Many of the schools that have decided to take the plunge towards more evidence-aligned literacy instruction have also recognised the critical importance of valid assessment. These are the schools with the best chance of succeeding.
I’m therefore offering a suggestion to all teachers and schools who want to raise the quality of their literacy instruction quickly:
Check your assessment battery. If it contains Running Records, you will make slower progress. They are time-consuming and inaccurate.
Moreover, when striving to change literacy instruction for the better, teachers and leaders can find themselves facing opposition. Sometimes the opposition is external: the school implementing change could be surrounded by schools or systems that are steeped in whole language or balanced literacy philosophies. Sometimes the opposition comes from within the school itself, from leadership, parents or colleagues.
The most effective way of dealing with opposition, of course, is to gather evidence that what you’re saying is true. Assessment data that is reliable, objective and aligned with what is being taught is both ammunition and armour in the battle.
So here’s the part that I try to get my school clients to understand: if they are committed to raising the standard of literacy instruction in their school and showing it, they have to be able to measure progress with validity.
Running Records, or indeed any assessments that attempt to incorporate the 3 cueing system into the assessment rubric, are not their friends.
To help challenge the idea of using Running Records, I try to get school leaders to ponder the following points:
- Word-level reading and language comprehension are two separate things. Reading comprehension is a product of both. The two have to be measured individually so that the cause of low reading comprehension is correctly identified. If you ask a child to read a passage out loud and then ask them comprehension questions about that passage, you won’t know how to account for any low scores unless you measure both processes separately. As Dr Heidi Beverine-Curry, The Reading League’s Vice President for Professional Development, says in this video, “Students’ decoding ability and language comprehension are discrepant in the primary years, so why measure it with the same instrument?”
- If teachers are using the tests with fidelity, then they have to mark every error as meaning, structure or visual (MSV). If that’s what they’re doing, how does the result then inform their subsequent teaching? If it’s a so-called meaning error, what does a corrective lesson look like? If it’s structure, what are the key teaching points necessary to avoid this error? And if it’s ‘visual’, well, that’s not really even a thing, is it? And how do they correct it even if they believe in its existence?
- If they are using a systematic, structured literacy approach that has a stated sequence of introduction of graphemes and practice to mastery, then their continuum will not match the texts in the assessment. So why would anyone assess what they aren’t teaching?
- If they aren’t using the tests with fidelity, but are just using them to get a general picture of where their students are, why are they spending time on this at all? Aren’t there much better ways of doing this that don’t require such a large expenditure of time, energy and money?
- The text level that this type of measurement spits out is inaccurate, arbitrary, not replicable and misaligned with the teaching material. It gives false information to parents and students about skills and progress, since it doesn’t measure growth in the key areas of literacy. This is ultimately damaging to the populations most at risk of reading failure. For that reason, wouldn’t the use of Running Records in fact be unethical?
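The point above about reading comprehension being a product of two components is the Simple View of Reading (Gough & Tunmer, 1986), which can be written as a simple equation:

Reading Comprehension = Decoding × Language Comprehension

Because the relationship is multiplicative rather than additive, a near-zero result in either component drags comprehension down regardless of how strong the other is. That is exactly why the two components must be assessed with separate instruments: a single oral-reading-plus-questions score cannot tell you which factor caused a low result.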
Questions that arise from these discussions, and some answers
- So what should teachers do instead?
- How about using some of the readily available, low-cost or free assessments like the ones from Acadience or Macquarie University? If you want to test listening comprehension, have a set of passages with comprehension questions, read them out to your students, then ask the questions. The Probe Test is one example of a resource that allows you to do this.
- Won’t parents be upset that they don’t get to gauge their children’s reading level?
- Only if you don’t keep parents in the loop. Explain why you’re moving toward even better assessment for a clearer picture. Establish a culture of collaboration with parents and get them to embrace meaningful assessment reporting.
- How do we report on the reading level students are at if we don’t use Running Records to reveal their level?
- That’s kind of like saying, “How do we tell them what colour their aura is if we don’t get a clairvoyant in?” The answer is, it’s not important. Book levels are not a valid measure. More about that here and here.
I’ve put out a survey to all teachers via social media regarding mandated Running Records. The results so far have been eye-opening and I’ll be following up with an article on the state of play worldwide. We’re seeing such an encouraging shift towards high-quality practice; let’s match it with assessment of a similar calibre.
19 thoughts on “Rethinking Running Records: Ammunition and armour in the battle for better literacy”
Thank you for this! You’ve written cogently about the discomfort I’ve felt for years with running records. They are not precise tools and are so time-consuming. Some districts spend an entire day with paid subs so that teachers can do running records. That time could be much more effectively spent teaching teachers about things like orthographic mapping and phonemic manipulation. Keep up the great work!
What a waste! Thank you, Lisa, for your contribution!
Firstly, where is the research behind these opinions? Secondly, PROBE (an alternative assessment mentioned above) uses running records as a component of the entire assessment. Running Records inform teaching when the teacher has a deep understanding of reading comprehension and the pedagogy to support it. They are free and quick to administer if you are experienced. The MSV is critical and can be used to determine next step teaching. I find this article deeply annoying and questionable.
Thank you for your comments, Naomi. I’ll answer them one by one:
YOU: “Firstly, where is the research behind these opinions?”
ME: What do you mean by that exactly? Are you saying I have to be a researcher to critique an educational resource? Or are you calling into question my knowledge and experience more generally? If you want me to provide links to research backing any of the claims I’ve made, it might be quicker to read my fully referenced books.
YOU: “Secondly, PROBE (an alternative assessment mentioned above) uses running records as a component of the entire assessment.”
ME: I guess you didn’t read the Probe 2 Assessment Manual for meaning then. Here it is, since you missed it:
“OPTION 3: LISTENING COMPREHENSION.
PURPOSE: When it is suspected that a poor decoder has a higher comprehension level. This option is generally used after analysing an Option One result.
CONSIDERATIONS: This is an informal assessment. While results cannot be recorded as an independent reading level, they can be recorded as a listening comprehension result. Results can confirm that the reading vocabulary is the bigger problem.”
YOU: “Running Records inform teaching…”
ME: I just wrote about why they don’t in this article.
YOU: “Running Records inform teaching when the teacher has a deep understanding of reading comprehension and the pedagogy to support it.”
ME: I think you may be alluding to the purported “Complex View of Reading”, which, unfortunately (and this may be news to you) does not align with the established scientific consensus on the process of literacy acquisition.
YOU: “The MSV is critical”
ME: It really isn’t, or I’d use it in my very successful intervention practice. If it were good for my students, I’d be all over it like a rash. It’s not.
YOU: “I find this article deeply annoying and questionable.”
ME: Your questions have been answered in the most logical way I can muster. Your emotions regarding the article should be taken as a sign that perhaps you are personally, rather than professionally, challenged by my statements. However, everything I do and say professionally comes from the desire to put my students first. Their wellbeing and life chances depend on my getting it right. There is no room for philosophy or personal feelings and no room for failure. Running Records are a failure in assessment. Taking it personally helps no one.
Understanding where a teacher’s frustrations come from is perhaps one of the most important steps we need to take right now. A teacher’s practice is deeply enmeshed in their training and beliefs, and they have sincerely trusted the evidence behind both. This is a symptom of the outstanding marketing job that RR has done on teacher training institutions and schools. As a teacher, it is understandable to feel incensed at ‘new’ knowledge interrupting what you have firmly believed for so long. RR may also be heavily supported in their school and by their peers. I think we can only ask for an open mind on both sides of the coin. Translating evidence into practice is never easy. It takes reserves of stamina and an open curiosity about each other’s beliefs that busy people can find exasperating and exhausting. We all want the best for our students, and remaining open and responsive is part of the job.
Exactly, Tina, thanks. If I went crashing into schools telling them how wrong I think they are, I’d be out of the consulting business in two seconds flat. In all the encounters with teachers I’ve had over my decades of work, I’ve never met one who lacked good intentions. My job is to recognise the many strengths a teacher and a whole school have, and help them maximise those with the most up-to-date, user-friendly resources possible. High-quality assessment is a huge part of that picture. It gives me joy to meet so many who are keen to know and do better.
Boom! (mic drop)
Applause, well stated.
Excellent response and a pleasure to read.
Naomi, check out the many Science of Reading pages on Facebook. I think you’ll find the research to back this up. There’s a great article you can google called Teaching Reading is Rocket Science that you might like.
Thank you. I recently commented on a post about the value of using Running Records and their incorporation of the 3 Cueing System for assessment purposes only. The SoR is my north star; however, I thought I saw value in assessing reading through this lens in addition to the use of Acadience, Aimswebs and other such tools. Your points to ponder have made me rethink my beliefs and comment, point 2 especially. I appreciate your insight and sharing. I have a new and deeper understanding of why Running Records are not worth the time.
Naomi, I certainly understand how unsettling it can be to be confronted with information that contradicts the paradigm that your instructional practice in reading is rooted in. Like many others, my undergraduate instruction and my induction to teaching involved a lot of balanced literacy and discussions about the three-cueing system. As a result, I learned to do running records like a champ! There was nothing I loved more than analyzing my students’ errors and using that information to change my instruction. What I could never quite reconcile, however, was the fact that despite my efforts my students just didn’t grow as I expected. Twenty years and two master’s degrees later, I’ve been given the opportunity to dig into the work of many amazing researchers. I also spent quite a bit of time reading the National Reading Panel report from the late 90s. This information was convicting, to say the least.
If you have time, please listen to these podcasts! Excellent information.
Thank you, thank you, thank you. Your article is just what I need to support my ongoing argument. Where will I find your survey, or is it too late?
Thanks Maggie. Yes, the survey closed on Jan 27 but there will be a follow up article with results shortly!
I have such a problem with high schools that ALL seem to use running records and LLI for intervention. When I question the use, their answer is that ‘our students don’t have any problem decoding, it’s their lack of vocabulary and comprehension that we are working on and that’s why F&P is good’…what’s the best response to a statement like this? Most of the articles I’ve read are based around their use at a K-2 level…are there any good pieces I could reference at a secondary level?
It’s a good question, Hayley. It does all come back down to the acceptance or non-acceptance of the Simple View of Reading as a process model. And I don’t see any evidence that F&P materials do bring about positive changes in vocabulary or background knowledge, which are essential elements of comprehension. I take it those using F&P subscribe to the notion that comprehension ‘strategies’, like inferring etc. deserve the lion’s share of instructional time?
Ooh yes! You ask any of the principals I’ve met recently and they’ll tell you the problem with student literacy is they can’t infer…🤦♀️
Thank you. I love this. I have started at a new school this year that uses a Systematic Synthetic Phonics approach. I like it because I agree that the brain needs to learn how to break the code before learning how to read and spell. However, I am told they use PM benchmarking as an assessment. My understanding is that PM is a whole language approach. I was looking for alternatives. I will send a link to this article to the Literacy team. Many thanks!
Great blog post – thank you!
Do you think there are benefits for:
a) using running records as a fluency and comprehension assessment and snapshot of retelling skills? The ones we use have a comprehension component with literal and inferential questions, a retelling component and a fluency rubric to record prosody/pace/expression.
b) doing informal running records by listening to a child read a hundred words, gauging their fluency and asking them some comprehension questions? When I do this, I have a scan or photocopy of what they read and just put a tick if the whole line was read correctly and, if not, jot down the words that were not read out correctly.