How can we answer the question: Is my child reading at grade level?
Most parents are under the impression they are already getting an answer to this question when they receive the results of a commonly used reading assessment: the Fountas and Pinnell Benchmark Assessment System, often abbreviated as "F&P" or "BAS." Approximately 80% of classrooms in the United States assess and report student reading levels based on this system. The problem? It is a wildly inaccurate assessment. The current system leaves the probability of identifying struggling readers who need extra support to roughly the chance of a coin toss: its accuracy rate is only about 54% (Burns et al., 2015).

This benchmark system, created by Irene Fountas and Gay Su Pinnell, assigns a letter to represent a child's reading level. After a 30-45 minute assessment, a student is assigned a letter that corresponds to a reading level, let's say a "level G," which, according to the system, corresponds to mid-year first grade. The main flaws with this system are:
- Studies have found the system's accuracy to be only about 54% (Burns et al., 2015).
- The reading level determined by the assessment does not provide any detail about which specific skills the student has mastered or which skills the student still needs to develop.
- Students are often placed in small groups for instruction based on their assigned reading level. Yet, there is no information available to teachers as to what skills these students should be working on in these small groups!
The assessment system also uses the assigned level to direct each child toward a "just right" level of book for independent reading. A "just right" level holds undeniable appeal. It sounds like something that would be perfect for our children: interesting text with just enough challenge to be engaging without reaching a point of frustration. Yet research shows that more than half of students who attempt to read at their assigned "just right" level are matched with text that is too easy or too difficult for them to read (remember that 54% accuracy rate of this assessment system?). A team of researchers examining this system took a closer look at a cohort of students who were all deemed to be at "level G." Among this group, some students scored at the 1st percentile in reading skills on the NWEA/MAP standardized national assessments, while others scored at the 75th percentile on the same test! As reading researcher and professor Tim Shanahan says, these assessments "are so widely used it would make sense that there would be lots of evidence as to their efficacy. Except that there is not" (Shanahan, 2011).
At Redwood Literacy, and in other educational settings dedicated to evidence-based instruction, students are assessed in a wholly different way. Using the DIBELS (Dynamic Indicators of Basic Early Literacy Skills) 1-minute probes, instructors glean essential, specific, and actionable information about student reading skills. These measures, which have been researched, vetted, and validated over decades, can be used to monitor risk for reading difficulties, identify specific skills that are strengths or weaknesses, and track progress toward specific goals. DIBELS (and other assessments based on the same principles) provides information on how well students can decode words, how fluently they read, and how much they understand of what they read. Armed with this information, teachers can develop instruction that is truly targeted to the needs of each individual student AND monitor progress to ensure that each student is developing the requisite reading skills in a timely manner to stay on track. These measures have been shown to have over 90% accuracy, a far cry from the widely used, time-consuming measures that leave the assessment of students' reading skills to a 50/50 coin toss.
While this type of assessment is not intended to diagnose a reading difficulty, it can accurately flag students who might be at risk or struggling, indicating that further assessment is needed to pinpoint the areas for intervention. Some critics of DIBELS point to the issue of reading speed as problematic. With DIBELS, students are asked to read a passage for 1 minute, and the words the student reads correctly are counted to calculate Oral Reading Fluency (ORF): the number of words the student reads correctly in one minute. This is a critical number, but not because students should aim to read faster and faster; it is essential that we don't impart that flawed notion to our children. Rather, ORF is critical because decades of research and validation show that a student needs an ORF score between the 50th and 75th percentiles (see the norms chart here) to comprehend what they are reading. If a child is below this benchmark, it is an urgent red flag that further diagnostic testing must be done to tease apart which skills are underdeveloped for that student. The same ORF measure can also be used as part of progress monitoring, combined with detailed information about a student's decoding, spelling, and comprehension skills.
Additional Resources
PS: Are you concerned your child isn't receiving the reading instruction they need? You can learn more about Amplify's mClass DIBELS assessment on their website. This is the assessment Redwood uses to measure our students' reading skills, both for baseline data and for progress monitoring. Want help understanding your child's reading levels? Sign up for a DIBELS assessment with Redwood here. You can use this data to advocate for your child at school or to inform your selection of outside intervention services.
PPS: If this article resonates with you, Google "dyslexia tutoring near me" to find support. Connecting with a knowledgeable professional can be transformative in empowering you to support your child with dyslexia. Also, reach out to other parents: you can Google "dyslexia parent groups near me," ask around at your child's school, or attend a local event with dyslexia as its theme. Redwood Literacy and Redwood Schools are also here to help if we can. Reach out anytime.