If you’ve been on parent Facebook lately, you’ve probably seen the i-Ready takes piling up. Kids hate it. Parents hate it. Teachers were quietly recruited as spokesmodels and didn’t know what they were signing up for. The data showing it actually moves reading scores is, charitably, thin.
But almost every one of those posts ends with the same hedge: “It’s a decent screener, though.”
Here’s the thing. It’s not. And states are starting to put that on the record.
I went deep into two state reviews this month, one from California and one from Michigan. Here's why: if you're trying to figure out whether your district's universal screener is doing its job, the most useful information out there isn't a marketing brochure. It's the documents your state department of education had to publish under a brand-new dyslexia screening law. Those documents tell you what your school does not.
Two states. Same conclusion.
California, December 17, 2024. The state’s Reading Difficulties Risk Screener Selection Panel, a body created specifically to vet screening tools under California’s new dyslexia screening mandate, published its approved list for the 2025-26 school year. Districts had to pick from this list before June 30, 2025.
Four tools made the cut: Amira, DIBELS 8, Multitudes (out of UCSF's Dyslexia Center), and ROAR (an open-source assessment out of Stanford).
i-Ready, the biggest player in the room, did not.
Michigan, December 18, 2025. Under Public Acts 146 and 147, Michigan’s new K-12 literacy and dyslexia laws, the Michigan Department of Education was required to publish a list of valid and reliable K-3 screening and progress monitoring assessments by January 1, 2026. They dropped it two weeks early, and the document came with two lists: tools the state trusts to find kids with reading difficulties (Amira, MAP Reading Fluency, DIBELS 8), and tools that didn’t clear the bar.
i-Ready landed on the second list.
In the words of the great twentieth-century philosopher Brandy: almost doesn’t count.
A diagnostic is not a screener, no matter what the marketing says
Before we get into the technical receipts, the most important distinction for parents to understand is this:
A diagnostic tells you what’s going on for a kid you already know is struggling.
A screener is supposed to find the struggling kid early, before the gap widens, so the school can intervene.
Those are different jobs. The state approval frameworks, built on top of the National Center on Intensive Intervention’s screening standards, set a high bar for the second one. They demand evidence the tool can identify kids who are at risk but not yet failing, with strong sensitivity and a known false-negative rate. A test that’s great at confirming what teachers already know is a fine diagnostic. It’s a poor screener.
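If you want to see why those two numbers, and not overall accuracy, are the ones the frameworks care about, here's a minimal sketch in Python with made-up numbers. Nothing below comes from i-Ready or any state document; it's just the arithmetic behind the words.

```python
# A minimal sketch, with invented numbers, of what "sensitivity" and
# "false-negative rate" mean for a screener.

def screener_stats(true_pos, false_neg, false_pos, true_neg):
    """Sensitivity and false-negative rate for a screening tool."""
    truly_at_risk = true_pos + false_neg        # kids who really needed help
    sensitivity = true_pos / truly_at_risk      # share of them the screener caught
    false_neg_rate = false_neg / truly_at_risk  # share it waved through
    return sensitivity, false_neg_rate

# Hypothetical class of 100, 20 of whom truly need intervention.
# Flagging 12 of those 20 yields 85% overall accuracy (12 + 73 correct
# calls out of 100) while still missing 40% of the kids that matter.
sens, fnr = screener_stats(true_pos=12, false_neg=8, false_pos=7, true_neg=73)
print(f"sensitivity: {sens:.0%}, false-negative rate: {fnr:.0%}")
# sensitivity: 60%, false-negative rate: 40%
```

That's the trap in one toy example: a screener can be "85% accurate" and still miss nearly half the kids it exists to find. The state frameworks ask about the 40%, not the 85%.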
Three issues kept showing up across the state reviews. Each one explains why a tool can score well on instructional placement and still miss the kid in your house.
1. The borderline blind spot
Michigan’s review specifically asked vendors to demonstrate how well their tools identify students in the 30th-to-40th-percentile band. Those are the borderline kids. The ones who are slipping but haven’t fallen yet. That is exactly where a screener earns its keep.
Curriculum Associates’ submitted evidence leaned instead on classification accuracy in the 10th-to-20th-percentile range. Translation: they showed evidence for catching kids who are already at the bottom of the distribution, not for catching kids who are early in the slide.
Independent research lines up with the regulator. A 2025 peer-reviewed study by Campaña and Solomon, published in Assessment for Effective Intervention, analyzed i-Ready’s own 20th-percentile risk cut score against year-end state assessment outcomes. Their finding: the previous year’s state test produced stronger classification accuracy than the fall i-Ready Diagnostic for predicting who would and wouldn’t read on grade level.
By the time i-Ready is confidently flagging a kid, the kid is already at the 10th percentile. That isn’t a smoke detector. That’s the fire alarm going off after the roof has already collapsed.
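To make the cut-score point concrete, here's a toy calculation, not any vendor's actual psychometrics, of how often a fall flag set at the 20th percentile fires for kids at different true skill levels. The 20th-percentile cut is the one Campaña and Solomon analyzed; the measurement-noise figure (a standard deviation of 8 percentile points) is my assumption, chosen only for illustration.

```python
# Toy model, NOT i-Ready's actual psychometrics: how often a risk flag
# set at the 20th percentile fires, given some assumed measurement noise.
from statistics import NormalDist

noise = NormalDist(mu=0, sigma=8)  # ASSUMED test error, in percentile points
CUT = 20                           # the published fall risk cut score

def chance_of_flag(true_percentile):
    """Probability an observed score lands below the cut."""
    return noise.cdf(CUT - true_percentile)

for true in (10, 15, 25, 35):
    print(f"true skill at {true}th percentile -> "
          f"flagged {chance_of_flag(true):.0%} of the time")
# true skill at 10th percentile -> flagged 89% of the time
# true skill at 15th percentile -> flagged 73% of the time
# true skill at 25th percentile -> flagged 27% of the time
# true skill at 35th percentile -> flagged 3% of the time
```

Even with generous noise assumptions, the kid sliding through the 30s almost never trips a 20th-percentile flag. That's the borderline blind spot in one loop.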
2. The phonemic-awareness problem
If you’ve read anything about the science of reading in the last decade, you’ve encountered the headline finding: phonemic awareness, the ability to hear and manipulate the individual sounds in spoken words, is one of the strongest early predictors of who will and will not learn to read. Reading Rockets has the long version.
i-Ready measures phonological awareness (the umbrella term that contains phonemic awareness) through multiple-choice items on a screen. A kindergartener taps the picture that “starts with the same sound” as another picture. That’s a recognition task. It is a very weak proxy for the actual cognitive work the research base is measuring, which is segmenting, blending, and manipulating sounds in your head and saying them out loud.
Michigan flagged this directly in its review. WestEd's analysis of literacy screeners across two states made the same point in plainer language: when you compare a multiple-choice screen to an individually administered task where the kid produces sounds aloud, individual students get classified as at-risk on one assessment and not the other. The two formats are not measuring the same thing.
The state-approved tools (Amira, MAP Reading Fluency) ask kids to say sounds out loud. The microphone is doing diagnostic work that a tap-to-select item simply cannot do.
3. Adaptive logic is great for instruction. It is risky for screening.
i-Ready is computer-adaptive, which is genuinely useful when the goal is teaching. Get a fast read on what a kid knows, meet them there, move them forward. For screening, though, adaptive logic introduces a hole.
The algorithm is calibrated so that students get roughly half the items right and half wrong. Efficient for placement. But it means the test branches. If a student does well enough on the early phonics items, the test can skip phonological awareness items entirely. (I have a whole post on the specific score where this happens. It’s lower than you think. It’s Part 3 of this series.)
That is not a bug. It is the design. But the consequence is quiet and devastating: a kid with a phonological awareness gap who happens to be a competent guesser on phonics questions can finish the test without that gap ever being measured. If the test never asked, the school never knows. The dashboard says “on grade level.” The kid is not.
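Here's a deliberately simplified sketch of that branching structure. To be clear: this is not Curriculum Associates' actual routing code, and the threshold and item counts are invented. It exists only to show how a skip rule creates a gap that no score report can reflect.

```python
# Simplified branching sketch -- NOT the vendor's actual routing logic.
# The threshold and item counts are invented for illustration.

PHONICS_SKIP_THRESHOLD = 0.75  # hypothetical "good enough" early performance

def run_screener(phonics_answers, pa_answers):
    """Score phonics; branch past phonological awareness if phonics looks strong."""
    results = {"phonics": sum(phonics_answers) / len(phonics_answers)}
    if results["phonics"] >= PHONICS_SKIP_THRESHOLD:
        results["phonological_awareness"] = None  # domain never measured
    else:
        results["phonological_awareness"] = sum(pa_answers) / len(pa_answers)
    return results

# A kid who guesses well on multiple-choice phonics items...
print(run_screener(phonics_answers=[1, 1, 1, 0], pa_answers=[0, 1, 0, 0]))
# {'phonics': 0.75, 'phonological_awareness': None}
# The phonological awareness gap (this kid would have scored 25%) is real,
# but the test never asked, so no report can show it.
```

Notice what the output can't contain: there is no "we skipped this" warning on the dashboard, just an absence.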
What this means for you
If i-Ready is your district’s universal screener, you don’t need to start a war with the school. You need to know which questions to ask and which numbers to look at.
Three things to internalize:
A green light on the dashboard does not mean what you think it means. It means your kid scored above whatever cut score the district set, on whatever subset of items the algorithm chose to administer, in whatever order it chose. None of those things are the same as “your kid can read.”
The composite score can hide a foundational gap. A strong vocabulary kid (read: a bright, conversational kid who has been read to a lot) can compensate for a phonological awareness gap on a multiple-choice screen. The composite looks fine. The profile is not. (I'll show you the arithmetic right after this list.) This is especially true for twice-exceptional kids whose verbal scores prop up the average.
You have a right to the underlying data. Under FERPA, the full assessment report and subtest-level scores are part of your child's education record. The parent summary is a courtesy. The data is your right.
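About that composite point: here are invented percentile ranks, just to show how an average buries a foundational gap. (i-Ready's real composite is built from scale scores, not a simple mean of percentiles; this is only the shape of the problem.)

```python
# Invented numbers: how averaging buries a foundational gap.
domains = {
    "vocabulary": 88,              # percentile ranks, all hypothetical
    "comprehension": 72,
    "phonics": 60,
    "phonological_awareness": 18,  # the domain that predicts reading trouble
}
composite = sum(domains.values()) / len(domains)
print(f"composite: {composite:.0f}th percentile")
# composite: 60th percentile -- green on most dashboards,
# with an 18th-percentile foundational score hiding underneath.
```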
Your action step this week
Email your child's teacher and request the full report from the most recent administration of i-Ready (or whichever screener your district uses). Not the parent letter. The actual report with subtest scores, percentile ranks, and the scoring tables used to interpret the results. Use this template:
“Hi [Teacher], could I please request a copy of [child’s name]’s most recent reading screener report, including the subtest or domain-level scores and the scoring tables the school uses to interpret the results? I’d like to look at the underlying data, not the summary letter. Thank you.”
You don’t have to explain why. You don’t have to justify it. If they push back, the magic word is FERPA, and the district must comply within 45 days. Once you have the data, the next two posts in this series will help you actually read it.
Up next in this series:
- Part 2: i-Ready Scored 50% in Michigan’s K-3 Screener Review. Here’s the Line-by-Line.
- Part 3: When a Diagnostic Pretends to Be a Screener
Part of the Tests hub. For parent-friendly framing of how testing works in special education and what to push back on, see What You Need to Know About Tests.
