
i-Ready Scored 50% in Michigan’s K-3 Screener Review. Here’s the Line-by-Line.

This is the deep dive. If you want the higher-level take on what California and Michigan said about i-Ready as a reading screener, start with Part 1 of the series. Then come back here for the receipts.

In December 2025, the Michigan Department of Education completed its consensus scoring of the i-Ready Assessment under MCL 380.1280f, the state’s review process for valid and reliable K-3 reading screeners. The bottom-line finding, in MDE’s own words from the published list:

“Not approved as a valid and reliable K-3 screening and progress monitoring assessment.”

This post is about what is actually inside the Michigan consensus document. The line-by-line scoring is more specific and more useful for parents than any summary I could write on top of it. If your district uses i-Ready as a universal screener, this is the document you can point to when you ask the school why your kid landed where they landed.

What Curriculum Associates submitted

To be precise: the submission was a combined package. Curriculum Associates submitted the i-Ready Diagnostic (the computer-adaptive assessment most parents recognize) plus the i-Ready Literacy Task for Fluency, a separate teacher-administered 1:1 assessment.

So the question MDE was answering was not “does the Diagnostic alone work as a K-3 screener?” It was “does the Diagnostic plus the Literacy Task fluency add-on, taken together, meet the K-3 screening requirements?” The answer was no.

That distinction matters because most districts that use i-Ready do not actually administer the 1:1 Literacy Task piece. They use the computer-adaptive Diagnostic and call it the screener. So the version Michigan reviewed was the more generous version, the version with the extra teacher-administered fluency add-on, and even with that boost it didn’t clear the bar.

Where the elements broke down

Michigan’s review framework evaluates screeners against six required elements: phonemic awareness, rapid automatized naming, letter-sound correspondence, single-word reading, nonsense-word reading, and oral passage reading fluency. Each element is scored as met, partially met, or not met. Here is how i-Ready did.

Phonemic awareness: not met

MDE’s exact language:

“It is unclear what students see and hear in all ways that phonemic awareness is assessed. For example, it is unclear how segmenting is assessed by providing students with answer choices.”

This is the multiple-choice problem stated by the regulator itself. Phonemic awareness is the ability to manipulate sounds in your head: segmenting them, blending them, isolating them, swapping them. Asking a kindergartener to click the picture that has a particular sound is not the same task as asking them to produce that sound.

WestEd’s 2024 review of early literacy screeners made the same point in plainer language: when you put a multiple-choice screen next to an individually administered task where the kid produces sounds aloud, the same student can be classified at-risk on one and not at-risk on the other. They are not measuring the same construct. MDE didn’t use those words. They made the same call.

Letter-sound correspondence: not met. Nonsense-word reading: not met.

The recurring objection here is the adaptive engine itself. MDE flagged that:

“It is unclear which type of Phonics task most students encounter when they start the school year, and then how the adaptive nature of the assessment works for presenting other items.”

For nonsense-word reading specifically:

“It is unclear whether all students, especially those with low Phonics skills, would encounter nonsense word tasks as required by MCL 380.1280f.”

In plain English: an adaptive test can route some kids around the very tasks the law requires you to assess. The whole point of nonsense-word reading is to force a kid to decode without leaning on sight word memory or context. If the algorithm decides the kid doesn’t need to see those items, the algorithm has just substituted its judgment for the law’s. That is not a screener doing its job.
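If you want to see how that can happen mechanically, here is a toy sketch of adaptive item selection. To be clear about what’s invented: the item pool, the difficulty numbers, and the selection rule below are my illustration of how adaptive engines work in general, not Curriculum Associates’ actual algorithm.

```python
# Toy sketch of adaptive item routing. A hypothetical illustration,
# NOT Curriculum Associates' actual engine, item pool, or scaling.

ITEM_POOL = [          # (task_type, difficulty on a 0-1 scale)
    ("letter_sound",   0.2),
    ("nonsense_word",  0.4),
    ("single_word",    0.6),
    ("vocabulary",     0.8),
    ("comprehension",  1.0),
]

def next_item(ability):
    """Serve the item whose difficulty sits closest to the current
    ability estimate -- the core move in computer-adaptive testing."""
    return min(ITEM_POOL, key=lambda item: abs(item[1] - ability))

# A student with low phonics skills starts with a low ability
# estimate, keeps missing items, and the estimate stays low.
ability = 0.1
administered = set()
for _ in range(10):
    task, _difficulty = next_item(ability)
    administered.add(task)
    ability = max(0.0, ability - 0.05)  # simulate incorrect answers

print(administered)                      # {'letter_sound'}
print("nonsense_word" in administered)   # False: the task the law
                                         # requires was never presented
```

Run it and the administered set never includes nonsense-word reading. A strong reader with a high starting estimate and a string of correct answers can skip the letter-sound and nonsense-word items the same way. Either way, the engine, not the law, decided what got assessed.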

Single-word reading: partially met

Met “only for the Fluency task at first grade, not for the Diagnostic at any grade level.” Worth re-reading that sentence. The computer-adaptive Diagnostic, the part most districts actually administer, did not meet the single-word reading criterion at any grade. The credit i-Ready earned on this element came entirely from the 1:1 fluency add-on, which most schools don’t use.

Rapid automatized naming and oral passage reading fluency

These elements were addressed only through the Literacy Task add-on, not the core Diagnostic. Same caveat: if your district uses the standard i-Ready administration, the kid never sees these tasks.

The borderline blind spot, in MDE’s own words

This is the part of the document any parent of a “yellow zone” kid should read twice. Michigan’s review explicitly asked vendors to show accuracy at the borderline, the 30th-to-40th-percentile band where kids are slipping but haven’t yet fallen. From the classification accuracy section of the consensus document:

“Vendors are asked to provide classification accuracy data for cutpoints between the 30th and 40th percentile on a spring criterion measure in order to identify students who display early signs of reading difficulties. The provided classification data were based on cut scores between the 10th and 20th percentiles.”

Curriculum Associates did not submit data for the band Michigan actually cares about. They submitted data for kids who were already at the bottom of the distribution. The kids a screener barely needs to find, because they are already failing in plain sight.

This is not a paperwork issue. The Campaña and Solomon study, published in Assessment for Effective Intervention in 2025, analyzed i-Ready against state assessment outcomes and found that a student’s previous year state test score predicted spring proficiency better than the fall i-Ready Diagnostic. The screener that costs your district money and instructional time produced less signal than a data point your school already has on file.

i-Ready scored 67% on Michigan’s classification accuracy criteria. The threshold for approval was 97%. That is not a near-miss. That is a 30-point chasm on the single most important question a screener has to answer: does it actually find the kids who need help?
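One more piece of context on that 67%. Michigan’s rubric has its own scoring method, which I am not reproducing here; the sketch below uses invented numbers and the generic sensitivity/specificity math that classification accuracy questions usually come down to. I’ve made the invented numbers land near 67% so you can see what a figure in that range means in practice: roughly one in three kids who go on to struggle never gets flagged.

```python
# Generic illustration of screener classification accuracy, with
# made-up numbers. This is NOT Michigan's scoring formula; it just
# shows the question the metric answers: of the kids who turned out
# to need help, how many did the fall screener actually flag?

# Each pair: (flagged_by_fall_screener, below_criterion_in_spring)
students = [(True, True)] * 40 + [(False, True)] * 20 \
         + [(True, False)] * 10 + [(False, False)] * 130

true_pos  = sum(f and b         for f, b in students)  # flagged, needed help
false_neg = sum(not f and b     for f, b in students)  # missed, needed help
false_pos = sum(f and not b     for f, b in students)
true_neg  = sum(not f and not b for f, b in students)

sensitivity = true_pos / (true_pos + false_neg)   # 40/60  ~ 67%
specificity = true_neg / (true_neg + false_pos)   # 130/140 ~ 93%

print(f"sensitivity: {sensitivity:.0%}")  # the kids-who-slip-through number
print(f"specificity: {specificity:.0%}")
```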

The numbers in one place

For anyone who wants the receipts side by side, here’s the high-level scoring summary from the Michigan consensus document. Categories are evaluated as met, partially met, or not met.

Required element                | i-Ready result
Phonemic awareness              | Not met
Letter-sound correspondence     | Not met
Nonsense-word reading           | Not met
Single-word reading             | Partially met (Fluency task, 1st grade only)
Oral passage reading fluency    | Partially met (via Literacy Task add-on)
Rapid automatized naming        | Partially met (via Literacy Task add-on)
Overall classification accuracy | 67% (threshold: 97%)
Final consensus                 | Not approved

The categories where i-Ready cleared the bar are real but narrow. The categories where it didn’t are the ones that determine whether a screener actually catches kids early.

Michigan’s full approved list (Amira, DIBELS 8, MAP Reading Fluency) and the not-approved list, including i-Ready, are on the MDE K-12 Literacy and Dyslexia Law page. The National Center on Intensive Intervention’s Academic Screening Tools Chart is another place to compare these tools on an apples-to-apples technical basis.

What this means for parents and advocates

If i-Ready is the universal screener your district uses, the Michigan review is the most specific public document you can point to when you push back on the results. Three places in particular.

Phonemic awareness in K and early first grade. MDE’s own concern is that items built around answer choices may not measure the productive task. If your kindergartener is “in the green” on phonemic awareness but cannot segment or blend sounds aloud at home, the gap is real and the screener is not catching it. Ask the school for an individually administered phonemic awareness measure as a follow-up.

The 30th-to-40th-percentile band. Curriculum Associates did not submit accuracy evidence for this range. Independent research suggests i-Ready tends to under-identify here. If your kid lands in the high 30s to low 40s, they are in the band the tool was not validated to flag. That is not a green light.

The adaptive engine and foundational phonics. MDE’s recurring objection was that the adaptive logic can route students around the very tasks the law requires. If your child performed well on the items the algorithm decided to show them, they may never have encountered the foundational phonics or nonsense-word tasks at all. Ask whether the report shows which subtests were administered and which were skipped.

Three questions worth asking your school

  1. Does our universal screener require my child to produce sounds out loud, or only recognize them on a screen?
  2. Where is the “at benchmark” cutpoint set (the 20th percentile? the 40th?), and how does it map to our state proficiency test?
  3. What diagnostic follow-up happens for students who land in the 30th-to-40th-percentile band?

If the school does not have clean answers to these, you have your starting point.

Your action step this week

Pull up your child’s most recent i-Ready (or other screener) report and find two numbers: the overall percentile rank and the phonemic awareness or phonological awareness subtest score. If the subtest score is missing from the parent letter, email the teacher today and request the full assessment report including domain-level scores. If the overall rank is between the 30th and 40th percentile and the school told you “your child is on track,” you now have a state-level document that says: not necessarily.

In Part 3, I walk through what happens inside the test when the algorithm decides to skip phonological awareness entirely, and what that meant for one specific kid (mine).

Part of the Tests hub. For parent-friendly framing of how testing works in special education and what to push back on, see What You Need to Know About Tests.

