Most parents never see the test their kid took. They see the result, sometimes a label like “approaching grade level” or “on track,” and they’re told the school’s plan based on it. The screener itself, the data behind the label, and the rules the school is using to decide who gets help: those usually stay with the school.
That’s a problem. Not because schools are hiding things on purpose, but because the decisions made on the back of a screener can determine whether your kid gets reading intervention, gets evaluated for dyslexia, or gets told to keep practicing at home. And not every screener is built to catch what your kid actually needs.
Two states (Michigan and California) have already pulled i-Ready as an approved K-3 reading screener after evaluating it against state criteria. If the screener your school uses has the same gaps i-Ready has, you’d want to know. (Read the Michigan line-by-line breakdown.)
Here are five questions to ask. Most of the answers should be on file with the school’s reading coordinator or special education team. If they’re not, that’s information too.
1. Has it been validated as a reading screener in your state?
Your state’s department of education usually maintains an approved list of reading screeners that meet specific criteria for K-3 use. Some commercial screeners are on the list. Some aren’t. Some are widely used in classrooms but have never been formally evaluated by the state for this specific purpose.
Ask the school to confirm two things: whether the tool is on your state’s approved list, and when it was last evaluated. State approvals can expire and be revoked. In December 2025, Michigan revoked its approval of i-Ready as a K-3 reading screener after re-evaluating it against the state’s criteria.
If the screener isn’t on your state’s approved list, the school can still use it, but the results may not satisfy the legal threshold for triggering further evaluation or intervention services.
2. What’s the classification accuracy?
Classification accuracy is how often a screener correctly identifies kids who actually have reading difficulty, and correctly clears kids who don't. It's expressed as a percentage: a 97% standard means the tool has to sort at least 97 of every 100 kids correctly. Higher is better.
This number matters because false negatives (a kid who needs help but doesn't get flagged) are how kids fall through the cracks. False positives (a kid flagged who doesn't actually need intervention) are how limited intervention time and resources get spent on kids who don't need them.
Michigan’s bar for K-3 reading screeners is 97% classification accuracy. When Michigan tested i-Ready, the tool came in at 67%. That’s a 30-percentage-point gap.
Ask the school what the published classification accuracy is for the screener they use, and whether it meets the threshold your state requires. The number is in the screener’s technical manual, which the school’s reading coordinator should have access to.
3. Does it test all five reading domains?
Reading is not one skill. Researchers and reading scientists generally agree on five core domains:
- Phonemic awareness (hearing and manipulating sounds in spoken words)
- Phonics (mapping those sounds to letters)
- Fluency (reading accurately, at appropriate speed, with expression)
- Vocabulary (knowing what words mean)
- Comprehension (understanding what’s read)
A real reading screener tests all five. Some commonly used tools test two or three and infer the rest. A screener that doesn’t directly test phonemic awareness can miss the earliest and most important predictor of dyslexia in K-1 kids.
Ask the school which of the five domains the screener directly tests, and whether the others are inferred or skipped. If the answer is anything other than “all five, directly,” ask what supplemental assessment is used to fill the gap.
4. Is it being used as a screener or as a diagnostic tool?
This is the question most parents don’t realize they should ask.
A screener is a fast first-pass assessment. It tells the school which kids might need a closer look. It’s designed to be brief, broadly applicable, and easy to administer to every kid in a grade level.
A diagnostic is a deeper assessment given to kids the screener has already flagged. It pinpoints what specifically is going wrong: phonological processing, working memory, rapid naming, decoding accuracy, fluency under load.
These are not interchangeable. A screener used as a diagnostic will under-identify specific weaknesses. A diagnostic used as a screener will over-flag kids who don’t need intervention. Some commercial tools are marketed for use across the screener-to-diagnostic spectrum, which is part of why state evaluations have started to push back.
Ask the school directly: is the tool my kid is taking functioning as a screener, a diagnostic, or both? If it's being used as both, what's the protocol for kids who get flagged?
5. Are subtest score breakdowns delivered automatically, or do parents have to request them? And are intervention decisions based on subtest scores or on composites?
This is the question most parents never know to ask, and it determines whether you'll get useful information about your kid or just a label.
A composite score is the rolled-up summary, often expressed as a percentile, grade-level equivalent, or a label like “on track.” Composites smooth across subtests. A kid who’s strong in one area and weak in another can score “average” overall while having a real problem hiding in a single domain.
A subtest score is the actual data from each section of the screener: a separate score for phonemic awareness, a separate score for phonics, and so on. Subtest data shows you where your kid actually is, broken out by skill.
There are two parts to this question, and both matter:
First, is the school’s default to send subtest data home with the screener report, or only the composite or a label? Many schools send only the composite. The subtest data exists; it’s in the system. It just doesn’t get sent to parents unless someone asks for it. Ask for the subtest breakdown to be included automatically going forward, in writing.
Second, is the school’s intervention decision based on the composite or on subtests? A school that decides “intervention or no intervention” based on the composite will miss kids whose composite looks fine but whose phonemic awareness is at the 15th percentile. A school that uses subtest data to flag specific weaknesses will catch those kids. There’s no single correct answer here legally, but the answer tells you whether the school’s process is built to find the kid you’re worried about.
If the school’s process is composite-based and composite-only, that’s the moment to ask for an additional look at the subtest data, especially if your kid is the kind of reader who’s strong on some skills and weak on others. That profile is exactly what a composite score can hide, and it’s the same dynamic behind why confidence intervals decide IEP eligibility.
Why these questions matter
None of these questions are about doubting your kid’s school. They’re about understanding the inputs and rules the school is using to make decisions that affect your kid.
If the screener is on your state’s approved list, the classification accuracy meets the threshold, all five domains are tested, the tool is being used in the role it was designed for, and the school can show you subtest data, you have a process working as intended.
If the answers come back fuzzy (“we can look into that,” “it’s just what we use,” “you’d have to ask the publisher”), that’s not a flag of bad faith. It’s a flag that the school may be operating with less information about its own assessment than it could, and you have room to ask for more.
You’re not being difficult. You’re being a parent with questions about a test result that affects your kid’s path. Schools that take reading seriously will appreciate the conversation.
Want a printable version of these five questions to bring to your next school meeting? Subscribe below for the Parent Playbook when it drops.
