Using student assessment data to determine instructional impact is one of the most common leadership practices of this decade. So let’s start the week off with a few data points:
- Ms. Jones’ class is passing at a higher rate than Ms. Lopez’s: 5 points higher on the most recent common assessment!
- Ms. Jones’ class scored higher than Ms. Lopez’s class on all four common assessments.
- Ms. Jones’ common assessment passing rates: 92 (pre-test), 91 (week 2), 88 (week 4), 90 (week 6).
- Ms. Lopez’s common assessment passing rates: 58 (pre-test), 65 (week 2), 76 (week 4), 85 (week 6).
Data overkill is real, but worse than overkill is data that misleads. One of the worst offenders is the false positive: a number that looks like success but doesn’t reflect actual learning.
Can you tell which data points above are false positives?
Is Ms. Jones’ higher passing rate indicative of more effective instructional practices?
Are Ms. Jones’ students growing more than Ms. Lopez’s?
Of course, there is always a slew of contextual factors to consider. Even so, the data itself speaks volumes. The question is: are we listening to what it’s actually saying?
(Answer Key: All of Ms. Jones’ data points are false positives. Ms. Lopez’s students gained 27 points from pre-test to week 6, while Ms. Jones’ class actually slipped 2 points, so Ms. Lopez’s students appear to be learning more. She may well be having the larger impact on learning, even though her passing rates are lower!)
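Want to check the arithmetic behind the answer key yourself? Here’s a minimal sketch in Python, using the passing rates listed above (the variable names and print format are just illustrative, not from any particular gradebook tool):

```python
# Gain-score arithmetic for the two classes above.
jones = [92, 91, 88, 90]  # pre-test, week 2, week 4, week 6
lopez = [58, 65, 76, 85]

for name, scores in [("Ms. Jones", jones), ("Ms. Lopez", lopez)]:
    gain = scores[-1] - scores[0]  # week 6 minus pre-test
    print(f"{name}: latest rate {scores[-1]}, growth {gain:+d} points")

# Prints:
# Ms. Jones: latest rate 90, growth -2 points
# Ms. Lopez: latest rate 85, growth +27 points
```

A higher snapshot number (90 vs. 85) coexists with far less growth (-2 vs. +27). That’s the false positive in a nutshell.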
Get More on Data:
- What is Lead Data?
- What is Sensitive Data?
- Sensitive vs. Lethargic Data (Tom Waters Blog)
- The Future of Assessment