by JC Bowman and Audrey Shores


Like any other test, I am careful about reading too much into the latest National Assessment of Educational Progress (NAEP) results. With NAEP, representative samples of students rather than the entire national, state, or district populations take the test, every two years and only in grades 4, 8, and 12.

National Center for Education Statistics Commissioner Peggy Carr said “specific pandemic-era local decisions, like how long to keep a school or district shuttered, aren’t solely the cause of these results. Exploring that deserves more research.” My response: “Why?”

Not every student in a state takes NAEP; only a random sample of students does. There is no individual data. Students in both public and private schools are assessed, but at the state level, only public-school results are reported to the media.

Are the results replicable? For example, would an independent group of researchers using the same process and the same students get the same results as the original study with a similar or identical test? We will never know.

Eighth-grade math saw the sharpest decline in Tennessee: compared with 2019 data, we dropped 8 points. The average scores are not considerably different from the rest of the nation's. Discussing NAEP, Harvard professor Kevin Mahnken believes the results “validated the public’s worst fears about pandemic learning loss.” Katie Reilly at Time Magazine says, “American students saw some of the biggest declines in academic achievement recorded in the last 50 years.”

The Wall Street Journal says the national test results reveal the damage from school closures. Education historian Diane Ravitch writes: “The moral of the story is that students need to have human contact with a teacher and classmates to learn best. Virtual learning is a fourth-rate substitute for a real teacher and interaction with peers.”

There is no dispute that scores have declined since 2020; the latest results for each grade level are nearly identical to 2004 data in math and 2008 data in reading. While the small gains made in the intervening years, most notably in math, have been negated by the most recent data, average scores remain significantly higher than when they were first recorded in the 1970s.

A multitude of factors contribute to a high-quality public school system. Performance, funding, safety, class size, quality of educators, instructional days, and community support are among the differences. School closures due to COVID are also a factor, and the pandemic's effects on student mental health are well documented. How do we measure this disruption? Do we factor in COVID interventions such as tutoring or summer school? There are so many details about what the results show, with little information on how the results are derived.

For educators, NAEP is just a glimpse of a small sample of students. We know that there are both supporters and detractors of NAEP. We try not to fall into either camp, cheerleaders or critics. And we urge others not to overreact. Too often we celebrate when we do well on NAEP, then panic and advocate for change when we do not. In the process, we lose the momentum of other innovations that we have enacted.

We often move on to the next crisis, and the next cure-all for what ails us in public education, before we ever see the results of some programs or initiatives. No matter the results, the sense of crisis in public education persists. Here come the experts, with a suitcase full of solutions for the problems that they have identified. NAEP assesses representative samples of students, not the entire student population. NAEP is just a snapshot. Nothing more. Nothing less.

– – –

JC Bowman is the executive director of Professional Educators of Tennessee. Audrey Shores is the chief operating officer of Professional Educators of Tennessee.