Education

We Get National Reading Test Results Every 2 Years. Writing? Try 20.


Every two years we hear how American students are doing in reading and math, but we’ve heard nothing about their writing since 2011. And we won’t be hearing anything more until the next test—in 2030.

Reading and math are important, which is why Congress mandated those tests be given every two years as part of the National Assessment of Educational Progress, known as the NAEP. But writing is equally vital—and not just for success in college and the workplace. Writing can provide powerful boosts to reading comprehension, analytical thinking, and learning in general.

But as the results from the 2011 NAEP writing tests show, American students, on average, don’t write well. Only 27% of eighth- and twelfth-graders scored “Proficient” or above in writing in 2011, a rate even lower than the proficiency rate for reading, which has hovered around a third. For certain groups of students, the writing results are significantly worse. For example, among twelfth-graders whose parents didn’t finish high school, only 8% scored Proficient, and 40% scored below the “Basic” category. Given these alarming figures, it seems we should be testing writing at least as often as reading, if not more often.

So why have a 20-year gap between writing tests—or, in the case of fourth-graders, whose last reported results were in 2002, a gap of almost 30 years? Actually, it’s not a gap between tests. In 2017, fourth- and eighth-graders took NAEP writing tests. But the results were never released.

The main reason: In 2011, eighth- and twelfth-graders took the writing test on laptops. But in 2017, the National Center for Education Statistics (NCES), which administers NAEP tests, switched to tablets with attached keyboards. NCES had used the tablets for some math and reading tests that same year and found that although students generally scored lower on tablets than on paper and pencil, testing experts were able to control for the difference. NCES staff decided to repurpose those tablets for the 2017 writing test because the agency’s laptops were becoming outdated, according to Dr. Grady Wilburn, an NCES statistician. They didn’t anticipate that the switch to tablets would cause “a mode difference,” Wilburn says.

But apparently it did. A study found “a pattern of lower performance” on the 2017 writing tests, a discrepancy that made the results not comparable to those from 2011 and prevented their release. A report looking into the 2017 results is still pending.

It would seem sensible, then, to give another writing test as soon as possible—and go back to using laptops. But no. The National Assessment Governing Board (NAGB), which oversees the tests, has decided that first, we need a new writing “framework.”

That takes time. The last time a new writing framework was created, for the 2011 test, it involved “extensive research, outreach, and in-person meetings over the course of 18 months,” with more than 500 individuals participating. Other subjects that needed new frameworks “were ahead of [writing] in line, so to speak,” according to NAGB spokesperson Stephaan Harris. So 2030 is the earliest possible date. (Originally it was 2029, but the pandemic has pushed everything back a year.)

Is a new writing framework necessary? Laura LoGerfo, NAGB’s Assistant Director for Reporting and Analysis, says it has always been Board policy to review frameworks every 10 years or so, to stay current with changes in curriculum and instruction.

But that hasn’t always happened. According to NAGB’s assessment timeline, the same writing framework was used from 1992 to 2007, a period of 15 years. In reading, the framework stayed the same from 1990 to 2009. And the current U.S. history and civics frameworks were created in 1994 and 1998, respectively (although they’re getting a makeover soon).

What’s more, in the case of writing, curriculum and instruction have actually moved closer to the existing NAEP framework. The Common Core writing standards, which were adopted by many states in 2010 and have heavily influenced others, followed the NAEP in dividing writing into three types: writing to convey experience, to explain, and to persuade. The standards also drew on the NAEP’s distribution of questions in requiring that students write roughly equal amounts of each type. Previously, the emphasis in classrooms was mostly on personal narrative (writing “to convey experience”), especially at lower grade levels.

Not to mention that a new framework can impose significant costs, and not just in terms of money, which seems to be tight. (A 2013 administration of the writing test for fourth-graders was canceled for budgetary reasons, according to NAGB and NCES staff.) When NAGB members debated a new reading framework last year, bitter infighting broke out. Beyond that, a new framework, with new test items, inevitably makes it harder to compare a new test with what came before, which is exactly the kind of problem that prevented the 2017 writing results from being reported.

I’m of two minds about all this. I’m shocked and dismayed that we apparently won’t have writing test results for yet another decade, and I find the explanations unconvincing. At the same time, I’m dubious that you can accurately measure writing quality via a stand-alone writing test.

One possibility, which LoGerfo says a few NAGB Board members have raised, is to assess writing using written answers on the NAEP U.S. history and civics tests. For that to happen, those tests would need to be modified to allow for lengthier responses. A more fundamental problem for testing experts, LoGerfo says, is that “assessing writing through these other assessments conflates knowledge of the content area and writing skill.”

That, however, is only a problem if you believe you can evaluate those things separately. If, on the other hand, you see them as inextricably intertwined, you come to a different conclusion: The more you can equalize students’ background knowledge of a topic, the easier it is to isolate and measure differences in writing skill.

Instead, the NAEP writing test—like its reading test—uses items that theoretically don’t require background knowledge. Kids might be asked to make up a story or write about personal experience. Even there, though, knowledge can play a role. One prompt on a fourth-grade pilot test asked kids to imagine they had magically awakened on the sidewalk beneath the Eiffel Tower. That advantages kids who have been to Paris or at least know something about its most famous landmark.

Other prompts call for imagination or introspection: write a story about exploring an island no humans have been to, or an essay about “a time when the way you thought or felt about something changed.” Some kids aren’t as imaginative as others, or are less inclined to introspection. Does that mean they’re worse writers?

Giving separate writing tests—and reading tests—also reinforces the widespread and harmful idea that writing and reading comprehension should be taught in isolation from each other and from subjects like history and science. If those aspects of education were better integrated—and if teachers got better training in how to teach them—NAEP scores would almost certainly be much higher than they are, and gaps in scores would be narrower.

Perhaps the best solution is to use NAEP U.S. history tests to assess writing. That’s far from perfect, because curricula vary—and the proficiency rate of only 15% on those tests suggests that many students know little about the content. But a survey of high school students found a surprising degree of consensus on the identities of the most famous Americans, excluding presidents and first ladies: Martin Luther King, Jr., Rosa Parks, and Harriet Tubman. Perhaps students could be given writing prompts about those historical figures. (States are in a better position than the federal government to give fair tests because they can do more to ensure that all or most of their schools are using the same curriculum—although so far, Louisiana is the only state that’s experimenting with a reading and writing test grounded in common content.)

Failing that, despite my misgivings, I would urge NAGB to keep giving NAEP writing tests and to administer the next round sooner than 2030, using the existing framework and laptops to help ensure comparability with 2011.

Tests like the NAEP can’t address or even identify what’s wrong with our education system. But they can at least sound a general alarm. That alarm rings loudly every two years for reading and math, leading to headlines and rounds of hand-wringing. The hoopla has its downsides, but it’s better than silence. And the last thing we need for writing—which has been neglected for far too long—is another ten years of silence.


