
Failing Data in Education Reform

Few phrases have been written or uttered more often than "data-driven instruction" or "evidence-based decision making" since No Child Left Behind (NCLB) codified "scientifically-based" practices in 2001.

The accountability era that began in the early 1980s intensified a near mania for data in the U.S. that can be traced back to the first few decades of the twentieth century and the promises associated with quantifying student learning and teacher quality through standardized testing. Although the past century and the more recent thirty-year cycle of accountability based on standards and standardized testing have not delivered on those promises (see Hout & Elliott, 2011), "No Excuses" Reformers, including the current Department of Education headed by Secretary of Education Arne Duncan, remain steadfast in pursuing better tests based on better standards.

The "No Excuses" Reform movement is driven by bully politics that ironically includes an unbending faith in data and evidence while simultaneously ignoring (or misreading) the data and the evidence. One stark example of that pattern was exposed by Jack Hassard at Anthony Cody's Living in Dialogue blog:

"In May, 2012, the National Council on Teacher Quality (NCTQ) issued a report entitled: What Teacher Education Programs Teach About K - 12 Assessment. Anthony Cody mentioned this study in a recent post entitled Payola Policy: NCTQ Prepares its Hit on Schools of Education.

"The title intrigued me, so I went over to the NCTQ website, and read and studied the report which is about what education courses teach about assessment. This post is my review of the NCTQ study, and I hope after you finish reading the post you will realize how bogus reports like these are, given the quality of research that professors of education have been doing for decades. The study reviewed here would never have been published in a reputable journal of research in education, not only in the U.S., but in any other country in the world. I'll make it clear why I make this claim."

From NCLB to Race to the Top to international PISA comparisons to cyclical handwringing about NAEP and SAT scores to growing calls for value-added methods of evaluating teachers, we have ample data to conclude that we have historically failed, and are currently failing, data in education and education reform.

 


In the NCTQ report discredited by Hassard, the claim that teachers are not properly prepared to function efficiently in a data-driven education world is, paradoxically, evidence that "No Excuses" Reformers are themselves data-challenged.

"Data can be an immensely powerful asset, if used in the right way," explains Jonathan Gray in "What data can and cannot do," adding: "But as users and advocates of this potent and intoxicating stuff we should strive to keep our expectations of it proportional to the opportunity it represents." [1]

Further, Gray offers five cautions about data: (1) "Data is not a force unto itself," (2) "Data is not a perfect reflection of the world," (3) "Data does not speak for itself," (4) "Data is not power," and (5) "Interpreting data is not easy."

Here, then, let's consider how Gray's warnings can inform our awareness of the increasing misuse of data in education reform:

Data in education should not be a force unto itself. In other words, education reform should move away from teaching to the test and the relentless quantification of both teaching and learning. Currently, "No Excuses" Reformers are seeking to intensify both, thus making data "a force unto itself," decontextualized and corrosive.

Data in education, specifically quantified data, are not perfect reflections of learning and teaching. Standardized test scores are approximations, at best, of teaching and learning; and standardized test scores remain powerfully correlated with factors beyond the control of schools or teachers. As Gray notes: "Data is often incomplete, imperfect, inaccurate or outdated. It is more like a shadow cast on the wall, generated by fallible human beings, refracted through layers of bureaucracy and official process." Thus, increasing testing and raising the stakes attached to those tests for students and teachers are profound misuses of data.
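To make Gray's "shadow on the wall" point concrete, here is a minimal sketch in Python. It is purely hypothetical, not drawn from Gray or any cited study: every weight and distribution below is an invented assumption. It treats a test score as a noisy function of both actual learning and out-of-school advantage, then asks which one the score tracks more closely.

```python
import random

random.seed(1)

# Hypothetical model: a test score reflects real learning plus
# out-of-school advantage plus plain measurement noise.
# All weights and distributions here are illustrative assumptions.
def simulated_score(learning, advantage):
    noise = random.gauss(0, 5)           # day-to-day measurement error
    return 0.5 * learning + 0.4 * advantage + noise

students = []
for _ in range(1000):
    advantage = random.gauss(50, 15)     # proxy for family income/education
    learning = random.gauss(50, 10)      # what schools actually influence
    students.append((learning, advantage, simulated_score(learning, advantage)))

# Pearson correlation, computed by hand to keep the sketch self-contained
def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

learnings = [s[0] for s in students]
advantages = [s[1] for s in students]
scores = [s[2] for s in students]
print("score ~ learning:  %.2f" % corr(learnings, scores))
print("score ~ advantage: %.2f" % corr(advantages, scores))
```

Under these invented weights, the simulated score tracks the out-of-school factor at least as closely as it tracks learning itself, which is one concrete sense in which a raw score is only a "shadow" of teaching.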

Data in education, specifically quantified data, should not speak for themselves. Each year, SAT data are released, and despite the College Board's own caution not to rank and compare states by those data, journalists and politicians across the U.S. allow SAT data to speak for themselves, ignoring disproportionate populations of test takers from state to state and ignoring the high correlations between SAT scores and students' parental income and levels of education. Again, as Gray warns: "In many ways official datasets resemble official texts: we need to learn how to read and interpret them critically, to read between the lines, to notice what is absent or omitted, to understand the gravity and implications of different figures, and so on. We should not imagine that anyone can easily understand any dataset, any more than we would think that anyone can easily read any policy document or academic article."
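Here is a small, invented illustration of the participation problem (the numbers are hypothetical, not actual College Board figures): when only a state's strongest students sit for the SAT, its reported average can beat a state where nearly everyone tests, even though the second state performs better across the board.

```python
import random

random.seed(2)

# Hypothetical populations: State B's students score higher overall,
# but nearly all of them take the test; in State A only the
# college-bound top slice does. All numbers are invented.
def state_scores(mean, n):
    return sorted(random.gauss(mean, 100) for _ in range(n))

state_a = state_scores(1000, 10000)   # weaker overall population
state_b = state_scores(1050, 10000)   # stronger overall population

takers_a = state_a[-500:]             # ~5% participation: top students only
takers_b = state_b[-8000:]            # ~80% participation

avg = lambda xs: sum(xs) / len(xs)
print("State A reported SAT mean: %.0f (5%% tested)" % avg(takers_a))
print("State B reported SAT mean: %.0f (80%% tested)" % avg(takers_b))
print("True population means:     A=%.0f, B=%.0f" % (avg(state_a), avg(state_b)))
```

Ranking State A above State B on reported means mistakes a participation effect for a performance effect, which is precisely the comparison the College Board cautions against.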

Education data should not be power. For too long, quantified data in education have been used as political baseball bats to bludgeon public schools and more recently public school teachers. As well, quantified data have for too long been used by teachers to bludgeon students. The misuse of data shifts the locus of power outside any agents in the education process—teachers and students—and places that agency in the mirage of objectivity. In effect, quantified data are the antithesis of human agency.

Interpreting education data is complex, and better left to people with expertise and experience in education, measurement, and research. Several years ago, Gerald Bracey recognized the increasing misuse of data by the inexpert, especially as those data were being misreported in the media (see Yettick, 2009, and Molnar, 2001). Bracey recognized that think tanks were snookering the public and swaying education reform policy (see Welner, et al., 2010). Data must not be interpreted without context, and data always pose problems of distinguishing correlation from causation. Politicians, think tank advocates, journalists, and billionaires may have many strong qualities and areas of expertise, but that does not mean any of them understand educational data. (I highly recommend the Shanker Blog and School Finance 101 as models of careful and expert considerations of educational data.)
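As one hedged illustration of the correlation/causation trap (a toy model, not drawn from Bracey or any cited study): suppose parental income influences both test scores and enrollment in a hypothetical after-school program. The program will then correlate with scores even if it has no effect at all.

```python
import random

random.seed(3)

# Toy confounding model: income drives both program enrollment and
# test scores; the program itself is assumed here to do nothing.
students = []
for _ in range(5000):
    income = random.gauss(50, 15)
    enrolled = income + random.gauss(0, 10) > 55   # richer kids enroll more
    score = 0.8 * income + random.gauss(0, 10)     # program absent from formula
    students.append((enrolled, score))

in_program = [s for e, s in students if e]
out_program = [s for e, s in students if not e]
avg = lambda xs: sum(xs) / len(xs)
print("mean score, enrolled:     %.1f" % avg(in_program))
print("mean score, not enrolled: %.1f" % avg(out_program))
```

A naive reading of that gap credits the program; in this toy model the entire gap is the income confounder, the kind of inferential error Bracey spent years documenting.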

Gray ends his piece with optimism: "I'm sure as time goes by we'll have a more balanced, critical appreciation of the value of data, and its role within our information environment."

I struggle with such optimism in the context of education reform, however, because, as Anthony Cody has explained, current "No Excuses" Reformers are trapped in the blinders of groupthink that reinforce their self-righteous zeal.

Educators and researchers are therefore under more pressure than ever to use our own expertise with evidence to counter that zeal with the complex and powerful weight of the data: "We should strive to cultivate a critical literacy with respect to our subject matter," as Gray suggests.

[1] Throughout Gray's article, "data" is followed by singular verbs, which is nonstandard; I retain the published wording in all quotes while conforming to standard usage of "data" as plural in my own sentences.


P.L. Thomas

P. L. Thomas, Professor of Education (Furman University, Greenville SC), taught high school English in rural South Carolina before moving to teacher education. He...