Debunking the Persistent Myth of Lagging U.S. Schools (Guest Post by Alfie Kohn)
Beliefs that are debatable or even patently false may be repeated so often that at some point they come to be accepted as fact. We seem to have crossed that threshold with the claim that U.S. schools are significantly worse than those in most other countries. Sometimes the person who parrots this line will even insert a number -- “We’re only ____th in the world, you know!” -- although, not surprisingly, the number changes with each retelling.
The assertion that our students compare unfavorably to those in other countries has long been heard from politicians and corporate executives whose goal is to justify various “get tough” reforms: high-stakes testing, a nationalized curriculum (see under: Common Core “State” Standards), more homework, a longer school day or year, and so on. But by now the premise is apt to be casually repeated by just about everyone -- including educators, I’m sorry to say -- and in the service of a wide range of prescriptions and agendas. Just recently I’ve seen it on a petition to promote teaching the “whole child” (which I declined to sign for that reason), in a documentary arguing for more thoughtful math instruction, and in an article by the progressive journalist Barbara Ehrenreich.
Unsurprisingly, this misconception has filtered out to the general public. According to a brand-new poll, a plurality of Americans -- and a majority of college graduates! -- believe (incorrectly) that American 15-year-olds are at the bottom when their scores on tests of science knowledge are compared to students in other developed countries.
A dedicated group of educational experts has been challenging this canard over the years, but their writings rarely appear in popular publications and each typically focuses on just one of the many problems with the claim. Here, then, is a concise overview of the multiple responses you might offer the next time you hear someone declare that American kids come up short. (First, though, I'd suggest politely inquiring as to the evidence for his or her statement. The wholly unsatisfactory reply you’re likely to receive may constitute a rebuttal in its own right.)
1. Even taking the numbers at face value, the U.S. fares reasonably well. Results will vary depending on subject matter, age, which test is being used, and which year’s results are being reported. It’s possible to cherry-pick scores to make just about any country look especially good or bad. The U.S. looks considerably better when we focus on younger students, for example -- so, not surprisingly, it’s the high school numbers that tend to be cited most often. (When someone reduces all student performance to a single number, you can bet it's the one that casts our schools in the worst possible light.)
But even with older students, there may be less to the claim than meets the eye. As an article in Scientific American noted a few years back, most countries’ science scores were actually pretty similar. That's worth keeping in mind whenever a new batch of numbers is released. If there’s little (or even no) statistically significant difference among, say, the nations placing third through ninth, it would be irresponsible to cite those rankings as if they were meaningful.
Overall, when a pair of researchers carefully reviewed half a dozen different international achievement surveys conducted from 1991 to 2001, they found that “U.S. students have generally performed above average in comparisons with students in other industrialized nations.” And that still seems to be the case with the most recent data, which include math and science scores for grade 4, grade 8, and age 15, as well as reading scores for grade 4 and age 15. Of the eight results, the U.S. scored above average in five, average in two, and below average in one. Not exactly the dire picture that’s typically painted.
2. What do we really learn from standardized tests? While there are differences in quality between the most commonly used tests (e.g., PISA, TIMSS), the fact is that any one-shot, pencil-and-paper standardized test -- particularly one whose questions are multiple-choice -- offers a deeply flawed indicator of learning as compared with authentic classroom-based assessments. One of them taps students’ skill at taking standardized tests, which is a skill unto itself; the other taps what students have learned and what sense they make of, and what they can do with, what they've learned. One is a summary statistic labeled “student achievement”; the other is an account of students’ achievements. Anyone who cites the results of a test is obliged to defend the construction of the test itself, to show that the results are not only statistically valid but meaningful. Needless to say, very few people who say something like “the U.S. is below average in math” have any idea how math proficiency has been measured.
3. Are we comparing apples to watermelons? Even if the tests were good measures of important intellectual proficiencies, the students being tested in different countries aren’t always comparable. As scholars Iris Rotberg and the late Gerald Bracey have pointed out for years, some countries test groups of students who are unrepresentative with respect to age, family income, or number of years spent studying science and math. The older, richer, and more academically selective a cohort of students in a given country, the better that country is going to look in international comparisons.
4. Rich American kids do fine; poor American kids don’t. It’s ridiculous to offer a summary statistic for all children at a given grade level in light of the enormous variation in scores within this country. To do so is roughly analogous to proposing an average pollution statistic for the United States that tells us the cleanliness of “American air.” Test scores are largely a function of socioeconomic status. Our wealthier students perform very well when compared to other countries; our poorer students do not. And we have a lot more poor children than do other industrialized nations. One example, supplied by Linda Darling-Hammond: “In 2009 U.S. schools with fewer than 10 percent of students in poverty ranked first among all nations on PISA tests in reading, while those serving more than 75 percent of students in poverty scored alongside nations like Serbia, ranking about fiftieth.”
5. Why treat learning as if it were a competitive sport? All of these results emphasize rankings more than ratings, which means the question of educational success has been framed in terms of who’s beating whom.
a) Education ≠ economy. If our reason for emphasizing students' relative standing (rather than their absolute achievement) has to do with “competitiveness in the 21st-century global economy” -- a phrase that issues from politicians, businesspeople, and journalists with all the thoughtfulness of a sneeze -- then we would do well to ask two questions. The first, based on values, is whether we regard educating children as something that’s primarily justified in terms of corporate profits.
The second question, based on facts, is whether the state of a nation’s economy is meaningfully affected by the test scores of students in that nation. Various strands of evidence have converged to suggest that the answer is no. For individual students, school achievement is only weakly related to subsequent workplace performance. And for nations, there’s little correlation between average test scores and economic vigor, even if you try to connect scores during one period with the economy some years later (when that cohort of students has grown up). Moreover, Yong Zhao has shown that “PISA scores in reading, math, and sciences are negatively correlated with entrepreneurship indicators in almost every category at statistically significant levels.”
b) Why is the relative relevant? Once we’ve debunked the myth that test scores drive economic success, what reason would we have to fret about our country’s standing as measured by those scores? What sense does it make to focus on relative performance? After all, to say that our students are first or tenth on a list doesn’t tell us whether they’re doing well or poorly; it gives us no useful information about how much they know or how good our schools are. If all the countries did reasonably well in absolute terms, there would be no shame in being at the bottom. (Nor would “average” be synonymous with “mediocre.”) If all the countries did poorly, there would be no glory in being at the top. Exclamatory headlines about how “our” schools are doing compared to “theirs” suggest that we’re less concerned with the quality of education than with whether we can chant, “We’re Number One!”
c) Hoping foreign kids won’t learn? To treat schooling as if it were a competitive sport is not only irrational but morally offensive. If our goal is for American kids to triumph over those who live elsewhere -- to have a better ranking -- then the implication is that we want children who live in other countries to fail, at least in relative terms. We want them not to learn successfully just because they’re not Americans. That’s built into the notion of “competitiveness” (as opposed to excellence or success), which by definition means that one individual or group can succeed only if others don’t. This is a troubling way to look at any endeavor, but where children are concerned, it’s indefensible. And it’s worth pointing out these implications to anyone who uncritically cites the results of an international ranking.
Moreover, rather than defending policies designed to help our graduates “compete,” I’d argue that we should make decisions on the basis of what will help them to develop the skills and disposition to collaborate effectively. Educators, too, ought to think in terms of working with -- and learning from -- their counterparts in other countries so that children everywhere will become more proficient and enthusiastic learners. But every time we rank “our” kids against “theirs,” that becomes a little less likely to happen.
1. W. Wayt Gibbs and Douglas Fox, “The False Crisis in Science Education,” Scientific American, October 1999: 87-92.
2. Erling E. Boe and Sujie Shin, “Is the United States Really Losing the International Horse Race in Academic Achievement?” Phi Delta Kappan, May 2005: 688-695.
3. See, for example, Alfie Kohn, The Case Against Standardized Testing (Heinemann, 2000); or Phillip Harris et al., The Myths of Standardized Tests (Rowman & Littlefield, 2011).
4. For example, see Iris C. Rotberg, “Interpretation of International Test Score Comparisons,” Science, May 15, 1998: 1030-31.
5. Linda Darling-Hammond, “Redlining Our Schools,” The Nation, January 30, 2012: 12. Also see Mel Riddile, “PISA: It’s Poverty Not Stupid,” The Principal Difference [NASSP blog], December 15, 2010; and Martin Carnoy and Richard Rothstein, “What Do International Tests Really Show About U.S. Student Performance?”, Economic Policy Institute report, January 28, 2013.
6. Keith Baker, “High Test Scores: The Wrong Road to National Economic Success,” Kappa Delta Pi Record, Spring 2011: 116-20; Zalman Usiskin, “Do We Need National Standards with Teeth?” Educational Leadership, November 2007: 40; and Gerald W. Bracey, “Test Scores and Economic Growth,” Phi Delta Kappan, March 2007: 554-56. “The reason is clear,” says Iris Rotberg. “Other variables, such as outsourcing to gain access to lower-wage employees, the climate and incentives for innovation, tax rates, health-care and retirement costs, the extent of government subsidies or partnerships, protectionism, intellectual-property enforcement, natural resources, and exchange rates overwhelm mathematics and science scores in predicting economic competitiveness” (“International Test Scores, Irrelevant Policies,” Education Week, September 14, 2001: 32).
7. Yong Zhao, “Flunking Innovation and Creativity,” Phi Delta Kappan, September 2012: 58. Emphasis added.
This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.
The views expressed by the blogger are not necessarily those of NEPC.