
Radical Eyes for Equity: NCTQ: “The Data Was Effectively Useless”

You can count on two things when the National Council on Teacher Quality (NCTQ) releases one of their “reports.”

First, media will fall all over themselves to report NCTQ’s “findings” and “conclusions” without any critical review of whether the “findings” or “conclusions” are credible (or peer-reviewed, which they aren’t).

Second, NCTQ’s “methods,” “findings,” and “conclusions” are incomplete, pre-determined (NCTQ has a predictable “conclusion” that teacher education/certification is “bad”), and increasingly cloaked in an insincere context of diversity and equity (now teacher education/certification are not just “bad” but especially “bad” for minority candidates).

So the newest NCTQ report has been immediately and uncritically amplified by Education Week (which loves to take stands for “scientific” evidence while also reporting on “findings” and “reports” that cannot pass the lowest levels of expectations for scientific research).

There is great irony in this report and in EdWeek’s coverage, which includes two gems:

“The data was effectively useless,” said Kate Walsh, the president of NCTQ….

Said Walsh: “We do think that states ought to be asking some hard questions of institutions that have really low first-time pass rates. … We shouldn’t be afraid of this data. This data can help programs get better.”

-- FIRST-TIME PASS RATES ON TEACHER LICENSURE EXAMS WERE SECRET UNTIL NOW. SEE THE DATA

Walsh, in the first comment, is referring to data on passing rates for the standardized tests used in teacher licensure, but the irony is that she would be more accurate if she were referring to the NCTQ “report” itself.

The “report” admits that a number of states refused to cooperate (NCTQ has a long history of lies and manipulation to acquire “data,” and many institutions and organizations have wisely stopped complying since the outcomes of NCTQ’s “reports” are predictable); therefore, this NCTQ “report” is similar to all their other “reports” in terms of incomplete data and slipshod methodology (a review of another NCTQ “report” by a colleague and me, for example, noted that NCTQ’s methodology wouldn’t be accepted in an undergraduate course, much less as credible scholarship to drive policy).

NCTQ and EdWeek, however, are typically not challenged since their claims and coverage fit a misleading narrative that the public and political leaders believe (again ironically in the absence of the data that Walsh claims “[w]e shouldn’t be afraid of”)—everything about U.S. public education, from teacher education to teacher quality, is total garbage.

NCTQ is a hack, agenda-driven think-tank, and EdWeek has eroded its journalistic credibility by embracing NCTQ’s “reports” when it serves their need for online traffic (see EdWeek’s obsession with the misleading “science of reading” movement where EdWeek shouts “science!” and cites NCTQ reports that fail the minimum requirements of scientific methodology).

This “report” on standardized testing in the teacher licensure process shouldn’t be viewed as in any way valuable for drawing conclusions about teacher education (teacher ed is a real problem that I have criticized extensively, but NCTQ hasn’t a clue what those problems are, and frankly, they don’t care) or for making policy.

However, what is interesting to notice is that NCTQ has chosen to use a shoddy analysis of previously hidden data on standardized testing to (once again) damn teacher education and traditional certification (both of which actually do deserve criticism and re-evaluation) even though there is another position one could take when analyzing (more rigorously and using a more robust methodology and the peer-review process) this data.

What if the problem with passing rates is not the quality of teacher education, but the inherent inequity built into standardized testing throughout the entire system of formal education?

Across the educational landscape—from NAEP to state-based accountability testing to the SAT and ACT to teacher licensure exams—standardized testing remains deeply inequitable, mostly correlated with socio-economic status, race, and gender in ways that perpetuate inequity.

Recall that in the very recent past, NCTQ was fully on board with value-added methods (VAM) for determining teacher quality, and that movement eventually fell apart under its own weight since narrow forms of measurement (standardized testing) are a lousy way to understand teaching and learning.

If we take Walsh seriously about data (and we shouldn’t), here is a simple principle of gathering and understanding data: one data point (a standardized test score) will never be as powerful or as valuable (valid/reliable) as multiple data points:

“Multiple data sources give us the best understanding of something,” said Petchauer, who was not involved in NCTQ’s report. “I get worried when a single high-stakes standardized test can trump other indicators of what a teacher knows and is able to do.”

-- FIRST-TIME PASS RATES ON TEACHER LICENSURE EXAMS WERE SECRET UNTIL NOW. SEE THE DATA

For one example, the Holy Grail of data credibility for the SAT has always been to be as predictive as GPA (GPA is the result of dozens of data points over years and is, thus, a far more robust data set than one test score). GPA remains more predictive.

Teacher education, like all education, remains inadequate, especially for marginalized populations, but one of the key elements in that claim is the overuse of standardized testing.

If NCTQ and EdWeek were interested in challenging the use of high-stakes testing, then there may be some value in NCTQ’s most recent “report” (although the data is incomplete and the analysis is shoddy).

NCTQ’s “report” makes a big deal out of the licensure pass rates being hidden until their “report,” but once again, NCTQ’s agenda and total lack of scientific credibility as research make this unveiling even worse than leaving the data hidden.

Ultimately, NCTQ’s misinformation campaign could be averted if and when media choose to practice what they preach. EdWeek is obsessed with teachers using the “science of reading,” yet its journalists routinely publish articles citing “reports” that never reach the level of “scientific.”

Whether you are a journalist or a researcher/scholar, you really are no better than the data, evidence, or sources you choose to stand with.

When your data are not credible, neither are you.

This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.
Find the original post here:

The views expressed by the blogger are not necessarily those of NEPC.

P.L. Thomas

P. L. Thomas, Professor of Education (Furman University, Greenville SC), taught high school English in rural South Carolina before moving to teacher education. He...