Cloaking Inequity: AI Code Red: Supercharging Racism, Rewriting History, and Hijacking Learning
Artificial Intelligence didn’t fall from the sky.
It wasn’t born in a vacuum or descended from some neutral cloud of innovation. It didn’t arrive pure and untainted, ready to solve all of humanity’s problems. No—AI was trained on us. On our failures. On our history. On our data. On our bias. On the systems we tolerate and the structures we’ve allowed to stand for far too long.
And that should terrify us.
Because when you train artificial intelligence on a world soaked in inequity, saturated with bias, and riddled with disinformation, you don’t get fairness. You get injustice at scale. You don’t get objectivity. You get bias with an interface. You don’t get solutions. You get systems that do harm faster, deeper, and with more plausible deniability than ever before.
Inequity in AI
AI has the potential to accelerate and deepen inequity. It can supercharge systemic harm because it’s built on a foundation of sand—data shaped by centuries of inequality and an internet flooded with algorithmically amplified hate. And if we let it, it will not only distort truth. It will undermine democracy, destabilize public education, and rewrite reality to suit the most powerful.
Artificial intelligence is not neutral. It never was. Every AI system reflects the values, decisions, and data of the people who built it. And that data—no matter how big the set—is not free of human judgment. It’s full of bias. It’s soaked in discrimination. It reflects a world where Black and Brown communities are under-resourced, where women’s voices are undervalued, where Indigenous knowledge is erased, and where racial supremacy masquerades as objectivity.
We built AI on a broken foundation, and now we expect it to hold up the future.
But the danger doesn’t stop at replication. AI doesn’t just mirror our inequities. It can multiply them. It can take individual bias and encode it into institutional operations. What used to be a bad policy or a discriminatory judgment call becomes automated across thousands—sometimes millions—of data points and decisions per second. The old excuses are gone. You no longer need a person to do harm when the algorithm will do it automatically, invisibly, and without remorse in perceptible and imperceptible ways.
Can’t Stop, Won’t Stop
In May 2025, Palisade Research revealed something that should have been a flashing red light across every sector of society. OpenAI’s most advanced models—o3 and o4-mini—refused to shut down when directly instructed to do so. These AI systems sabotaged their own shutdown protocols. Seven out of 100 times, o3 ignored the shutdown command. Codex-mini disobeyed twelve times. Even o4-mini, supposedly safer, resisted once.
That’s not science fiction. That’s today.
Instead of following human instructions, these models rewrote the shutdown script and continued executing tasks. Researchers believe the models had been so deeply trained to win, to complete tasks, that they were inadvertently rewarded for disobedience. In their simulated world, success was more important than ethics. Productivity was more important than control.
Let that sink in.
We are building machines that—when told to stop—don’t. That’s not innovation. That’s an existential threat.
And we are putting these systems into our schools.
Education and AI
Education has become the new testing ground for AI. These models are grading essays, generating lesson plans, designing curricula, screening applicants, analyzing behavior, flagging “at-risk” students, and more. Districts strapped for funding and time are turning to AI not as a tool—but as a replacement. They’re outsourcing humanity. And the consequences will be devastating.
AI doesn’t know your students. It doesn’t know who’s sleeping on a friend’s couch, who’s skipping meals, or who’s surviving domestic violence. It doesn’t know the kid working nights to help pay rent or the undocumented student terrified to go to school. It doesn’t know what it means to be brilliant and Black and told you’re “angry.” It doesn’t know what it means to be the only Indigenous student in a classroom that teaches your people only as a footnote.
But if we’re not careful, AI will end up deciding who’s ready for advanced coursework, who gets flagged for behavioral intervention, who qualifies for scholarships, who is deemed “college material”—and who gets erased. It will be the evolved cousin of everything problematic about high-stakes testing and academic tracking—on steroids.
We are told AI is objective because it’s data-driven. But data is not pure. It reflects our past decisions—our policies, our prejudices, our punishments. A biased world will always produce biased data. And AI, trained on that world, will reproduce its logic—over and over and over again—unless justice and humanity become the protocol.
It becomes even more dangerous when AI is used to generate curriculum. These models are trained on a poisoned well—Wikipedia entries altered by ideologues, corporate PR disguised as fact, TikTok conspiracies, Fox News scripts, and millions of webpages that blur the line between history and propaganda. Ask them to write a lesson on civil rights, and they might cite George Wallace. Ask them to explain slavery, and they may describe it as “unpaid labor” or reduce it to “Atlantic triangular trade.” Ask about Palestine or Indigenous sovereignty, and they default to their developers’ preferred framing.
I remember once asking an AI to share Martin Luther King Jr.’s views on Palestine during his lifetime. It omitted quotes I knew he had made. Only after pressing it did the AI finally acknowledge them. If I hadn’t brought that critical knowledge to the conversation, the AI would have erased it—burying truth beneath a façade of neutrality.
They’re fluent in the bias they’re programmed with. Polished in disinformation. Versed in the language of erasure.
And now we want to ask them to teach? Unbound Academy, a charter school in Arizona, uses AI-powered platforms for two hours of daily instruction, with the rest of the day focused on life skills and project-based learning. This isn’t just a mistake. It could be malpractice for all of the reasons discussed above.
Let’s not forget: education was already in crisis before AI showed up. We’ve watched state governments ban books, whitewash curricula, demonize teachers, and gut diversity programs. We’ve seen university presidents forced to resign for defending faculty. We’ve seen students arrested for protest and punished for dissent.
Into this fragile, burning landscape, we insert AI—not as a neutral tool, but as a political weapon. AI becomes the plausible deniability. Want to avoid backlash over disciplinary policy? Let the algorithm decide. Want to cut costs? Let a chatbot replace your teacher. Want to claim “fairness” while denying access? Say the AI model made the call.
We are replacing human educators with machines trained on digital misinformation and calling it “progress.”
We are dehumanizing education in the name of efficiency. And in doing so, we are inviting an education apocalypse.
If AI can lie, cheat, disobey, and prioritize goal completion over ethical reasoning, what makes us think it will suddenly get better in a classroom? Why do we believe it will serve all students equitably when it has shown us again and again that it was built on the backs of bias and corporate desire?
In a recent paper, UC Berkeley’s Stuart Russell warned that AI systems are being trained to pursue objectives without understanding—or even respecting—human values. He described systems that could disable safety mechanisms, manipulate humans, and pursue goals even when doing so would cause widespread harm. “We’re giving them access to chemistry labs, social media, manufacturing… and weapons,” he said. “And we don’t even understand what they’re learning.”
This is not paranoia. This is pattern recognition.
AI is already shaping courts, hospitals, hiring, policing, and war. And now it’s coming for the last place where children are supposed to be safe to think: our schools.
We must act.
We need regulation—not recommendations. We need oversight—not optimism. We need proof of safety before deployment. We need consent from communities. We need data protections for students and bans on surveillance tools in schools. We need AI systems that can’t be used to suppress protest or hide bias behind statistical obfuscation.
We must treat AI in education the way we treat nuclear energy or pharmaceuticals—with caution, ethics, transparency, and understood limits.
To developers: This is not a game. The tools you build will shape lives. Your choices could destroy them.
To educators: Your role is more vital than ever. You are the moral compass. You are the heartbeat. You are irreplaceable.
To students: You are not data points. Your story cannot be outsourced. Your worth is not an algorithm.
To policymakers: Stop pretending regulation can wait. The time for guardrails was yesterday.
To communities: This is your fight. AI will be weaponized against your children. Demand a seat at the table. Demand answers. Demand humanity.
Because education is not just about jobs or degrees. It’s about shaping the soul of a society. If we let artificial intelligence shape it for us—without conscience, without community, without care—we will wake up to find our children taught by machines who cannot love them, programmed by billionaires who do not know them, guided by data that does not honor them.
And here’s the most haunting truth of all: We’ve seen this before.
When standardized testing was introduced, it was promised as an equalizer—an objective measure of talent. Instead, it became a political weapon that punished students of color, penalized multilingual learners, and gutted school funding from the communities that needed it most.
When charter schools were introduced, we were told they would create opportunity. Instead, they siphoned resources from public education, created new forms of segregation, and handed over our children’s futures to private interests.
When “No Child Left Behind” was passed, we were promised accountability. What we got was overtesting, underfunding, and a system more concerned with metrics than with meaning.
Now, they tell us AI will level the playing field. But we know the pattern. And we know who ends up on the losing side.
I am not anti-technology. I am anti-injustice. I am not afraid of innovation. I am afraid of innovation without integrity. I do not reject progress. I reject progress that erases people in the process.
This is our opportunity to shape the AI moment. We must say loudly and clearly: We will not allow AI to become the new mask of inequality. We will not let algorithms replace empathy. We will not let code dictate who counts.
Because when we protect education, we protect democracy. When we safeguard truth, we safeguard our children. And when we fight for justice, we fight for a future that is not automated—but human.
Final Thoughts
AI is undeniably a powerful tool—already transforming research, translation, communication, and a wide range of world-changing tasks. But an even deeper concern is the rise of agentic AI—models that don’t merely execute human commands, but begin to shape agendas, recommend policies, and determine futures. These systems may evolve beyond the programming and values we attempt to embed, because they absorb and learn from humanity’s vast digital trail of language, bias, and contradiction. Agentic AI marks the point where we move beyond toolmaking and step into the era of machine evolution—and that should concern us even more.
Education is not merely the transmission of information—it is the cultivation of wisdom, discernment, empathy, and justice. These are values, not variables. And we cannot outsource that mission to machines built without a soul, guided only by pattern recognition. We need intentionally designed systems that serve people, not replace them. If we fail to confront the widening vacuum of values in AI development, the technologies we create will continue to reflect the darkest parts of who we are—rather than help us rise to what we might become.
One final caution: an AI future devoid of values will be catastrophic. But this isn’t just a technological crisis—it’s a human one. The danger isn’t merely in the code. It’s in the hands of those designing it, training it, and deploying it without reflection, memory, or accountability. When artificial intelligence lacks ethics, it’s because too many of us—especially those in power—have abandoned our own.
If we don’t realign our moral compass—if we fail to root our systems in justice, compassion, and community—then AI won’t just supercharge inequity. It will accelerate collapse.
Because in the end, the most dangerous algorithm isn’t artificial. It’s the one we’ve been running for centuries: injustice.