Educators Over AI

Because No Bot Can Build a Beloved Classroom

By Jesse Hagopian

Illustrator: Boris Séméniako

In April, Secretary of Education Linda McMahon stood onstage at a major edtech conference in San Diego and declared with conviction that students across the United States would soon benefit from “A1 teaching.” She repeated it over and over — “A1” instead of “AI.” “There was a school system that’s going to start making sure that 1st graders, or even pre-Ks, have A1 teaching every year. That’s a wonderful thing!” she assured the crowd.

The moment quickly went viral. Late-night hosts roasted her. A.1. Steak Sauce posted a mock advertisement: “You heard her. Every school should have access to A.1.”

Funny — until it wasn’t. Because behind the gaffe was something more disturbing: The person leading federal education policy wants to replace the emotional and intellectual process of teaching and learning with a mechanical process of content delivery, data extraction, and surveillance masquerading as education.

This is part of a broader agenda being championed by billionaires like Bill Gates. “The AIs will get to that ability, to be as good a tutor as any human ever could,” Gates said at a recent conference for investors in educational technology. As one headline bluntly summarized: “Bill Gates says AI will replace doctors, teachers within 10 years.”

This isn’t just a forecast; it’s a capitalist dream of replacing relationships with code and scalable software while public institutions are gutted in the name of “innovation.”

Software Is Not Intelligent

We need to stop pretending that algorithms can think and stop believing that software is intelligent. While the term “AI” is sometimes necessary to be understood, we should begin to introduce and use more accurate language.

And no, I’m not suggesting we start calling it “A1”— unless we’re talking about how it’s being slathered on everything whether we asked for it or not. What we’re calling AI is better understood as Artificial Mimicry: a reflection without thought, articulation without a soul.

Philosopher Raphaël Millière explains that what these systems are doing is not thinking or understanding, but using what he calls “algorithmic mimicry”: sophisticated pattern matching that mimics human outputs without possessing human cognition. He writes that large pre-trained models like ChatGPT or DALL-E 2 are more like “stochastic chameleons” — not merely parroting back memorized phrases, but blending into the style, tone, and logic of a given prompt with uncanny fluidity. That adaptability is impressive — and can be dangerous — precisely because it can so easily be mistaken for understanding.

So-called AI can be useful in certain contexts. But what we’re calling AI in schools today doesn’t think, doesn’t reason, doesn’t understand. It guesses. It copies. It manipulates syntax and patterns based on probability, not meaning. It doesn’t teach — it prompts. It doesn’t mentor — it manages.

In short, it mimics intelligence. But mimicry is not wisdom. It is not care. It is not pedagogy. As WIRED magazine writes, “ChatGPT and Bard don’t really ‘know’ anything, but they are very good at figuring out which word follows another, which starts to look like real thought and creativity when it gets to an advanced enough stage.”  

Real learning, as the renowned psychologist Lev Vygotsky showed, is a social process. It happens through dialogue, relationships, and shared meaning-making. Learning unfolds in what Vygotsky called the Zone of Proximal Development: that space between what a learner can do alone and what they can achieve with the guidance of a more experienced teacher, peer, or mentor — someone who can respond with care, ask the right question, and scaffold the next step.

AI can’t do that. 

It can’t sense when a student’s silence means confusion or when it means trauma. It can’t notice a spark in a student’s eyes when they connect a concept to their lived experience. It can’t see the brilliance behind a messy, not fully developed idea, or the potential in an unconventional voice. It cannot build a beloved community.

It can generate facts, follow up with questions, offer corrections, give summaries, or suggest next steps — but it can’t recognize the emotional weight of confusion or the quiet excitement of an intellectual breakthrough.

That work — the real work of teaching and learning — cannot be automated.

Schools Need More Instructional Assistants and Less Artificial Intelligence

AI tools like MagicSchool, Perplexity, and SchoolAI do offer convenience: grammar fixes, sentence rewording, tone improvements. But they also push students toward formulaic, high-scoring answers, nudging them toward efficient compliance rather than intellectual risk. Such tools teach conformity, not originality.

Recently, my son used Raina, MagicSchool’s AI chatbot, during one of his 6th-grade classes to research his project on Puerto Rico. The appeal was obvious — instant answers, no need to sift through dense texts or multiple websites. But Raina never asked the deeper questions: Why does a nation that calls itself the “land of the free” still hold Puerto Rico as a colony? How do AI systems like Raina contribute to the climate crisis that is threatening the future of the island? Raina delivered tidy answers. But raising more complicated questions — and helping students wrestle with the emotional weight of the answers — is the work of a human teacher.

AI can help simplify texts or support writing, but it can also miseducate. Over time, it trains students to mimic what the algorithm deems “effective,” rather than develop their own voice or ideas. Reading becomes extraction, not connection. The soul of literature is lost when reading becomes a mechanical task, not an exchange of ideas and emotions between human beings.

Many teachers, underpaid and overwhelmed, turn to AI out of necessity.

But we have to ask: Why, in the wealthiest country in the history of the world, are class sizes so large — and resources so scarce — that teachers are forced to rely on AI instead of IAs? Why aren’t we hiring more educators to lower class sizes? Why aren’t we hiring more librarians to curate leveled texts and giving teachers more planning time so they can better tailor learning themselves?

AI doesn’t just flatten learning — it can now monitor students’ digital behavior in deeply invasive ways. Marketed as safety tools, these systems track what students write, search, or post, even on school-issued devices taken home — extending surveillance into students’ personal lives. Instead of funding counselors, schools spend thousands on surveillance software (one New Jersey district spent $58,000). In Vancouver, Washington, a data breach exposed how much personal information, including mental health and LGBTQ+ identities, was quietly harvested. One study found almost 60 percent of U.S. students censor themselves when monitored. As Encode Justice leaders Shreya Sampath and Marisa Syed put it, students care that their “data is collected and commodified,” and that their peers “censor themselves in learning environments meant to encourage exploration.”

AI use in schools is uneven and largely unregulated, yet districts are increasingly promoting its adoption. Even without a clear policy framework, the message many educators receive is that AI is coming, and they are expected to embrace it. Yet this push often comes without serious discussion of pedagogy, ethics, or the structural inequities AI may actually deepen — especially in under-resourced schools. 

Beware of the Digital Elixir

In today’s AI gold rush, education entrepreneurs are trading in the old scripts of standardization for sleek promises of personalization — touting artificial intelligence as the cure for everything from unequal tutoring access to teacher burnout. Take Salman Khan, founder of Khan Academy, who speaks in lofty terms about AI’s potential. Khan recently created the Khanmigo chatbot tutor and described it as a way to “democratize student access to individualized tutoring,” claiming it could eventually give “every student in the United States, and eventually on the planet, a world-class personal tutor.”

Khan’s new book, Brave New Words, reads like a love letter to AI — an emotionless machine that, fittingly, will never love him back. It’s hard to ignore the irony of the title, an echo of Aldous Huxley’s dystopian novel Brave New World, in which individuality is erased, education is mechanized, and conformity is maintained through technological ease. But rather than treat Huxley’s vision as a warning, Khan seems to take it as a blueprint; his book is a case study in missing the point. In one example, Khan praises Khanmigo’s ability to generate a full World War II unit plan — complete with objectives and a multiple-choice classroom poll. Students are asked to select the “most significant cause” of the war:

    a.  Treaty of Versailles
    b.  Rise of Hitler
    c.  Expansionist Axis policies
    d.  Failure of the League of Nations

But the hard truths are nowhere to be found. Khanmigo, for example, doesn’t prompt students to wrestle with the fact that Hitler praised the United States for its Jim Crow segregation laws, eugenics programs, and its genocide against Native Americans.

Like so many peddlers of snake oil education “cures” before him, Khan has pulled up to the schoolhouse door with a wagon full of digital elixirs. It’s classic edtech hucksterism: a flashy pitch, sweeping claims about revolutionizing education, and recycled behaviorist ideas dressed up as innovation. Behaviorism — a theory that reduces learning to observable changes in behavior in response to external stimuli — treats students less as thinkers and more as programmable responders. Khan’s vision of AI chatbots replacing human tutors isn’t democratizing; it’s dehumanizing.

This isn’t the kind of education the wealthy want for their own children. They get small classes, music teachers, rich libraries, arts and debate programs, and human mentors. Our kids are offered AI bots in overcrowded classrooms. It’s a familiar pattern — standardized, scripted learning for the many; creativity and care for the few. Elites claim AI will “level the playing field,” but they offload its environmental costs onto the public. Training large AI models consumes enormous amounts of energy and water, and fuels the climate crisis. The same billionaires pushing AI build private compounds to shield their children from the damage their industries cause — instead of regulating tech or cutting emissions, they protect their own from both the pedagogy and the fallout of their greed.

Em Winokur is an Oregon school librarian who joined the Multnomah Education Service District’s “AI Innovators” cohort to offer a critical voice in a conversation dominated by hype and industry influence. She has seen the contradictions firsthand. “Edtech companies aren’t invested in our students’ growth or in building a more caring world,” Winokur said. “What we need isn’t more AI — it’s more teachers, support staff, and real training, especially after COVID left so many educators underprepared.”

Of course, hedge fund managers, CEOs, and the politicians they bankroll will scoff at this vision. They’ll call it impractical, unaffordable, unrealistic. They’ll argue that the economy can’t support more educators, school psychologists, smaller classes, or fully staffed school libraries. And then, without missing a beat, they’ll offer AI as the solution: cheaper, faster, easier. Theirs is a vision for a hollowed-out, mechanized imitation of education.

Beyond the Bot: Reclaiming Human Learning

Many educators and students aren’t passively accepting this AI-driven future. Youth-led groups like Encode Justice are at the forefront of efforts to regulate AI — by banning facial recognition in schools, requiring transparency in how student data is collected and used, and demanding accountability from the tech companies profiting off surveillance. Their advocacy includes legislative campaigns, open letters, and direct testimony to policymakers, all grounded in the belief that young people deserve a voice in shaping how technology impacts their lives. Meanwhile, organizations like the Algorithmic Justice League are challenging the spread of biometric surveillance and warning of its racial biases.

AI in schools isn’t progress — it’s a sign of deeper problems with U.S. schooling, ones that reveal how far we’ve strayed from the purpose of education. For decades, policymakers and profiteers have swapped human care for high-stakes testing, scripted curriculum, and surveillance. AI isn’t the disease — it’s a symptom of a colonizer’s model of schooling that is extractive and dehumanizing, rather than liberating. That means regulating AI isn’t enough — we must dismantle the logic that brought it in.

Dismantling the logic behind AI in schools means rejecting the entire framework that treats education as a commodity instead of a common good. It means funding smaller class sizes, not software. Valuing joy, creativity, and critical thinking — not compliance and metrics. Investing in human relationships instead of automation. Teaching students to ask questions, not just answer them. 

That begins by reclaiming the purpose of education — not to rank and sort, but to teach us how to live together with justice, dignity, and love. A machine can deliver content, but it cannot offer love. And without love — without a classroom rooted in trust, justice, and humanity — there is no true learning. As bell hooks reminds us:

The loving classroom is one in which students are taught, both by the presence and practice of the teacher, that critical exchange can take place without diminishing anyone’s spirit. . . . Love in the classroom prepares teachers and students to open our minds and hearts. It is the foundation on which every learning community can be created. Love will always move us away from domination in all its forms. Love will always challenge and change us.

Trying to train a bot to be a teacher is a fool’s errand. Expecting AI to care for our children is like planting plastic flowers and expecting them to grow. Let’s fight instead for schools where human beings explore what it means to be alive together — and where love is the soil from which deep learning blossoms. 

Jesse Hagopian is a Rethinking Schools editor, a high school teacher, and on the staff of the Zinn Education Project. He is the co-editor of the Rethinking Schools books Teaching for Black Lives and Teaching Palestine. He also serves on the Black Lives Matter at School steering committee and is the director of the Black Education Matters Student Activist Award.

An earlier version of this article appeared in Truthout.