Are You Being Taught to Hate AI? The Manufactured Panic Over Artificial Intelligence
- Legit Politic
- Dec 22, 2025

Public fear about artificial intelligence didn’t arise in a vacuum. It was taught, reinforced, and amplified — often through trusted institutions — even as real-world data has failed to match the panic.
An examination of funding networks, media fellowships, and labor-market research suggests that the anti-AI narrative dominating headlines may be less organic than it appears.
The fear gap
Polls show Americans are increasingly worried that AI will eliminate jobs and destabilize society. But a major analysis from the Yale Budget Lab finds little evidence to support that fear — at least so far.
The Yale researchers report that since the release of generative AI tools like ChatGPT, the U.S. labor market has not experienced “discernible disruption.” Changes in employment trends largely predate AI’s rise, and occupations most exposed to AI have not seen unusual job losses. Measures of AI exposure and automation show no meaningful relationship with unemployment.
In short: the anxiety is real. The economic damage is not — yet.
So why does the public conversation feel so apocalyptic?
Following the funding
Part of the answer may lie in how AI coverage is being produced.
A December investigation by Semafor reported that the Tarbell Center for AI Journalism places fellows inside major newsrooms to report on artificial intelligence. Those outlets include Bloomberg, Time, The Verge — and the Los Angeles Times.
Semafor also reported that Tarbell is funded in part by the Future of Life Institute, a nonprofit explicitly dedicated to warning about the dangers and risks of advanced AI.
The funding arrangement drew scrutiny after OpenAI privately complained to NBC News about an AI story written by a Tarbell-funded reporter, citing concerns about the funding source. NBC later added a disclosure noting the relationship.
Critics argue this structure risks nudging coverage toward worst-case narratives by subsidizing newsroom labor focused on AI harms. Tarbell has denied that funders influence editorial decisions, saying it maintains a strict firewall between funding and reporting.
Still, the model raises questions: when outside organizations pay to place reporters on a single, highly contested beat, does that shape what stories get told — and how?
An “industrial complex” of fear?
The concern fits into what some technology leaders describe as a broader ecosystem built around AI alarmism.
In a widely shared post on X, David Sacks labeled this network the “AI Existential Risk Industrial Complex,” arguing that a web of organizations, donors, and advocates amplifies catastrophic AI narratives across media, academia, and policy debates.
That critique echoes reporting from the AI Panic newsletter, which describes the existential-risk movement as a well-funded, top-down network — not a spontaneous grassroots backlash. The newsletter points to overlapping donors, shared messaging, and investments not just in research and advocacy, but also in media outreach.
Together, these efforts help set the tone of the public conversation — often long before data can confirm or refute the claims being made.
Narrative versus numbers
None of this proves AI is harmless or that future disruption won’t come. Even the Yale Budget Lab cautions that longer-term impacts may emerge as adoption deepens.
But it does highlight a growing disconnect between measured outcomes and manufactured urgency.
If AI were already destroying jobs at scale, the numbers would show it. So far, they don’t.
Which raises a more uncomfortable question than whether AI will take your job:
Are you being taught to fear AI — and if so, by whom?
As artificial intelligence continues to evolve, the fight may be less about machines replacing workers, and more about who controls the story we’re told while the evidence is still coming in.