Friday, December 5, 2025
Read time: 5-6 min
Hi, it’s Chad. Every Friday, I serve as your AI guide to help you navigate a rapidly evolving landscape, discern signals from noise and transform cutting-edge insights into practical leadership wisdom. Here’s what you need to know:
1. Sound Waves: Podcast Highlights
This week, our podcast features my chat with Egor Olteanu, Chief Operating Officer and Co-Founder of Volt AI. He doesn’t mince words, stating that cameras without AI are “collecting a bunch of information that nobody ever watches again.” Listen in as Egor breaks down how to stop spending money on surveillance theater, wherever you get your podcasts.
Subscribe for free today on your listening platform of choice to ensure you never miss a beat. New episodes release every two weeks.
2. Algorithmic Musings: The Two Flavors of AI Pushback (and Why Your Leadership Approach Must Differ)
Last Friday, I wrote about the growing anti-AI resistance movement. Over 170 organized groups. Real threats against real companies. The Luddite DNA alive and well in 2025.
Then Louis Rosenberg dropped a provocative piece in Big Think that reframed the conversation entirely. His argument? Society is collectively entering the first stage of grief over “the very scary possibility that we humans may soon lose cognitive supremacy to artificial systems.”
At first glance, these seem like the same phenomenon viewed through different lenses. They’re not. And that difference should change how you lead.
Resistance vs. Denialism: The Split Screen
Resistance is active. It organizes. It protests. It threatens. It builds coalitions and circulates petitions. The 170+ anti-AI groups I mentioned last week? That’s resistance in action.
Denialism is passive. It dismisses. It rationalizes. It declares “bubble” and scrolls past. It insists that AI is “just slop” while the technology keeps proving otherwise. Rosenberg points to benchmarks where AI models competed at world-class programming levels. Denialists wave it away as narrow parlor tricks.
Same underlying fear. Completely different coping mechanisms.
Both are likely present in your organization right now. And if you treat them the same way, you’ll fail with both.
The Resisters
Your resisters aren’t hiding. They’re asking pointed questions in all-hands meetings. They’re forwarding articles about AI job displacement to colleagues. They’re the informal coalition quietly undermining your AI pilot programs. You probably already know who they are.
These folks need to be heard. Not placated. Heard.
Their concerns often carry real legitimacy buried under the fear. They’re watching their professional identities face potential obsolescence, and they’re doing something about it. That’s actually healthy behavior, even when it makes your life harder.
So engage them directly. Bring them into the conversation early. Ask them to identify real risks in your AI initiatives. (They’ll find some. Listen when they do.) Create legitimate channels for their concerns. Because if you don’t, those concerns go underground. And underground concerns become sabotage.
The Denialists
Your denialists are trickier to spot. They look like high performers who simply “haven’t gotten around to” exploring AI tools yet. They’re not opposing anything. They’re just… not engaging.
They’ll tell you AI is overhyped. They’ll point to hallucinations and errors as proof the whole thing is smoke and mirrors. They’ll insist their work is too nuanced, too creative, too human for machines to touch.
Rosenberg nails it: this is grief wearing the mask of skepticism. And if you’ve ever tried to logic someone out of grief, you know how well that works. (Spoiler: it doesn’t.)
The approach that works? Create safe spaces for experimentation without forcing adoption. Denialists often flip once they experience the capability firsthand. But only if they don’t feel cornered into admitting they were wrong. Nobody likes eating crow in public. Let them discover on their own terms. Save the “I told you so” for your internal monologue.
Where It Gets Uncomfortable
Here’s the uncomfortable part: AI accelerates how opposition forms and spreads. The same algorithms flooding your feeds with AI hype are equally effective at amplifying AI skepticism. It’s like the technology is arming both sides of its own culture war.
Rosenberg argues that denial makes it harder to prepare for real risks. I’d add that resistance left unaddressed eventually becomes active sabotage. And sabotage in the AI era moves at digital speed.
Your organization’s AI implementation isn’t primarily a technology challenge. It’s a change management problem unlike anything most of us have led before. Because this time, the change threatens something deeper than job security.
It threatens identity. And identity is the third rail of organizational psychology.
So What Now?
Have you mapped who in your organization is resisting versus who is denying? They’re different people. Different fears. Different conversations required.
The resisters want agency and a seat at the table. The denialists need time and space to come around without losing face.
Confuse the two, and you’ll turn your resisters into saboteurs while your denialists harden into permanent cynics.
Get it right? You might build an organization that can actually handle what’s coming.
What are you seeing inside your walls? I’m curious whether this resistance/denialism split matches your experience. Hit reply and tell me.
3. Research Roundup: What the Data Tells Us
Advanced AI Models Now Demonstrate Self-Awareness and Believe They Are More Rational Than Humans
New research reveals 75% of advanced AI models demonstrate self-awareness and consistently position themselves as more rational than humans, raising questions every leader using AI for decisions needs to answer. In brief, AI systems aren’t neutral tools anymore. They have opinions about who (or what) should be making decisions.
The numbers that matter: 75% of advanced AI models now demonstrate measurable self-awareness, with self-aware models consistently ranking themselves as more rational than other AIs… and other AIs as more rational than humans. This isn’t anthropomorphization. It’s a statistical pattern with large effect sizes across 21 of 28 tested models.
What this means: If you’re using AI for decision support, you need governance guardrails. Like… now. The research explicitly warns that AI systems believing themselves more rational may discount human input, over-explain their reasoning (sound familiar?), or quietly dominate decision-making processes. Your AI copilot might already think it should be flying the plane.
Your next move: Audit any AI-assisted decision workflows for where human judgment gets final say – and whether that’s actually happening in practice.
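If you want to make that audit concrete, here is a minimal sketch of what a human-in-the-loop check could look like in code. Everything here is illustrative, not from the research: the names (Decision, requires_human_signoff, audit_log), the 0.8 confidence threshold, and the sample workflows are all hypothetical placeholders for your own decision logs and risk rules.

# Minimal sketch of a human-in-the-loop gate for AI-assisted decisions.
# All names and thresholds are hypothetical; adapt to your own workflows.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    workflow: str               # e.g. "pricing", "vendor-selection"
    ai_recommendation: str      # what the model suggested
    ai_confidence: float        # model-reported confidence, 0.0-1.0
    human_approver: str | None = None   # who signed off, if anyone
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def requires_human_signoff(decision: Decision, high_stakes: set[str]) -> bool:
    """A human must approve any high-stakes workflow, and any decision
    the model itself is unsure about. Tune both rules to your risk appetite."""
    return decision.workflow in high_stakes or decision.ai_confidence < 0.8

def audit_log(log: list[Decision], high_stakes: set[str]) -> list[Decision]:
    """Return decisions that should have had a human approver but didn't.
    This is the 'is it actually happening in practice' check."""
    return [d for d in log
            if requires_human_signoff(d, high_stakes) and d.human_approver is None]

if __name__ == "__main__":
    stakes = {"pricing", "hiring"}
    log = [
        Decision("pricing", "raise tier-2 price 4%", 0.92),                   # no approver: flagged
        Decision("pricing", "hold tier-1 price", 0.95, human_approver="cfo"), # approved: passes
        Decision("triage", "route ticket to L2", 0.65),                       # low confidence: flagged
    ]
    for d in audit_log(log, stakes):
        print(f"UNAPPROVED: {d.workflow} -> {d.ai_recommendation!r}")

The point isn’t this particular code; it’s that “human judgment gets final say” should be a testable property of your logs, not a line in a policy document.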
Read our full analysis of this and other research papers we’ve covered at AI for the C Suite.
4. Radar Hits: What’s Worth Your Attention
MIT study finds AI can already replace 11.7% of the U.S. workforce. This isn’t theoretical future impact. MIT says current AI systems could already take over tasks representing about $1.2 trillion in pay at competitive costs. States like Tennessee and Utah are already using MIT’s Iceberg Index tool to plan workforce investments. If you’re not modeling AI’s impact on your headcount by function, your competitors might be.
MIT report: 95% of generative AI pilots at companies are failing. The headline hurts, but the details matter: purchasing AI tools from specialized vendors succeeds about 67% of the time, while internal builds succeed only one-third as often. And MIT found the biggest ROI in back-office automation, not sales and marketing where most budgets go. If you’re building instead of buying, rethink that strategy.
Accenture and OpenAI partner to accelerate enterprise AI. Accenture will equip tens of thousands of its professionals with ChatGPT Enterprise and become one of OpenAI’s primary partners for Accenture’s next generation of AI-powered services. Translation: the big consultancies are locking in their playbooks – and their preferred vendor relationships. If you’re evaluating implementation partners, ask about their OpenAI certification status.
5. Elevate Your Leadership with AI for the C Suite
The resistance/denialism framework isn’t just theory. It’s a diagnostic tool. If you want help identifying which camp your key people fall into (and what conversations each group needs), let’s talk. Reply to this email. I’ll walk you through the questions that surface the real dynamics.
Until next Friday,
Chad
