Friday, February 13, 2026
FRIDAY – AI FOR THE C SUITE
Read time: 7-8 min
Hi, it’s Chad. Every Friday, I serve as your AI guide to help you navigate a rapidly evolving landscape, discern signals from noise and transform cutting-edge insights into practical leadership wisdom. Here’s what you need to know:
1. Sound Waves: Podcast Highlights
This Monday, I’m flying solo to unpack a landmark study from The Quarterly Journal of Economics that should change how every CFO thinks about AI ROI. (See my Algorithmic Musings below for more.) Researchers found that 10-20 hours of a specific type of practice produced a 22% improvement in sustained mental performance, and it has nothing to do with the tools on your tech stack. I break it down, connect the dots and make it actionable for you and your team. Hit play wherever you get your pod on.
Apple · Spotify · iHeart · Amazon · YouTube
Subscribe for free today on your listening platform of choice to ensure you never miss a beat.
2. Algorithmic Musings: The Mental Muscle AI Can’t Build For You
Remember Rocky IV? Rocky Balboa trains for his fight against Ivan Drago by hauling logs through Siberian snow, doing sit-ups in a barn, and sprinting up a mountain. Meanwhile, Drago has an entire Soviet sports-science lab. Machines. Monitors. Injections. Every technological advantage money and a superpower can buy.
Rocky wins. And not because the movie needed a happy ending (okay, partly that). He wins because he built something the machines couldn’t build for him. Endurance.
A 2024 study in The Quarterly Journal of Economics landed on a finding that every executive adopting AI needs to understand. And it has everything to do with why Rocky was still standing in the fifteenth round.
Your Brain Is a Muscle. Train It Like One.
Here’s what the researchers did. They gave 1,600 Indian primary school students just 10 to 20 hours of sustained cognitive practice. Some kids did math. Others played non-academic puzzle games like mazes and tangrams. Both groups showed a 22% improvement in their ability to sustain mental performance over time. Both groups improved their school grades, even in subjects completely unrelated to what they practiced.
Let’s sit with that last sentence for a moment: The puzzle-game kids improved their Hindi, English, and math grades. They never practiced any of those subjects. They just practiced thinking hard for sustained periods.
The researchers call this cognitive endurance: your ability to sustain effortful mental performance over time. It’s trainable. It is not fixed. And it may be the most important form of human capital in the AI era.
And here’s what should concern every leader who relies on incentive programs. When the researchers offered students incentives (toys for better test performance), the students did try harder at the start of the test. But the incentives did nothing to prevent their performance from declining over time.
You cannot motivate your way past cognitive fatigue. You have to train the capacity itself.
What AI Is Doing to Your Cognitive Workday
Think about what AI is doing to your workday right now. It’s removing the routine cognitive tasks. The data gathering. The first-draft writing. The scheduling optimization. What’s left?
The hard stuff.
Ambiguity. Judgment. Integration. Strategy. The decisions where you sit with complexity for an extended period and arrive at something that no algorithm could have generated alone.
Those tasks demand cognitive endurance. Yet here’s the risk that almost nobody is talking about: the very AI tools that free up your cognitive bandwidth may also be reducing your practice at sustained thinking. Prompt-hopping. Context-switching. Micro-dopamine hits from instant AI responses. You’re training yourself for cognitive sprinting when your job increasingly demands cognitive marathoning.
It’s the Drago problem.
The Organizational Angle
Students in lower-quality schools, where they spent less time in independent, focused practice, showed worse cognitive endurance. An additional year of schooling improved endurance, but only in better schools.
Now translate that to your organization. Are you designing work environments that build your leaders’ capacity for sustained thinking? Or are you inadvertently creating the organizational equivalent of a low-quality school, one where AI handles so much of the cognitive load that your people never get the reps they need?
This is especially critical for emerging leaders who came up through operational roles. They may have deep functional expertise but less practice at the kind of extended strategic engagement that builds endurance. AI won’t close that gap. Without deliberate intervention, it will widen it.
Three Things You Can Do About It
First, design for endurance, not just efficiency. Protect time for sustained, uninterrupted strategic thinking. Use AI to handle the preparatory work and the follow-up, but guard the human middle. That extended engagement where judgment lives? That’s the part you can’t outsource.
Second, build cognitive endurance into your leadership development. The research shows that the content doesn’t matter as much as the practice of sustained effort. Strategic simulations. Red-team exercises. Scenario reasoning. These aren’t just development activities. They’re endurance training for your leadership bench.
Third, rethink your AI adoption through this lens. For every tool you deploy, ask a simple question: Is this making my leaders better thinkers, or just faster reactors?
The Bottom Line
The scarce resource in the AI era is not intelligence. It is sustained applied intelligence. AI expands your cognitive reach. But human endurance governs depth, integration, judgment, and coherence. That’s the strategic leverage point.
Rocky didn’t need a better lab. He needed a mountain to climb.
What mountain are you and your leaders climbing? If you’re trying to figure out how AI adoption and leadership development fit together in your organization, drop me a line. I’m always up for a good conversation about building the kind of leaders that technology can’t replace.
3. Research Roundup: What the Data Tells Us
Agentic AI Oversight: Why Your Teams Are Already Disagreeing About Control
Before your next AI governance meeting, know this: new research from Lehigh University shows your departments are already defining “human control” over AI agents in incompatible ways… and it happened within days of adoption, not months.
The numbers that matter: Researchers analyzed 2,733 posts across two early agentic AI communities and found statistically significant divergence in how people frame oversight. Operational users focused on execution boundaries: permissions, rollback, cost limits. Public-facing users focused on identity, trust, and whether AI agents should carry social authority. That split showed up within one week.
What this means for your Monday morning: Your IT team thinks “AI oversight” means permission scoping and kill switches. Your marketing team thinks it means disclosure labels and brand reputation protection. Neither is wrong. Yet if you write one governance policy assuming a single definition, it will satisfy nobody. The research confirms this isn’t a communication failure; it reflects truly different risk profiles.
The catch: This study analyzed Reddit communities, not enterprise teams. But the functional split maps cleanly to what I see in middle-market deployments every week.
Action item: Before drafting any AI governance framework, survey each department on what “human control” means to them. You’ll find the gap is already there. Build role-specific modules under a shared umbrella instead of a one-size-fits-all policy that operational teams ignore and customer-facing teams can’t use.
Read our full analysis of this research at AI for the C Suite.
4. Radar Hits: What’s Worth Your Attention
AI Doesn’t Reduce Work, It Intensifies It. An eight-month study found that employees using AI didn’t work less. They worked faster, took on broader tasks, and let work bleed into breaks and evenings, all voluntarily. The productivity spike is real, but so is the burnout risk hiding behind it. If you’re rolling out AI tools, you need guardrails around workload expansion, not just adoption targets. Ask your team leads what new tasks people are absorbing that used to require separate headcount.
Are LTMs the Next LLMs? A New Type of AI for Structured Data. LLMs are great with text but struggle with spreadsheets. Large Tabular Models are built specifically for the structured data most businesses actually run on. Startup Fundamental just raised $255M and already has seven-figure Fortune 100 contracts. If your AI strategy focuses only on chatbots and document tools, you’re ignoring where most of your operational data lives. Worth asking your analytics team what they know about this space.
5. Elevate Your Leadership with AI for the C Suite
Wondering whether your AI rollout is building sharper leaders or just faster ones? That’s the kind of question I help executive teams answer. Reply to this email to grab 30 minutes with me. I read every one.
Stay safe. Stay healthy. Be strong. Lead well.
Chad
