Friday, January 9, 2026

FRIDAY – AI FOR THE C SUITE

Read time: 9-10 min · Read online

Hi, it’s Chad. Every Friday, I serve as your AI guide to help you navigate a rapidly evolving landscape, separate signal from noise, and transform cutting-edge insights into practical leadership wisdom. Here’s what you need to know:


1. Sound Waves: Podcast Highlights

My next episode, featuring Brendan Norman, Co-founder and CEO of Classify, drops this coming Monday. We discuss the future of online advertising in an AI-powered world. When your CMO spends a dollar on digital advertising, publishers see just 35-40 cents. Brendan built Facebook’s Audience Network from zero to $3 billion, and now he’s explaining where the rest of your money disappears and what middle-market companies can do about it.

Subscribe wherever you get podcasts. New episodes drop every Monday.

Apple · Spotify · iHeart · Amazon · YouTube



2. Algorithmic Musings: Who’s Accountable When Nobody Decides?

Agentic AI forces a fundamental question about leading systems that act faster than you can observe.

Remember the bullet time sequence in The Matrix? Neo sees the bullets coming in slow motion. He bends, twists, watches them sail past. For that moment, he perceives events at machine speed. Fast enough to react, to intervene, to choose.

It’s a compelling fantasy. And it’s exactly wrong for understanding what happens when AI systems start making autonomous decisions.

You don’t get bullet time. The decisions happen at machine speed while you’re stuck in regular human time. A thousand credit approvals. A hundred risk flags. Fifty exceptions, all processed before you finish reading this sentence. By the time you perceive what’s happening, it’s already happened. Repeatedly.

The Millisecond Accountability Problem

Recent research on agentic AI in credit risk assessment makes this concrete. (See this week’s Research Roundup for more.) These systems don’t just recommend decisions. They make them. Thousands per second. With documented accuracy improvements over traditional approaches.

The accountability trap: You’re legally and ethically accountable for every single one of those decisions. Decisions you didn’t make. Decisions you can’t review in real-time. Decisions you may not fully understand.

Traditional oversight assumes humans can review decisions before or shortly after they’re executed. That model collapses when your AI system processes loan applications faster than you can blink.

From Approval Chains to Policy Boundaries

So what does leadership actually look like when you can’t approve individual decisions?

You stop approving decisions and start designing the boundaries within which decisions occur. Your job shifts from reviewing exceptions to architecting principles. That shift is equal parts liberating and terrifying.

For decades, leadership in most organizations has meant sitting at the top of an approval chain. Information flows up. Decisions flow down. The higher your position, the more consequential the decisions that land on your desk.

Agentic AI inverts this entirely. The consequential decisions (thousands of them) happen autonomously within parameters you set. Your value isn’t in making those decisions. It’s in defining the guardrails, calibrating the thresholds, and ensuring the system’s judgment aligns with your organization’s values.
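What “defining the guardrails” means in practice can be made concrete with a small sketch. The names, thresholds, and routing rules below are purely illustrative assumptions, not drawn from any real lending system: the point is that leadership encodes policy boundaries once, and the agent operates inside them, escalating to a human only at the edges.

```python
from dataclasses import dataclass

# Hypothetical policy boundary for an autonomous credit-decision agent.
# Every name and threshold here is illustrative, not from a real system.

@dataclass(frozen=True)
class PolicyBoundary:
    max_loan_amount: float = 50_000.0       # above this, escalate to a human
    min_credit_score: int = 620             # below this, auto-decline
    max_daily_exposure: float = 2_000_000.0 # portfolio-level guardrail

def route_decision(amount: float, score: int, daily_exposure: float,
                   policy: PolicyBoundary) -> str:
    """Return how the system handles an application:
    'auto' (agent decides), 'decline', or 'escalate' (human review)."""
    if score < policy.min_credit_score:
        return "decline"
    if amount > policy.max_loan_amount:
        return "escalate"
    if daily_exposure + amount > policy.max_daily_exposure:
        return "escalate"
    return "auto"

print(route_decision(10_000, 700, 500_000, PolicyBoundary()))  # auto
print(route_decision(75_000, 700, 500_000, PolicyBoundary()))  # escalate
```

Notice who never appears in this code: the executive approving individual loans. The leadership decisions are the three numbers at the top.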

Three Questions You Need to Answer

If your organization deploys any system that makes autonomous decisions, or plans to, you need clarity on three things.

First, who designed the boundaries? When an autonomous decision goes wrong, regulators and stakeholders will want to know who defined the rules the system operated within. “The vendor configured it” isn’t an answer that protects you. If you’re accountable for outcomes, you need visibility into the principles governing those outcomes. And ownership of them.

Second, how do you audit what you can’t observe? Traditional audits sample decisions after the fact. But when decisions happen at machine speed, sampling becomes statistically meaningless without new approaches. What monitoring infrastructure exists? How would you detect systematic drift before it becomes a crisis?
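One answer to the auditing question is continuous statistical monitoring rather than sampling. Here is a minimal sketch, under assumed parameters (window size, baseline rate, z-score threshold are all illustrative): track the rolling approval rate of the decision stream and flag when it drifts significantly from the baseline you expect.

```python
import math
from collections import deque

# Minimal drift monitor for an autonomous decision stream: compares the
# recent approval rate against an expected baseline and flags when the
# gap exceeds a z-score threshold. All parameters are illustrative.

class ApprovalDriftMonitor:
    def __init__(self, baseline_rate: float, window: int = 1000,
                 z_threshold: float = 3.0):
        self.baseline = baseline_rate
        self.window = deque(maxlen=window)   # rolling window of 0/1 outcomes
        self.z_threshold = z_threshold

    def record(self, approved: bool) -> bool:
        """Record one decision; return True if the stream has drifted."""
        self.window.append(1 if approved else 0)
        n = len(self.window)
        if n < self.window.maxlen:
            return False  # not enough data to judge yet
        rate = sum(self.window) / n
        # Standard error of the baseline rate over a window of size n
        se = math.sqrt(self.baseline * (1 - self.baseline) / n)
        z = abs(rate - self.baseline) / se
        return z > self.z_threshold

monitor = ApprovalDriftMonitor(baseline_rate=0.60, window=500)
```

Real deployments would monitor many more signals (feature distributions, segment-level rates, explanation stability), but the design point stands: at machine speed, the audit has to run at machine speed too.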

Third, can the system explain itself? Recent research shows a dramatic gap between AI systems that embed explanations in their decision process versus those that bolt on explanations after the fact. The difference matters enormously for regulatory compliance, customer trust, and your ability to actually govern what’s happening.

What This Means for You

You don’t need to deploy cutting-edge agentic AI tomorrow to start preparing for this shift. But you do need to start asking different questions about the automated systems already operating in your organization.

How many decisions happen daily without direct human approval? Who defined the rules those systems follow? What would you do if one of those systems made a thousand wrong decisions before anyone noticed?

Neo got to see the bullets coming. You won’t. The question is whether you’ve built a system you can trust when you can’t watch every shot.

Trying to figure out what this shift means for your organization? Drop me a line. I’m always up for a conversation about leading through technological disruption.


3. Research Roundup: What the Data Tells Us

Agentic AI for Credit Risk: Faster, Smarter Lending Decisions

Source: Intelligent Systems and Applications in Engineering, Vol 12 No 23, 2024

If you’re running a lending operation, this research validates what the best credit teams already suspect: your current scoring models are leaving accuracy on the table. Researchers built an autonomous AI system that predicts credit risk, explains its reasoning in plain English, and adapts to changing economic conditions without manual recalibration.

The numbers that matter: The agentic AI framework hit 94.2% accuracy versus 87.6% for conventional machine learning models. More important for your compliance team: the system’s explainability score reached 0.92 compared to 0.61 for traditional approaches. That’s the difference between “the algorithm said no” and “we declined this application because of X, Y, and Z.”

What this means for your Monday morning: If you’re processing thousands of loan applications monthly, that 6.6-point accuracy gap translates directly to fewer defaults and better risk segmentation. The real win is speed: this architecture makes decisions in milliseconds instead of waiting for batch processing overnight. For BNPL or digital lending products, that’s table stakes.

The catch: You’ll need clean, consistent data infrastructure and computing resources to make this work. Organizations with fragmented data sources or legacy systems will need to invest in the plumbing before they see these gains.

Action item: When evaluating credit technology vendors, ask one specific question: Is explainability built into the decision flow, or bolted on afterward? The research shows integrated explainability dramatically outperforms post-hoc explanations. That distinction matters when regulators come calling.

Read our full analysis of this and other research papers at AI for the C Suite.


4. Radar Hits: What’s Worth Your Attention

Natural language is replacing API calls as the default software interface. The question is no longer “which function do I call?” but “what outcome do I want?” For executives planning AI integrations, this means your developers should be piloting Model Context Protocol layers now. If your tech team is still hand-coding every integration, you’re building for yesterday’s architecture.

Programmer employment has dropped 27.5% since 2023, but software developer roles held steady. The difference: coders who just write instructions are getting displaced, while those who design systems aren’t. When you’re hiring, look for candidates who can manage AI “team members” and demonstrate collaborative problem-solving. The skill gap is real, and it’s not closing.

Plaud’s new $179 AI pin and desktop app compete directly with meeting notetakers like Granola. With 1.5 million devices already sold, the hardware-plus-software approach to meeting capture is gaining traction. If your team is still manually summarizing meetings, this category is worth a pilot.


5. Elevate Your Leadership with AI for the C Suite

If the three questions in today’s Algorithmic Musings made you realize you don’t have good answers, that’s exactly the conversation I have with clients. Governance for autonomous systems isn’t about creating more bureaucracy. It’s about knowing where your accountability starts before something goes wrong.

Reply to this email if you want to talk through what this looks like for your organization. Or share this newsletter with a colleague who’s wrestling with the same questions.

Until next week, keep asking hard questions.


Stay safe. Stay healthy. Be strong. Lead well.

Chad