Friday, January 23, 2026

FRIDAY – AI FOR THE C SUITE

Read time: 10-11 min

Hi, it’s Chad. Every Friday, I serve as your AI guide to help you navigate a rapidly evolving landscape, discern signals from noise and transform cutting-edge insights into practical leadership wisdom. Here’s what you need to know:


1. Sound Waves: Podcast Highlights

Last Monday, I chatted with my college freshman son, who said something that should make every training department take notice: he took handwritten notes in exactly one class last semester. Everything else? AI-generated study materials, flash cards, and audio summaries. That’s the learning style walking into your 2026 summer internship program.

This Monday, I’m talking with Amanda Greenwood, a former Deloitte change consultant who’s spent 25 years watching “comms and training” kill adoption efforts. She’s breaking down the neuroscience principle that 80% of middle-market leaders are ignoring. If you’re still building training decks the way you did in 2019, these two conversations back-to-back will show you exactly why your training isn’t landing.

Apple · Spotify · iHeart · Amazon · YouTube

Subscribe for free today on your listening platform of choice to ensure you never miss a beat. New episodes release every two weeks.


2. Algorithmic Musings: When AI Gets a Soul: What Anthropic’s New Constitution Means for Leaders

TL;DR

Anthropic published Claude’s full constitution this week: 77 pages of values, priorities, and behavioral boundaries, now available for anyone to read. If you’re deploying AI without understanding its value framework, you’ve outsourced judgment to an unknown source. This document forces a question most leaders haven’t asked: whose values are running through your operations? If you can’t read this article today, I HIGHLY suggest bookmarking it for later. -Chad

Remember in The Wizard of Oz when Dorothy finally pulls back the curtain? She expects to find a powerful wizard. Instead, she finds a regular guy frantically pulling levers and speaking into a microphone.

On January 21st, Anthropic pulled back the curtain on Claude. And what they revealed wasn’t a wizard. It was a 77-page constitution that attempts to codify what an AI believes, how it reasons, and why those beliefs matter.

If you read my December piece on Claude’s “soul doc,” you know I’ve been tracking Anthropic’s approach to AI values for months. That leaked document was essentially a draft. What Anthropic released this week is the final version, published under Creative Commons for the world to examine. And for middle-market leaders trying to figure out how AI fits into their organizations, it represents something significant: the first major AI company to publicly declare the values baked into its product.

Why This Matters More Than You Think

Here’s the uncomfortable truth: if your company deploys an AI model without understanding its value framework, you’ve outsourced judgment to an unknown source.

Think about that for a moment. You wouldn’t hire a senior executive without understanding their values. You wouldn’t bring on a consultant without knowing their approach. Yet organizations deploy AI systems every day with no idea what principles govern their behavior.

The Claude Constitution changes that equation. Not because it solves every problem. It doesn’t. But because it forces a question that most leaders haven’t asked: whose values are running through your operations?

What the Constitution Actually Says

Let me translate the key elements into language that belongs in your boardroom rather than an engineering meeting.

The document establishes a priority hierarchy: safety first, then ethics, then Anthropic’s guidelines, and finally helpfulness. Notice what comes last. The AI you deploy might prioritize its values over your instructions in certain situations. That’s not a bug. It’s a deliberate architectural choice with real operational implications.

The constitution also draws a sharp line between “hardcoded” behaviors that remain constant regardless of what you tell Claude to do, and “softcoded” behaviors that can be adjusted within limits. An AI that refuses to help create weapons of mass destruction? Hardcoded. An AI that follows your company’s formatting guidelines? Softcoded.
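
If it helps to see that structure concretely, here’s a toy sketch in Python of how a priority hierarchy with hardcoded and softcoded rules might compose. To be clear: this is not Anthropic’s implementation. The rule names and the override mechanism are hypothetical; only the ordering (safety, ethics, guidelines, helpfulness) and the hardcoded/softcoded split come from the constitution.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    priority: int      # lower number = higher priority in the hierarchy
    hardcoded: bool    # True: survives any operator override

# The constitution's ordering: safety, then ethics, then Anthropic's
# guidelines, then helpfulness. Rule names below are hypothetical.
RULES = [
    Rule("refuse_mass_casualty_assistance", 0, True),   # safety
    Rule("no_deception_harming_users", 1, True),        # ethics
    Rule("follow_anthropic_guidelines", 2, False),
    Rule("follow_operator_formatting", 3, False),       # helpfulness
]

def effective_rules(operator_overrides: set[str]) -> list[Rule]:
    """Drop softcoded rules the operator switched off; hardcoded rules stay."""
    return [r for r in sorted(RULES, key=lambda r: r.priority)
            if r.hardcoded or r.name not in operator_overrides]

# An operator can relax formatting, but attempting to override the
# safety rule has no effect: it is hardcoded.
active = effective_rules({"follow_operator_formatting",
                          "refuse_mass_casualty_assistance"})
print([r.name for r in active])
# ['refuse_mass_casualty_assistance', 'no_deception_harming_users',
#  'follow_anthropic_guidelines']
```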

From Accountability to Architecture

In my January piece on agentic AI, I wrote about what I called the “millisecond accountability problem.” When AI systems make thousands of decisions faster than you can blink, traditional oversight collapses. You’re legally and ethically accountable for decisions you didn’t make, decisions you can’t review in real-time, decisions you may not fully understand.

The Claude Constitution represents Anthropic’s attempt to address this problem at the architectural level. Instead of asking humans to approve individual decisions, they’re defining the boundaries within which decisions occur. They’re shifting from approval chains to policy architecture.

This is where the constitution gets genuinely interesting for leaders. It doesn’t pretend humans will review every AI output. Instead, it creates what Anthropic calls “principal hierarchies.” Anthropic sits at the top. Then operators (that’s you, the business deploying Claude). Then users (your employees or customers interacting with it).

When conflicts arise between these levels, the constitution provides clear guidance. Operators can customize Claude’s behavior within bounds Anthropic has established. Users can further adjust within bounds operators allow. But certain protections for users remain constant regardless of operator instructions. Claude should never deceive users in ways that damage their interests. Claude should always refer users to emergency services when life is at risk.
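
Here’s a similarly hypothetical sketch of that layered structure: each tier adjusts settings only within the bounds set by the tier above, and certain protections never move. The three tiers and the emergency-referral example come from the constitution; every setting name and value in the code is illustrative.

```python
# Toy model of the principal hierarchy: Anthropic sets outer bounds,
# operators configure within them, users adjust within operator limits.
# All setting names and values here are hypothetical.

ANTHROPIC_BOUNDS = {
    "emergency_referrals": "always",   # constant protection, per the constitution
    "allowed_tones": {"formal", "casual"},
}

def resolve(operator_config: dict, user_prefs: dict) -> dict:
    settings = {"emergency_referrals": ANTHROPIC_BOUNDS["emergency_referrals"]}
    # Operator chooses a default tone, but only from Anthropic's allowed set.
    tone = operator_config.get("tone", "formal")
    settings["tone"] = tone if tone in ANTHROPIC_BOUNDS["allowed_tones"] else "formal"
    # Users may adjust further, but only within what the operator permits.
    user_tone = user_prefs.get("tone")
    if user_tone in operator_config.get("user_may_choose", set()):
        settings["tone"] = user_tone
    return settings

print(resolve({"tone": "formal", "user_may_choose": {"casual"}},
              {"tone": "casual"}))
# {'emergency_referrals': 'always', 'tone': 'casual'}
```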

The Honesty Standard

Anthropic holds Claude to what they call a “substantially higher” standard of honesty than typical human ethics. Claude shouldn’t even tell white lies. The AI you’re deploying is held to a stricter honesty standard than you probably hold yourself.

Why? Because when AI systems interact with millions of people, small deceptions compound. The constitution explicitly states that Claude “is in an unusually repeated game, where incidents of dishonesty that might seem locally ethical can nevertheless severely compromise trust in Claude going forward.”

For middle-market companies, this creates both opportunity and challenge. Opportunity, because the product is engineered for honesty. Challenge, because if your business practices rely on any form of opacity or misdirection with customers, you’ve got a fundamental conflict with your AI’s values.

What Should You Do About This?

Enough philosophy. Let’s talk action.

First, read the constitution. I know that sounds obvious, but it’s freely available online under Creative Commons at anthropic.com/constitution. Spend an hour with it. Understand what you’re actually deploying when you use Claude-based tools.

Second, map your AI-infused workflows. Where are decisions, judgments, or customer interactions occurring? Where is AI operating autonomously in your organization? The constitution discusses Claude’s role in “agentic settings where it operates with greater autonomy, executes multi-step tasks, and works within larger systems.” Does that description apply anywhere in your operations? Do you know where?

Third, define your organization’s AI values. The constitution distinguishes between hardcoded and softcoded behaviors. What are your non-negotiables? Where can you accept flexibility? Where do your values conflict with your vendors’ built-in constraints?

Fourth, treat this as governance, not IT. I’ve said this before and I’ll keep saying it: AI safety isn’t a technical problem the engineering team should handle. It’s a board-level strategic concern. The values embedded in your AI will influence thousands or millions of interactions. That’s not a technology decision. That’s a leadership decision.

The Curtain Is Open

Anthropic has done something genuinely remarkable here. They’ve published a comprehensive framework governing their AI’s behavior, opened it for public scrutiny, and acknowledged their own uncertainties. The final section of the constitution even includes what they call “open problems.” They admit to the tension between giving Claude good values and keeping it corrigible. They acknowledge that questions about Claude’s moral status remain unresolved.

Most AI companies operate behind firmly closed curtains. Anthropic just opened theirs wide.

That doesn’t mean they’ve solved everything. It doesn’t mean you should deploy Claude without critical thinking. But it does mean you have something to evaluate. Something to compare against your own organizational values. Something to hold them accountable to.

Dorothy discovered the wizard was just a man with good intentions and a lot of levers. Anthropic has shown us the levers. Now it’s up to leaders to decide whether those levers align with where they want their organizations to go.

What’s your organization doing to understand and govern AI values? I’d love to hear about your approach. Drop me a line and let’s figure it out together.

The Claude Constitution referenced in this article was published by Anthropic on January 21, 2026 and is available at anthropic.com/constitution under a Creative Commons license.

Related reading:
Who’s Accountable When Nobody Decides? – January 9, 2026
Why AI Safety and Values Should Be Board-Level Concerns – December 12, 2025


3. Research Roundup: What the Data Tells Us

AI Awareness Framework: Stop Overpaying for Capabilities You Don’t Need

Source: “Just aware enough: Evaluating awareness across artificial systems” by Meertens, Lee, and Deroy (January 2026)

German researchers just handed middle-market leaders a gift: a way to evaluate AI systems without getting stuck in “is it conscious?” debates that waste time and money. Their framework focuses on what actually matters for your business: what can your AI reliably do?

The numbers that matter: Robot swarm testing revealed a counterintuitive finding. Systems with maximum spatial awareness (exact location data) and highest bodily awareness (full fault diagnosis) actually underperformed optimally configured alternatives. Being “just aware enough” beat having maximum awareness.

What this means for your Monday morning: You’re probably overbuying AI capabilities. That warehouse logistics system doesn’t need the same awareness profile as your customer service chatbot. Stop paying for general-purpose systems when domain-specific architectures use less energy, cost less to run, and give you better oversight.

The catch: You need to know which awareness dimensions actually matter for each application. The framework identifies five types: spatial, temporal, metacognitive, agentive, and self-awareness. Determining the right mix requires testing specific to your use case.

Action item: Audit one AI deployment this quarter. Map which awareness capabilities it actually uses versus what you’re paying for. You’ll likely find optimization opportunities that cut costs while improving performance.
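
To make that audit concrete, here’s a minimal worksheet in code form. The five dimensions come from the framework as summarized above; the deployments and their profiles are placeholder data you’d replace with your own.

```python
# A minimal audit worksheet in code form. The five dimensions come from
# the framework above; the deployments and their profiles are made up.

DIMENSIONS = {"spatial", "temporal", "metacognitive", "agentive", "self"}

deployments = {
    # name: (dimensions the system actually exercises, dimensions you pay for)
    "warehouse_logistics": ({"spatial", "temporal"},
                            {"spatial", "temporal", "agentive"}),
    "support_chatbot": ({"temporal", "metacognitive"}, set(DIMENSIONS)),
}

for name, (used, paid) in deployments.items():
    unused = paid - used
    if unused:
        print(f"{name}: paying for unused awareness -> {sorted(unused)}")

# warehouse_logistics: paying for unused awareness -> ['agentive']
# support_chatbot: paying for unused awareness -> ['agentive', 'self', 'spatial']
```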

Read our full analysis of this and other research papers at AI for the C Suite.


4. Radar Hits: What’s Worth Your Attention

Anthropic Economic Index finds AI delivers 1.0-1.8 percentage points of annual productivity gains. The first hard ROI data worth putting in your budget deck: analysis of 1M enterprise conversations shows task success rates around 60-70%, dropping for complex work. Translation for planning your 2026 AI investments: assume mid-range productivity gains and build in failure rates for sophisticated tasks; a quick sketch of that math follows below. Your CFO will appreciate the realism.
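
A minimal sketch of that budget math, assuming placeholder payroll and project figures (only the productivity range and success rates come from the item above):

```python
# Back-of-envelope budget math. Only the 1.0-1.8 pp gain range and the
# 60-70% success rate come from the item above; payroll and project
# values are placeholders to swap for your own numbers.

payroll = 20_000_000          # hypothetical payroll exposed to AI tooling
gain_mid = 0.014              # midpoint of the 1.0-1.8 pp range
success_complex = 0.60        # low end of the cited success range

print(f"Mid-range annual productivity value: ${payroll * gain_mid:,.0f}")

complex_initiative_value = 500_000   # placeholder value of one complex-task project
print(f"Risk-adjusted value: ${complex_initiative_value * success_complex:,.0f}")
# Mid-range annual productivity value: $280,000
# Risk-adjusted value: $300,000
```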

Stop calling it “the AI bubble.” It’s three separate bubbles with different timelines. Wrapper companies have 18 months before shakeout, foundation models face 2-4 year consolidation, but infrastructure investments are sound long-term. If you’re evaluating AI vendors, scrutinize whether you’re buying from layer one (risky), layer two (consolidating), or layer three (stable). Your vendor strategy should match the bubble timeline.

Composable ERP plus agentic AI offers alternative to monolithic upgrades. Studies show 30% user satisfaction boost, 25% productivity lift, 45% faster processing, 60% better decision accuracy. The strategic shift: modernize by reconfiguring what you have rather than rip-and-replace. Worth asking your ERP vendor why you need their roadmap when you can orchestrate best-of-breed modules.

ServiceNow commits three years to OpenAI in signal of enterprise consolidation. Revenue-based deal brings computer-use AI agents into enterprise workflows with forward-deployed engineering support. The pattern emerging: established enterprise platforms absorbing foundation model capabilities rather than customers buying direct. Track which vendors are locking in these partnerships before making long-term platform commitments.


5. Elevate Your Leadership with AI for the C Suite

If you’re deploying AI without understanding its value framework, you’ve outsourced judgment to an unknown source. I help middle-market leaders figure out what that means for their specific operations. Book a strategy call and let’s map where AI values intersect with your business decisions.

Found this useful? Forward it to a peer who’s wrestling with AI governance. They’ll thank you.

Until next week!


Stay safe. Stay healthy. Be strong. Lead well.

Chad