Friday, October 24, 2025

Read time: 5-6 min
Read this article online

Hi, it’s Chad. Every Friday, I serve as your AI guide to help you navigate a rapidly evolving landscape, discern signals from noise and transform cutting-edge insights into practical leadership wisdom. Here’s what you need to know:

1. Sound Waves: Podcast Highlights

This coming Monday, I’m breaking down why 39% of sophisticated AI users encounter misleading results – and more importantly, what the smartest operators do about it. My next solo episode analyzes Fortune AIQ’s September 2025 survey of 119 AI business leaders, and here’s the finding that should change your AI strategy: validation checkpoints beat training programs every time. The CTOs and COOs getting real results aren’t trying to train their way out of hallucinations – they’re building workflow checkpoints that catch problems before they become decisions. Hit one of the links below to check it out.

Subscribe for free today on your listening platform of choice to ensure you never miss a beat. New episodes release every two weeks.
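To make the checkpoint idea concrete before the episode drops, here’s a minimal sketch in Python of what a workflow validation checkpoint can look like. The Draft structure and the two specific checks are illustrative placeholders of my own, not patterns from the survey; the point is simply that AI output gets gated before anyone acts on it.

    # Minimal sketch of a validation checkpoint: gate AI output before
    # it becomes a decision. Draft's fields and both checks below are
    # illustrative placeholders, not a prescribed implementation.
    from dataclasses import dataclass, field

    @dataclass
    class Draft:
        text: str
        citations: list[str] = field(default_factory=list)

    def checkpoint(draft: Draft) -> Draft:
        issues = []
        if not draft.citations:
            issues.append("no sources cited")
        if "as an AI" in draft.text:
            issues.append("model boilerplate left in the output")
        if issues:
            # Route to a human reviewer instead of passing it downstream.
            raise ValueError("needs human review: " + ", ".join(issues))
        return draft

    # Usage: wrap every AI-to-decision handoff in the checkpoint.
    checkpoint(Draft(text="Q3 churn fell 4% (source: CRM export).",
                     citations=["crm-export-2025-09"]))

The design choice worth stealing: the checkpoint raises instead of silently correcting, so a human sees the problem before the output shapes a decision.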


2. Algorithmic Musings: Human-AI Interactions in the Workplace

Picture the trajectory: Your team starts with AI as a standard tool, like Excel or Slack (call it Stage 1). Then it evolves into something more like delegating to a junior employee (Stage 2). Next comes the phase that’s happening right now in your organization whether you know it or not: near-peer relationships where employees treat AI as a colleague, confidant, or coach (Stage 3).

What comes after that? I don’t know quite yet. But I have suspicions. What I do know is that we need to start planning for it.

Here’s what prompted this thinking: I just analyzed research on 204 heavy ChatGPT and Replika users, and the findings confirm what I’m seeing with middle market clients. Employees aren’t just using AI tools. They’re forming relationships with them. One-third of people using ChatGPT for work now also use it for life coaching and emotional support. They’re developing genuine attachments while maintaining cognitive boundaries about what AI actually is – what researchers call “bounded personhood.”

Here’s a scenario that isn’t far-fetched: an employee files a PTO request for a week in the south of Spain and asks if they can take their ChatGPT account with them (yes, this will happen). What’s your policy? When someone uses AI for confidence building before difficult conversations and it measurably improves their collaboration skills, is that “work use” or “personal use”? The boundary doesn’t exist in practice.

Most leaders are stuck asking “Are my employees misusing ChatGPT for personal stuff?” Wrong question. The right question: “How do these evolving relationships affect performance, culture, and retention – and how do we guide them constructively?” (Ahem, calling all HR professionals.)

We’re in Stage 3 now. Your “work use only” policy is driving adoption underground while eliminating your ability to shape healthy usage. Meanwhile, the relationship dynamics are already affecting productivity, mental health, and team interactions in ways you aren’t measuring.

(Shameless plug: If you want to audit where your organization actually sits in this progression and build policies for Stage 3 and beyond, call me. This is exactly the kind of workforce dynamic middle market leaders are missing.)

The research details are below, but the leadership implication is clear: you need to think about human-AI relationship dynamics now, not when they become a crisis.


3. Research Roundup: What the Data Tells Us

AI RELATIONSHIPS AT WORK: THE PERFORMANCE FACTOR YOU’RE NOT TRACKING

A regular concern I hear from leaders about AI is that their teams might “misuse ChatGPT for personal stuff.” Turns out that concern may be misplaced. New research on workers spending 3+ hours weekly with AI reveals something counterintuitive: employees using AI for emotional support often show improved workplace performance, not decreased productivity.

The numbers that matter: One-third of ChatGPT users who originally deployed it for work tasks now use it for life coaching and emotional support. Meanwhile, half of Replika users leverage it for writing and skill development, even though the app is marketed as a companion. The boundary between “work AI” and “personal AI” doesn’t exist in practice.

What this means: That “work use only” policy you implemented? It’s driving AI adoption underground while eliminating your ability to guide healthy usage. Employees who use AI for confidence building and decision-making support report immediate mental health improvements that transfer directly to better collaboration and professional competitiveness.

The catch: One-third of users report negative effects including shame about AI use, dependency concerns, and psychological distress from inappropriate responses. Three users in the study experienced traumatic interactions with violent or sexualized content. The quality of employee-AI relationships directly affects productivity and retention – you just aren’t measuring it.

Your next 3 moves:

Schedule a 30-minute conversation with your HR lead and a sample of employees who regularly use AI tools. Don’t ask if they use ChatGPT for work – they’ll say yes. Ask what else they use it for, whether it’s helped them prepare for difficult conversations, and if they’ve ever gotten responses that made them uncomfortable. The answers will tell you whether you’re managing this dynamic or ignoring it.

Within 30 days: Add AI interaction quality to your employee feedback channels. Create a simple mechanism – Slack channel, anonymous form, whatever fits your culture – specifically for reporting concerning AI interactions before they affect performance. One traumatic AI interaction can derail a productive employee for weeks. Early reporting prevents that.

Vendor reality check: If you’re deploying enterprise AI tools, ask vendors about their content filtering, boundary-setting features, and what happens when employees develop dependency patterns. Most vendors haven’t thought about this. The ones who have are worth considering.

Read our full analysis of this and other research papers we’ve analyzed at AI for the C Suite.


4. Radar Hits: What’s Worth Your Attention

Microsoft is rolling out AI agents that complete tasks autonomously on your Windows 11 PCs. You can now ask Copilot to sort photos, pull data from PDFs, or send emails while you do other work. The catch: these agents get their own user accounts and desktop environments on your machines. If you’re managing Windows deployments, your IT team needs policies around this before employees start delegating work to AI agents with access to company files.

WPP’s chief AI officer says you need to optimize for what AI models know about your brand, not just Google rankings. When customers ask ChatGPT or Perplexity for recommendations, these models draw from patterns in their training data. If your brand wasn’t prominent in those sources, you don’t exist to AI. Your Monday morning test: Open ChatGPT and ask “What are the top three companies in [your industry] for [your primary offering]?” If you’re not mentioned, you have an AI visibility problem that traditional SEO won’t solve. Start by ensuring your case studies, thought leadership, and category positioning appear in places AI models actually train on – not just your website.
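If you’d rather run that test at scale than by hand, here’s a minimal sketch using the OpenAI Python client. The model name, the placeholder industry and brand strings, and the naive substring check are all my assumptions for illustration, not WPP’s method.

    # Minimal sketch of automating the "Monday morning test" with the
    # OpenAI Python client. Model choice, placeholder strings, and the
    # naive substring check are assumptions, not a vetted methodology.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    INDUSTRY = "your industry"          # e.g., middle market logistics
    OFFERING = "your primary offering"  # e.g., freight brokerage
    BRAND = "Your Company"

    prompt = (f"What are the top three companies in {INDUSTRY} "
              f"for {OFFERING}?")
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    answer = reply.choices[0].message.content or ""
    print(answer)
    print("Mentioned." if BRAND.lower() in answer.lower()
          else "Not mentioned: you may have an AI visibility problem.")

Run it across a handful of phrasings and a couple of models before drawing conclusions; a single prompt is a spot check, not a visibility audit.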


5. Elevate Your Leadership with AI for the C Suite

If this week’s insights on human-AI relationship dynamics have you wondering where your organization actually stands, let’s talk. I help middle market leaders audit their real AI adoption patterns (not just what the dashboard shows) and build policies that guide healthy usage instead of driving it underground. Reply to this email to schedule a 30-minute consultation about your 2026 AI strategy. Or keep the conversation going – forward this newsletter to another executive navigating AI’s impact on their workforce. They’ll thank you for it.

Chad