
What Using AI Can Teach Us About How We Think at Work

An observation from coaching professionals on AI tools across a high-risk industrial organisation, written by Cam Stevens.

10 March 2026

I’ve been running one-on-one AI coaching sessions inside a mid-sized industrial organisation for the past few months. The work is practical — helping safety, operations, and technical professionals get more out of Microsoft Copilot and large language models in their day-to-day work. It’s hands-on, session by session, tailored to each person’s role and what they’re actually trying to get done.

The coaching itself isn’t what this article is about; it’s going well and doing what it’s supposed to do. But one observation keeps surfacing through the process that I think is worth sharing, because it’s showing up consistently, not just in this engagement but across the other organisations I’ve been coaching and supporting with their digital safety transformation strategies.

People respond to AI tools in predictable ways

Across every group I’ve coached, three broad profiles keep appearing.

There are people who pick the tools up quickly and start finding genuinely useful applications within a session or two. They ask good questions. They iterate. When an output isn’t right, they can tell you why it’s not right and adjust their approach. These aren’t necessarily the most technically literate people in the room. They’re the ones who were already thinking critically about their work before AI showed up.

Then there are people who’ve decided early that this isn’t for them. They’ll call themselves a Luddite or say they’re not a tech person. Sometimes that’s genuine anxiety, sometimes it’s just a comfortable position they’ve settled into over the years. Either way, it’s hard to shift, and it’s not something a training session on prompt engineering is going to change.

The third group is the most interesting to me. They’re willing, they show up, they try. But they struggle in ways that don’t seem to have anything to do with the technology. They can’t clearly describe what a useful output would look like. They can’t evaluate whether what they got back actually helps. When I ask what they were trying to achieve, the answer is vague. They’re going through the motions of using AI without the underlying thinking that makes it productive.

That third group is the one that’s made me pay attention, because in most cases they weren’t producing particularly sharp work before AI came along either. The tool didn’t create that gap. It just made it visible.

The capabilities that seem to matter most aren’t technical

The more coaching sessions I run, the more I see the same three capabilities separating the people who get value from AI tools from those who don’t. None of them has anything to do with technical skill.

The first is what I’d call professional agency — whether someone actively owns and interrogates their own work, or whether they’re just following a process. There’s a difference between a safety professional who understands why a control exists and one who just checks that the paperwork says it does. I recently heard a group defend a flammables handling process they’d been running for a decade on the basis that nobody had been hurt. They hadn’t been safe. They’d been lucky. That’s what an absence of professional agency looks like — and it’s the same thing that shows up when someone sits in front of an AI tool and just accepts whatever it gives them.

The second is metacognition — the ability to think about your own thinking. One participant had been using AI for weeks and was getting poor results. It turned out he didn’t realise he was logged into the free web version of a chatbot instead of his organisation’s licensed enterprise tool. He wasn’t noticing what was actually happening in front of him. The people who get the most from these tools are the ones who can look at an output and say, “That’s not what I needed, let me work out why and try again.” That takes awareness of your own intent and the gap between what you asked for and what you got back.

The third is interaction style. Large language models respond well to context, specificity, and framing. People who naturally communicate that way — who think out loud, give background, and iterate in conversation — tend to get noticeably better results than people who treat the tool like a search engine with one query in and one answer out. That’s a learnable skill, and it’s part of what the coaching develops. But some people pick it up much faster than others, and it tends to correlate with how they already approach their work.
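To make that difference concrete, here’s a hypothetical pair of prompts. They’re illustrative only, not taken from a real session. The search-engine version looks like: “Write a toolbox talk about working at heights.” The context-rich version of the same task looks like: “I’m a safety advisor at an industrial contractor. Draft a ten-minute toolbox talk on working at heights for a scaffolding crew, focused on the ladder near misses we logged last month. Keep the tone conversational and finish with three discussion questions the supervisor can ask.” The first gives the model almost nothing to work with. The second gives it a role, an audience, a purpose, and constraints, and in my experience it reliably produces something much closer to usable.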

AI seems to amplify what’s already there

There’s a popular idea that AI will be a great equaliser — that it’ll lift everyone to a baseline of competence. There’s another idea that it’ll make everyone lazier or dumber. Based on what I’m seeing in practice, I don’t think either of those is quite right.

What engaging with LLMs on a regular basis actually seems to do is amplify what’s already there. If someone already has good critical thinking, AI enhances it. They ask better questions, move faster, and produce better work. If someone doesn’t have that foundation, they’re no better or worse with the tool than without it — they just produce output that looks more polished on the surface without being any more thoughtful underneath.

That’s worth paying attention to. AI-generated mediocrity is harder to spot than the regular kind, because it comes with better grammar and more confident formatting. The substance hasn’t improved, but the presentation has — and that can fool people if they’re not looking carefully.

What this might mean for how we think about AI readiness

I want to be clear — I’m not saying technical AI coaching is unnecessary. It’s what we do at PKG Safety Innovation and through the Safety Innovation Academy, and it works. People need help understanding what these tools can do, how to use them effectively, and where they fit into their actual workflows. That practical, hands-on work matters.

But the observation I keep coming back to is that the people who get the most from coaching are the ones who already had strong thinking habits before we started. They were already asking good questions, already interrogating their own work, already comfortable with iteration and ambiguity. The coaching gives them a new set of tools to apply those habits to. It accelerates what was already there.

That raises a question worth thinking about: if you’re investing in AI readiness across your organisation, are you also investing in the underlying thinking capabilities that make the tools useful? Professional agency, critical thinking, reflective practice — these aren’t AI skills, they’re professional skills. But they’re turning out to be the strongest predictors of who actually gets value from AI and who doesn’t.

A note for people leading AI rollouts in their organisations

If you’re partway through an AI adoption program, pay attention to the patterns. Not just who’s logging in and who isn’t, but who’s actually producing better work with the tools and who’s just using them. There’s a difference.

The person who picks it up quickly probably isn’t more technically gifted than the person who’s struggling. They’re likely just someone who was already doing the kind of thinking that AI rewards — questioning, iterating, being specific about what they need and honest about whether they got it.

That’s not a reason to stop doing what you’re doing. It’s a reason to think more broadly about what you’re developing when you develop your people. The technology will keep changing. The thinking that makes it useful won’t.