Rage Against the Machine Learning urges mining safety pros to trade red tape for real tech wins and claim their seat at the innovation table.
AI and automation might reshape the future of mining safety, but Cam Stevens argues the real risk is leaving safety professionals out of the conversation.
When he took the stage at APCOM 2025 to deliver his keynote, Rage Against the Machine Learning: Exploring the Role of AI and Automation and its Impact on WHS in High-Risk Industry, the founder of Pocketknife Group didn’t waste time with small talk. He went straight for the jugular: in high-risk industries, safety teams are often bystanders when transformative technology decisions are made – even when those technologies are directly intended to protect people.
Cam, a chartered health and safety professional who has worked at the intersection of safety and technology for years, is also the Safety Lead at the Australian Automation and Robotics Precinct. His consultancy specialises in helping mining, energy, infrastructure, and construction organisations integrate emerging tools – from AI and computer vision to wearable tech – into safety-critical systems without losing sight of human factors.
“I’d deployed 191 wearable and connected work technology projects,” Cam told the audience. “Almost all had a health and safety use case. Yet only eight safety professionals were involved in choosing or rolling them out. That floored me. Why weren’t we in the room?”
That question became the backbone of a presentation that mixed professional candour with TED Talk energy, wrapped in the technical grit of someone who has lived the reality of deploying new tech at the coalface – sometimes literally.
The elephants in the room
Cam structured his talk around what he called the “elephants” – the unspoken, systemic issues holding back meaningful, responsible technology adoption in safety-critical environments.
Elephant One: The old safety playbook is running out of runway.
Cam argued that leadership conversations, safety management systems, and the traditional hierarchy of controls have delivered major improvements in fatality prevention – but they have reached a plateau. Without rethinking these frameworks in the context of digital tools, further progress will stall.
“The hierarchy of controls was first developed in the 1950s,” he said. “It’s taken us as far as it can. If we can change the way work is designed – not just how it’s managed – we can genuinely prevent fatalities. Digital transformation gives us that opportunity.”
He drew a sharp distinction between administrative controls, which simply support existing work processes, and engineering or technology-based interventions that change how work is actually executed. From large-scale automation to real-time risk decision support, Cam urged safety leaders to grasp that difference and advocate for design-level changes.
Elephant Two: Safety is still seen as a blocker, not an enabler.
“In tech, people ask ‘what if we could mine Saturn?’ In safety, the first instinct is ‘stop – risk!’” Cam said. “We’re seen as controllers, not enablers. We need to say yes first, then figure out how to do it responsibly.”
That mindset shift, he argued, is essential if safety teams are to be trusted partners in innovation rather than gatekeepers of the status quo.
Hype without a plan
Cam’s third elephant was the phenomenon of “lazy, non-strategic, hype-driven implementation” – executives demanding AI adoption without a clearly defined problem to solve.
“CEOs are saying, ‘Give me your AI roadmap.’ But safety data is messy. People aren’t machines – our datasets are full of nuance and context you can’t capture in a checkbox. If you start with the tech instead of the problem, you end up with bad outcomes.”
He pointed out that while mining companies have vast repositories of safety-related data, much of it is ill-suited for high-quality AI models. “We collect a lot of information, but there’s a lack of context, nuance, and consideration of diverse human needs. That makes it risky to jump straight into AI-based analytics or chatbots for safety management systems.”
The answer, in his view, is problem-led adoption: identify the operational challenge first, then explore whether AI or automation is the right tool to address it.
Lessons from other sectors
Cam used examples from outside mining to show what’s possible when technology is implemented strategically and with workforce engagement.
In Singapore, major construction projects above a certain value must integrate computer vision systems with wearable devices for biometric monitoring, GPS tracking, vehicle recognition, and linked competency records – ensuring only authorised workers and equipment are on site. In Hong Kong, similar mandates are tied to government contracts.
In Turkey’s steel sector, simulation platforms and interconnected IoT networks are used to model automated manufacturing processes and manage safety risk in real time – with the same visualisation layer used for both training and public engagement.
“These are controls that are already expected in some jurisdictions,” Cam said. “They’re not without ethical considerations, but they’re proving their worth.”
Trust, readiness and capability
Perhaps the most pointed part of the keynote came when Cam spoke about the fragility of trust during tech deployment.
“You install a vision system and suddenly see 10 near misses in a day you thought could never happen,” he said. “If your first response is to fire people, you’ve just destroyed trust.”
This, he argued, is why organisational readiness – cultural as much as technical – is critical. Pilots without a roadmap can backfire, damaging credibility and slowing future adoption. He told the cautionary tale of a data centre contractor whose C-suite quietly introduced a hole-drilling robot on a weekend to avoid union pushback. The workforce discovered it anyway, the union shut it down, and the company’s wider automation program was set back indefinitely.
And then there’s the human capability elephant: deciding which jobs to automate, and when.
“If you automate junior workers’ low-level tasks, you remove their learning opportunities,” Cam said. “Give the tools to experienced people who can handle unintended consequences – and use the time saved to coach and mentor.”
Rage against the wrong machine
Cam wrapped up with a challenge to safety practitioners: stop being absent from the technology conversation.
“What I’m raging against isn’t AI or automation itself,” he said. “It’s careless, context-free adoption. It’s data without context. It’s taking the human out of the loop before it makes sense to do so.”
The real goal, he said, is “responsible innovation” – using emerging technologies in ways that genuinely improve safety and work design, with clear governance, strong communication, and a willingness to step into uncomfortable conversations.
“Get clear on your problem statements. Ask better questions. And get in the room – because if you don’t, someone else will shape how technology reshapes work, and you might not like the result.”
This article was written by Jamie Wade and featured on The Rock Wrangler website on 28 September 2025.