Can an AI Agent Review Your SWMS? Yes – But Not Before You Do This First
Recently, we spoke with a construction supervisor who’d seen one of our talks on AI and safety. His question was simple:
“Could we build an AI agent that checks our SWMS for quality? I want to drop a SWMS in, have it do 80% of the review, and tell us what’s missing.”
In Australia, SWMS (Safe Work Method Statements) are a formal requirement for high-risk construction work.
For international readers: when we say SWMS, think of your equivalent documents – risk assessments, method statements, JHAs/JRAs, PTWs, RAMS, and similar. The same ideas apply.
It’s exactly the kind of question safety teams are starting to ask – and a good example of how to sensibly explore AI agents, rather than just “buying AI” and hoping for the best.
Here’s the short version of how we think about introducing AI for health and safety workflows:
1. Start with the questions people are already asking
Most organisations now have Microsoft Copilot enabled across the enterprise. Microsoft’s intent is to democratise experimentation: let people play, see what they ask, and learn from that.
These are the kinds of questions workers are asking Copilot:
- “Check this SWMS against our standard.”
- “Find the latest working at heights procedure.”
- “Summarise this in five bullet points.”
- “Compare our critical controls to the client’s specification.”
Take the time to observe the questions your people are asking Copilot. That tells you where the pain really is – and where AI might actually help.
2. Decide what “good” looks like in your context
You can’t build a “SWMS quality review” agent until you know what “quality” means in your business.
That means some work outside the AI tools:
- Agree on what a gold-standard SWMS looks like in your organisation.
- Create a simple quality rubric or template (clarity, hazards, controls, alignment to standards, etc.).
- Be honest about what an agent can’t tell you:
  - Did the work team actually have a conversation?
  - Was the SWMS communicated, understood, and adapted as work changed?
The form (or PDF) won’t answer those questions – and neither will an agent. Some things still belong to supervisors and crews.
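To make the rubric idea concrete, here is a minimal sketch of how a quality rubric can be expressed as a simple checklist that flags apparently missing elements before a human reads the document. Everything here is illustrative: the section names and keywords are assumptions, not your standard, and a real rubric would be far richer than keyword matching.

```python
# Illustrative only: the section names and keywords below are assumptions.
# Replace them with whatever your own gold-standard SWMS template requires.
REQUIRED_SECTIONS = {
    "high-risk work activity": ["activity", "task"],
    "hazards": ["hazard"],
    "controls": ["control"],
    "responsible person": ["responsible", "supervisor"],
}

def review_swms(text: str) -> list[str]:
    """Return findings for a human reviewer: rubric sections the
    document appears to be missing, based on simple keyword checks."""
    lowered = text.lower()
    findings = []
    for section, keywords in REQUIRED_SECTIONS.items():
        if not any(keyword in lowered for keyword in keywords):
            findings.append(f"Missing or unclear: {section}")
    return findings

sample = ("Task: roof sheeting. Hazard: fall from height. "
          "Control: harness and edge protection.")
print(review_swms(sample))
```

The point isn’t the code; it’s that once “good” is written down this explicitly, both a human reviewer and an AI agent can be held to the same checklist.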
3. Use Copilot Studio as a low-cost laboratory
Once you know what “good” looks like, then it’s time to play with agents.
In Microsoft Copilot Studio, the pattern we recommend is:
- Create an agent – e.g. “SWMS Quality Review Agent”.
- Upload a focused knowledge base:
  - One or two gold-standard SWMS examples
  - Your SWMS template/quality rubric
  - A key external reference (e.g. a regulator’s code of practice)
- Set it to “only use specified sources”, so it doesn’t wander the open web.
- Start with one activity or risk (e.g. work at height), not the entire SWMS library.
- Ask targeted questions:
  - “Assess this SWMS against our template.”
  - “List any missing controls for working at height.”
  - “Flag inconsistencies with our standard.”
You won’t get perfection. But you can get a solid 80% assist: a faster first pass, pattern spotting, and a more consistent baseline for human reviewers.
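As a starting point, the agent’s instructions might read something like this. This is illustrative wording only, not Copilot Studio boilerplate; adapt it to your own rubric and sources:

```text
You are a SWMS quality reviewer. Assess the uploaded SWMS only against
the attached template, rubric, and gold-standard examples. For each
rubric item, state whether it is met, partially met, or missing, with a
short reason. Flag any missing controls for work at height. Do not
invent requirements that are not in the attached sources. Finish with a
summary list of gaps for a human reviewer to confirm.
```

Note the two guardrails: the agent is restricted to your sources, and its output is framed as input to a human reviewer, not a verdict.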
4. If the prototype works, then talk “product”
If your prototype agent is clearly saving time and lifting quality, that’s when it makes sense to invest.
We usually suggest:

- Estimating the value:
  - Time saved per SWMS (or risk assessment) review
  - Reduction in rework/rejects
  - Better alignment to critical controls
- Deciding whether to:
  - Hand it to your internal IT team to wrap in a simple, branded front-end (e.g. Power App + backend agent), or
  - Partner with an external tech house if you don’t have internal capability.
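A rough back-of-envelope sketch of that value estimate might look like this. The figures below are placeholders, not benchmarks; substitute the numbers you actually observe in your pilot:

```python
# Hypothetical figures -- substitute your own observed numbers.
reviews_per_year = 400          # SWMS reviews across the business
minutes_saved_per_review = 20   # agent handles the first pass
reviewer_hourly_rate = 90.0     # fully loaded reviewer cost

hours_saved = reviews_per_year * minutes_saved_per_review / 60
annual_value = hours_saved * reviewer_hourly_rate
print(f"{hours_saved:.0f} hours saved, roughly ${annual_value:,.0f} per year")
```

Even a conservative estimate like this gives you a defensible number to weigh against the cost of wrapping the agent in a proper front-end.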
Most organisations doing this at scale are:
- Running enterprise models (e.g. OpenAI) in their own environment, and
- Wrapping agents in simple tools (web forms, Power Apps, etc.) so the experience is straightforward for supervisors and engineers.
At PKG Safety Innovation, we typically work upstream of that build: helping safety teams define the right use cases, build capability, and shape a strategy so that IT (internal or external) has something clear and valuable to deliver.
5. Design an experiment, not an AI transformation
You don’t need a grand “AI transformation programme” to begin. You need a good experiment.
Start with a 6-week sprint:
- Let a pilot group play with Copilot and collect the questions they ask.
- Pick one or two high-value use cases (like SWMS quality review).
- Build a basic agent in Copilot Studio.
- Test it on real work and document:
  - What it gets right
  - Where it falls over
  - How it changes effort and quality
- Decide whether to iterate, scale, or park it for now.
You learn fast, spend very little, and avoid the trap of “buying AI” without a clear job for it to do.
Get started
An AI agent can help you review SWMS – or whatever your local equivalent of a risk assessment/method statement is – but only if you:
- Start with the problems and questions your people already have,
- Do the work to define what “good” looks like,
- Use tools like Copilot Studio as a safe, low-cost laboratory, and
- Treat early agents as experiments, not finished products.
That’s the space we work in at PKG Safety Innovation: helping organisations move from curiosity about AI to practical, problem-led experiments that actually improve safety and critical risk management.