When should you tell someone that AI helped produce a piece of work?
The answers range from lengthy disclaimers that "AI was used to help write this post" to flat assertions that "this article was 100% written by a human." Neither extreme sits right with us. So we've developed a position on AI disclosure in professional practice, grounded in two of our core values: honesty and rigour.
What We've Decided
For client work: We disclose AI use explicitly in our proposals and obtain consent before, and throughout, each engagement. Our clients know that AI tools are part of how we work, and many engage us precisely because of that capability.
For visual content: We disclose when images, diagrams or frameworks have been substantially created by AI. Visual content carries different expectations than text.
For thought leadership: We generally don't append disclaimers to blog posts, LinkedIn content or articles. The ideas and positions are ours. The drafting process is incidental.
Learning From Peers
Recent high-profile failures show what happens when AI is used without proper oversight. In 2025, Deloitte delivered reports to both the Australian and Canadian governments containing fabricated citations and fictional sources. Those weren't failures of the technology; they were failures of professional responsibility.
We use AI differently. Every citation is verified against primary sources. Every claim is checked. Domain expertise directs the work, and professional accountability backs it.
AI makes us more efficient. Rigour ensures that efficiency doesn't come at the cost of quality.
What Question Should We Be Asking?
We think the current AI disclosure debate is focused on the wrong thing. The question shouldn't be "was AI used?" It should be "does the work reflect genuine expertise, rigorous quality control, and professional accountability?", regardless of which tools were used.
In a few years, this whole conversation will probably seem as quaint as declaring a document was "handwritten, no word processor used." Until then, we've staked out a position that's honest, practical and aligned with how we work.
Read the Full Position Statement
We've published our complete thinking in PKG-POS-001: AI Disclosure in Professional Practice. It covers:
- Regulatory alignment (Australian Voluntary AI Safety Standard, NIST AI RMF, ISO 42001)
- Professional ethics and the AIHS Code of Ethics
- Our quality assurance commitments
- The rationale behind our disclosure framework
- Where we think disclosure genuinely matters and where it doesn't

Download PKG-POS-001: AI Disclosure in Professional Practice (PDF)
We welcome the conversation. If you're working through similar questions, whether as a safety professional, a client or a consulting colleague, we'd be glad to hear how you're thinking about it.