
I Help Companies Adopt AI and I Went to a Rally That Wants to Stop It

The Pause AI movement is making the same argument safety professionals make every day. Define what you don’t know before you scale.

22 April 2026

I attended a Pause AI rally on Capitol Hill in April 2026. And yes, I appreciate the irony.

I help organisations adopt AI. I use it every day. I build consulting frameworks around it. I evaluate AI-powered safety products for clients across all industry sectors. By most definitions, I am an AI advocate. So what was I doing standing on the Capitol lawn with people holding signs that read "Pause AI" and "AI is an existential threat"?

I went with Arianna Howard from Syncra Group, and we spent time in the crowd, talking to the people who showed up and listening to the speeches. I wasn't there to protest. I was there because you can't form a credible view on something this significant from a distance.

The crowd was more credible than the headlines suggest

The Pause AI movement has been easy for the technology industry to dismiss. It gets lumped in with doomsday thinking, Luddism and performative activism. Standing in the crowd, the reality was quite different. The mix was roughly equal parts concerned technologists and everyday citizens, with a peppering of activist energy.

The signs told their own story. "Not Anti-AI, Pro-Human." "This Sign: More Regulated Than AI." "Treaties Paused Nukes, Treaties Can Pause AI." One large placard quoted Dario Amodei, the CEO of Anthropic, on the risks of AI systems we don't fully understand. It says something when the AI builders' own words start appearing on protest signs outside the US Capitol.

A 24-hour train ride and the precautionary principle

I had a good conversation with a young guy called Simon, originally from the Fridays for Future climate movement, who had travelled 24 hours by train to be there. His argument was straightforward: pause what we don't fully understand, and make sure any use case benefits humanity.

That framing was hard to dismiss. It wasn't anti-technology. He was applying the precautionary principle to a novel, high-consequence, poorly understood technology being scaled at speed. Every safety professional in the world would recognise that logic. It's the same reasoning we apply when we say: don't proceed with a high-consequence activity until you understand the failure modes and have controls in place.

The through-line from Fridays for Future to Pause AI carries a generational signal I've been reflecting on. Young people who began by demanding accountability for climate decisions are now demanding accountability for AI decisions. Both movements are fundamentally about intergenerational risk and the feeling that powerful actors are making irreversible choices without adequate consent from the people who will live with the consequences. Makes sense.

The speakers landed on familiar ground

The talks reinforced a single, consistent premise: we do not yet understand the unintended consequences of accelerated AI development, and we need guardrails before we need speed. The speeches acknowledged that AI is already delivering value in specific domains. The concern wasn't with the tools that exist today. It was with the trajectory: the race toward superintelligent systems being built without proven safety mechanisms, without public consent and without governance frameworks that match the pace of capability.

This is not a fringe position anymore. The Future of Life Institute's Superintelligence Statement, released in late 2025 and now carrying over 130,000 signatures, calls for a prohibition on superintelligence development until there is a broad scientific consensus that it can be done safely. The UK House of Lords debated an international moratorium in January 2026. What started as an open letter has become a parliamentary question.

It's also worth remembering that the Future of Life Institute coordinated a similar letter in 2023, calling for a six-month pause on training models more powerful than GPT-4. That pause was largely ignored. The labs kept building. The fact that the ask has now escalated from a temporary pause to a full prohibition tells you something about where the concerned community believes the trajectory is heading.

The nuclear analogy is powerful, but incomplete

One of the more provocative signs at the rally read: "Treaties Paused Nukes, Treaties Can Pause AI." It's a compelling rhetorical move. It reframes the debate from "should we slow down innovation" to "we've done this before with existential technology." As persuasion, it works.

But the analogy has limits worth taking seriously. Nuclear weapons involved a small number of state actors, physical supply chains that could be monitored and materials that could be tracked and controlled. AI capability is far more distributed. The compute required for frontier models is concentrated in a handful of labs today, but the knowledge, the talent and increasingly the open-source tooling are diffuse. Enforcing a pause on nuclear development meant monitoring a small number of facilities. Enforcing a pause on AI development at the frontier is a fundamentally different governance challenge.

That doesn't invalidate the principle. It does mean the mechanisms need to be designed for the specific characteristics of this technology, not borrowed wholesale from a different era.

The real gap for safety leaders is closer to the ground

The organisations we work with are not thinking about superintelligence. They're thinking about governance.

Across our client base at PKG, the pattern is consistent. Organisations are trying to close the gap between individual AI experimentation and governed, organisational adoption. It's the gap we measure through our AI Maturity Model: the distance between someone using Gemini or Claude on their phone at their desk and an organisation that has embedded AI into its workflows with clear rules, clear intent and clear accountability. Most sit at the early stages. The immediate risk for these organisations isn't runaway superintelligence. It's the ungoverned adoption of the tools that already exist.

The superintelligence debate matters for frontier labs and policymakers. But for the safety professionals, operational leaders and technology buyers we work with every day, the urgent work is closer to the ground. Responsible adoption. Clear governance. Making sure the tools already in the wild are deployed with intent. Technology doesn't care about your strategy if you haven't done the thinking first.

Before AI can solve anything, define what solved means

One of the most useful framings in the broader discourse around AI and futures thinking comes from the World Economic Forum. The argument: before superintelligent AI can solve major challenges, we need to agree on what "solved" means. If we can't align on those definitions among ourselves, encoding them for AI systems is a much harder problem.

We've built an entire methodology around this principle. Our Question-First Data Strategy™ starts from the same premise: define the question before you reach for the tool. We've deployed it across energy utilities, resources companies and engineering firms. The lesson is always the same: if you don't know what you're trying to learn, no amount of data or technology will teach you. That the World Economic Forum is now articulating the same logic at a civilisational scale says something about where the gap really sits. The principle scales. The discipline doesn't change.

The parallel that safety professionals should notice

The rally's macro argument mirrors what we do at the micro level every day. Define what you don't know. Identify the failure modes. Put controls in place before you scale. We do that for safety teams and operational technology. Protesters on Capitol Hill want it applied to frontier labs building superintelligence. Same principle. Different scope.

This is also why the safety profession has something valuable to contribute to the AI governance conversation, and why it's frustrating that we're largely absent from it. Safety professionals understand risk governance, the precautionary principle, the hierarchy of controls and the difference between theoretical safety and operational safety. These are exactly the capabilities and frameworks that frontier AI development is missing. The AI policy debate is dominated by technologists, ethicists and policymakers. Practitioners who spend their careers managing high-consequence, complex systems in the real world are conspicuously quiet.

Pro-adoption and pro-caution at the same time

There's a false binary in the current AI discourse. You're either an accelerationist who believes AI will solve everything, or you're a doomer who wants to pull the handbrake. Most thoughtful practitioners sit somewhere in between, and that middle ground is chronically underrepresented.

You can believe that current AI tools are delivering genuine value to safety practice. You can use them daily. You can help organisations adopt them responsibly. And you can simultaneously believe that the race toward superintelligence needs stronger guardrails, clearer governance and genuine public input before it accelerates further.

You don't have to agree with every position at a rally to take the underlying signal seriously. And this one is.

If you work in safety, holding both of those positions isn't contradictory. It's our whole job.