GUEST COLUMN: AI needs regulation, industry should lead way
· Toronto Sun

To my fellow chief executives, board members, and industry associations who shape the direction of AI development in Canada and globally: The era of vague safety frameworks has passed; the era of enforceable standards must begin.
The families of Tumbler Ridge deserve more than a meeting in Ottawa and press statements expressing concern. They deserve to know that the companies operating AI platforms and technologies have committed — publicly and with enforceable consequences — to standards that will prevent the same failure from occurring again.
Let me be clear: many concerns about the negative impacts of AI are overblown and unfounded. The long-term value that AI brings to society substantially outweighs the risks, provided those risks are managed. Thanks to its power and reach, this technology will bring value that many cannot even imagine at this point in human history.
That is precisely why serious efforts must be made to control and regulate it — which, by definition, means addressing the companies behind the AI technologies. The federal government, under Minister Evan Solomon, is right to demand changes that improve safety for Canadians.
However, these rules and regulations should not come from the government, as was recently suggested in an op-ed published by the Globe and Mail. The companies designing these systems understand them better than any regulator. They are best positioned to design multi-layered guardrails that go far beyond simple filters without further delaying the technological progress made in recent years.
This includes defining the technical thresholds for what constitutes a credible threat of violence, establishing escalation protocols that are both operationally sound and respectful of privacy, and determining the exact points where automated detection must transition to mandatory human review. By investing in these systemic safeguards, companies can ensure that safety is an architectural feature of the technology, not an afterthought.
Invest in safeguards
And it makes business sense. Companies that operate in regulatory vacuums invite the kind of blunt, reactive legislation that tends to follow tragedies. They also invite liability exposure, reputational damage, and erosion of the public trust that is, ultimately, the foundation on which their products depend.
OpenAI’s handling of the Tumbler Ridge shooter’s account — and its silence to B.C. officials in the meeting held the day after the shooting — has generated exactly the kind of scrutiny that no company seeking to expand its presence in Canada can afford.
Moreover, it creates negative consequences for the entire industry and, counterintuitively, for the safety of society itself. When we react to a crisis with blunt, performative legislation, we often prioritize signaling action over solving problems. Rash decisions that disregard technical nuance don’t just stifle our most transformative sector; they create a false sense of security while leaving the actual, complex loopholes wide open.
A serious, industry-designed code of conduct for AI safety — one that carries genuine force rather than serving as a public relations document — would need to address several core questions.
It must provide industry-wide standards to remove ambiguity and define clear and multi-layered guardrails to avoid one of the most troubling revelations from Tumbler Ridge: the decision not to contact police, which was made against the judgment of employees within the company who believed the content warranted it.
Additionally, clear and strict reporting structures and real accountability are essential. In practice, this means that when an automated system flags content, humans must step in and review it according to consistent criteria. Violations should trigger serious investigations with transparent and meaningful outcomes.
Cross-border coordination required
Lastly, any such framework must be established through genuine cross-border coordination. The internet does not recognize national boundaries. A Canadian-only framework will be incomplete so long as major AI platforms are headquartered and governed elsewhere. This might be the most difficult step, but Canada has repeatedly signaled its hunger to lead; think back no further than Mr. Carney's celebrated speech in Davos.
Pursuing such meaningful changes and safety requirements demands a level of cooperation that, historically, only dark and sad events like Tumbler Ridge can inspire. Let's respond to this tragedy with adequate speed, but also with seriousness and intellectual honesty.
The question is not whether AI companies bear sole responsibility for what happened in Tumbler Ridge. They do not. The question is whether the industry has adequate, binding, and consistently applied standards for what to do when credible evidence of planned violence surfaces. It does not.
The private sector has an opportunity here that it would be unwise to squander: to demonstrate that technological innovation and public safety are not competing values, and that industry is capable of governing itself with the seriousness this moment demands. If the industry does not seize that opportunity, governments will act — and they will do so on a timeline and in a manner over which the technology sector will have far less influence, and in ways that could prevent Canadians from fully benefiting from this transformative technology.
Lead now or be led. The choice belongs to us.
– Sayan Navaratnam, founder and CEO of The Malar Group of Companies, is recognized as a visionary entrepreneur whose innovative investment approach has revolutionized multiple Canadian and US businesses.