Committee Briefing: The Internet’s Unregulated Water Supply
Why a decade of inaction on online harms reveals a fundamental misunderstanding of how digital platforms shape our reality.
In November 2024, the Standing Committee on Canadian Heritage tabled its fourteenth report, a document titled Harms Caused by Illegal Sexually Explicit Material Online. Like many government reports, it is filled with sober analysis, statistics, and recommendations. It documents the explosive growth of child sexual abuse material (CSAM), the rise of AI-generated deepfakes, and the profound harm inflicted on victims. Yet, beneath the formal language lies a narrative of profound systemic failure. The report captures a decade of parliamentary studies, expert warnings, and the pleas of victims, all culminating in a familiar feeling for many Canadians: a sense of helplessness in the face of a problem that grows exponentially while the proposed solutions move at a glacial pace.
This feeling of powerlessness is the core problem. You see the headlines, you understand the danger, but the scale of the issue feels insurmountable. The conversation often gets stuck in a frustrating loop of free speech debates and technical complexities, leaving you with the impression that nothing meaningful is being done. This article will deconstruct that frustration. Using the evidence presented to the Heritage Committee, we will analyze why our current approach has failed and reframe the solution around a single, critical principle. The goal is not to leave you with more anxiety, but with a clear understanding of the true battlefield, empowering you to see the path forward.
The Exponential Contamination
The first step is to grasp the scale of the problem, which has outpaced our social and legal infrastructure. The committee heard that the volume of sexually explicit material “grows exponentially day by day.” This is not hyperbole. The Sûreté du Québec noted a 295% increase in reports of CSAM received and processed between 2019 and 2024. Statistics Canada data is even more alarming, showing the rate of police-reported child pornography increased by 290% between 2014 and 2022.
This existing crisis is now being amplified by generative artificial intelligence. Witnesses explained that “lifelike deepfakes can now be generated using just a single photo of a person,” making everyone vulnerable. This technology is not a niche issue. One professor told the committee that the number of deepfake videos increased by 550% from 2019 to 2023, and that over 95% of these are pornographic. The gendered nature of this threat is impossible to ignore. A 2023 study of 95,000 deepfake videos found that 98% were sexually explicit, and of those, 99% targeted women. The problem is clear: the volume and velocity of this harmful content overwhelm any system designed to deal with individual instances of abuse.
Chasing Poisons, Ignoring the Pipes
Here’s the detail I find most revealing: our entire approach to online harms has been fundamentally misaligned with the nature of the internet. For this, I want to introduce an analogy: The Unregulated Water Supply.
Imagine the internet as a global water system. For decades, our legal framework has focused on identifying and punishing the individuals who dump poison into the water. This is the role of the Criminal Code. It is a necessary, but entirely reactive, function. We wait for the harm to occur, for the poison to enter the system, and then we try to track down the culprit. As witnesses pointed out, this is an “impossible fight.” The content, once uploaded, is nearly impossible to permanently remove. For survivors, this creates a state of constant re-traumatization, never knowing when or where the abusive material will resurface.
The core flaw in this approach is that it ignores the pipes. It ignores the infrastructure that delivers the water to our homes. The social media platforms, search engines, and hosting services are the water treatment plants of the digital world. They control the algorithms and the architecture through which all content flows. Yet, as Associate Professor Emily Laidlaw told the committee, these platforms have been “lightly regulated,” with “no minimum standards and no ways to hold companies accountable.” We have been trying to solve a systemic infrastructure problem with a criminal justice toolkit designed for individual actors.
A Decade of Reports Without Results
This disconnect between the problem and our approach is not new. It represents a chronic failure of governance that has persisted for over a decade, inflicting a severe human cost. The most powerful testimony on this point came from Carol Todd, mother of Amanda Todd, a teenager who took her own life in 2012 after relentless online exploitation and harassment. Her words to the committee should be read in full.
I have sat on six standing committees since 2012, on technology-facilitated violence, on gender-based violence, on exploitation against children and young people, on other ones on intimate images, and now this one. I could copy and paste facts that I talk about: more funding, more legislation, more education, more awareness. Standing committees then come out with a report. We see those reports, but we never know what happens at the end: Do these things really happen? ... We are harming Canadians, our children and our citizens when things don’t get passed.
This is the injustice that should trigger your indignation. The problem has been studied repeatedly. The solutions have been debated. Yet for twelve years, the situation has worsened. This points to a critical question: what needs to change to break this cycle of inaction?
From Helplessness to Architectural Accountability
The answer lies in shifting our focus from the content to the architecture. This is the central principle that offers a path from helplessness to agency. The government’s proposed Bill C-63, the Online Harms Act, represents the first legislative attempt at this shift. While the bill has generated widespread debate, its core concept is what matters. Experts who advised the government told the committee that the bill’s most important feature is the creation of a “duty to act responsibly” for online platforms.
This is not simply a requirement to take down bad content faster. It is a systemic obligation. It forces platforms to analyze the design choices they have made and mitigate the risks that their own systems create. It moves the responsibility from the end-user, the parent, or the police officer, and places it on the corporation that built the environment in the first place. The creation of a Digital Safety Commission is designed to oversee this duty, providing a regulatory body with the power to inspect the infrastructure, not just police the content. This changes the entire dynamic. It mandates that our digital water treatment plants must install filtration systems.
The debate over online harms is not a simple binary between free speech and censorship. It is a question of public safety and corporate responsibility. For too long, we have allowed the companies that build our digital world to operate without the basic safety standards we expect of those who build our physical one. Understanding this allows you to cut through the noise and focus on the one question that truly matters when evaluating any proposed solution: does it address the underlying architecture of the platforms, or is it just another attempt to scoop poison out of an unregulated water supply?
Our focus must be on the blueprints of the digital world, for it is in that architecture that the foundations of a safer public square are laid.
In Other News...
Beyond this deep dive, you can find more analysis and commentary on the On Hansard site.
Sources:
Standing Committee on Canadian Heritage. (2024, November). Harms Caused by Illegal Sexually Explicit Material Online. 44th Parliament, 1st Session. House of Commons of Canada.