Protecting Children From Dangerous Online Content

It has been clear for much of the last 25 years that the internet, and social media in particular, pose serious risks to children. Parents and government officials alike have repeatedly attempted to regulate children’s access to potentially harmful content online. Yet a seemingly simple push for the safety of children is met with strong resistance due to underlying issues that make the endeavor less than clear-cut — namely, debates over free speech vs. censorship and individual rights vs. government control.

Tech companies and free speech advocates maintain that cracking down on harmful content is an infringement on individual rights, including the right to free speech. Such arguments have frequently excused large technology companies – especially social media platforms – from responsibility for dangerous content. On the other side, governments, parents of children who have suffered online, and others maintain that the internet poses both physical and emotional threats to children and must be regulated accordingly.

These threats include cyberbullying, sexual abuse, exposure to sexual content and/or extreme violence, and more. Additionally, studies show that there is an association between depression and social media use, and the US Surgeon General, Vivek Murthy, recently issued a warning that social media is a significant contributor to depression, anxiety, and other mental health issues in teens.

In the face of these accusations, tech companies and free speech advocates maintain their stance that the freedom of the internet should be protected. Despite this resistance, governments around the world are pushing forward legislation to protect children from harmful internet content. Most legislation attempts to walk a delicate tightrope between removing harmful content and protecting individuals’ rights to privacy and free speech.

Both the strategy behind this legislation and its degree of success vary across the Western states responding to the need to make the internet safer for children.

The US Federal Government

At the end of January, the Senate Judiciary Committee held a hearing on child internet safety, with testimony from tech executives, including Meta CEO Mark Zuckerberg, before an audience that included parents whose children have suffered (and some who have died) from online abuse. Members of the committee pressed the tech leaders on whether they supported different pieces of legislation regarding internet safety, and the senators pulled no punches.

Several senators highlighted the disproportionate power of technology companies in the formation and lobbying of these bills. Republican Senator Lindsey Graham said regarding tech legislation, “If you’re waiting on these guys [the tech leaders] to solve the problem, we’re gonna die waiting.” Democratic Senator Amy Klobuchar said, “It’s been 28 years since the internet. We haven’t passed any of these bills…The reason they haven’t passed is because of the power of your [tech leaders’] companies, so let’s be really, really clear about that. What you say matters. Your words matter.” 

This tension between the push for regulation and the political power held by large technology companies means that a significant percentage of the child internet safety bills enacted in the US have been at the state level. Those that have reached the federal level have not been sufficiently comprehensive.

New York City, US

In January of this year, New York City made headlines when Mayor Eric Adams declared social media a “public health hazard” due to its negative effects on young people. He compared social media to any other health hazard, like smoking and guns, and said regulators must take appropriate action. 

Adams said, “Companies like TikTok, YouTube, and Facebook are fueling a mental health crisis by designing their platforms with addictive and dangerous features. We cannot stand by and let Big Tech monetize our children’s privacy and jeopardize their mental health.” 

In line with his statements, several bills were recently introduced to the New York State Legislature that would allow parents of minor children to opt out of algorithm-driven social media, thought to provide a higher degree of protection from dangerous content. They would also enable a time-based lock on social media, allowing children access to social media platforms only at certain times of day (e.g. before or after school). Proponents argue that these bills would not only protect children from harmful online content but also substantially reduce the risk of their personal information and location falling into the wrong hands.

The UK

In October 2023, the British government passed the Online Safety Act to regulate harmful internet content, but the law has drawn criticism over privacy infringement, and its implementation has been slow.

In mid-February, ministers outlined new guidance on children’s mobile phone use, effectively barring phones during school hours. Concerned parents, however, are unsatisfied and continue to call for stronger internet safety measures. They fear that, after being forced away from their devices at school, kids will come home and immediately reach for their phones. More pressingly, the current plan insufficiently addresses exposure to both physical and emotional harm, including verbal and sexual abuse and bullying.


Canada

The Canadian government has proposed a bill called the Online Harms Act, which would require social media platforms to remove harmful posts from their platforms within 24 hours. The legislation includes seven types of harmful content but makes a point of noting that the law will not impact private messages. The bill now needs to be studied by a parliamentary committee and the Senate, both of which may introduce changes before the draft is finalized.


France

In June 2023, the French government approved a new law that requires social media platforms to verify users’ ages and obtain parental consent for anyone under 15 years old. The purpose of the law is to protect children from cyberbullying and other harmful effects of social media. However, according to Laurent Marcangeli, the law’s main advocate in the French National Assembly, further advancements and regulations are necessary.

Similarly, a few days ago, members of the French National Assembly’s Law Committee unanimously approved a bill that calls for protecting children’s rights to their own images. The bill considers protecting a child’s privacy to be among parents’ legal duties. 


Australia

Australia passed the Online Safety Act in 2021, a comprehensive law that covers adult cyber abuse, image-based abuse, harmful content, cyberbullying, and more. Recently, Australian regulators updated the safety standards to require operators of cloud and messaging services to detect and remove child abuse and pro-terror material. Regulators maintain that the new requirement will not compromise end-to-end encryption; if it did, they would be sure to face backlash from Apple and other messaging operators.

A Global Commitment to Protecting Children from Harmful Internet Content

While there remains substantial opposition to regulating children’s access to the internet and social media, the increasing levels of inappropriate and dangerous content – whether sexual, violent, or otherwise – have caught the attention of a sizable lobbying base. The precise method and degree of protection remain hotly debated and uncertain, but leaders around the world are making an effort to address this issue. What is clear is that collaboration and information-sharing between global governments can help expedite regulations and find the right balance between individual rights and child protection.