Social media has played a key role in global events such as the Arab Spring, the London riots, and the MeToo and Occupy Wall Street movements (to name but a few). It has also been important in giving voice to marginalized groups. But it has a dark side as well, spreading fake news and encouraging extremism and rampant bullying.
When Elon Musk acquired Twitter in October 2022, the dual nature of social media became an even hotter topic as some feared his policies would favor free speech over public safety. His tweet on November 29, 2022, seemed to support the freedom to tweet: “This is a battle for the future of civilization. If free speech is lost even in America, tyranny is all that lies ahead.”
Critics of his policies felt that ending the free Twitter verification system might lead to the impersonation of government accounts and the spread of fake images. They said his decision to reinstate users previously suspended for violating the platform’s rules, including former President Donald Trump, could compromise national security.
This debate was further complicated when Musk turned over files from Twitter’s history to certain journalists, who documented the platform’s historical approach to limiting speech. The files revealed what some see as unnecessary censorship and others as sober filtering of potentially dangerous material. So who is right?
Those who think Twitter went overboard with limitations on free speech in the pre-Musk era often cite the First Amendment, which protects individuals from government censorship, even though social media platforms are private companies. Whether or not you think the First Amendment is relevant, many agree there is a danger in making tech companies the sole arbiters of free speech, since they have a vested interest in promoting their businesses.
To truly understand these perspectives, we need to trace the history of freedom of speech online.
A Very Brief History of Free Speech Online
There were few checks on what could be published online until 1996, when Congress passed the Communications Decency Act, which criminalized “patently offensive” or “indecent” content on the Internet if it was plausible that adolescents and children could view it.
In 1997, the Supreme Court struck down the act’s anti-indecency provisions on First Amendment grounds. What remained was Section 230, a compromise that encouraged Internet service providers to limit the most offensive material while allowing the free flow of information, including Internet pornography and, later, the ability to post freely on Facebook, Twitter, and Instagram.
By the mid-2010s, social media companies would occasionally limit what could be posted, but the general trend was to censor as little as possible. Algorithms were created to make the filtering less biased, but they caused problems of their own by favoring clickbait over real news.
In the second half of 2015, with the rise of ISIS – which used Twitter, Facebook, and other social-media platforms to recruit and spread its propaganda – Twitter suspended over 125,000 accounts for threatening or promoting terrorist acts. In 2016, Russian meddling in the US presidential campaign opened the door for the federal government to take a much more active role in content moderation on social media.
Today, the trend is for social media platforms to limit speech in ways that go beyond what the First Amendment requires, both to protect individuals from bullying and to ensure security for nations and groups of people – a stance being challenged by Musk.
How Does It Impact National Security?
The FBI and the Department of Homeland Security have personnel whose job is to alert social-media companies to foreign propaganda and terrorism. Twitter has long cooperated with the Pentagon to help the US government amplify accounts (often in Arabic or Russian) friendly to US interests.
While this government involvement may have its place, there is a potential downside. If individuals who all hold one worldview are shut out of a public forum, they will likely find another platform where they can meet others like them. This can create breeding grounds for terrorists or other destabilizing groups, which can pose a national security threat.
While some content moderation is necessary on social-media platforms, the free exchange of ideas, opinions, and information gives us the knowledge to make decisions. It is for this fundamental reason that many social media users want to be trusted to evaluate information on their own and to be part of the discussion about when to limit that trust.
For social media intelligence and threat monitoring, the Interfor team is here to help.