Another day and another massacre fueled by religious intolerance. As of this writing, the United States is still reeling and trying to make sense of the attack on a Jewish community center in San Diego. Last week it was an attack by Islamic fundamentalists on Christians praying on Easter Sunday in Sri Lanka. A few weeks before that, praying Muslims were gunned down by a lone-wolf killer in New Zealand who broadcast his gruesome acts on Facebook Live. It is easy for cultural pundits on both sides to blame whole religions and ideologies, but ultimately the common thread is that our blood is all the same color.
Common among the attackers was the spewing of hate on the Internet and social networks. Deeper challenges are at play: should hate and misinformation be allowed online under the democratic principle of free speech? As with many of the challenges Interfor covers, there are no easy answers, but when the stakes are so high that violence occurs, those questions need to be addressed.
Big Tech is growing up
When many of the Silicon Valley platforms started gaining traction (with teams led by then-young millennials), the goal was to accumulate as many users as quickly as possible. And while that is still the Silicon Valley ethos, explosive growth above all else, many of the Bay Area elite have matured into (more) responsible business leaders. One example: the about-face Mark Zuckerberg made after initially denying that Facebook had anything to do with the outcome of the 2016 presidential election.
This change was also palpable at the turn of the decade, when images of the "Arab Spring" and crowds throwing off decades of dictatorship gave way to images of ISIS beheading journalists on Instagram. The stakes changed as terrorists used social platforms not only to recruit but to broadcast images of extreme violence to terrorize their foes. That was the wake-up call for many platforms (though social media platforms such as Twitter and Reddit served as bullhorns for alt-right groups, and the tech companies were slow to censor them).
We tend not to care much about events on the other side of the world until they come into our living rooms and onto our laptops. At home, we are still feeling the impact of the mud-slinging campaigns of the 2016 elections and Brexit. Which brings us to the problem at hand: knowing there will be hate speech on social media platforms (much of it protected as free speech), to what extent should it be regulated?
Yelling “Fire!” in a crowded theater
There is the age-old idea that just because you can yell "fire" in a crowded theater doesn't mean you should. Most users on these platforms have no malicious intent; the worst some of us do is repost a piece of fake news without knowing it was fake. The problem, however, is the use of these platforms to organize violence against specific groups.
A recent New York Times article shows a major rift opening between the United States and the rest of the world over regulating content on social platforms. Many other countries impacted by violence (e.g., New Zealand) want to restrict content. While the tech companies based in the United States that own the platforms have the right to censor what they see fit, they have chosen to lean heavily on the First Amendment. Silicon Valley libertarian views aside, they most likely have a financial incentive not to censor content. Of course, this does not apply to all content (nudity is banned on many platforms), but racist hashtags and dog-whistle anti-Semitism seem to be allowed.
When faced with public and government pressure, platforms such as Facebook have begun self-policing using human reviewers and AI. After the outcry over Russian interference in the 2016 presidential election and concern about other elections being "hacked" (such as the recent Israeli elections), big tech has begun to clean up fake accounts (many of which spew fake news, hate, and misinformation). But it is not enough, and without sustained pressure from users and Congress, nothing really changes dramatically.
Throughout history, technologies created for benign and collaborative purposes (such as linking your friends on a social network) have morphed into something ominous. No one government or organization has the answers; facts on the ground change quickly alongside platform upgrades. The determination of those seeking to harm us only grows stronger after each attack. Unplugging is not an answer: burying our heads in the sand will only leave us further behind. The hard path is somewhere in the middle, but we need to ensure we are not trampling on democracy on our way to staying safe.