Over the last decade, social media has been put on trial by concerned parents, policymakers, influencers, and everyone in between for a plethora of reasons: the purposeful creation of addictive apps, the selling of private information to third parties, cyberbullying, and generally atrocious new updates.
However, the most recent charge in social media's overflowing casefile seems to have generated the most controversy from all parties and attracted the attention of the United States' highest court. The charge: a failure on the part of social networking services (SNS) to effectively police misinformation and the ever-notorious "fake news" on their platforms.
Facebook boasts 3.14 billion users worldwide across its primary products, and Twitter rallies 145 million users each day. From healthy eating pages to farm-loving mothers of four and conspiracy theorists in basements, these platforms house all kinds of users who produce and digest a range of images, videos, beliefs and ideologies at a rate never before seen.
According to the Pew Research Center, an estimated 12% of the US population receive their news from Twitter; that’s 40 million people. Reports by Statista have shown that more than 70% of British businesses use social media to promote their products and services. In Japan, further research by Statista highlights that three-quarters of the entire population use social media services to contact friends and family members. These statistics say one thing: social media platforms have become a part of our daily lives globally. Whether we are scrolling through Instagram when we first wake up, spending hours liking and retweeting or, more mundanely, interacting in group chats on Facebook Messenger, social media has permanently integrated itself into our routines. We feel comfortable connecting on these platforms. But that sense of comfort is a problem.
Users do not have the time or energy to filter through content. Dr Jorge Barraza, a Professor at the University of Southern California, explains that this is because "it is costly to question and critique every piece of information we encounter, so we rely on social cues to attribute validity to a message", a reliance which, in the case of social media, results in the mass distribution of (and belief in) misinformation. Comfort within online communities enables users to more readily believe the content they view on their feed without feeling the need to scrutinize it.
During the early months of the Coronavirus pandemic, a joint survey by Harvard and Northwestern University of 21,000 people in the United States indicated that individuals who got their news from local television and radio showed the lowest levels of misinterpretation and misinformation about the virus. By stark contrast, the 20-30% of respondents who received their information from Facebook Messenger or WhatsApp were more likely to encounter misleading content. It is clear that social media platforms are breeding grounds for conspiracy and misinformation. False facts are then amplified on the apps and made more dangerous by the new-age little bluebirds that dominate our lives and travel from the United Kingdom to Australia 18,000 times faster than the world’s fastest fighter jet. Nevertheless, the question remains: can we hold social media companies like Facebook and Twitter accountable?
COURT IS IN SESSION
In a 2018 Senate hearing, Facebook founder Mark Zuckerberg said, "It's not enough to just connect people…We need to make sure that voice isn't used to harm other people or spread misinformation." In 2020, Facebook took steps to honour this statement, hiring a third-party service to fact-check information, place warnings and restrict posts on both Facebook and Instagram (the photo and video sharing giant the company also owns). Whilst this action may have appeased policymakers concerned about the resurgence of conspiracies such as QAnon, it was heretical to First Amendment fundamentalists and brought Facebook fresh scrutiny from free speech defenders, as the question 'Who gets to decide what is misinformation?' was debated once again in public discourse.
"...if social media apps go from being Big Brother to the ultimate judge of fact or fiction, right and wrong, it cannot end well."
A prominent symptom of this problem arose in late 2020, when protesters of Nigeria's #EndSARS movement saw their Instagram posts about the demonstrations labelled by the app as misinformation, and so received fewer views due to Instagram’s algorithm. One response to this issue might be that we cannot expect Instagram's fact-checking function to always be accurate; it evolves as more information becomes available. Another is that censorship is a slippery slope. This particular instance, in which the app’s fail-safes actually interfered with individuals trying to raise awareness of well-documented, and even live-streamed, acts of civil rights abuse, further goes to show that if social media apps go from being Big Brother to the ultimate judge of fact or fiction, right and wrong, it cannot end well.
It is tempting to vilify social media companies for the fact that, having failed to regulate their platforms in the first instance, they now inconvenience many in the rehabilitation of these spaces. However, by doing this, we also overlook the positives of more permissive policies, such as the uninterrupted inflow of up-to-date information seen in the coverage of #EndSARS on Twitter, which quite literally saved lives by enabling the timely deployment of security, food and emergency medical services. Facebook and Twitter have been forces for some unimaginable good, a product of the unmitigated, free flow of information. So despite their flaws, these platforms can be harnessed as powerful tools for positive change. But in that statement lies the crux of the matter: social media is a tool.
Many would argue that Mark Zuckerberg and the elusive Jack of Twitter bear no responsibility for user content because they do not control their users; each is an autonomous individual. However, whilst this is true, unregulated free speech, as evidenced by the effects of Donald Trump’s tweets, can end in catastrophe. After being accused of inciting violence at the Capitol, Trump was subsequently removed from Twitter. The question to you, dear reader, is this: what qualifies as censorship, and where does it end? The precedent has been set.
Facebook and Twitter did not invent conspiracy theories or closet anarchists. Events that predate the internet boom, like the moon landing, JFK’s assassination and even the Salem witch trials, all had their own sceptics; whether via underground radio channels or hidden in the personals section, conspiracies have always found a way to spread. The gossiping old lady at 401 Drive wasn't born with Facebook; she simply levelled up from knitting circles to smartphones (and Trump moved from TV to Twitter). Misinformation, like all other aspects of life, has simply adapted to the technological age of 5G internet. It is not Facebook’s duty to ensure that everything we read is true, but rather ours, just as in real life, to be vigilant and critical when encountering new information, especially when it comes from individuals with whom we have no established relationship.
AN UNSATISFYING VERDICT?
The court of public opinion seems to have administered its verdict, but is there a need for a retrial? Tools hold no power on their own. You could say that blaming fake news, misinformation and their effects on social media is a bit like blaming murder on a knife; social networks cannot be charged with committing the crime. However, social media is not just a tool that is wielded, it is one that also empowers and, more perilously, emboldens. We all have a stake in the virtual world; it bleeds into reality, and as such this trial requires that we make a unanimous decision about our rights and liberties versus our wellbeing and accustomed ease of rapidly consuming information without interrogating it.
Given the special capability of this tool, the question becomes: will social networking creators be brave enough to do what most parents fear, and make the tough decisions? Will they defend their mission and the wellbeing of the world they sought to connect by censoring perceived malintent, at the risk of upsetting some, being wrong occasionally or becoming estranged?
If the answer is no, then let the defendant plead guilty as an accessory to the crime.
How Migration Myths have Infiltrated European Political Debate
The EU stands out as the perfect target for conspiracy theories: an overly complex system distanced from the citizens it serves and established by the European elite. The legacy of the 2015 Refugee Crisis' politicisation by the far right has funnelled migration conspiracies into legitimate political discourse...
by Christoffer Nielsen