Wikileaks Founder Blasts Twitter’s Soft Censorship
Julian Assange, the rogue editor of Wikileaks who is living as a refugee in the Ecuadorian embassy in London, has blasted Twitter over its increasing use of soft censorship. He highlighted a growing trend within mainstream media companies and platforms where dissenting viewpoints are labeled as offensive, or “fake news”.
Assange’s ‘Sensitive Material’ Hidden From View
Assange noted to Twitter CEO Jack Dorsey that some of his tweets were mysteriously marked as “sensitive”:
— Julian Assange ⌛ (@JulianAssange) October 6, 2017
“Fake news” was the most conspicuous moniker to arise out of the ashes of the dumpster fire that was the 2016 U.S. presidential election. Utilized by candidates and the media alike, the term referred to any published story that was subjectively biased, or not based in fact, and written chiefly to attract views.
Wikileaks and Assange were at the front and center of the 2016 political inferno, thanks to the waves of publicity accompanying the slow release of emails leaked from Clinton campaign chair John Podesta and the DNC organization.
Assange is no stranger to criticism, and has faced a barrage of scorn for years: from the press and politicians calling for his assassination, to his having to appeal for bitcoin donations after Wikileaks’ regular funding channels were blocked.
The latest incident appears to be an inconspicuous change to his account settings, made without his consent, that labels the images and media he posts as sensitive — and therefore hides them by default. In the era of clicks and eyeballs, this dramatically reduces the reach of his content.
That's what is seen from a browser that isn't logged in.
Twitter added this to my account without my authorization ("Mark media you tweet as containing sensitive information"). pic.twitter.com/P7gLR89Et5
— Julian Assange ⌛ (@JulianAssange) October 6, 2017
Censorship Spreading to All Forms of Media
This kind of censorship has taken hold recently, spreading beyond the mainstream news sites where it began.
Since the victory of Donald Trump, the controversial trash-talking New York billionaire, there has been an onslaught of reporting focusing on the validity of the election result — and the role Russia may have played in influencing candidate Trump and the American people overall.
From an objective viewpoint, far removed from U.S. politics, the constant focus on Russia seems frivolous. It essentially says voters are easily influenced, and blames Russia for treating the election like an SEO split test.
Similarly, President Trump seems to label anything critical of his actions as fake news, despite being caught lying on multiple occasions — for example when he claimed that China stopped manipulating its currency during the election. The Washington Post countered with:
“China had not devalued its currency for about two years prior to his election. In fact, as recently as 10 days before … [he] falsely blamed China for being a ‘world champion’ of devaluing the yuan”.
The problem here is that when authority figures publish a false statement, the initial story gains much more traction than the apology. The public is conditioned to trust authority figures, and the short attention spans of the Internet generation means focus soon moves on to the next outrage.
Tweet w/ incorrect info that goes against Trump: 9,600+ RTs.
Tweet correcting first tweet's wrong accusation: 263 RTs.
Every single time. pic.twitter.com/LsivYiAyjL
— Josh Jordan (@NumbersMuncher) October 1, 2017
Social Media Platforms Are Policing Opinions
Now, as a ripple spreads outward from the center of a wave, we are seeing the world’s biggest media platforms — Facebook, Twitter and YouTube — implement policies they tout as being designed to stop the spread of content they decide is “hate speech” or “fake news”.
Social media platforms are more nuanced than print or television media, which seem to produce large amounts of content that can only brute-force their audiences into belief. But social media faces its own problem: determining the status of users’ content.
It may be a noble undertaking, but some content creators are certainly having their videos demonetized simply for discussing topics that raise algorithmic red flags.
Upon announcing its updated policy, YouTube wrote:
“There’s a difference between the free expression that lives on YouTube and the content that brands have told us they want to advertise against … we will be implementing broader demonetization policies around videos that are perceived to be hateful or inflammatory.”
Here YouTube appears to be bowing to pressure from companies upset that their ads were appearing before videos they took issue with. It sets a potentially dangerous precedent: what happens when an advertiser wants YouTube to minimize content that threatens its business interests? Will YouTube put profits first again?
Self-Censorship a Big Cultural Problem Too
UCLA professor Sarah Roberts sees this as being problematic, commenting “I’m not sure they fully apprehend the extent to which this is a social issue and not just a technical one”. It seems that putting the use of many terms and phrases into context across cultural and geographic divides is a bridge too far for the big media platforms.
The fear is that censorship which starts out well-meaning eventually leads to self-censorship and, at its most extreme point, 1984-like Orwellian control over the information that is consumed by the public.
The problem here lies with perspectives. As much as Julian Assange may be an asshole, there is much to admire about his conviction in informing the public of the truth, at such personal cost. Without the likes of Assange speaking truth to power, who gets to determine which news is credible and which is not?
The veracity of a story is of the utmost importance; its interpretation is not, and should be left to the reader to determine for themselves.
Should social media platforms police what their users say? Let’s hear your thoughts.
Images via Google Trends, Twitter, DemocracyNow!