We have been discussing the wrong thing about social media and censorship

Bozhidar Bozhanov
May 31, 2020

Trump attacked Twitter for fact-checking him. And he used his position of power to challenge Section 230, the 1996 law shielding online platforms from liability when they moderate content.

And the discussion circles around one issue: social networks and their role in public discourse. Are they impartial “infrastructure” or “arbiters of truth”? Are they public places where freedom of speech trumps the fact that they are privately owned? Should they moderate, fact-check and clean up bots? Is there such a thing as “hate speech”, and should they decide where its boundary lies? Is it humanly possible to police content produced by hundreds of millions of users, or must censorship be algorithmic?

Those are all interesting and tough debates to have, with arguments drawn from philosophy, history, economics, common sense and political biases. But they all miss the most important point: why we are having these discussions in the first place.

The thing is — it’s not about freedom of speech, it’s about the attention economy.

The core of social media platforms’ business model is their ability to hold their users’ attention for as long as possible, in order to show them ads and get paid for those ads.

This means all content recommendation features (news feed algorithms, suggested pages, friend suggestions, “who to follow”, “Play next” videos, etc.) are tuned to maximize the time the user spends on the site.
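To make the incentive concrete, here is a minimal sketch of what such an objective looks like, assuming hypothetical model outputs; none of the names or fields come from any real platform’s code:

```python
# A deliberately minimal sketch of an attention-maximizing ranker.
# All names and fields are hypothetical, invented for illustration.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_click_prob: float     # output of a hypothetical engagement model
    predicted_watch_seconds: float  # output of a hypothetical dwell-time model

def rank_feed(candidates: list[Item]) -> list[Item]:
    # Rank purely by expected attention captured: P(click) * expected dwell time.
    # Nothing in this objective penalizes falsehood or extremity, so content
    # that provokes outrage scores high without anyone "choosing" to promote it.
    return sorted(
        candidates,
        key=lambda it: it.predicted_click_prob * it.predicted_watch_seconds,
        reverse=True,
    )
```

The point of the sketch is that the objective itself is indifferent to truth or harm; whatever maximizes the product of click probability and dwell time rises to the top.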

How is that relevant? We’ve always had idiots, assholes, racists, conspiracy theorists, wackos, extremists, fascists and stupid people. And they’ve always had platforms: town squares, pubs, Hyde Park corners, newspapers, pamphlets, IRC channels, websites and blogs. Yet how we regulate them has rarely been an issue, because they remained niche, fringe, marginal.

Until they got platforms that promoted them. Not as a result of a conscious decision by social network management, but as a result of algorithms optimizing for attention.

An ex-YouTube employee, Guillaume Chaslot, created AlgoTransparency, a website that tries to investigate how YouTube recommendations work. The results are not unexpected: among the words most common in the titles of recommended videos are “dismantles”, “screams”, “obliterates” and “shreds”.

Facebook’s internal research apparently showed that “64 percent of people who joined an extremist group on Facebook only did so because the company’s algorithm recommended it to them”.

Twitter moved to a non-chronological, algorithmically ranked timeline relatively recently, but it probably has the same effect.

It seems that social media platforms actively push outrageous, often false or misleading content to users who would otherwise not seek it out. The fringe conspiracies that social networks are now “censoring” require censoring precisely because they rose to prominence thanks to those same networks.

So before we get to discuss the issues around online discourse, freedom of speech and media vs platform, we should first address the reason why these issues exist in the first place.

Some advocate for splitting social networks up, just as the US broke up many monopolies in the early 20th century. That doesn’t seem technically feasible, however, because even a federated network would still act as one.

What we should do instead is regulate the recommendation algorithms; regulate the attention economy. We routinely regulate businesses whose short-term benefits come with long-term harm to society, and the extremization of public discourse seems a good enough reason to act.

But how? Are we even sure that what I wrote above is not just rumor and assumption, inflated beyond its real scale? That question points to the first step of regulation: algorithmic transparency. We need to force social networks to show how they recommend content. We need to see the data. We need to know how many viewers Alex Jones got because his videos were shoved down the throat of some unsuspecting user who had just watched a clip a crazy uncle sent them. We need to know how many extremist Facebook groups became popular not through a natural viral effect, but through active algorithmic recommendation. We need to know whether Twitter promoted fringe hashtags. And we need to know how the flaws of these algorithms were abused by bots.
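As a thought experiment, transparency could start with a provenance record attached to every recommendation served, so that auditors can later aggregate why content was shown. A minimal sketch, assuming a hypothetical append-only JSON-lines audit format; all field names are invented for illustration, not drawn from any real platform:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical provenance record: one entry per recommendation served,
# capturing why the item was shown, not just that it was shown.
@dataclass
class RecommendationRecord:
    user_bucket: str         # anonymized cohort, not an individual user id
    item_id: str
    surface: str             # e.g. "play_next", "suggested_group"
    trigger: str             # e.g. "watched:<video>", "member_of:<group>"
    model_version: str
    engagement_score: float  # the score the ranker actually used

def log_recommendation(record: RecommendationRecord, out) -> None:
    # Append-only JSON lines; an auditor could aggregate these to answer
    # questions like "what share of an extremist group's members arrived
    # via algorithmic suggestion rather than search or a direct link?"
    entry = asdict(record)
    entry["ts"] = datetime.now(timezone.utc).isoformat()
    out.write(json.dumps(entry) + "\n")
```

With records like these, the questions above stop being rhetorical and become queries over data.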

After that, we’ll have to debate how to regulate content recommendation. But I believe we have to go down that path, otherwise we’ll always be trying to find and enforce some possibly non-existent lines.

We can afford to have fake news, conspiracy theorists and even fascists, as long as they are not being promoted as a side effect of some online business model.
