
Accountability and Centralized Internet

Published: March 10, 2021
Written by: Vít Černý

Who are the gatekeepers today? Who stands between you and the people you listen to? Today, the answer is mostly simple: nobody, because internet platforms don't have editors. That comes at a price, though: who is responsible for the platforms? How do you set and enforce some basic rules there? Of course, this all comes down to accountability.

Banning Is Not Always the Same

To demonstrate the problem more clearly, I will compare two cases.

The first case: Twitter permanently suspending Donald Trump's account. While I don't like Donald Trump, I think this was a bad decision, one that only highlights Twitter's dishonest judgment in similar cases. There are two relatively common objections to this opinion, and in arguing against them I will explain my point.

  1. “Twitter is a private company, they can do whatever they want on their platform.”

    Well, that's technically true, but let's take a closer look. Twitter states that they are not responsible for any content posted on their platform, which in the end means that even when ISIS posts on Twitter, Twitter denies accountability, though they admit they should have acted sooner.

    So Twitter deletes only posts violating the law, such as posts promoting terrorism. Other than that, they don't care; they are just a platform. Or so you'd think. There are cases, such as Trump's, where Twitter decides to delete accounts and posts without the content being unlawful or harming the platform. (Trump is not held legally accountable for the US Capitol riot; you have every right to think he should be, but although he was impeached, the Senate did not convict him.)

    Twitter can do that, but it is dishonest given that, on the one hand, ISIS just slips through their hands, while on the other hand they feel compelled to suspend the account of the sitting president. It's weird.

  2. “Trump violated the Twitter Rules.”

    The Twitter Rules were made to prevent people from posting illegal content and from harming the platform. (By “harming the platform” I mean things like spam, bots, and phishing.) They do this quite well. But enforcing the rules anywhere beyond that always gets you into a heated debate.

    While you could argue that Twitter has every right to ban Trump for making misleading claims about the outcome of the election, these are still claims by the POTUS, and they were (and still are) a major part of the public conversation. I think Twitter should have just ignored it (more on that later).

    Also, keep in mind that with the right rhetoric it's rather easy to frame almost anyone's account as violating the rules. Not to mention the dozens of cases of clear rule violations where Twitter did not act.

For further reading, I highly recommend this thread by Navalny:

1. I think that the ban of Donald Trump on Twitter is an unacceptable act of censorship (THREAD)

— Alexey Navalny (@navalny) January 9, 2021


The second case: Amazon removing a book critical of the transgender movement. I don't agree with their decision, but I respect it. Amazon is a private company responsible for what it sells (just imagine ISIS having a book on Amazon), so it can decide what it wants to sell. Of course, Amazon went further than just removing this particular book, so in this case I agree with Paul Graham.

Amazon can (by a bit of a stretch) claim that refusing to sell a certain book is just curating their selection, not banning it. But when they prevent used booksellers from selling you copies on Abebooks, that really is banning it in the strict sense of the word.

— Paul Graham (@paulg) February 27, 2021


Although Twitter and Amazon are vastly different services, there's a noteworthy takeaway: accountability makes you more trustworthy, and thereby gives you the right to assert more control over what you do, even when what you do is running a social media platform.

Should You Ban the “Fake News”?

The problem has very interesting social implications at a time when disinformation spreads more easily than ever.

Some people and organizations suggest that social media platforms should do more to fight the disinformation spreading on their services. I don't question the motives of these organizations, nor do I deny that disinformation is spread on purpose (after all, I'm not an expert on the topic), but I think trying to “defend” the public from disinformation by simply deleting it and banning the people spreading it is counterproductive in the long term and unsustainable.

Counterproductive, because even though you stop the spread of the disinformation, you betray the trust of the people who believe it. (Most people rationally suppose that if your version of the story is right, you have good enough arguments to convince them. But when you just shut their voices down, because you need the disinformation to stop spreading now and don't have time for debate, they can rightfully think you are deliberately trying to silence them to protect your interests.)

That said, you can still fight it. Not by 'calling out' or 'debunking' people, or by simply deleting their posts, but by either engaging them in a rational debate (which I consider preferable in most cases) or just ignoring them. The long-term solution is culture and education. Noam Chomsky has a good, practical take on this matter:


Because the position of the 'censor' can (and will) be easily abused, it is unsustainable. Meaningful and sustainable censorship is still possible, though.

The solution

So we are back to the questions from the beginning: who is responsible for the platforms, and how do you set and enforce some basic rules there?

I think the problem can be solved neither by regulation nor by some kind of activism. Instead, the solution is rather pragmatic:

  1. The process of removing an account (or post) should be open to the public. Imagine an index you could search to find out why an account was removed (see the sketch after this list). I don't think banning has to be done entirely by humans, although in the case of important public figures (like politicians) I'd recommend it, but there should always be a stated reason for the ban. It doesn't have to be anything fancy: just a link to the Twitter rule that was violated. Making the process more transparent will make it more trustworthy (and open to debate).
  2. A confident and thoughtful consumer. If you really don't like what Amazon does, just don't use the service. If you don't like what Twitter does, just don't use the service. Don't tweet, don't buy. After all, the market is free (to some degree) and it's your money that changes it. Activism against these big corporations is mostly pathetic. Decide with your money and time.
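To make the first point concrete, here is a minimal sketch of what one record in such a public removal index could look like. Everything in it is hypothetical: the ModerationAction class, its field names, and the example rule link are mine, not any real Twitter API or policy format.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ModerationAction:
    """One publicly searchable record of a removed account or post (hypothetical format)."""
    account: str              # handle of the affected account
    action: str               # e.g. "post_removed" or "account_suspended"
    reviewed_by_human: bool   # True for important public figures, as suggested above
    rule_url: str             # link to the specific rule that was violated
    taken_at: datetime        # when the action was taken

# An illustrative entry; the rule URL is a placeholder, not a real citation.
example = ModerationAction(
    account="@example_user",
    action="account_suspended",
    reviewed_by_human=True,
    rule_url="https://example.com/rules/some-rule",
    taken_at=datetime(2021, 1, 9, tzinfo=timezone.utc),
)
```

Even something this simple, published and searchable, would let anyone check which rule a given removal was based on.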

I'm not saying the problem will completely disappear, but following just the two points above can mitigate it and make platforms more usable.

The alternative solution

The solution presented above was based on changing the behavior of both the platforms and their users. But what about a solution based on changing the technology behind the platforms? That's viable as well: decentralized platforms.

How does it work?

In the case of Twitter, Instagram, etc., each platform manages its own network, and the platform is closed (you cannot follow someone on Twitter from Facebook). There's only one provider for the network. This means that, for example, to get into the Twitter network, you have to use Twitter. There's no other way to engage (like, reply, and so on) with the posts inside the network.

But it doesn't have to be like that. Just as the web works everywhere because it is built on standardized protocols (HTTP), platforms can work the same way. (It's a bit more complicated in practice, given the more complex nature of social media platforms.) At the core of a decentralized platform is a shared protocol: one network that many different servers and frontends can plug into. (The most widely used protocol at the moment is ActivityPub.)

This means that while there is one network (the most popular one is called the “fediverse”), there are many ways to enter it. What one would call a platform (Mastodon, PeerTube, etc.) is just software that provides a custom UI for the network. The software runs on what is called an instance, a server that hosts it. Each instance can have a slightly different design, set its own rules, and so on. With enough technical knowledge, you can even set up your own instance.
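To make the shared-protocol idea concrete, here is a minimal sketch of how a client can look up any user on any ActivityPub-compatible instance using the same two standard steps: a WebFinger lookup, followed by fetching the ActivityPub actor document. The handle and instance name are made up, and error handling is left out.

```python
import requests

def fetch_actor(handle: str) -> dict:
    """Resolve a fediverse handle like 'user@instance.example' to its ActivityPub actor document."""
    user, instance = handle.split("@")

    # Step 1: WebFinger lookup, a standard endpoint that ActivityPub-compatible instances expose.
    webfinger = requests.get(
        f"https://{instance}/.well-known/webfinger",
        params={"resource": f"acct:{user}@{instance}"},
        timeout=10,
    ).json()

    # Step 2: pick the link that points to the actor document itself.
    actor_url = next(
        link["href"]
        for link in webfinger["links"]
        if link.get("rel") == "self" and link.get("type") == "application/activity+json"
    )

    # Step 3: fetch the actor document; any client, talking to any instance, does the same thing.
    return requests.get(
        actor_url, headers={"Accept": "application/activity+json"}, timeout=10
    ).json()

# Hypothetical handle; Mastodon, PeerTube, and Pixelfed accounts all resolve the same way.
# actor = fetch_actor("someone@mastodon.example")
# print(actor["preferredUsername"], actor["inbox"], actor["outbox"])
```

Because the lookup depends only on the protocol, not on any one company's API, it works the same whether the account lives on a big public instance or on one you host yourself.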

A graph of an interconnected fediverse (source: Wikipedia)

There are some decentralized platforms already. The most popular ones include Mastodon, PeerTube, Steemit, Minds, Pixelfed, and others. (a list of the ActivityPub-based ones)

These are just the basics; there's a lot more to it if you want to look it up.

The benefits

  1. Each instance sets its own rules.

    This is great because it means instances can adhere to their morals (however they define them), and the network is still open to different voices via different instances. Everyone is happy and no one is being silenced.

  2. The network is durable.

    Let's say a terrorist organization is strongly present on a particular instance. Law enforcement can just shut down that instance, but the network lives on. Decentralized networks are much more durable.

  3. They don't rely on massive data collection.

    Free centralized platforms require massive data collection to keep their attention-based business models functioning. Decentralized platforms don't!

  4. Mostly open-source

    You know what’s really going on in the background. (For some examples, look here.)

So what?

Sadly, these platforms remain unknown to the general public. I think two things must happen to make them mainstream, or at least known.

First, there must be at least one big instance that can financially support itself (ideally through donations and a freemium business model) and is user-friendly enough. Second, the platform simply has to become known via existing platforms (or go viral, as they say).

You might now rightfully ask why I am still using Twitter instead of being present in the fediverse. Well, I don't really care that much about social media altogether. Most of the people I follow are exclusively on Twitter, and the fediverse is mostly just weebs and the extreme right/left (it varies from instance to instance; some instances might actually be quite interesting), which is nothing I'm really interested in.

How have I dealt with social media?

I use an adblocker everywhere (Brave browser, or uBlock Origin on Firefox, sometimes DNS-level blocking) and third-party clients for social media (or RSS feeds), because I (1) don't like being spied on, (2) find ads and tracking extremely annoying, and (3) believe that the internet would be a better place without ads (at least in their current form).
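As an example of the RSS approach, here is a minimal sketch of pulling the latest posts from a feed instead of opening the platform itself. It uses the third-party feedparser library, and the feed URL is just a placeholder.

```python
import feedparser  # third-party library: pip install feedparser

# Placeholder URL; substitute any site's or account's RSS/Atom feed.
feed = feedparser.parse("https://example.com/feed.xml")

# Print the ten most recent entries without loading the site's ads or tracking scripts.
for entry in feed.entries[:10]:
    print(entry.title, "-", entry.link)
```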

Also, I wanted independence, so I made this website. Domains are cheap, hosting is basically free for small sites (I get 100 GB/month of bandwidth for free with Netlify), and this HTTP network (the web) is already decentralized and used by quite a lot of people.
