On January 8, 2021, Twitter permanently suspended Donald Trump’s account, joining Facebook, Instagram, and Twitch in censoring the President. Many prominent voices called this a dangerous encroachment on freedom of speech, sometimes drawing comparisons to the Chinese government censoring its people. Having operated communities of millions of users, I believe Twitter’s biggest failure was not applying its rules consistently to all users, which enabled abuses to grow in magnitude and eventually required the drastic response of a permanent suspension. Further, a social platform that does not censor, where complete freedom of speech is guaranteed, is an idealistic vision, but one with questionable viability that is likely unwanted in practice.
I’ll start with the basics: the First Amendment right to freedom of speech prohibits the government from limiting that speech; it does not require citizens or companies to provide the same freedom. When a person or company shuts down discussion from someone on their property or platform, that person or company is exercising their own freedom of speech. For the most part, nobody has an obligation to let someone else use their property so that the other person can exercise freedom of speech.
But just because companies have the right to censor people, should they? This is a more complicated question. In theory, I want unlimited free speech, a world in which censorship doesn’t happen, because inevitably those in power, the censors, control access to ideas and information and will likely support their preferred narrative. In practice, I’ve learned that a lack of moderation will likely destroy a platform, and that moderation (a softer way to say “censorship”) is actually desired by communities, both online and in society in general.
Moderation is Necessary
Many platforms on the Internet start open and free and eventually become moderated, and a strong driver for that moderation is that abuse of the open platform destroys its value for others. Email started off great, with an inbox filled with relevant communications, and eventually degraded to a signal-to-noise ratio of about 1:150, with fake Viagra and Nigerian princes rendering email nearly useless until filtering (moderation) eliminated SPAM. Message boards and social networks become unusable when SPAM and bots infiltrate, so in addition to community moderation, there is an ongoing, continually escalating battle to validate real users vs. bots. Even friendly actors can destroy a platform – when games were popular on Facebook and developers were heavily exploiting the feed for viral growth (hey, Zynga), the real social value declined as a majority of updates were about cows from your friend’s farm, and Facebook built tools to limit this game SPAM. There is always value in exploiting these open systems to the detriment of the other users, so abuse is the natural outcome.
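To make the filtering point concrete, here is a toy keyword-weighted spam scorer in Python. The keywords, weights, and threshold are invented for illustration; real mail providers use far more sophisticated statistical and reputation-based systems, but the principle of restoring signal-to-noise by scoring and discarding likely abuse is the same:

```python
# Toy spam filter: score a message by weighted hits on known spam terms.
# The terms, weights, and threshold below are illustrative assumptions.
SPAM_WEIGHTS = {"viagra": 3.0, "prince": 2.0, "wire transfer": 2.5, "winner": 1.5}
THRESHOLD = 2.0

def spam_score(message: str) -> float:
    """Sum the weights of every spam term found in the message."""
    text = message.lower()
    return sum(weight for term, weight in SPAM_WEIGHTS.items() if term in text)

def is_spam(message: str) -> bool:
    """Flag a message once its cumulative score crosses the threshold."""
    return spam_score(message) >= THRESHOLD

# A filtered inbox keeps only the messages that fall below the threshold.
inbox = [
    "Lunch at noon?",
    "You are a WINNER! A Nigerian prince needs a wire transfer.",
]
kept = [m for m in inbox if not is_spam(m)]
```

Even this crude version shows why platforms converge on moderation: a few weighted rules are enough to pull the legitimate message out of the noise, and the real arms race is spammers adapting to whatever rules exist.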
This community desire for moderation, whether explicit or implicit, isn’t unique to online communities; we see it every day in society. No matter how much freedom we want for everyone, if somebody is singing in a theater during a movie, we want them to shut up or leave. We support one’s right to share their ideas, but if they are on a bullhorn outside of our house at 4:30 AM, we want them to go away. We set our own rules for private property and have laws for public property to support this moderation.
So when Twitter took action against Trump’s accounts, this was Twitter finally enforcing its policies on a user who had consistently abused the rules it established for its platform. It finally said, “like all other users, you can’t use the bullhorn at 4:30 AM either.” I am a strong supporter of our elected officials being held to the same rules that apply to regular citizens, especially since they are often the ones imposing these rules on the citizens (anyone who has been subject to a COVID shelter-in-place lockdown only to see their elected officials dining indoors or traveling the world understands the rage-inducing hypocrisy). The editorial decision Twitter made was not the suspension of Trump’s account; it was years and years of allowing him to violate the terms set for the platform, allowing a slow progression until it eventually became a tool for organizing an attack on our government. It is impossible to know what would have happened if Twitter had enforced its policies consistently years ago, but generally problems are easier to manage when you address them early instead of letting them grow in magnitude and force.
Creating a Platform Without Censorship is Difficult
But won’t censoring just drive these users to build another, more powerful network, or to hidden communities where they can’t be reached? Maybe, but it isn’t that simple. A large, functional community requires the support of many companies that are effectively gatekeepers, and they have restrictions on abuses of their platforms. If you want mobile apps, you need Apple’s and Google’s platforms. If you decide to be web only, you still need hosting for your servers, a CDN (how content is cached and distributed at scale), and DDoS (distributed denial of service, when people kill your servers by flooding them with traffic) attack protection from companies like Microsoft, Google, Amazon, Akamai, and Cloudflare. Cloudflare is a great example of a company that has shown extreme and sometimes controversial opposition to censoring any site (even some pretty horrible ones), but it eventually shut down protection for a site that was organizing and celebrating the massacre of people. Each of these platforms has the ability to greatly limit the viability of a service it believes is abusive, which is exactly what happened to Parler when Apple and Google determined its lack of moderation was unacceptable. There are other possible technology solutions, like decentralized networks, that might reduce the dependency on these other platforms, but this isn’t just a technology problem.
Beyond technology requirements, what about the financial viability of a completely open platform? Monetization introduces another set of gatekeepers, from payment processors to advertisers to legal compliance. While there will always be some level of advertiser willing to place ads anywhere (yes, dick pills for the most part), most major advertisers don’t want to be associated with content considered so abusive that no major platform wants the liability of supporting it. Depending on the activities on the site, banks can be prevented from providing services to the platform, and even with legal but edgy content (e.g. porn), a huge cut goes to payment processors because they take on risk in handling the money. Crypto can provide some options, but it is largely not understood by the average user, and, depending on the content of the site, there can be legal requirements to KYC (know your customer) and liability for profiting from the utility of the site if the content is illegal. There are potential solutions for each of these, but it gets increasingly difficult to achieve any scale.
Building on the dark web is a possibility, although it is still vulnerable to many of the same platform needs at scale. The dark web is also the worst dark alley of the Internet, difficult to discover and navigate, and the lack of moderation would mean many abuses, from honeypots (fake sites likely set up by law enforcement as an easy way to track suspicious behavior) to scams and exploits preying on the average user who doesn’t understand the cave they’ve wandered into.
So while Trump certainly has a large base of followers and the financial resources (well, maybe) to have one of the best chances of being a catalyst for a new platform, there are many forces outside of that platform’s control that challenge its viability.
So, What’s Next?
If I had to guess, a few of the “alternative” networks will make a land grab for the users upset by the Presidential bans. An echo chamber where everyone holds the same beliefs may not provide the dopamine response of a network with extreme conflict, so it may prove less interesting to those users. I also assume the environment is ripe for people to go after the next big thing: decentralized networks not subject to oversight. Ultimately, societal norms will likely limit the scale and viability of these networks, and those limitations will likely be proportional to the lack of moderation.
So, all we have to do is ensure societal norms reinforce individual liberty while not enabling atrocities on humanity. It’s that simple. 😟
Update: in the 10 hours since I wrote this, AWS (Amazon’s web hosting) decided to remove Parler from their service, which will likely take the site offline for at least several days.
Update January 10, 2021: Dave Troy (@davetroy) published a Twitter thread with the challenges specific to Parler, with details about their lack of platform options.