by Lil Tuttle

Think Twitter is the new Wild West of complete free speech? Think again.  James O’Keefe’s Project Veritas has released a new video exposé of social media “shadowbanning” that is well worth the few minutes it takes to watch.

Twitter claims to “give everyone the power to create and share ideas and information instantly, without barriers.” In reality, however, Twitter kicks out speech, ideas, and symbols its engineers don’t like, can’t relate to, or don’t agree with, and it promotes speech, ideas, and symbols its engineers do like, relate to, or agree with.

The practice isn’t obvious. Twitter doesn’t delete or block accounts.  Users notice when their accounts are deleted or blocked.  Users don’t notice, however, when their accounts are shadowbanned.

Shadowbanning

In the video, former Twitter software engineer Abhinav Vadrevu explains how shadowbanning works:

One strategy is to shadow ban so that you have ultimate control. The idea of a shadow ban is that you ban someone but they don’t know they’ve been banned, because they keep posting, but no one sees their content.  So they just think that no one is engaging with their content, when in reality, no one is seeing it.

Shadowbanned users’ tweets still appear to their followers, explains the Project Veritas video, but the tweets don’t show up in search results or anywhere else on Twitter.
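
For readers who want a concrete picture of that behavior, here is a minimal Python sketch, assuming hypothetical data structures and account names; the video describes only the effect, not Twitter’s actual code.

```python
# Hypothetical sketch of the behavior described above: a shadowbanned user's
# tweets still reach followers, but are silently dropped from search.
# All names (Tweet, shadowbanned, etc.) are illustrative, not Twitter's code.

from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    text: str

shadowbanned = {"example_user"}          # accounts flagged behind the scenes

def follower_timeline(tweets, followed_accounts):
    # Followers still see the content, so the ban stays invisible to the author.
    return [t for t in tweets if t.author in followed_accounts]

def search_results(tweets, query):
    # Search (and other discovery surfaces) quietly exclude shadowbanned authors.
    return [
        t for t in tweets
        if query.lower() in t.text.lower() and t.author not in shadowbanned
    ]

tweets = [Tweet("example_user", "Hello world"), Tweet("other_user", "Hello there")]
print(follower_timeline(tweets, {"example_user"}))   # tweet still visible to followers
print(search_results(tweets, "hello"))               # only other_user's tweet appears
```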

Algorithms created to block, mute, or prioritize tweets contain filters that decide, for example, which tweets are kicked out of the stream that the rest of the world sees. In the video, Pranay Singh, a Twitter direct messaging engineer, explains:

Just go to a random (Trump) tweet and just look at the followers. They’ll all be, like, guns, God, ‘Merica, like, and with the American flag and, like, the cross. Something. Like who says that?  Who talks like that? It’s for sure a bot.

You just delete them, but, like, the problem is there are hundreds of thousands of them, so you go to, like, write algorithms that do it for you.

You look for Trump, or America, or any of, like, five thousand, like, key words to describe a redneck. And then you look and you, like, parse all the messages, all, like, the pictures, and then you look for, like, stuff that matches, like, that stuff. And … so you, like, assign a value to each thing, so like Trump would be, like, .5, a picture of a gun would be, like, 1.5, and … if it comes up … the total comes up above, like, a certain value, then it’s a bot.
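
Singh’s description amounts to a simple weighted keyword score with a cutoff. Below is a hedged Python sketch of that idea, using the 0.5 and 1.5 example weights from the quote; the remaining keywords, image weights, and the threshold itself are assumptions added for illustration, not anything disclosed in the video.

```python
# Illustrative sketch of the scoring scheme Singh describes: each keyword or
# image match adds a weight, and an account whose total crosses a threshold
# is flagged as a "bot." Only the 0.5 (Trump) and 1.5 (gun picture) weights
# come from the quote; everything else is assumed. This is not Twitter's code.

KEYWORD_WEIGHTS = {
    "trump": 0.5,
    "america": 0.5,   # assumed weight, not stated in the video
    "god": 0.5,       # assumed weight, not stated in the video
    "guns": 0.5,      # assumed weight, not stated in the video
}
IMAGE_WEIGHTS = {
    "gun": 1.5,
    "american_flag": 1.0,   # assumed weight, not stated in the video
    "cross": 1.0,           # assumed weight, not stated in the video
}
BOT_THRESHOLD = 2.0         # "a certain value" -- assumed for illustration

def bot_score(messages, image_tags):
    score = 0.0
    for message in messages:
        text = message.lower()
        for keyword, weight in KEYWORD_WEIGHTS.items():
            if keyword in text:
                score += weight
    for tag in image_tags:
        score += IMAGE_WEIGHTS.get(tag, 0.0)
    return score

def is_flagged_as_bot(messages, image_tags):
    return bot_score(messages, image_tags) >= BOT_THRESHOLD

# Example: a profile posting "Trump loves America" with a gun photo scores
# 0.5 + 0.5 + 1.5 = 2.5 and would be flagged under these assumed settings.
print(is_flagged_as_bot(["Trump loves America"], ["gun"]))  # True
```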

Maybe none of Twitter’s engineers have ever encountered a real person who actually speaks positively about Trump, guns, God, and America. Or they’ve never met a person who embraces the American flag or the cross as a positive symbol.  If so, their bigotry in assuming such speech is produced by robots might be innocent.  But that’s highly unlikely, especially if Twitter engineers are seeing—and obliterating—“hundreds of thousands” of such tweets in myriad forms.

“Digital Heroin Hits”

So what’s really going on? How about a new kind of censorship that uses addiction psychology?

Social media networks, for the vast majority of those who populate them, offer a new system of chemically induced meaning currency. Popularity, social stature, and most importantly, self-worth are defined by likes and badges.  In such an economy, those who control the pipeline control meaning and worth itself.

Among Millennials and younger generations, social media engagement is a status maker. Popularity among peers is measured by social media engagement.  “Likes,” retweets and badges offer the delivery and proof of status…  As a result, teens will often erase posts that don’t get enough “likes.” …

But this Attention Economy is driven by something chemical: dopamine. We have seen the rise of a dopamine-based meaning economy where shots of dopamine—delivered digitally via likes, retweets, hearts, and badges—become conflated with meaning itself.  Facebook’s founding president, Sean Parker, confirmed that this was intentional:

“The thought process that went into building these applications, Facebook being the first of them, … was all about: ‘How do we consume as much of your time and conscious attention as possible?’ And that means that we need to sort of give you a little dopamine hit every once in a while, because someone liked or commented on a photo or a post or whatever. And that’s going to get you to contribute more content, and that’s going to get you … more likes and comments.  It’s a social-validation feedback loop … exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.  The inventors, creators—it’s me, it’s Mark [Zuckerberg], it’s Kevin Systrom on Instagram, it’s all of these people—understood this consciously.  And we did it anyway.”

In choking off the currency of engagement via shadowbans, Twitter ensures that people begin to censor themselves in an attempt to feed their addiction and garner more social media engagement…

If conservative social media users value “digital heroin hits” more than freedom of speech and expression, they’ll quickly learn to keep positive thoughts about “Trump, guns, God, and America” (or countless other conservative issues) to themselves. In short, they will self-censor their ideas and surrender the public debate to their ideological opponents.

What a victory that would be for their opponents!


Update:  Angelo M. Codevilla, professor emeritus of international relations at Boston University, delves more deeply into corporate and social media free speech abuses:

Facebook and Twitter have become the overwhelming medium of communication among Americans, especially the younger generations, and have instituted algorithms that allow for the suppression of opinions with which their highly opinionated management disagrees.

The social media giant’s first line of defense, that it isn’t “shadowbanning,” is actually an admission. Twitter’s argument: “We do take actions to downrank accounts that are abusive, and mark them accordingly so people can still click through and see this information if they so choose… This makes content less visible on Twitter, in search results, replies, and on timelines. Limiting tweet visibility depends on a number of signals about the nature of the interaction and the quality of the content.”
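
Read plainly, Twitter’s statement describes scoring content by “signals” and reducing its visibility rather than deleting it. The Python sketch below illustrates that general idea only; the signal names, weights, and cutoffs are invented for illustration and are not drawn from Twitter’s statement.

```python
# Hedged sketch of "downranking" as Twitter's statement describes it: content
# is not deleted, but a score built from interaction/quality signals decides
# how visible it is in search, replies, and timelines. The signal names,
# weights, and thresholds below are assumptions for illustration only.

SIGNAL_WEIGHTS = {
    "reported_by_users": -1.0,          # hypothetical negative signal
    "new_account": -0.5,                # hypothetical negative signal
    "verified_email": 0.5,              # hypothetical positive signal
    "established_follower_graph": 0.5,  # hypothetical positive signal
}

def visibility_score(signals):
    # signals: dict mapping signal name -> bool (whether the signal fired)
    return sum(weight for name, weight in SIGNAL_WEIGHTS.items() if signals.get(name))

def placement(signals):
    score = visibility_score(signals)
    if score >= 0.5:
        return "shown normally"
    if score >= -0.5:
        return "ranked lower; tucked behind 'show more replies'"
    return "excluded from search and timelines (still reachable by direct link)"

print(placement({"reported_by_users": True, "new_account": True}))
# -> excluded from search and timelines (still reachable by direct link)
```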

That’s as good a definition of “shadowbanning” as one might want. Banning what? Objectionable to whom? A glance at Google’s corporate prejudices suffices to describe what the U.S. corporate mind tolerates and does not.

But social media and other corporations are private, not “state actors,” right? Yes. But they trammel our First Amendment rights nevertheless, even more than “state actors” do. What then is to be done about that?

Read on in “Can the First Amendment Protect Us from the Ruling Class?” to consider the two solutions he offers.