The Future of AI Censorship

Artificial intelligence is among the emerging technologies rapidly changing the world. The largest internet companies, including Facebook, Twitter, Google and Google-owned YouTube, are now adopting AI methods to censor certain types of content. Where is all this leading?

These platforms already use AI widely to proactively detect, suppress or remove terrorism-related content and other material tied to violent extremism, even without human intervention. YouTube’s official blog claimed in October 2017 that “over 83 percent of the videos we removed for violent extremism in the last month were taken down before receiving a single human flag, up 8 percentage points since August.” Similarly, Facebook has said that its AI technology can take down 99 percent of ISIS and Al Qaeda terrorist content before any users flag it.
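To make the proactive-takedown pattern concrete, here is a minimal sketch: a text classifier scores each new post, and anything above a removal threshold comes down before a single user flags it. The training examples, thresholds and routing rules below are hypothetical illustrations, not any platform’s actual system.

```python
# Toy sketch of proactive moderation: a classifier scores posts and acts
# before any human flag arrives. All data and thresholds are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled examples (1 = violates policy, 0 = acceptable).
posts = [
    "join our violent cause today",
    "recruiting fighters for the attack",
    "lovely weather for a picnic",
    "check out my new recipe blog",
]
labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()
clf = LogisticRegression().fit(vectorizer.fit_transform(posts), labels)

REMOVAL_THRESHOLD = 0.9  # assumed: act alone only on high-confidence scores
REVIEW_THRESHOLD = 0.5   # assumed: anything ambiguous goes to a person

def moderate(post: str) -> str:
    """Score a post and decide its fate without waiting for user flags."""
    score = clf.predict_proba(vectorizer.transform([post]))[0][1]
    if score >= REMOVAL_THRESHOLD:
        return "removed proactively"      # no human flag needed
    if score >= REVIEW_THRESHOLD:
        return "queued for human review"
    return "published"

print(moderate("recruiting fighters for the attack"))
```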

Testifying before Congress during the recent hearings on personal data, Facebook CEO Mark Zuckerberg told Rep. Brooks that the company’s AI censorship technology could serve as a model for other types of content besides terrorist activity. But when Rep. Richard Hudson asked how Facebook could distinguish hate speech from merely contentious speech, Zuckerberg said, “That is something that we struggle with continuously. It’s nuanced.”

Twitter is addressing this nuanced problem by putting IBM’s Watson to work. “Watson is really good at understanding nuances in language and intention,” a Twitter VP said. “What we want to do is be able to identify abuse patterns early and stop this behavior before it starts.” The company is using Watson’s Tone Analyzer to flag “abusive language,” though it’s unclear what that actually means in practice.
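For a sense of what that could look like in code, here is a minimal sketch using IBM’s ibm_watson Python SDK for Tone Analyzer. The API key, service URL, anger threshold and the very idea of equating a high “anger” score with abuse are all assumptions; Twitter has not published its criteria.

```python
# Sketch: flag text as "abusive" if Watson's Tone Analyzer reports a strong
# anger tone. Credentials, threshold and the abuse heuristic are assumptions;
# this is not Twitter's actual logic.
from ibm_watson import ToneAnalyzerV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("your-api-key")  # placeholder credential
tone_analyzer = ToneAnalyzerV3(version="2017-09-21", authenticator=authenticator)
tone_analyzer.set_service_url(
    "https://api.us-south.tone-analyzer.watson.cloud.ibm.com"  # placeholder region
)

ANGER_THRESHOLD = 0.75  # hypothetical cutoff

def looks_abusive(text: str) -> bool:
    """Return True if the document-level anger tone exceeds the threshold."""
    result = tone_analyzer.tone(
        {"text": text}, content_type="application/json"
    ).get_result()
    return any(
        tone["tone_id"] == "anger" and tone["score"] >= ANGER_THRESHOLD
        for tone in result["document_tone"]["tones"]
    )
```

Even granting the API, the hard part is the policy around it: a single tone score says nothing about context, sarcasm or quotation, which is exactly the nuance Zuckerberg conceded Facebook struggles with.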

Both sides of the political divide are already feeling the effects of overzealous or biased censorship. On the left, liberals have complained about Google’s suppression of sites like Alternet, Democracy Now!, Truth-Out and Counterpunch.

Conservative critics have complained as well. Sen. Ted Cruz demanded an answer from Zuckerberg about suspected liberal censors at Facebook suppressing content “including stories about CPAC, including stories about Mitt Romney, including stories about the Lois Lerner IRS scandal, including stories about Glenn Beck,” blocking posts from a Fox News reporter and taking down over two dozen Catholic pages.

Censorship on these platforms, whether carried out by humans or AI, can take the subtle form of deprioritizing or hiding certain content. More severe measures include account suspensions and content takedowns, often explained by nothing more than a form letter citing “community standards” in generic language.
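Deprioritization is the easiest of these measures to sketch: rather than deleting a post, the ranking pipeline simply multiplies its score by a demotion factor, so the post still exists but rarely surfaces. The factor and feed structure below are illustrative assumptions.

```python
# Sketch of "soft" censorship by deprioritization: demoted posts are not
# removed, just scored so low they effectively disappear from the feed.
DEMOTION_FACTOR = 0.1  # assumed: push borderline content far down the ranking

def rank_score(engagement: float, flagged_borderline: bool) -> float:
    """Feed-ranking score; borderline posts are hidden rather than removed."""
    return engagement * DEMOTION_FACTOR if flagged_borderline else engagement

feed = [
    {"id": 1, "engagement": 0.9, "borderline": True},
    {"id": 2, "engagement": 0.6, "borderline": False},
]
feed.sort(key=lambda p: rank_score(p["engagement"], p["borderline"]), reverse=True)
print([p["id"] for p in feed])  # post 2 now outranks the demoted post 1
```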

Can these companies be held liable for implementing overzealous, unmonitored or biased AI censors? That’s an open question, and in the past, they’ve attempted to blame the algorithm.

AI remains very far from understanding the subtleties of human discourse, let alone resolving conflicts equitably, so keeping humans in the loop will be crucial to maintaining balance and control. If people and their expression are to be treated fairly, these platforms must continue to employ large numbers of human reviewers to evaluate that expression.
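One plausible shape for that human-in-the-loop process is a simple triage rule: let the model act alone only at the extremes of its confidence range, and send everything ambiguous to a person. The thresholds below are assumptions for illustration.

```python
# Sketch of human-in-the-loop triage: the model decides only when nearly
# certain either way; nuanced middle-ground cases go to human reviewers.
from queue import Queue

human_review_queue: Queue = Queue()

AUTO_REMOVE = 0.95  # assumed: remove automatically only when nearly certain
AUTO_ALLOW = 0.05   # assumed: publish automatically only when nearly certain

def triage(post_id: str, violation_probability: float) -> str:
    if violation_probability >= AUTO_REMOVE:
        return "removed automatically"
    if violation_probability <= AUTO_ALLOW:
        return "published automatically"
    human_review_queue.put(post_id)  # ambiguous cases stay with humans
    return "sent to human review"

print(triage("post-123", 0.6))  # lands in the human review queue
```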