So often, policing speech on the Internet is a fine balancing act. You can’t just let people do whatever they want, because as any unmoderated comment section shows, the forum will rapidly turn into an open sewer. However, going too far can endanger the sort of open discourse that, ideally, you want to engender.
So, the news that Google, Facebook, and possibly others are moving toward adapting the same automation that watches for copyright violations to look for hate speech is a bit worrisome on the face of it. After all, where this automation is used on YouTube, it has a history of causing problems for content creators: it sends takedown notices for videos that use excerpts that should be protected by fair use, and the process for requesting a human review of these takedowns tends to be arcane and time-consuming. This has led to a movement on the part of YouTube producers to demand copyright reforms that make it easier for their videos to stay up. (At the same time, a number of old-media producers insist that the current state of copyright doesn’t go far enough to remove illicitly posted content from YouTube, so it’s clear this is an issue that’s going to take some debate and discussion.)
Will the same automation cause problems for people whose opinions are improperly labeled hate speech? Will there be a means for people who feel they were censored unjustly to request a human review of their removal? Will these automated removal methods be capable of taking context into consideration before deciding whether to remove a post? Will there be any human oversight of this removal at all?
Of course, plenty of automated systems for regulating speech have been in use for some time, though not in quite the same context. I use two different spam filters on my inbox, and yet plenty of spam still makes it through. Meanwhile, some emails I actually did want to see wind up in the spam folder instead.
On the one hand, by the traditional definition you can’t really call it “censorship” if it wasn’t done at the government’s behest. Private and corporate institutions have the right to decide what speech they do and do not wish to permit on their own platforms. The First Amendment only restricts the government from regulating speech; it doesn’t stop businesses from regulating speech in their own facilities (though it does also come into play in some civil lawsuits).
But on the other hand, when privately owned platforms become so ubiquitous that they effectively become the new venue for public discourse, private restrictions on those platforms can take on some of the same worrisome qualities as government censorship of old media. Few would argue in favor of allowing Facebook or YouTube to become a cesspool (more than they already are, at least), but if nothing else, it’s worth reflecting that there is always the danger of sliding down a slippery slope toward prohibiting any speech the owners or shareholders don’t like. It doesn’t necessarily have to happen, but it could. What if Facebook were to decide you couldn’t talk about the latest thing Mark Zuckerberg did?
No matter how you feel about hate speech, this is an issue that’s worth keeping an eye on.
(Found via The Verge.)
(Photo by John S. Quarterman, used under a Creative Commons Attribution 2.0 Generic license.)
For example, Amazon discussion boards already have robot moderation. There are some words you can’t use (though Scunthorpe is OK), so I have sometimes had to add an * and even alter URLs to shorter ones with no extraneous wordage.
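The “Scunthorpe is OK” aside points at a well-known pitfall of this kind of word-blocklist moderation: a filter that does naive substring matching will flag innocent words that merely happen to contain a blocked term, while one that matches only whole words will not. Here is a minimal Python sketch of the difference; the blocklist entry and sample text are invented for illustration and say nothing about how Amazon’s filter actually works.

```python
import re

# Toy blocklist for illustration only; real moderation lists are not public.
BLOCKED = {"ass"}

def substring_filter(text: str) -> bool:
    """Naive approach: flag any post containing a blocked term as a substring.
    This is what produces "Scunthorpe problem" false positives."""
    lowered = text.lower()
    return any(word in lowered for word in BLOCKED)

def whole_word_filter(text: str) -> bool:
    """Flag only standalone occurrences of a blocked term, so innocent words
    that merely contain it (like "classic") pass through."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return any(token in BLOCKED for token in tokens)

print(substring_filter("a classic example"))   # True  -- false positive
print(whole_word_filter("a classic example"))  # False -- passes
```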