
Facebook Is Too Big to Be Controlled

Once again, Mark Zuckerberg was on Capitol Hill to defend his company.
The Facebook founder was grilled Wednesday by members of the House Financial Services Committee in a wide-ranging hearing that touched on election interference, the tech behemoth’s planned cryptocurrency, and misleading political ads. It’s that last issue, whether to let politicians keep paying for ads with misleading or false content, that has proven to be the most pressing controversy for Facebook in recent weeks.
“Do you see a potential problem here with a complete lack of fact-checking of political advertisements?” Rep. Alexandria Ocasio-Cortez asked, in an exchange that quickly went viral.
Ocasio-Cortez’s questioning points to a serious dilemma, one that leaves Facebook in a terrible spot no matter what the tech giant decides: Should it take down more content at the risk of damaging free speech, or should it play a hands-off role, even if doing so allows nefarious posts to flourish? (I generally side with free expression, even if Zuckerberg is being disingenuous.)
But such an argument ultimately misses the point. The real issue is not how Facebook should moderate its content; it’s that Facebook is too big to fix in the first place.
First, take the idea of fact-checking political advertisements, a proposal that’s been endorsed by the likes of Elizabeth Warren and other Democratic presidential candidates. While the idea sounds laudable on the surface, it would be near-impossible to implement. Stories about the Pope endorsing Trump are easy to spot as fictitious, but it’s the shades-of-grey cases, the misleading claims, half-truths, and subtle lies, that dominate political discourse on an almost unimaginable scale. As writer Julian Sanchez noted on Twitter, Facebook would need to police “facts” in countless local elections, and in hundreds of different languages around the world. That’s an impossible responsibility to place on the shoulders of an army of contractors.
Just look at the recent controversies over fact-checking at the Washington Post, a paper whose primary checker, Glenn Kessler, has staged a bizarre, years-long campaign against true statements made by Bernie Sanders. Fact-checking not only takes significantly more effort than spotting hate speech or pornography, but it can easily be manipulated to favor the ideological leanings of the people assigned to do it. And fact-checkers at Facebook could just as easily disfavor progressives: One of Facebook’s biggest investors, after all, is Trump advisor Peter Thiel, who has hired many GOP operatives in recent years.
Yet the alternative is still terrifying: allowing politicians to serve ads with potentially blatant lies to millions of people in an effort to tilt an election. It’s a depressing situation no matter which way Facebook turns.
An increasing number of commentators have proposed an altogether different accountability mechanism for Facebook: repealing Section 230 of the Communications Decency Act, which shields tech companies from civil liability for most of the content their users post. But this “solution” is precarious as well. Its advocates claim that stripping the immunity would hold Facebook accountable for the propaganda, fake news, and right-wing hate on the site, but it wouldn’t actually solve the problem. If anything, it would threaten the legitimate free speech rights of many of the same progressive activists who propose it.
Whether people like it or not, deplorable speech of a variety of types (yes, even many lies) is protected by the First Amendment. Facebook isn’t breaking any laws by hosting it. If toxic speech is the issue that people want the government to step in on, it’s much more of a constitutional problem than a regulatory one.

Soon Facebook will implement a supposedly independent oversight board, a sort of “Supreme Court of Facebook,” meant to bring further transparency and accountability to its content moderation decisions. The idea is getting positive reviews, and it’s a laudable goal in theory. But this board, too, will run into the same intractable problems; after all, the problems at Facebook are existential, not bureaucratic.
What if the board makes a decision that would require Facebook to hire 100,000 new fact-checkers? Will Facebook ignore the ruling, leaving the board useless? What if the board decides that a post by an activist in India must stay up, infuriating the Indian government to the point that it blocks Facebook entirely? And what happens when a hate-filled post is ruled acceptable, creating a precedent that drives users off the site in droves?
Content moderation at a global scale is impossible. Even if one of Facebook’s decisions has 99 percent support, that still leaves tens of millions of people who can loudly express their displeasure. Yet social media without any content moderation devolves into a cesspool, one that would make any social network virtually unusable to the vast majority of the population.
Facebook is just too enormous for content moderation at a global scale to work, and we’re all going to suffer for it in the end.
