Tech Leopards Have Tech Spots

Content moderation was always a bit of a bluff to appease regulators and a baying public. Mark Zuckerberg moved ahead of changing winds.

History is an unbroken chain of great bluffs. Hannibal hid his army at Lake Trasimene, Scipio baited the war elephants at Zama, Arminius trapped Rome's legions in the Teutoburg Forest - two thousand years later, cardboard Spitfires were fooling German bombers in the south of England. Deception is the norm, so it's little wonder that our relatively recent transition from war economies to (mostly) peace economies shifted the trickery from generals to CEOs. Big tobacco, sugar and lead were infamously built with lies and obfuscation, and your favourite politicians are being lobbied by corporate interests right now (probably). Silicon Valley is no stranger to such skulduggery economics.

Some of you might remember when there were many search engines, all of them obvious advertising platforms. You couldn't start a web browsing session back then without being so bombarded by adverts and links that the search box could be difficult to see. Google changed that by making its search page not just entirely ad-free but entirely content-free - today, the Google homepage is little more than a crisp, white background with a search box in the middle. It is almost perfect design, and it's probably the main reason we all switched to Google twenty-odd years ago.

But aesthetics and usability aside, nobody now considers Google to be anything other than an advertising juggernaut. We all know that ads are how the search behemoth makes its money, and we all know that harvesting massive amounts of data is how it makes the ads work. Google managed to obfuscate that with clean design for many years - and other tech companies followed - but all curtains must drop eventually. It shouldn't have come as any surprise last month when Mark Zuckerberg ripped off his "Zuck" mask to reveal that he's actually been Mark Zuckerberg all along.

Allow me to step back in time. When "Zuck" built Facebook in 2004, it wasn't a new idea - there were competitors and incumbents - but through his intellect, nous and aggression - and a partnership with Peter Thiel - Facebook's rise was meteoric. Zuckerberg loudly boasted that he was connecting the world - and he did, frankly - while he also quietly harvested user data and sold advertising. So far, so tech-normative. But the seeds of criticism had already been sown. Eli Pariser coined the term "filter bubble" in 2010 to describe how tech giants were constraining the information each individual saw. Jaron Lanier wrote Who Owns the Future in 2013 (a must-read). And Shoshana Zuboff started her work on surveillance capitalism in 2014.

These criticisms took a while to break into mainstream public consciousness, but in 2018, British authorities raided the offices of a political Bond-villain firm called Cambridge Analytica, which had acquired (easily!) the personal data of 87 million Facebook users. Tech was suddenly in the dock, and Zuckerberg was hauled before the US Senate for a grilling that, in the event, backfired and revealed just how little the senators knew about technology companies.

Somehow, in the years that followed, the public debate cooled on data harvesting and privacy - which have solutions detrimental to technology companies (did I mention your favourite politicians are being lobbied?) - and moved on to what technology companies would do about harmful content on their platforms. The answer, to anyone who works in tech, was obvious: nothing comprehensive. It takes just a few seconds to write the most horrific harmful content, but much longer for it to be seen, reported, assessed and taken down. And that's just human-generated harmful content. I could whip up bot accounts to cause harm en masse with a tiny amount of code, and there are hundreds of thousands of people with my skill set. Harmful content is a tidal wave, and moderation can only tackle the absolute worst of it - the "lesser" bulk runs riot.

That didn't stop tech platforms from hiring content moderators, paying them too little, and claiming to be taking corporate responsibility. If you've used much social media over the past few years, you'll know plenty of harmful content still gets through. It boggles my mind that this corporate-responsibility bluff held for so long and kept regulators at bay. But it couldn't hold forever, of course.

Reports of working conditions among moderators were shocking - these workers have to sift through the most vile material (the worst harmful content would make anyone puke) and it takes an emotional toll - and the sheer volume of misinformation around the pandemic made the shortcomings plain to see. I'd wager that tech bosses have been looking for a way out of their content moderation ruse for some time, and the anti-DEI changes of the Trump presidency have given it to them.

I think Mark Zuckerberg took the opportunity to suffer the lesser backlash of ending content moderation in line with politics before suffering the bigger shock of seeing the ruse crumble uncontrollably. It's just business.

Big tech has never been about connecting the world - it's about harvesting data and publicly saying the right things to keep getting away with it. The tech norm.

Thanks for reading. If you’re looking for other interesting things to read, check out this handy list.