
We Make Terrible Predictions About the Future of Tech

If you take a proper look at your world, you might catch a glimpse of its near future, and AGI isn’t in it

Tourist in a spacesuit, sunbathing on the Moon (AI-generated image | DALL-E).

I’m not sure what a data scientist is in 2024, but I am one, so I probably shouldn’t say that data-driven actions are quite often bad ideas. But they quite often are.

For one thing, data is always past tense: it records what has already happened. The world, meanwhile, is present tense: it keeps happening after your data did, which means your data is sometimes meaningfully out of date.

But there is another reason that nobody talks about — everyone’s got data now, and everyone’s analysing it, so it no longer confers the competitive advantage it once did. It could even be harmful.

That possible harm is nailed by the advertising guru Rory Sutherland’s third rule of alchemy: “It doesn’t pay to be logical when everybody else is being logical.” Because if we all follow logic then we all end up in the same boring place. I doubt I’m alone in thinking that many areas of our cultural lives have reached Boredomville already.

Hollywood, for instance, has been boring for years. We’re deluged with superheroes, sequels that come fast and furious, and remakes that turn delightful Disney characters into candlesticks from anxiety nightmares. So fixated is Hollywood on chasing logic that it’s reached logic’s data-driven rock bottom — mindlessly rehashing what’s worked in the past.

So, thank God for John Wick. I watched the first three films on a flight to Hong Kong and got so absorbed that I just sat there as the over-the-top quirks mainlined Keanu Reeves into my eyeballs. Some things work because of the plot holes. Sometimes logic needs to be suspended.

And it is another Keanu Reeves plot hole that has me thinking about the techno-hype-cum-phobia we’re living through. In the 1995 film Johnny Mnemonic, the titular Johnny acts as a futuristic data courier. He uploads sensitive data into his hardware-augmented brain, travels to whoever purchased it, and then downloads it from his brain for them to keep. Presumably, this circumvents any hacking threat from sending the data over the internet.

But, here’s the thing — encrypted external drives already existed in the mid-nineties. Johnny didn’t need to upload anything into his head; he could have just popped it all onto a hard drive, popped that into a pocket, and then gone on his jolly way.
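In fact, Johnny's entire job fits in a few lines of code today. Here's a sketch; the cryptography library and the file paths are my illustrative choices, not anything from the film:

```python
# What a mid-nineties courier plot actually needs, in modern form:
# encrypt the payload, write it to a portable drive, walk it over.
# Assumes the third-party `cryptography` package (pip install cryptography);
# file paths are illustrative.
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # hand this to the recipient separately
cipher = Fernet(key)

with open("sensitive_data.bin", "rb") as f:
    token = cipher.encrypt(f.read())  # authenticated symmetric encryption

with open("/mnt/portable_drive/payload.enc", "wb") as f:
    f.write(token)                    # pop it in a pocket, go on your way
```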

So, why would an audience buy that human USB sticks were a plausible future for secure data transfer? Instead of, say, better-encrypted drives?

For that matter, why did anyone think flying cars might happen when helicopters already existed? Or that people would spend their holidays on the inhospitable Moon instead of the sun-drenched Bahamas? Why do people, including me, you, and your auntie Sue, make ludicrous assessments about the future when just a little reasoning about the present would tell us how nonsensical they are?

I don’t know. But we are no better at assessing the future than Johnny Mnemonic’s audiences were in the mid-nineties. We modern folk make all sorts of kooky predictions about the years ahead.

Admittedly, some of it is conspiracy stuff, like billionaires injecting trackers into our blood when we’ve already got their smartphones in our pockets. But there have also been plenty of mainstream predictions that have failed to materialise as expected.

Self-driving cars and unregulated cryptocurrencies, for example. The former technology has been reduced to a shadow of its promise, and the latter to an investment asset for Twitter bros. And those fates ought to have been, but weren’t, obvious.

Self-driving cars, for instance, will always need to drive “properly” to avoid lawsuits bankrupting their manufacturers, and that will make them useless on most real roads. Just try driving “correctly” in London, and you’ll see that it’s hopeless: you have to drive aggressively and behave quite badly just to get out of junctions. Self-driving cars will never be allowed to do that.

As for cryptocurrencies, I don’t know how to put this, but how did anyone think that unregulated assets would become anything more than risky investments for dodgy finance influencers and their hapless acolytes?

And then there’s AGI, or human-level artificial intelligence, our current hype-fear malady. I don’t want to sound dismissive — I’m very bullish on the benefits of AI — but let’s be real.

In 2021, the most advanced mainstream AI systems were chatbots. You gave them input and they gave you output, and you could “talk” to them in ways that made it feel like a conversation.

Then, in November 2022, OpenAI dropped ChatGPT on the public, and no matter how you look at it, including its ability to generate images, it is fundamentally just a better chatbot that takes input, gives output, and lets you “talk” to it.
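And if “takes input, gives output” sounds reductive, consider that the whole interaction boils down to something like this sketch, which assumes OpenAI's official Python client and uses an illustrative model name:

```python
# Text in, text out: the chatbot loop, reduced to its essentials.
# Assumes the `openai` package (pip install openai) and an
# OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model you have access to
    messages=[{"role": "user", "content": "Will AGI arrive by 2030?"}],
)

print(response.choices[0].message.content)  # the "output" half of the loop
```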

Since then, we’ve been treated to yet better chatbots with greater and greater computing power, which is all good. But how does that make the next step of AI development a human-level reasoning machine instead of just an even better chatbot?

It almost certainly doesn’t, and the next iterations of AI will almost certainly do the same stuff the current ones do, only at greater scale, faster, and more accurately.

I’m realising now that there is an implied corollary to Rory Sutherland’s third rule — if lots of people are being illogical, then it’s probably a good idea to step back and have a good think.

That might help us to escape the hype-fear bubbles a bit faster.