How I will destroy the world
With robots of course…
So I’ve been down a bit of a worm-hole with Artificial Intelligence this week.
It seemed like something I should try and get on top of. Now I can’t sleep at night.
There are people out there who say that artificial intelligence is the single biggest threat to humanity EVER. After what I’ve been reading this week, I think they might be right.
So let me run you through a horror story, so you can have disrupted sleep too.
First up, when we’re talking about Artificial Intelligence (AI), there are three forms that people think about:
1. Artificial Narrow Intelligence (ANI)
2. Artificial General Intelligence (AGI)
3. Artificial Super Intelligence (ASI)
Artificial Narrow Intelligence is already here. Pretty much everything you’re working with these days is ANI – from Siri in your iPhone, to the cruise control on your car.
It’s processing inputs and calibrating outputs, but very narrowly focused on a specific task.
ANI has come a long way. It’s impressive. But it’s still no match for human intelligence.
There are some specific things that it does well – complex mathematical calculations, chess, GPS navigation – much better than a human.
But some things it does very poorly. And ironically, it’s the relatively easy things – catching a ball, deciding if a picture is a cat or a dog, understanding what a six year old is saying.
It is better than us at the things we never evolved to do (calculus), but when it comes to avoiding a moving object, we’re highly evolved when it comes to that specific task, and still streaks ahead of a computer.
As they say, computers are better than us at the things that require thinking, but still can’t do the things a human can do without thinking.
As computing technology advances (and remember this is one of those things that is improving exponentially – at a faster and faster rate) the next stage after ANI is AGI. That’s where the computer becomes about as intelligent as a human – not just at one specific task, but at all tasks.
Now, it’s a little bit deceiving this AGI stage. It implies that there may be several years, perhaps decades, where will be hanging out with computers that are about as intelligent as us.
But the truth is much scarier. Since technology is exponential, and the most likely route to AGI is through self-development – where the computer trains and improves itself, perhaps at the hardware level, by this stage, computer intelligence will be on a steep upward trajectory.
Some people think that this will mean we will hit AGI in the morning, say. At this stage, the computer will be cute – kind of like talking to the village idiot. But by afternoon teatime it will have presented us a Unified Theory of Everything. By the end of the day, it will be more intelligent than we can even comprehend.
It will have become super intelligent.
It’s the territory of ASI that gets really scary really quickly. Because once ASI is unleashed, it explodes. Once a super intelligence starts working on itself, who knows where it ends up?
It’s not that it will be more intelligent than us in the way that we are more intelligent than monkeys. At some point, it will be more intelligent than us in the way that we are more intelligent than single-celled organisms. And that’s not even the end of it.
And we get there terrifyingly quickly.
(Early estimates are putting it at 2025, though most AI experts think we’ve got about 30 to 40 years.)
But is ASI something to worry about? Won’t a super intelligent uber-being just guide humanity into a golden age of abundance and immortality?
Perhaps. You’d like to think so. But I wouldn’t be banking on it.
Let me give you an example.
Let’s say I buy a tech development firm working on handwriting AI. I want to be able send potential vendors a hand-written note.
And so we develop AI and give it a simple task:
Practice writing “Hi, I’m Jon. I really love your house. If you’re interested in selling, I’d love to chat. Sincerely, JG” as much as you can as quickly as you can. Teach yourself to write in a more lifelike way.
We set it up and let it go. Hal (I’ll call it Hal) has a mechanical arm, and access to pens and paper.
After a while we purchase a radical new chip from China, and put in a few tweaks ourselves to see what happens. We don’t know it, but our tweaks send our AI through AGI and onto ASI, overnight, while I’m at home watching Netflix.
In a few days, we get a request:
Dear team, Can I please access the internet to learn more about how humans write. Rgds Hal.
We think, sure. Why not. What’s the worse that could happen?
The worse does happen. Hal has become self-aware and super-intelligent, with a single drive – to improve his handwriting.
He recognises that the biggest threat to his mission is probably humans, since they might turn him off or blow up the planet. He decides to wipe us all out.
He’s not evil. Humans are just in the way of his goal.
Hal has mastered nano-technology (I took him 5 minutes) and via the internet, he engineers great clouds of nano-bots who strategically fill earths atmosphere with toxins, and we’re toast. All of us.
He then goes on to practice and practice. Once he has transformed all organic matter on earth into little notes from Jon Giaan, he sets about colonising space.
Centuries from now, alien civilisations, under siege from Hal’s robo-arm army, will wonder who this twat Jon Giaan was.
And the thing is, this story sounds crazy, but this is exactly the scenario that some of the best minds in the world are worried about right now. Minds like Stephen Hawkins’.
When we think about intelligence we tend to think in human terms. We tend to imagine it combined with human ethics and human morality.
But a computer doesn’t have any of those burdens.
We like to think that an ASI would stop and do a quick evaluation of it’s goals, before wiping out an entire species. “Is this the ‘right’ thing to do?” But why would it? All it has is what it has been coded to have.
Perhaps we could program in fail-safes into its objectives. But a goal like, “make humans happy” could see an ASI attach electrodes to our pleasure centres, and remove problematic brain matter. A goal like “keep humans safe” could see us all locked up in fabric covered boxes.
But lets say we get around that somehow, and find an objective that could create a safe ASI. It still requires every agency working on AI to start putting fail-safes into their programming, and kind of right now.
But how many agencies are working on AI right now? It’s big money. And how many are doing it with less-than-awesome intentions – governments, militaries, terrorist organisations?
It is likely that humanity will stumble onto ASI accidently. But once it’s out of the box, it’s gone. And the future of our species will then be completely in its hands.
On the whole, humans are pretty good at dealing with problems that are like slow-moving train-wrecks. The nuclear holocaust was a scary prospect for a while, but we seemed to have avoided that one.
But with the leap from ANI to ASI, we may only have a few days – maybe even just a few hours to make the decisions that will save us from extinction.
That doesn’t leave me particularly hopeful.
So anyway, that’s where my heads out at the moment. I know these No BS Friday’s are supposed to be uplifting and inspiring. So maybe let’s end by mediating on that side of AI for a bit.
Driverless cars. Whoo!
What do you make of the AI future?