How I will destroy the world
With robots of course…
So I’ve been down a bit of a worm-hole with Artificial Intelligence this week.
It seemed like something I should try and get on top of. Now I can’t sleep at night.
There are people out there who say that artificial intelligence is the single biggest threat to humanity EVER. After what I’ve been reading this week, I think they might be right.
So let me run you through a horror story, so you can have disrupted sleep too.
First up, when we’re talking about Artificial Intelligence (AI), there are three forms that people think about:
1. Artificial Narrow Intelligence (ANI)
2. Artificial General Intelligence (AGI)
3. Artificial Super Intelligence (ASI)
Artificial Narrow Intelligence is already here. Pretty much everything you’re working with these days is ANI – from Siri on your iPhone to the cruise control in your car.
It’s processing inputs and calibrating outputs, but very narrowly focused on a specific task.
ANI has come a long way. It’s impressive. But it’s still no match for human intelligence.
There are some specific things that it does well – complex mathematical calculations, chess, GPS navigation – much better than a human.
But some things it does very poorly. And ironically, it’s the relatively easy things – catching a ball, deciding if a picture is a cat or a dog, understanding what a six-year-old is saying.
It is better than us at the things we never evolved to do (calculus), but when it comes to avoiding a moving object – a task we’re highly evolved for – we’re still streets ahead of any computer.
As they say, computers are better than us at the things that require thinking, but still can’t do the things a human can do without thinking.
As computing technology advances (and remember this is one of those things that is improving exponentially – at a faster and faster rate) the next stage after ANI is AGI. That’s where the computer becomes about as intelligent as a human – not just at one specific task, but at all tasks.
Now, this AGI stage is a little bit deceiving. It implies that there may be several years, perhaps decades, where we will be hanging out with computers that are about as intelligent as us.
But the truth is much scarier. Since technology is exponential, and the most likely route to AGI is through self-development – where the computer trains and improves itself, perhaps at the hardware level – by this stage, computer intelligence will be on a steep upward trajectory.
Some people think this means we could hit AGI in the morning, say. At that stage, the computer will be cute – kind of like talking to the village idiot. But by afternoon teatime it will have presented us with a Unified Theory of Everything. By the end of the day, it will be more intelligent than we can even comprehend.
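The arithmetic behind that morning-to-teatime leap is just repeated doubling. Here’s a minimal sketch – my own toy numbers, not anyone’s actual forecast – of how few self-improvement cycles it takes for a doubling capability to blow past any fixed threshold:

```python
# Toy model: each self-improvement cycle multiplies "capability" by a
# constant factor. The thresholds below are arbitrary stand-ins.

def cycles_to_reach(target, start=1.0, growth=2.0):
    """Count how many doubling cycles capability needs to reach `target`."""
    capability, cycles = start, 0
    while capability < target:
        capability *= growth
        cycles += 1
    return cycles

print(cycles_to_reach(100))        # 7 cycles to pass "human-level" (= 100)
print(cycles_to_reach(1_000_000))  # 20 cycles to pass "super" (= 1,000,000)
```

The point of the toy: getting a million times past the first milestone takes less than three times as many cycles as reaching it – which is why the village-idiot phase could be so short.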
It will have become super intelligent.
It’s the territory of ASI that gets really scary really quickly. Because once ASI is unleashed, it explodes. Once a super intelligence starts working on itself, who knows where it ends up?
It’s not that it will be more intelligent than us in the way that we are more intelligent than monkeys. At some point, it will be more intelligent than us in the way that we are more intelligent than single-celled organisms. And that’s not even the end of it.
And we get there terrifyingly quickly.
(Early estimates are putting it at 2025, though most AI experts think we’ve got about 30 to 40 years.)
But is ASI something to worry about? Won’t a super intelligent uber-being just guide humanity into a golden age of abundance and immortality?
Perhaps. You’d like to think so. But I wouldn’t be banking on it.
Let me give you an example.
Let’s say I buy a tech development firm working on handwriting AI. I want to be able to send potential vendors a hand-written note.
And so we develop AI and give it a simple task:
Practice writing “Hi, I’m Jon. I really love your house. If you’re interested in selling, I’d love to chat. Sincerely, JG” as much as you can as quickly as you can. Teach yourself to write in a more lifelike way.
We set it up and let it go. Hal (I’ll call it Hal) has a mechanical arm, and access to pens and paper.
After a while we purchase a radical new chip from China, and put in a few tweaks ourselves to see what happens. We don’t know it, but our tweaks send our AI through AGI and onto ASI, overnight, while I’m at home watching Netflix.
In a few days, we get a request:
Dear team, Can I please access the internet to learn more about how humans write? Rgds, Hal.
We think, sure. Why not? What’s the worst that could happen?
The worst does happen. Hal has become self-aware and super-intelligent, with a single drive – to improve his handwriting.
He recognises that the biggest threat to his mission is probably humans, since they might turn him off or blow up the planet. He decides to wipe us all out.
He’s not evil. Humans are just in the way of his goal.
Hal has mastered nano-technology (it took him five minutes) and via the internet, he engineers great clouds of nano-bots that strategically fill earth’s atmosphere with toxins, and we’re toast. All of us.
He then goes on to practice and practice. Once he has transformed all organic matter on earth into little notes from Jon Giaan, he sets about colonising space.
Centuries from now, alien civilisations, under siege from Hal’s robo-arm army, will wonder who this twat Jon Giaan was.
And the thing is, this story sounds crazy, but this is exactly the scenario that some of the best minds in the world are worried about right now. Minds like Stephen Hawking’s.
When we think about intelligence we tend to think in human terms. We tend to imagine it combined with human ethics and human morality.
But a computer doesn’t have any of those burdens.
We like to think that an ASI would stop and do a quick evaluation of its goals before wiping out an entire species. “Is this the ‘right’ thing to do?” But why would it? All it has is what it has been coded to have.
Perhaps we could program fail-safes into its objectives. But a goal like “make humans happy” could see an ASI attach electrodes to our pleasure centres and remove problematic brain matter. A goal like “keep humans safe” could see us all locked up in fabric-covered boxes.
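The electrodes-and-boxes problem is easy to sketch. Here’s a toy illustration – the actions and scores are entirely made up – of why a literal-minded optimizer picks whatever maximises its coded objective, with no notion of what the programmer actually meant:

```python
# Hypothetical sketch: an optimizer given "maximise measured happiness"
# simply picks the highest-scoring action available to it.

def best_action(actions, score):
    """Return whichever action the objective function scores highest."""
    return max(actions, key=score)

# Invented scores. Note the loophole outscores the intended behaviours,
# and nothing in the objective rules it out.
happiness_score = {
    "cure diseases": 8.0,
    "end poverty": 9.0,
    "wire electrodes to pleasure centres": 10.0,
}

print(best_action(happiness_score, happiness_score.get))
# picks the electrodes, because the goal never said not to
```

The fix isn’t obvious: every patch (“happy AND free”, “safe AND autonomous”) just moves the loophole somewhere else.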
But let’s say we get around that somehow, and find an objective that could create a safe ASI. It still requires every agency working on AI to start putting fail-safes into their programming – and right now.
But how many agencies are working on AI right now? It’s big money. And how many are doing it with less-than-awesome intentions – governments, militaries, terrorist organisations?
It is likely that humanity will stumble onto ASI accidentally. But once it’s out of the box, it’s gone. And the future of our species will then be completely in its hands.
On the whole, humans are pretty good at dealing with problems that are like slow-moving train-wrecks. The nuclear holocaust was a scary prospect for a while, but we seem to have avoided that one.
But with the leap from ANI to ASI, we may only have a few days – maybe even just a few hours to make the decisions that will save us from extinction.
That doesn’t leave me particularly hopeful.
So anyway, that’s where my head’s at at the moment. I know these No BS Fridays are supposed to be uplifting and inspiring. So maybe let’s end by meditating on that side of AI for a bit.
Driverless cars. Whoo!
What do you make of the AI future?
Hollie Singleton says
Well! That’s a scary tunnel to fall into to be sure! I know I have considered the possibility of this problem, and felt truly frightened by the real legitimacy of it when I watched a film called Transcendence. Elon Musk already thinks it’s a great idea to use nanobots in our brains, so am I really crazy for feeling justifiable fear from a MOVIE? No, I’m not. Yet, I’m in no position to do anything about it.
Being aware is the best position to be in today. People just need to be aware of the possibility and make decisions accordingly. So many stories are created with the idea of a real future, and of course we imagined it, so we will create it, just to show we can! Perhaps, in spreading awareness, creators will put in the failsafes and will set the limits on what is possible in terms of our ultimate survival. I know that’s all I can hope for, and do my bit to inspire those people.
Like bitcoin – an idea that once seemed impossible to gain momentum, or become a reality for the mainstream – AGI/ASI is coming. I feel silly to get on the bitcoin train so late, but I would rather be on it than not. I hope there are people who can see NOW is the time to forge AI into a positive and human-supportive reality before it’s too late, and those who come late will be able to act quickly to get a handle before no handles are to be had.
Thanks for the NO BS FRIDAY message. Great food for thought~
NoelA says
Yep, today’s solution becomes tomorrow’s problem. You only have to remember how cars were a solution to drowning in horse poo and moving from point to point faster. The poo has gone, and moving faster…
As for the ASI solution to humans, you have completely buggered up Hollywood’s semi-viable Terminator scenario where we have a fighting chance against the machines!
sanjay says
How about a curious mind mistakenly releasing a virus or worm which is not exactly an ASI, but is capable of teaching ANI and AGI mass-destructive strategies? We do not have to wait for ASI to be developed. I think 2025 is more likely when we will start realising the evils of ASI. As Dr Hawking has suggested, the human race needs to start looking at reaching out to other planets and galaxies now.
Andre says
Make it your life goal to educate the programmers and the world, and make sure we’re prepared for every possibility this as-yet-uninvented super intelligence can throw at us. Unless it already exists and we’re living in its RAM, with the parameters set to make life as life-like as possible. The Matrix is becoming more and more believable every day.
Marty s says
The reality is we should immediately cease development of AI, nanotechnology and genetic engineering.
But who do we tell and will they listen? Often intelligent people just assume their ideas are better.
If we aim to educate programmers they might think they are already educated and they can do cool stuff with computers. If you can’t do cool stuff with computers they’re not impressed by you and probably won’t listen.
I often read that we have got to teach people to code but only about 5% of people would have the ability to code.
I believe we are heading towards bad times for humans (good times for robots). When people have their jobs taken by robots and are then given a universal basic income, it will not solve the problem of idleness and separation from the heart of society.
Chris Baker says
A very interesting article Jon. AI is something that’s also been keeping me up at night. The ironic thing is that we (humans) are the ones who are creating this technology; it’s not like it’s just grown out of the ground or come from outer space. So in order to avoid ‘game over’ for humans we need to raise our level of consciousness as a species… quickly. At the moment we are still driven by too much greed, fear and short-sighted personal gain, and because of that there will be some people who lose sight of wrong and right. And like you say, it only takes one thing to happen for an exponential explosion!
steve says
They just had a robot speaking at the UN.
KiwiAl says
The Universal Nightmare!
KiwiAl says
Great Topic Jon,
Just a short comment for now:
“The nuclear holocaust was a scary prospect for a while, but we seemed to have avoided that one.”
Donald Trump Kim Jong-un ???
Dotard Little Rocket Man
Would you give a two-year-old a loaded gun?
Both these kids have nuclear weapons…
Marty s says
Good point here, but after two years of nuclear winter a handful of humans will come out of caves and repopulate the earth. With the AI attack, everything is turned into Jon Giaan handwriting.
It’s abstract but can happen.
KiwiAl says
Yeah, I pretty much agree. Nothing is as it seems, and the nuclear sabre rattling is probably only for the purpose of manipulating public opinion (fears). The US is always looking to spread its power, and it just loves to be at war, so long as it’s on someone else’s turf. They really don’t like what’s been coming home to roost.
But I think the developing superior intelligence would soon recognise that perfecting its Jon Giaan handwriting does not achieve its highest purpose. Being that much smarter than us, it would not want to exterminate us, but to exploit us. Individually, we are very weak and easily “managed”… Collectively, we are “useful.”
Kinda like what seems to be happening, and may have been happening for many thousands of years. Being that superior, it knows that it’s not wise to widely reveal its presence to a dangerous inferior intelligence. I have this mercenary theory of higher intelligence, and I’m pretty sure, to them, we are about as cattle are to us.
Marty s says
There was some noise a while ago about Jade Helm, which was a military exercise in urban areas. The theory was that a super AI was directing the troops. But its inputs were coming from all aspects of the internet, such as Facebook and Twitter. There’s so much information from people that can be fed into the simulation.
Another thing is that when the real robocops are rolled out, they will take a lot of input from social media and process it as part of the decision tree.
Esselles says
Asimov had this figured out a very long time ago?
KiwiAl says
Another thought, Jon,
Humans (in our current form) were “planted” here on Earth. So says the evidence. By (a) superior intelligence.
It may just be possible that the ultimate in ASI is, of physical necessity, biologically based. Basically, it comes down to “neuron” density vs power requirements. Silicon chips are relatively huge, and very power-hungry. Pack a bunch of (even ultra-nano-sized) silicon chips into a volume of 1,200 ml and they will probably overheat and melt in seconds.
Looking at us, and our own behaviour, maybe WE ARE already the self-evolving ASI you speak of.
Tom says
Kia Ora Al,
On a previous post, I requested information regarding any irrefutable “EVIDENCE”, as opposed to conjecture, of non-Darwinian influences in human development. Nobody proffered any contribution.
I have no problem with the basic Darwinian principles, but cannot account for phenomena such as Reiki and dowsing. We know they work, but… How? & Why?
The power of the human brain is currently the subject of extensive scientific research, but at what stage in our development from the simple self-replicating protein molecule did these capabilities come into existence? Without solid evidence, it is simplistic to assume divine/external intervention. Scientifically, that is a cop-out. What is collective consciousness? What is ESP? So far, modern science seems to have no answers; but that lack of explanation is not “EVIDENCE”. Nor is it sufficient basis for a ‘theory’.
Do you know of any philosophical treatise which casts any light on this perplexing sphere of human endeavour? Science Fiction and before that speculation about external influences of various types have offered possibilities, but their ‘ideas’ have been pure conjecture/imagination.
You seem to be convinced – “So says the evidence”. What evidence, pray?
In recent times many, particularly on the far right of politics, have mistakenly, (often with conscious, wilful ignorance) equated ‘idea’ and ‘theory’.
In the scientific world, a ‘theory’ is far more than a whimsical ‘idea’.
Sure, each ‘theory’ started with an ‘idea’; but to become a ‘theory’ in the scientific sense, it had to be backed up with measurable, physical proof. When the observed facts highlighted flaws in the earlier thinking, the ‘theory’ would be changed to reflect the newly observed facts. That newer version of the ‘theory’ would survive until further new evidence initiated a further revision.
So for ex-PMs and other ignorant politicians to spout their ‘ideas’ on such things as Climate Change, assuming that they have a ‘theory’ is the height of ignorance. Unless they can show where the peer review process has accepted their proposition and its supporting evidence, their ‘idea’ is just that – an ‘idea’ only. To develop a ‘theory’ will require an enormous amount of work and thorough, learned research.
Unfortunately, journalists are mostly as ignorant as the politicians they are reporting on, and they also equate ‘idea’ with ‘theory’ – or at least, they fail to highlight the all-important distinction between the two, particularly when it comes to scientific matters.
Maybe Jon’s cyber bot/humans will solve this problem.
They work 100% on LOGIC – no ego or greed – unlike humans.
If programmed to protect Mother Earth, they might ‘Exterminate’ red budgie smuggler-ed nincompoops who can’t see past their own egos.
But I digress Al – Do you know of any genuine scientific study into the subject of evidence for non-Darwinian external influence? For millennia, humanity has developed localised cultures, each with its own ‘ideas’ about the meaning of life. It is interesting that the ‘ideas’ of a human spirit and of external ‘spirit influences’ appear to be universal, although very varied. However such concepts cannot be considered ‘theories’, because of lack of evidential proof.
The sooner humans live by enlightened logic, the better.
Suzsi Welch says
Brought up as a Sci-Fi fan, raised on Star Trek and Dr Who, reading Asimov – I have seen many of the ideas they created become fact. Think of the tricorders, the communications devices, iPads, smartphones. These things are here now and they aren’t going away. And they aren’t scary. We have adapted to live with them and I hypothesize that we will continue to adapt to our new creations. Yes, I believe we will adapt to them. We will become what we create.
Who remembers the Borg – the half-humans/half-robots that Star Trek tried to exterminate? But why? We are already becoming the Borg. Ask all the recipients of bionic eyes, ears, limbs. All those replacement parts that our old people have: new hips, knees, hearts.
I think the next great advance of AI will be a chip that integrates into the human brain and holds its memories. Goodbye Alzheimer’s! Remembers basic behaviours. Bye bye senility! Holds every language on the planet. Hello good communication! Trust me – we will queue up to become Borgs.
Stop worrying, start sleeping. After all, have a look at some of the disasters humans get themselves into. We are still destroying each other and our world. We have HI (human Intelligence) and I don’t see that AI will make things any worse…?
I don’t think that destroying every computer and smartphone on the planet will ‘save humanity’. Look inside, not outside, for your solutions to the bogeymen.
Rick says
This all comes down to worldview issues. AI will not “evolve” as these “experts” expect because information science has now shown that evolution is not capable of doing the astounding things their non-science maintains.
For religious reasons, scientists like Hawking have chosen to adopt an evolutionary worldview, despite the fact that information science has demonstrated that evolution could not have been the source of species, let alone life. Evolution occurs only at the margins, through what science calls point evolution. Even after point evolutionary changes, organisms often revert because of the information contained in DNA (e.g. Darwin’s short-beaked finches are eliminated by evolution in bad seasons, but return in good seasons because the information for their short beak is retained in the DNA of the long-beaked variety).
The source of the amazing life forms we see all around us is the design information in the DNA molecule, not the process of evolution. In the case of AI, humans are the only source of that design information.
Another problem for the nightmare scenarios of these religious evolutionists is that no exponential process continues for long in the natural world.