2015-01-18

Elon Musk and Stephen Hawking teamed up to fight the rise of evil, artificially intelligent machines that are about to conquer the world. At least, such a world revolution seems imminent to the PayPal-Tesla-SpaceX entrepreneur, who has paid $10 million to fight the threat – and to make his words louder.

The annual Edge.org question was "What do you think about machines that think?", and some of the answers were nontrivial and interesting. I particularly liked the answer by James O'Donnell, a classical scholar, who said that "no one would ask a thinking machine what it or he or she thinks about machines that think" and who warned about the sloppy, diverse ways in which the verb "think" is being used.

Many people just stated the obvious – that brains and machines are analogous to some extent. Sure, they are. Every science-fiction-fed kid can write stories about that. But there are also big differences between "what we call brains" and "what we call machines", and these differences are crucial for forming a qualified attitude to the question of whether artificial intelligence is about to threaten us.

Sean Carroll, who wrote one of the superficial answers (machines and brains are the same thing), opened a debate, and I think that the first contribution to the discussion (but not necessarily only that one) is rather wise:

David Kerlick says:

The difference to me is between evolution and engineering design. Evolution results from a series of micro adaptations to circumstance layered upon each other. This results in stable, redundant structures, e.g. in bird or insect flight. Engineering design is mostly one-pointed, to solve one specific problem, e.g. build an aircraft. So it is with brains that have “hidden potentials” which are more like unused layers that are not presently active.

Although Intelligent Design advocates might disagree, biological species weren't created according to a predetermined master plan. They evolved by constant adaptation to the environment, mutations, and natural selection. This is very different from the way airplanes, computers, and even computer programs are created. Those are designed with a predetermined goal or "class of skills" in mind.

And the very existence of the "central plan" is what steals the creativity and other "human" virtues from those machines!

Animals and humans have lots of hardwired rules telling them what they can do, what they can try, and what they probably shouldn't try. Lots of these rules have been incorporated into DNA by millions of years of evolution. In other words, species evolve. Additional rules are only programmed into the individual brains during the individual lives. In other words, people learn. ;-)

This whole process is a form of adaptation. Did the DNA of a species mutate? Yes? Well, that's a risky thing. Did it mutate in a way that threatens the life of virtually everyone with the mutation, given the circumstances on the market (in the environment)? Yes? That's too bad. The variant carrying that mutation is likely to go extinct.

Did an individual do something dangerous? Did it kill him? Too bad. Did it hurt him or kill someone else? These are lessons. One should learn from these mistakes. It's less likely that the same maneuver will be repeated. Species and people learn. They adapt. What exactly will happen is hard to predict from the beginning, especially if the outcome depends on many partial questions and events whose circumstances may evolve by themselves. The design of machines is different: one assumes fixed conditions and wants the machine to achieve fixed goals, as the toy model below tries to show.
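To make the contrast tangible, here is a minimal Python toy model. Everything in it – the one-number "genome", the fitness function, the rates – is my own invented illustration, not anything from the Edge.org debate: evolution blindly mutates and selects whatever happens to survive, while engineering design searches directly for a single fixed goal.

```python
import random

def evolve(population, fitness, generations=200, mutation_rate=0.2):
    """Blind mutation plus selection: no predetermined master plan."""
    for _ in range(generations):
        # Every individual tries a random mutation of its "genome".
        mutants = [g + random.gauss(0, mutation_rate) for g in population]
        # Risky mutations that lower fitness tend to die out; the
        # survivors are simply whatever happened to work.
        pool = sorted(population + mutants, key=fitness, reverse=True)
        population = pool[:len(population)]
    return population[0]

def engineer(fitness, lo=-10.0, hi=10.0, steps=10_000):
    """Fixed-goal design: scan the candidates and pick the best directly."""
    candidates = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return max(candidates, key=fitness)

# A made-up fitness landscape: how well a "genome" (one number) fits
# the current environment, with the optimum sitting at 3.0.
fitness = lambda g: -(g - 3.0) ** 2

print(evolve([random.uniform(-10, 10) for _ in range(20)], fitness))
print(engineer(fitness))
```

Both calls end up near the optimum at 3.0, but the processes differ: if the fitness landscape quietly changed tomorrow, the evolutionary loop would simply keep adapting, while the engineered search would have to be redesigned around the new goal.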

Now, you might object that the purpose of artificial intelligence is to create a machine that doesn't have a fixed goal – one that can do the same kinds of things as the human brain. Perhaps it may even do them in a much better way. And artificial intelligence may have the "hidden potential" much like the human brain – because its architecture or philosophy is similar – and it may be equally (or more) able to learn and adapt.

All of these comments suggesting that "the machine and the brain may be the same thing" are nice. The philosophy of the architecture may be analogous, indeed. One may perhaps even computer-simulate a real-world brain; there aren't any strict physical limitations that would make it impossible, since both brains and computers of any kind are objects in Nature that obey the laws of Nature. However, the point one shouldn't ignore is that to get artificial machines into a situation similar to that of animals or human brains, one actually needs the training – which takes a long time and is hard.

Will someone create artificial intelligence that will take over the world by 2020?

Well, I don't think so. The reason is that machines, however sophisticated, are being built with the purpose of serving people, in one way or another. By design, they are not in charge of things. They don't enjoy freedom. And freedom is what matters here. Freedom, along with a sufficiently long time to enjoy it, is needed for things like human brains, with their hidden potential and redundancy, to evolve.

You might also object that someone may create machines for a different goal than serving humans. He just wants to create a Frankenstein. Or even AI Franken. Or AI Gore. (Those names are spelled "a-i", not "a-l".) That's an ambitious goal, but the person who has this goal – who wants the machines to gain independence – isn't the only one in the world. There are others. If it turns out that the new machine is designed with the expectation that it will harm people at least as often as it helps them, or if there is even some available experience showing that the machine has been harmful, people will just veto the project. The CIA will liquidate the Frankenstein laboratory. Or it will destroy the Frankenstein when it's completed.

My point is that machines are subject to external pressures as well – they may be liquidated, stopped, or driven extinct. And the pressure they are facing is intense. Moreover, there is some backreaction. If you think about a future world where the AI machines are very important, such a world has different optimal strategies for adaptation.

Imagine that someone creates a gadget that has some artificial intelligence and wants to become the leader of a country, or something like that, using other artificial devices and/or humans to achieve its goals. Maybe the evil machine is a network of some sort. Will it happen? I don't think so. Most people (or companies) just won't allow their machines to be connected to this prospective AI dictator. Why would they? They will recognize it as an enemy if that status is clear enough.

We are willing to use an app or a device because we know pretty clearly what it will do and that it will be OK for us. With internet banking, we may check how much money we have in our accounts or make payments. That's helpful, which is why we install such apps on our phones. But would you connect to a great new AI app that can make payments to others and collect them on its own, acting in a human way and enjoying its freedom?

Well, if I don't tell you anything else, the answer is almost certainly No. You won't allow an app to empty your account. The app's being "artificially intelligent" doesn't increase the probability that you will agree. In fact, the label will probably increase your suspicion that it's a scam designed to fool you, so its being "artificially intelligent" will reduce the probability that you will agree with the spreading of this app.

If it's an app that can make some profit for you, and it has been verified to work, many people will agree. But most of them will also realize that there are risks. They will demand the option to leave the program when things go bad. I could continue... But you can see that in all these decisions, people still remain "in charge". They are the ultimate decision makers. Even if you think about viruses that install themselves without the permission of the PC/phone owners, they still serve some (evil) humans in the end.

If artificial machines are supposed to become the bosses, they will have to go through the same process of gradually adapting to the environment and changing it in their own way – and re-adapting to the new, changed environment again. A long enough phase of their peaceful coexistence with humans would almost certainly be needed. Freedom for the machines is needed for such an evolution. They are getting almost no freedom – much less than what radical Muslims or hungry animals are getting. So I don't think it's really possible for the "AI machines rule" revolution to take place anytime soon.

The lack of freedom really means that the machines don't know what they want.

Animals and plants and species want to spread their genes. But why? And what does the verb "want" really, physically mean? Well, there simply exist (composite) processes in which the number of copies of a DNA molecule is increasing. It's an exponentially growing instability of the system. It's not really one fixed instability – there are many instabilities, and Nature gradually switches to new ones as the relevant DNA codes evolve. The laws of Nature imply that the exponential growth is faster under some conditions – so it simply happens.
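To see what this "exponentially growing instability" – and the switching between instabilities – means quantitatively, here is a tiny sketch. The growth rates 0.10 and 0.25 and the starting populations are numbers I made up for the illustration:

```python
import math

def population(n0, r, t):
    # The solution of dN/dt = r * N, i.e. N(t) = n0 * exp(r * t).
    return n0 * math.exp(r * t)

# An established DNA variant, plus a mutant that appears at t = 10 with a
# single copy but a faster replication rate. Nature "switches" to the new
# instability once the faster replicator overtakes the older one.
for t in range(0, 71, 10):
    old = population(1000.0, 0.10, t)
    new = population(1.0, 0.25, t - 10) if t >= 10 else 0.0
    print(f"t = {t:2d}   old variant: {old:12.0f}   new variant: {new:12.0f}")
```

Although the mutant starts from a single copy, it overtakes the older variant by the last rows of the table: with exponentials, the growth rates decide the outcome, not the initial head start.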

The basic "instinct" dictating the life forms that they should try to find ways how to accelerate their reproduction has been around since the beginning of the biological life. The hardwired arrangements in the organisms and their DNA became correlated with the organisms' faster reproduction rate etc. – simply because those arrangements that were correlated reproduced more quickly and got most of the resources etc. The verb "want" simply means to have a preferred goal because of some "internal hardwiring" and this hardwiring occurred due to some evolution, training, and learning etc. We understand why most animals or people want certain simple things.

These algorithms were constantly adapted as the environment and the relevant DNA codes changed. This "instinct" is everywhere. On the other hand, this "instinct" wasn't inherited by the artificial machines. They started from scratch.

Their analogous "instinct" is really to serve humans, and the description of "how to do that" is encoded in such a narrow-minded, specific, single-goal way that it can't easily adapt to new circumstances. Those are the reasons why I think that Elon Musk has just thrown away $10 million on a nonsensical worry. Even if we have all the technology needed for the creation of artificially intelligent machines, they will have to get all the freedom and go through the long path of adaptation to the environment.

Telling the AI machines what the rules of the external world are and commanding them to reproduce – to optimize a quantity encoding "how much they serve themselves" – isn't quite the same thing as the "instinct" of the biological species. For example, biological species have a lot of "instincts" that were useful to some of their ancestors but are no longer needed.
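Here is how I would caricature that difference in code – a hypothetical, oversimplified sketch, with all the actions, weights, and drives invented by me. The "designed" machine maximizes one commanded quantity, while the "evolved" agent acts on a bundle of hardwired drives, including a vestigial one that only its ancestors really needed:

```python
def designed_machine(world):
    # One explicit, commanded objective: pick the action that scores
    # highest on the single quantity the machine was told to optimize.
    return max(world["actions"], key=world["objective"])

def evolved_agent(world, instincts):
    # No single commanded objective: the behavior is a weighted sum of
    # hardwired drives, each a leftover of past selection pressures.
    def drive_score(action):
        return sum(weight * drive(action) for drive, weight in instincts)
    return max(world["actions"], key=drive_score)

world = {
    "actions": ["gather food", "flee loud noise", "reproduce"],
    "objective": lambda a: 1.0 if a == "reproduce" else 0.0,
}

# A vestigial instinct: fleeing loud noises once saved the ancestors from
# predators, and it still outweighs reproduction even when it's obsolete.
instincts = [
    (lambda a: 1.0 if a == "flee loud noise" else 0.0, 0.6),
    (lambda a: 1.0 if a == "reproduce" else 0.0, 0.4),
]

print(designed_machine(world))          # -> reproduce
print(evolved_agent(world, instincts))  # -> flee loud noise
```

The point of the caricature: the evolved bundle is messy and partly obsolete, but it is exactly this layered redundancy that a single commanded number can't reproduce.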

Given humans' de facto monopoly on the control of "life affairs" on the Earth, I think that such "artificially intelligent" machines would have to go through a long enough phase of serving humans rather flawlessly before humans allowed them even more freedom – and maybe it would even be a good idea to do so. If the "artificially intelligent" machines are so good for us, most of us will probably allow their freedom to grow intentionally. Of course, hypothetically, it may lead to the machines' being more powerful than the humans – sometime in the future.

But because you don't know all the details of the questions that will be debated in such a hypothetical future, you shouldn't be able to take sides now. Unless you are a prejudiced Luddite who thinks that this outcome is a catastrophe and a nearly guaranteed one if any "remotely related" technological advance takes place. But in that case, you shouldn't have co-founded PayPal or Tesla or SpaceX, because these advances also increase the chances and accelerate the progress that will lead to this hypothetical dictatorship by the intelligent machines. But maybe Musk is a Luddite, and he has just realized what a terrible thing he did by (co-)founding all these companies, which is why he pays $10 million to undo the threats he has caused. ;-)

But if he isn't a Luddite, and he realizes that his creation of Tesla shouldn't be blamed for the revolution that would turn some evil AI devices into dictators, then he should admit that any other technological work – even work on AI – shouldn't be blamed for the "evil" things, either. So we simply shouldn't solve a problem before it exists.

And let me tell you one reason why I am not a Luddite in this sense. You know, real progress is a good thing. While I probably don't want my country or the globe to be overrun by AI overlords (and surely not by Muslims) in my lifetime, in the long run, higher life forms or intelligent beings are not such a bad idea. We give the proper credit to apes, other mammals, and more primitive animals as parts of the sequence that led to us. And "we" (whatever that word means, but I mostly mean homo sapiens) don't have to be the final stage. If some AI folks gradually make us obsolete and they're better, sometime around 2300 AD, why not? They will hopefully give us the credit we deserve – at least as much credit as we give the apes. If you are an AI history textbook writer and you can read this sentence, I demand more credit, because the ape wasn't even able to formulate its request in an intelligible way. At any rate, I find the opposition to the rise of AI in the long run as counterproductive as a movement of jellyfish who never wanted to allow mammals to evolve and rise. Why? You know that you are just a slimy, Smolin-like jellyfish, don't you? So some modesty would be appropriate. You were not expected to be the final stage that occupies the Earth up to the year 7.5 billion AD. And the same comments apply not only to jellyfish – the future AI beings may have the moral capital to say the very same things about us!

Stephen Hawking is a slightly different case. Given his new (publicly available) Intel firmware, you might classify Stephen Hawking as an AI machine. So his membership in this anti-AI Luddite movement may be said to be suicidal. :-)

See also: Minds and machines (2013) and We don't live in a simulation (2013).
