2016-12-12

Jerry Kaplan is widely known as an Artificial Intelligence expert, technical innovator, serial entrepreneur, and bestselling author. He is currently a Fellow at The Center for Legal Informatics at Stanford University and a visiting lecturer in the computer science department, where he teaches the social and economic impact of Artificial Intelligence. Kaplan has founded several technology companies over his 35-year career, two of which became public companies. As an inventor and entrepreneur, he was a key contributor to the creation of numerous familiar technologies, including tablet computers, smartphones, online auctions, and social computer games. Kaplan is the author of three books: the best-selling classic Startup: A Silicon Valley Adventure; Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence (2015); and Artificial Intelligence: What Everyone Needs to Know (2016). In 1998, Kaplan received the Ernst & Young Emerging Entrepreneur of the Year Award, Northern California. He has been profiled in The New York Times, The Wall Street Journal, and Forbes, among others. He received a BA from the University of Chicago and a PhD in Computer and Information Science from the University of Pennsylvania.

Jerry will be speaking at the Gigaom AI Now in San Francisco, February 15-16th. In anticipation of that, I caught up with him to ask a few questions.

Byron Reese: Do you remember when you first heard about AI or when you first got interested in it?

Jerry Kaplan: The first time I heard about artificial intelligence was in 1968 when I saw the movie “2001: A Space Odyssey.”

And it was HAL that got your mind thinking.

That’s right.

When did you start writing about artificial intelligence?

Well, I went into the field, so I am not sure that counts. That was thirty-odd years ago. More recently, probably about four years ago, I started writing about the field for general audiences, as opposed to technical papers.

And how would you describe the state of the art? Where are we in this great arc of artificial intelligence development? How would you sum it up?

That’s a very good question. I think there is a myth that we are on an arc, which is to say that we are somehow building increasingly intelligent machines that are ever-more general and are making their way up some kind of hypothetical ladder of intelligence towards human intelligence, and that is really not the case. Artificial intelligence is a collection of tools and techniques for solving certain classes of problems, and for a variety of reasons, as with other areas of computer science, we are constantly expanding the class of problems and solving new and different types of problems using those techniques.

Do you believe that we are heading towards building an AGI?

I see no compelling evidence that we are on the path toward building machines that have what you correctly called “artificial general intelligence.” I think this is a wild extrapolation from the current state of the art, and there is no real reason to believe that the current set of techniques is going to get us there. With that said, there is a great deal of value in the problems that are currently being solved by this generation of AI technology. It will have a significant impact in improving our lives, increasing automation, affecting people’s employment, and raising a lot of interesting questions about what kinds of ethical and social controls we want to put on the deployment of this technology.

Well, that’s interesting. So if someone posed to you the question that Alan Turing posed to himself, which is “Can a machine think?”, what would you say?

In the human sense, I would say no. I encourage you to read Turing’s original paper, “Computing Machinery and Intelligence,” because it is really interesting. It’s not a technical paper; it is mostly him speculating about a couple of far-out ideas. It is very readable, and it is quite interesting to see his point of view and his analysis of this particular subject. But what he said is, “I regard the question of whether machines can think to be too meaningless to deserve serious discussion.” That is actually what he said in the paper. He goes on to say, “However, I believe that in fifty years’ time we will be comfortable using words like ‘thinking’ or ‘intelligence’ as applied to machines.” So what he was talking about was the use of language, and interestingly enough, he was very close to being correct. He proposed the test that now bears his name and said, “I believe that when machines can pass this test, that is when people will be willing to use this kind of terminology to describe them.” But he was not saying that the machines were intelligent. He was really just talking about the way that we would describe the devices that we were creating.

But doesn’t he go on in the same paper to say, in effect, that we are going to have to broaden our understanding of what the word “thinking” even means, because a computer may do something radically different from the way we do it, but in all fairness we should still say that it is thinking? Would you agree with that?

Yes, we are violently agreeing, I think. You said he was expanding the use of the term. That’s exactly right. He was not saying that in fifty years machines would have achieved human-level intelligence. He just meant that we would expand the use of the term, and that is true. Historically, there are many examples of this type of expansion of language. It is perfectly normal. The most interesting one I have run across is the expansion of what “music” means. Before Thomas Edison’s invention of the phonograph, people believed that music meant something that a person created and played, like an instrument. That was making music. When the phonograph came around, it was considered an odd curiosity, and many people did not consider it to be music. It was something new and different. Obviously, over time, you and I talk about listening to music and it seems almost silly to say that is not music, but that is another example of the same kind of expansion of the use of a term. The thing we need to be careful not to do is to think that those people back then were wrong; they simply meant something different by the word than we do. The general public today, particularly on this issue of “Can machines think,” believes that what Turing predicted was that machines would be thinking in a human sense and that when they passed the test, they would be intelligent. That is really not at all what he was saying. Just what you said is exactly correct. He thought machines would engage in an analogous kind of behavior or activity, and that we would simply expand the use of the term because it was the closest description at hand for those ideas, and I think that is true.

You wrote a very well-received book called “Humans Need Not Apply.” What was your thesis in that book?

My basic reason for writing that book was that advances in artificial intelligence are going to create certain kinds of social problems or make them worse, and at least when I wrote the book, nobody was talking about those problems. Today they definitely are. I wanted to point out that AI was a force that would make these particular social problems worse, and that we needed to think about policy and social issues to address those concerns. The two issues are technological unemployment and income inequality. So I explained what artificial intelligence was, and I argued at length that the effect of the technology would be to make these two problems worse, and that we needed more thoughtful policy approaches to address them.

So Keynes is the one who came up with the term “technological unemployment,” and for the most part, we have had steady economic growth in the West and full employment for two hundred years. So to argue that there is going to be some kind of change in that, you would have to make the case that somehow this time it’s different. Do you agree with that, and if so, what is different this time around?

I think that ever since writing that book, I have come around much more to the position you mentioned. I think AI is a force in that direction, but when you look at all the forces and sum them up, what we are seeing is a continuation of a process that has gone on continuously since the start of the industrial revolution. Perhaps it will accelerate somewhat by virtue of new technology, and artificial intelligence in particular, but the dominant forces suggest that it is probably not going to be the kind of labor apocalypse that some people think and write about. The threat is that fifty percent of all human jobs might go away in the next thirty years; pick your number. It sounds terrible until you really understand the way labor markets work and realize that it is probably true that fifty percent, or at least some significant percentage, of the work people did thirty or forty years ago has already gone away. What happens is that we automate certain tasks, and that either makes some people more productive or puts other people out of work in given professions. In some professions it puts a large number of people, or almost everybody, out of work, but because demand for products and services is elastic, the result is usually a significant increase in demand that compensates either in that industry or, more often, in other industries. To put that more plainly, people have more money to spend, so they buy things, and that increases employment in other areas.

So what is likely to happen in the future are two things. One is that as these automation technologies come into use, we are going to see an increase in demand for other kinds of products and services that will employ people in other industries. In addition, we are going to see new kinds of professions come up that didn’t exist previously, and those will employ people as well. When you layer that on top of basic demographic trends in the workforce, at least here in the United States, it is unlikely that we are going to have a very big problem. Now, with all of that said, there are some people who are going to lose their jobs. This is a lot of what drove the recent election: people who are under-employed or are not as well-off as their parents. We need good means for supporting them while they are retrained, or for otherwise retiring them in some fashion from the labor force.

Is your prognosis for the next twenty years in the United States that we will see falling wages and rising unemployment, or the opposite, full employment at good wages?

The next twenty years in the United States? The problem is going to keep shifting. When I wrote the book that you mentioned, “Humans Need Not Apply,” unemployment was very high, it was a big problem in the United States, and it wasn’t budging even though the economy was moving ahead. So there was a lot of blame on technology, which I think made logical sense at the time. Today, I think by all reasonable measures we have full employment. Now, the skills of the work force don’t necessarily match well with the jobs that are available. A lot of jobs are going begging, and a lot of people can’t get jobs or are under-employed, which drives down wages. Your question is really going to be answered by our public policies for economics and growth, so it is very hard to project. I would have given you one answer two weeks ago and a different answer today because of the election. Whether we will see falling wages and unemployment, or rising wages and more employment, is more a function of government economic policy than of anything to do with artificial intelligence or technology.

What would you say is your most controversial or most uncommon belief about AI?

The most surprising one is that I believe there is a very large gulf between the public perception of what artificial intelligence is and what it means, and the reality of what is actually occurring inside the field. To put it in a quick summary, AI has a big PR problem. This is potentially going to cause trouble, and we need to do something about it, but there is nobody on point to recognize and fix this particular problem, so it is a bit of a replay of the tragedy of the commons. Everybody wants attention and accolades for their work, and the way you get that is by reinforcing and supporting a lot of wacky and crazy ideas: that we are summoning the beast, that we are building machines that are somehow going to reach human-level intelligence, look at us eye to eye, and then maybe decide they are going to kill us. That is the public perception, and it is difficult for me to exaggerate how universal I find this opinion. I just did an AMA (ask me anything) on Reddit, and that’s mostly what you get questions about. That is the universal view, and it is being driven by a series of forces in the press and the entertainment media, and by pundits who benefit from promoting this ridiculous proposition, and it is simply not the case. The evidence for it is negligible at best, and I think it is misleading people. People are concerned about the ethics of self-driving cars, and about whether we should put controls on these developments before they somehow come alive and take over. This is all misguided. It is sucking the oxygen out of the real discussions we should be having, which is what you and I were just talking about. How does this affect labor? How does this affect unemployment? What does it mean for income inequality? We haven’t talked about that, but I think it is a real factor. Those are the things that are going to make a difference in our lives. The rest is just flights of fancy.

I would say the reason people are concerned is that you have incredibly smart people, Elon Musk, Bill Gates, and Stephen Hawking, giving dire, catastrophic warnings. I mean, you paraphrased Musk when he was talking about summoning the demon.

Well, therein lies the problem. Let me give you the real truth. These are very smart guys, there is no question about it, but none of them are experts in this field. Like the whole fake news problem that you are probably well aware of, they are repeating, I assume with good intentions, the questionable warnings that they are reading about and hearing from other people. They don’t have any direct involvement in this or any deep understanding of the technology. They are just reflecting things that other people have said. The problem is that, like fake news, these statements get far more attention than they deserve, which just reinforces the idea, because how could they possibly be wrong? We have this idea in our culture, of course, that anybody who is really rich or really famous doesn’t make mistakes. Well, I am sorry, but that idea does not stand up to scrutiny any more than the idea that global warming is a hoax perpetrated by China. So I respect all three of the gentlemen that you mentioned, but in this case, I respectfully disagree. Now, if you spend time with workers in the field, or go to a university and ask this question, “Do you think these things are correct?”, the result is really surprising. You can walk from office to office through the artificial intelligence lab at Stanford asking this question, and you will get almost universally the following answer: “Well, I read that stuff. I don’t see how it relates to what I am doing. Personally, I don’t see it, but they are smart guys. Maybe they know something I don’t.” So within the field, I think there is a silent majority who believe this is nonsense, or at least that there is no real concern about it or genuine support for it.

I have watched this movie for thirty years. That’s why I feel pretty confident in expressing some of these opinions. I have seen two previous waves of AI technology with exactly the same pattern. You had a couple of widely-quoted, prominent people making over-reaching claims about what was going on in the field and what would happen based upon the dominant technology of the day, and none of it came to pass. In fact, the technology today is significantly different; the approaches that people thought would be the basis of generally intelligent machines have largely been discredited. So we have the same words, “artificial intelligence,” for a whole bunch of different technologies, and right there, on the face of it, the idea that we are making continuous progress toward general intelligence is silly. The basic problem, and you are probably well aware of this, is that people overgeneralize from a series of very different examples. Every time you read the press: “Now a machine can do this. Now a machine can do that.” The analogy in people’s minds is that this is like a child growing up: now he has learned to ride a bike, now he can eat with a spoon, now he can do this. But it is not the same technology that is being used to solve all of those different problems. It is a little bit like concluding that we are going to have a home kitchen robot that can do anything from the fact that I have a toaster and an oven and a microwave and a refrigerator, saying, “Oh my God, what will technology do next?” and conjuring up Rosie the Robot. It is just not the case. So the main point I try to communicate is that we have to sober up about this. There is real value in what is going on, but we are not making ever-more general versions of the programs we had five or ten or twenty years ago. That is not at all what is happening on a technological level.

So talk to me about social issues for a minute. We have had a period of stagnant wages and rising corporate profits over the last sixteen years, since 2000. Certainly the financial benefits from technology have accrued primarily to the wealthy. What do you believe are the mechanisms that bring that about? Why does that happen?

Well, I will give you a theory that I think is pretty strong, and I cover this in both of my books. Automation is the substitution of capital for labor. Karl Marx was fundamentally correct when he said that in the struggle between capital and labor, capital ultimately has the upper hand. The people with the money are the ones who can afford to build the automation; therefore, they are the ones who will gain the benefits of that automation. So the rich get richer and everybody else gets left behind. I see this in detail in my own life all the time, which is why it is not just some kind of philosophical or theoretical thing. I can point to things that go on in my own life, and to why the people that I deal with and you deal with continue to get wealthy while “disrupting industries,” which basically means putting people out of work. Now, there is nothing inherently wrong with that if it is layered within a system that distributes the benefits more widely. That is just not what we have got, so I really get worried.

Let me give you a very current example of this. Everybody agrees that our country needs to invest in infrastructure. We have decaying infrastructure, we need new infrastructure, and we have starved it for a couple of decades. Alright, that is in the past; we do need to fix it. Everybody on both sides of the proverbial aisle agrees that needs to happen. Now, the latest proposal I have seen, and this is of course just people floating ideas in the papers, is to privatize it: give private parties tax breaks for improving infrastructure. That sounds good to a committed capitalist, but it doesn’t actually solve the problem, for two reasons. First, a lot of the infrastructure we need to invest in does not have a return on capital, so nobody is going to take up that challenge. It has much more distributed benefits that are real and economic, but there is no way to capture them as a specific return on capital. Second, the infrastructure winds up being owned by private interests, and that is a bad thing because we lose control over it, and it won’t necessarily be managed in a way that benefits society or can be used by everybody to help spread equal opportunity and improve their lives.

What would you suggest as a policy remedy? You have used aspirational language about distributing the benefits more widely and improving infrastructure, but how do you think we should actually do it?

I don’t want to pretend that I know the one true answer. I don’t, and I could easily be wrong, but I do study these issues, and I expect that you do, too. There is a difference between fact and fiction, between practical policies and a bunch of abstract theory, and there are two basic problems that I see. First, we don’t make a distinction in public discussion between government investment and government spending, but there is a very big difference between those two things, just as there is in your personal life. A bigger budget doesn’t necessarily mean that the government is giving away money on current consumption. It may mean that we are doing things which will have significant economic benefits in the future. So the first thing is that we need to get that into the national conversation. It is perfectly reasonable for the government to run major deficits and to spend and borrow to build infrastructure, as long as we do it in a reasonably smart way. The second thing is that today interest rates are near zero, so it makes no sense not to borrow. We should be borrowing hand over fist and running up the deficit, precisely because the problem isn’t the size of the debt; the problem is the cost of servicing that debt, and that is also missing from the public discussion. At a near-zero interest rate, even a very large debt costs almost nothing to carry. We shouldn’t be looking at how much money the country owes; we should be looking at what it costs to service our debt and how that is likely to change in the future. So there are sensible policies that can be put in place which will have real, positive benefits for society. Everybody agrees on what we want to do; there just is not a sensible public discussion about the techniques for doing it.
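
To make the arithmetic of that point concrete, here is a minimal sketch (a hypothetical illustration; the debt figure and interest rates below are assumptions for illustration, not numbers from the interview) of how the same principal carries very different annual servicing costs at different rates:

```python
# Illustrative only: the burden of a debt is its annual servicing cost,
# which depends on the interest rate, not just the principal owed.

def annual_service_cost(principal: float, rate: float) -> float:
    """Yearly interest payment on a debt at a given rate."""
    return principal * rate

debt = 20e12  # assumed $20 trillion principal, for illustration

for rate in (0.005, 0.02, 0.05):  # 0.5%, 2%, 5%
    cost = annual_service_cost(debt, rate)
    print(f"rate {rate:.1%}: service cost ${cost / 1e9:,.0f}B per year")
```

The same principal costs ten times as much to carry at 5% as at 0.5%, which is the sense in which the servicing cost, not the headline debt figure, is the relevant number.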

So tell us about the new book.

My new book is potentially relevant to a wide swath of your audience. It is called “Artificial Intelligence: What Everyone Needs to Know,” and it is part of a series from Oxford University Press called “What Everyone Needs to Know.” The book is a concise explanation of most of the questions and issues around artificial intelligence, in a straightforward FAQ (frequently asked questions) format: a set of questions and answers that people often ask about artificial intelligence. It covers the nature of the technology; the intellectual history of the field, what the ideas are and how they developed; a lot about current applications; and many of the ways that AI will affect people in terms of labor, the economy, etc. It also covers subjects such as how AI will affect legal theory and the administration of justice. Then I explore many of the common myths that people hear about, like, “Will I be able to upload my mind into a machine in the future?” and “Is there going to be a singularity?” All the highly visible issues that people tend to be concerned about are in the book. So if you want a brief, easy-to-read introduction to many of the key issues, both technological and societal, that surround the field of artificial intelligence, this is a quick and easy way to get it. You should be able to read the book in less than two hours.

It is written for an intelligent reader. It is not dumbed down, but it is not technical, and it requires no particular background. If you can read The New York Times or The Economist, you can read this book, and I hope that you will come away with a much better and, frankly, more sober understanding of the value and the risks associated with artificial intelligence technology.

But I assume, when you say risks, you are optimistic about the technology; you are just concerned about people’s irrational fears of it.

Well, I think the irrational fears are overblown. However, the technology is so powerful that there are real risks, and there are going to be areas where we will want to put very real controls in place in order to avoid some highly negative outcomes.

Such as?

Well, for example, artificial intelligence can enable new classes of very efficient killing machines for war, and just like the invention of chariots, machine guns, and bombs dropped from airplanes, this may transform the nature of military conflict, unfortunately in ways that don’t necessarily advantage wealthy societies like the United States. ISIS might not have a nuclear weapon, but they may very well be able to build a machine that simply shoots every living thing in sight and put it in the middle of a shopping mall in such a way that it is extremely difficult to disable. Now, I am not talking about The Terminator; that is not the image I have in mind. But the technology in your cell phone, combined with some fairly straightforward robotics to aim and fire a weapon, could devastate a public space, and organizations that today cannot afford these kinds of destructive capabilities will be able to acquire them in ways we really can’t foresee. So that is an example of how AI technology may transform warfare. There are many others.

We may also need to place controls on when and how AI systems can act as an individual’s agent. Today, for example, if you try to buy tickets to a popular concert from a service like Ticketmaster, you may notice that the minute they go on sale everything is snapped up, and even if you hit the button as fast as you could, you end up with two seats in the rafters. The reason that happens is not that there are thousands of other people buying tickets; in many instances, there are thousands of robots buying tickets. If people could see that they were competing for these tickets against robotic devices working on behalf of ticket scalpers, they would be up in arms and the entire practice would be outlawed. Now, as we move to a future of what I might call “flexible robotics,” this is going to be far more visible. If our sidewalks are crowded with little gadgets making deliveries so that you can’t safely walk, or your self-driving car is parking itself and stealing parking spaces from cars that have people in them, these are significant social issues that are going to come to the forefront, and they are going to cause us to engage in various types of regulation or controls over the deployment and use of this technology.

The big challenge for artificial intelligence for the next few decades is how to ensure that the systems and machines that we build will integrate with human society and abide by the commonly accepted social conventions.

Join us at Gigaom AI Now in San Francisco, February 15-16th where Jerry Kaplan will speak more on the subject of AI.
