Architects of Intelligence

Chapter 3. STUART J. RUSSELL

Once an AGI gets past kindergarten reading level, it will shoot beyond anything that any human being has ever done, and it will have a much bigger knowledge base than any human ever has.

PROFESSOR OF COMPUTER SCIENCE, UNIVERSITY OF CALIFORNIA, BERKELEY

Stuart J. Russell is widely recognized as one of the world’s leading contributors in the field of artificial intelligence. He is a Professor of Computer Science and Director of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley. Stuart is the co-author of the leading AI textbook, Artificial Intelligence: A Modern Approach, which is in use at over 1,300 colleges and universities throughout the world.

MARTIN FORD: Given that you co-wrote the standard textbook on AI in use today, I thought it might be interesting if you could define some key AI terms. What is your definition of artificial intelligence? What does it encompass? What types of computer science problems would be included in that arena? Could you compare it or contrast it with machine learning?

STUART J. RUSSELL: Let me give you, shall we say, the standard definition of artificial intelligence, which is similar to the one in the book and is now quite widely accepted: An entity is intelligent to the extent that it does the right thing, meaning that its actions are expected to achieve its objectives. The definition applies to both humans and machines. This notion of doing the right thing is the key unifying principle of AI. When we break this principle down and look deeply at what is required to do the right thing in the real world, we realize that a successful AI system needs some key abilities, including perception, vision, speech recognition, and action.

These abilities help us to define artificial intelligence. We’re talking about the ability to control robot manipulators, and everything that happens in robotics. We’re talking about the ability to make decisions, to plan, and to problem-solve. We’re talking about the ability to communicate, and so natural language understanding also becomes extremely important to AI.

We’re also talking about an ability to internally know things. It’s very hard to function successfully in the real world if you don’t actually know anything. To understand how we know things, we enter the scientific field that we call knowledge representation. This is where we study how knowledge can be stored internally and then processed by reasoning algorithms, such as automated logical deduction and probabilistic inference algorithms.

Then there is learning. Learning is a key ability for modern artificial intelligence. Machine learning has always been a subfield of AI, and it simply means improving your ability to do the right thing as a result of experience. That could be learning how to perceive better by seeing labeled examples of objects. That could also mean learning how to reason better by experience—such as discovering which reasoning steps turn out to be useful for solving a problem, and which reasoning steps turn out to be less useful.

AlphaGo, for example, is a modern AI Go program that recently beat the best human world-champion players, and it really does learn. It learns how to reason better from experience. As well as learning to evaluate positions, AlphaGo learns how to control its own deliberations so that it reaches high-quality decisions more quickly, with less computation.

MARTIN FORD: Can you also define neural networks and deep learning?

STUART J. RUSSELL: Yes, in machine learning one of the standard techniques is called “supervised learning,” where we give the AI system a set of examples of a concept, along with a description and a label for each example in the set. For example, we might have a photograph, where we’ve got all the pixels in the image, and then we have a label saying that this is a photograph of a boat, or of a Dalmatian dog, or of a bowl of cherries. In supervised learning for this task, the goal is to find a predictor, or a hypothesis, for how to classify images in general.

From these supervised training examples, we try to give an AI the ability to recognize pictures of, say, Dalmatian dogs, and the ability to predict how other pictures of Dalmatian dogs might look.

One way of representing the hypothesis, or the predictor, is a neural net. A neural net is essentially a complicated circuit with many layers. The input into this circuit could be the values of pixels from pictures of Dalmatian dogs. Then, as those input values propagate through the circuit, new values are calculated at each layer of the circuit. At the end, we have the outputs of the neural network, which are the predictions about what kind of object is being recognized.

So hopefully, if there’s a Dalmatian dog in our input image, then by the time all those numbers and pixel values propagate through the neural network and all of its layers and connections, the output indicator for a Dalmatian dog will light up with a high value, and the output indicator for a bowl of cherries will have a low value. We then say that the neural network has correctly recognized a Dalmatian dog.
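To make that picture concrete, here is a minimal sketch in Python of the forward pass described above: input values propagate through successive layers of weighted connections, and the output units produce a score for each class. The layer sizes, class names, and random weights are invented purely for illustration; an untrained network like this produces meaningless scores until its connection strengths are learned, as discussed next.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(pixels, weights):
    """Propagate an input vector through successive layers of the circuit."""
    activation = pixels
    for W in weights[:-1]:
        activation = np.maximum(0.0, W @ activation)  # hidden layers (ReLU units)
    return weights[-1] @ activation                   # one output score per class

# Hypothetical sizes: a 100-pixel image, two hidden layers, and three classes
# (say "Dalmatian", "bowl of cherries", "boat").
layer_sizes = [100, 32, 32, 3]
weights = [rng.normal(0.0, 0.1, (n_out, n_in))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

image = rng.random(100)           # stand-in for the pixel values of a photograph
print(forward(image, weights))    # the largest of the three scores "lights up"
```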

MARTIN FORD: How do you get a neural network to recognize images?

STUART J. RUSSELL: This is where the learning process comes in. The circuit has adjustable connection strengths between all its connections, and what the learning algorithms do is adjust those connection strengths so that the network tends to give the correct predictions on the training examples. Then if you’re lucky, the neural network will also give correct predictions on new images that it hasn’t seen before. And that’s a neural network!
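Continuing the toy example above, here is a minimal sketch of that learning step: start from random connection strengths, measure the prediction error on labeled examples, and nudge the weights in the direction that reduces the error. The data, network size, and learning rate are arbitrary choices for illustration; real systems use learning libraries, but the principle is the same.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy supervised data: the label is 1 when the two input features sum to more than 1.
X = rng.random((200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

W1 = rng.normal(0.0, 0.5, (8, 2))   # input -> hidden connection strengths
W2 = rng.normal(0.0, 0.5, (1, 8))   # hidden -> output connection strengths
lr = 0.5                            # learning rate

for step in range(2000):
    # Forward pass: compute predictions with the current connection strengths.
    h = np.maximum(0.0, X @ W1.T)               # hidden activations (ReLU)
    p = 1.0 / (1.0 + np.exp(-(h @ W2.T)))       # predicted probability of label 1
    p = p.ravel()

    # Backward pass: how should each connection strength change to reduce error?
    grad_out = (p - y) / len(y)                 # gradient of the cross-entropy loss
    dW2 = grad_out @ h                          # gradient for hidden -> output weights
    dh = np.outer(grad_out, W2.ravel()) * (h > 0.0)
    dW1 = dh.T @ X                              # gradient for input -> hidden weights

    # Adjust the connection strengths a little in the downhill direction.
    W2 -= lr * dW2[None, :]
    W1 -= lr * dW1

accuracy = ((p > 0.5) == (y == 1.0)).mean()
print(f"training accuracy after learning: {accuracy:.2f}")
```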

Going one step further, deep learning is where we have neural networks that have many layers. There is no precise threshold at which a network counts as deep, but we would usually say that two or three layers is not a deep learning network, while four or more layers is deep learning.

Some deep learning networks get up to one thousand layers or more. By having many layers in deep learning, we can represent a very complex transformation between the input and output, by a composition of much simpler transformations, each represented by one of those layers in the network.

The deep learning hypothesis suggests that many layers make it easier for the learning algorithm to find a predictor, to set all the connection strengths in the network so that it does a good job.

We are just beginning now to get some theoretical understanding of when and why the deep learning hypothesis is correct, but to a large extent, it’s still a kind of magic, because it really didn’t have to happen that way. There seems to be a property of images in the real world, and there is some property of sound and speech signals in the real world, such that when you connect that kind of data to a deep network it will—for some reason—be relatively easy to learn a good predictor. But why this happens is still anyone’s guess.

MARTIN FORD: Deep learning is receiving enormous amounts of attention right now, and it would be easy to come away with the impression that artificial intelligence is synonymous with deep learning. But deep learning is really just one relatively small part of the field, isn’t it?

STUART J. RUSSELL: Yes, it would be a huge mistake for someone to think that deep learning is the same thing as artificial intelligence, because the ability to distinguish Dalmatian dogs from bowls of cherries is useful but it is still only a very small part of what we need to give an artificial intelligence in order for it to be successful. Perception and image recognition are both important aspects of operating successfully in the real world, but deep learning is only one part of the picture.

AlphaGo, and its successor AlphaZero, created a lot of media attention around deep learning with stunning advances in Go and Chess, but they’re really a hybrid of classical search-based AI and a deep learning algorithm that evaluates each game position that the classical AI system searches through. While the ability to distinguish between good and bad positions is central to AlphaGo, it cannot play world-champion-level Go just by deep learning.

Self-driving car systems also use a hybrid of classical search-based AI and deep learning. Self-driving cars are not just pure deep learning systems, because that does not work very well. Many driving situations need classical rules for an AI to be successful. For example, if you’re in the middle lane and you want to change lanes to the right, and there’s someone trying to pass you on the inside, then you should wait for them to go by first before you pull over. For road situations that require lookahead, because no satisfactory rule is available, it may be necessary to imagine various actions that the car could take as well as the various actions that other cars might take, and then decide if those outcomes are good or bad.

While perception is very important, and deep learning lends itself well to perception, there are many different types of ability that we need to give an AI system. This is particularly true when we’re talking about activities that span over long timescales, like going on a vacation. Or very complex actions like building a factory. There’s no possibility that those kinds of activities can be orchestrated by purely deep learning black-box systems.

Let me take the factory example to close my point about the limitations of deep learning here. Let’s imagine we try to use deep learning to build a factory. (After all, we humans know how to build a factory, don’t we?) So, we’ll take billions of previous examples of building factories to train a deep learning algorithm; we’ll show it all the ways that people have built factories. We take all that data and we put it into a deep learning system and then it knows how to build factories. Could we do that? No, it’s just a complete pipe dream. There is no such data, and it wouldn’t make any sense, even if we had it, to try to build factories that way.

We need knowledge to build factories. We need to be able to construct plans. We need to be able to reason about physical obstructions and the structural properties of the buildings. We can build AI systems to work out these real-world problems, but it isn’t achieved by deep learning. Building a factory requires a different type of AI altogether.

MARTIN FORD: Are there recent advances in AI that have struck you as being more than just incremental? What would you point to that is at the absolute forefront of the field right now?

STUART J. RUSSELL: It’s a good question, because a lot of the things that are in the news at the moment are not really conceptual breakthroughs, they are just demos. The chess victory of Deep Blue over Kasparov is a perfect example. Deep Blue was basically a demo of algorithms that were designed 30 years earlier and had been gradually enhanced and then deployed on increasingly powerful hardware, until they could beat a world chess champion. But the actual conceptual breakthroughs behind Deep Blue were in how to design a chess program: how the lookahead works; the alpha-beta algorithm for reducing the amount of searching that had to be done; and some of the techniques for designing the evaluation functions. So, as is often the case, the media described the victory of Deep Blue over Kasparov as a breakthrough when in fact, the breakthrough had occurred decades earlier.

The same thing is still happening today as well. For instance, a lot of the recent AI reports about perception and speech recognition, and headlines about dictation accuracy being close to or exceeding human dictation accuracy, are all very impressive practical engineering results, but they are again demos of conceptual breakthroughs that happened much earlier—from the early deep learning systems and convolutional networks that date right back to the late ’80s and early ’90s.

It’s been something of a surprise that we already had the tools decades ago to do perception successfully; we just weren’t using them properly. By applying modern engineering to older breakthroughs, by collecting large datasets and processing them across very large networks on the latest hardware, we’ve managed to create a lot of interest recently in AI, but these have not necessarily been at the real forefront of AI.

MARTIN FORD: Do you think DeepMind’s AlphaZero is a good example of a technology that’s right on the frontier of AI research?

STUART J. RUSSELL: I think AlphaZero was interesting. To me, it was not particularly a surprise that you could use the same basic software that played Go to also play chess and Shogi at world-champion level. So, it was not at the forefront of AI in that sense.

I mean, it certainly gives you pause when you think that AlphaZero, in the space of less than twenty-four hours, learned to play at superhuman levels in three different games using the same software. But that’s more a vindication of an approach to AI that says that if you have a clear understanding of the problem class, especially deterministic, two-player, turn-taking, fully-observable games with known rules, then those kinds of problems are amenable to a well-designed class of AI algorithms. And these algorithms have been around for some time—algorithms that can learn good evaluation functions and use classical methods for controlling search.

It’s also clear that if you want to extend those techniques to other classes of problems, you’re going to have to come up with different algorithmic structures. For example, partial observability—meaning that you can’t see the board, so to speak—requires a different class of algorithm. There’s nothing AlphaZero can do to play poker, for example, or to drive a car. Those tasks require an AI system that can estimate things that it can’t see. AlphaZero assumes that the pieces on the board are the pieces on the board, and that’s that.

MARTIN FORD: There was also a poker-playing AI system developed at Carnegie Mellon University, called Libratus. Did they achieve a genuine AI breakthrough there?

STUART J. RUSSELL: Carnegie Mellon’s Libratus poker AI was another very impressive hybrid AI example: it was a combination of several different algorithmic contributions that were pieced together from research that’s happened over the last 10 or 15 years. There has been a lot of progress in dealing with games like poker, which are games of partial information. One of the things that happens with partial-information games, like poker, is that you must have a randomized playing strategy because if, say, you always bluff, then people figure out that you’re bluffing and then they call your bluff. But if you never bluff, then you can never steal a game from your opponent when you have a weak hand. It’s long been known, therefore, that for these kinds of card games, you should randomize your playing behavior, and bluff with a certain probability.

The key to playing poker extremely well is adjusting those probabilities for how to bet; that is, how often to bet more than your hand really justifies, and how often to bet less. The calculations for these probabilities are feasible for an AI, and they can be done very exactly, but only for small versions of poker, for example where there are only a few cards in a pack. It’s very hard for an AI to do these calculations accurately for the full game of poker. As a result, over the decade or so that people have been working on scaling up poker, we’ve gradually seen improvements in the accuracy and efficiency of how to calculate these probabilities for larger and larger versions of poker.

So yes, Libratus is another impressive modern AI application. But whether the techniques are at all scalable, given that it has taken a decade to go from one version of poker to another slightly larger version of poker, I’m not convinced. I think there’s also a reasonable question about how much those game-theoretic ideas in poker extend into the real world. The world is certainly full of other agents, so in principle our interactions ought to be game-theoretic, and yet we’re not aware of doing much randomization in our normal day-to-day lives.
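The bluffing arithmetic alluded to above can be made concrete with a textbook toy model, sketched below: in a single bet-or-fold situation, you bluff just often enough that the opponent cannot profit either by always calling or by always folding. The pot and bet sizes are arbitrary, and this is nothing like the scale of what Libratus solves; it simply shows why the right play is a probability rather than a fixed rule.

```python
def indifference_frequencies(pot: float, bet: float):
    """Indifference-point frequencies for a toy bet-or-fold bluffing game.

    bluff_fraction: the share of the bettor's betting range that should be
        bluffs, chosen so that the opponent's call exactly breaks even.
    call_fraction: how often the opponent must call so that bluffing
        (risking `bet` to win `pot`) exactly breaks even for the bettor.
    """
    bluff_fraction = bet / (pot + 2 * bet)
    call_fraction = pot / (pot + bet)
    return bluff_fraction, call_fraction

# With a pot-sized bet the classic answers fall out: roughly one third of the
# hands you bet should be bluffs, and the opponent should call half the time.
print(indifference_frequencies(pot=100, bet=100))   # approximately (0.333, 0.5)
```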

MARTIN FORD: Self-driving cars are one of the highest-profile applications of AI. What is your estimate for when fully autonomous vehicles will become a truly practical technology? Imagine you’re in a random place in Manhattan, and you call up an Uber, and it’s going to arrive with no one in it, and then it will take you to another random place that you specify. How far off is that realistically, do you think?

STUART J. RUSSELL: Yes, the timeline for self-driving cars is a concrete question, and it’s also an economically important question because companies are investing a great deal in these projects.

It is worth noting that the first actual self-driving car, operating on a public road, was 30 years ago! That was Ernst Dickmanns’ demo in Germany of a car driving on the freeway, changing lanes, and overtaking other vehicles. The difficulty of course is trust: while you can run a successful demonstration for a short time, you need an AI system to run for decades with no significant failures in order to qualify as a safe vehicle.

The challenge, then, is to build an AI system that people are willing to trust themselves and their kids to, and I don’t think we’re quite there.

Results from vehicles that are being tested in California at the moment indicate that humans still feel they must intervene as frequently as once every mile of road testing. Some projects, such as Waymo, the Google subsidiary working on this, have more respectable records; but they are still, I think, several years away from being able to do this in a wide range of conditions.

Most of these tests have been conducted in good conditions on well-marked roads. And as you know, when you’re driving at night and it’s pouring with rain, and there are lights reflecting off the road, and there may also be roadworks, and they might have moved the lane markers, and so on ... if you had followed the old lane markers, you’d have driven straight into a wall by now. I think in those kinds of circumstances, it’s really hard for AI systems. That’s why I think that we’ll be lucky if the self-driving car problem is solved sufficiently in the next five years.

Of course, I don’t know how much patience the major car companies have. I do think everyone is committed to the idea that AI-driven cars are going to come, and of course the major car companies feel they must be there early or miss a major opportunity.

MARTIN FORD: I usually tell people a 10-15-year time frame when they ask me about self-driving cars. Your estimate of five years seems quite optimistic.

STUART J. RUSSELL: Yes, five years is optimistic. As I said, I think we’ll be lucky if we see driverless cars in five years, and it could well be longer. One thing that is clear, though, is that many of the early ideas of fairly simple architectures for driverless cars are now being abandoned, as we gain more experience.

In the early versions of Google’s car, they had chip-based vision systems that were pretty good at detecting other vehicles, lane markers, obstacles, and pedestrians. Those vision systems passed that kind of information along, effectively in a sort of logical form, and then the controller applied logical rules telling the car what to do. The problem was that every day, Google found themselves adding new rules. Perhaps they would go into a traffic circle—or a roundabout, as we call them in England—and there would be a little girl riding her bicycle the wrong way around the traffic circle. They didn’t have a rule for that circumstance. So, then they had to add a new one, and so on, and so on. I think that there is probably no possibility that this type of architecture is ever going to work in the long run, because there are always more rules that should be encoded, and it can be a matter of life and death on the road if a particular rule is missing.

By contrast, we don’t play chess or Go by having a bunch of rules specific to one exact position or another—for instance, saying if the person’s king is here and their rook is there, and their queen is there, then make this move. That’s not how we write chess programs. We write chess programs by knowing the rules of chess and then examining the consequences of various possible actions.

A self-driving car AI must deal with unexpected circumstances on the road in the same way, not through special rules. It should use this form of lookahead-based decision-making when it doesn’t have a ready-made policy for how to operate in the current circumstance. If an AI doesn’t have this approach as a fallback, then it’s going to fall through the cracks in some situations and fail to drive safely. That’s not good enough in the real world, of course.
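The contrast drawn here between hand-written rules and lookahead can be sketched as follows: a depth-limited search over possible actions, ours and the other agent’s, choosing the action whose worst-case outcome evaluates best. This is just the generic minimax idea from game-playing programs, not any actual vehicle’s planner; the `actions`, `result`, and `evaluate` functions are placeholders for a real model of the situation.

```python
def lookahead(state, depth, our_turn, actions, result, evaluate):
    """Depth-limited minimax value of `state`: assume we pick the best action
    and the other agent picks the worst one for us."""
    if depth == 0:
        return evaluate(state)
    values = [lookahead(result(state, a), depth - 1, not our_turn,
                        actions, result, evaluate)
              for a in actions(state)]
    if not values:                     # no available actions: evaluate as-is
        return evaluate(state)
    return max(values) if our_turn else min(values)

def best_action(state, depth, actions, result, evaluate):
    """Pick the action whose resulting state has the highest lookahead value."""
    return max(actions(state),
               key=lambda a: lookahead(result(state, a), depth - 1, False,
                                       actions, result, evaluate))

# Tiny made-up example: each side adds 1 or 2 to a counter, and we prefer
# states where the total is even. Four plies of lookahead pick a first move.
if __name__ == "__main__":
    actions = lambda s: [1, 2]
    result = lambda s, a: s + a
    evaluate = lambda s: s if s % 2 == 0 else -s
    print(best_action(0, depth=4, actions=actions, result=result, evaluate=evaluate))
```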

MARTIN FORD: You’ve noted the limitations in current narrow or specialized AI technology. Let’s talk about the prospects for AGI, which promises to someday solve these problems. Can you explain exactly what Artificial General Intelligence is? What does AGI really mean, and what are the main hurdles we need to overcome before we can achieve AGI?

STUART J. RUSSELL: Artificial General Intelligence is a recently coined term, and it really is just a reminder of our real goals in AI—a general-purpose intelligence much like our own. In that sense, AGI is actually what we’ve always called artificial intelligence. We’re just not finished yet, and we have not created AGI yet.

The goal of AI has always been to create general-purpose intelligent machines. AGI is also a reminder that the “general-purpose” part of our AI goals has often been neglected in favor of more specific subtasks and application tasks. This is because it’s been easier so far to solve subtasks in the real world, such as playing chess. If we look again at AlphaZero for a moment, it generally works within the class of two-player deterministic fully-observable board games. However, it is not a general algorithm that can work across all classes of problems. AlphaZero can’t handle partial observability; it can’t handle unpredictability; and it assumes that the rules are known. AlphaZero can’t handle unknown physics, as it were.

Now if we could gradually remove those limitations around AlphaZero, we’d eventually have an AI system that could learn to operate successfully in pretty much any circumstance. We could ask it to design a new high-speed watercraft, or to lay the table for dinner. We could ask it to figure out what’s wrong with our dog and it should be able to do that—perhaps even by reading everything about canine medicine that’s ever been known and using that information to figure out what’s wrong with our dog.

This kind of capability is thought to reflect the generality of intelligence that humans exhibit. And in principle a human being, given enough time, could also do all of those things, and so very much more. That is the notion of generality that we have in mind when we talk about AGI: a truly general-purpose artificial intelligence.

Of course, there may be other things that humans can’t do that an AGI will be able to do. We can’t multiply million-digit numbers in our heads, and computers can do that relatively easily. So, we assume that in fact, machines may be able to exhibit greater generality than humans do.

However, it’s also worth pointing out that it’s very unlikely that there will ever be a point where machines are comparable to human beings in the following sense. As soon as machines can read, then a machine can basically read all the books ever written; and no human can read even a tiny fraction of all the books that have ever been written. Therefore, once an AGI gets past kindergarten reading level, it will shoot beyond anything that any human being has ever done, and it will have a much bigger knowledge base than any human ever has.

And so, in that sense and many other senses, what’s likely to happen is that machines will far exceed human capabilities along various important dimensions. There may be other dimensions along which they’re fairly stunted and so they’re not going to look like humans in that sense. This doesn’t mean that a comparison between humans and AGI machines is meaningless though: what will matter in the long run is our relationship with machines, and the ability of the AGI machine to operate in our world.

There are dimensions of intelligence (for example, short-term memory) where humans are actually exceeded by apes; but nonetheless, there’s no doubt which of the species is dominant. And if you are a gorilla or a chimpanzee, your future is entirely in the hands of humans. That is because, despite our fairly pathetic short-term memories compared to gorillas and chimpanzees, we are able to dominate them thanks to our decision-making capabilities in the real world.

We will undoubtedly face this same issue when we create AGI: how to avoid the fate of the gorilla and the chimpanzee, and not cede control of our own future to that AGI.

MARTIN FORD: That’s a scary question. Earlier, you talked about how conceptual breakthroughs in AI often run decades ahead of reality. Do you see any indications that the conceptual breakthroughs for creating AGI have already been made, or is AGI still far in the future?

STUART J. RUSSELL: I do feel that many of the conceptual building blocks for AGI are already here, yes. We can start to explore this question by asking ourselves: “Why can’t deep learning systems be the basis for AGI, what’s wrong with them?”

A lot of people might answer our question by saying: “Deep learning systems are fine, but we don’t know how to store knowledge, or how to do reasoning, or how to build more expressive kinds of models, because deep learning systems are just circuits, and circuits are not very expressive after all.”

And for sure, it’s because circuits are not very expressive that no one thinks about writing payroll software using circuits. We instead use programming languages to create payroll software. Payroll software written using circuits would be billions of pages long and completely useless and inflexible. By comparison, programming languages are very expressive and very powerful. In fact, they are the most powerful things that can exist for expressing algorithmic processes.

In fact, we already know how to represent knowledge and how to do reasoning: we have developed computational logic over quite a long time now. Even predating computers, people were thinking about algorithmic procedures for doing logical reasoning.

And so, arguably, some of the conceptual building blocks for AGI have already been here for decades. We just haven’t figured out yet how to combine those with the very impressive learning capacities of deep learning.

The human race has also already built a technology called probabilistic programming, which I will say does combine learning capabilities with the expressive power of logical languages and programming languages. Mathematically speaking, such a probabilistic programming system is a way of writing down probability models which can then be combined with evidence, using probabilistic inference to produce predictions.

In my group we have a language called BLOG, which stands for Bayesian Logic. BLOG is a probabilistic modeling language, so you can write down what you know in the form of a BLOG model. You then combine that knowledge with data, and you run inference, which in turn makes predictions.

A real-world example of such a system is the monitoring system for the nuclear test-ban treaty. The way it works is that we write down what we know about the geophysics of the earth, including the propagation of seismic signals through the earth, the detection of seismic signals, the presence of noise, the locations of detection stations, and so on. That’s the model—which is expressed in a formal language, along with all the uncertainties: for example, uncertainty in our ability to predict the speed of propagation of a signal through the earth. The data is the raw seismic information coming from the detection stations that are scattered around the world. Then there is the prediction: What seismic events took place today? Where did they take place? How deep were they? How big were they? And perhaps: Which ones are likely to be nuclear explosions? This system is an active monitoring system today for the test-ban treaty, and it seems to be working pretty well.
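The “model plus data gives predictions” pattern described here can be sketched in a few lines of plain Python rather than BLOG. The toy model below is invented for illustration: an event at an unknown location produces noisy arrival-time readings at two stations, and samples from the prior are weighted by how well they explain the observations to give a posterior over the location. The real monitoring system is vastly more elaborate, but the inference pattern is the same.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed model parameters (all invented for illustration).
stations = np.array([0.0, 100.0])   # station positions (km)
speed = 6.0                         # propagation speed (km/s)
noise_sd = 0.5                      # timing noise (s)
observed = np.array([5.1, 11.9])    # observed travel times at the two stations (s)

# Prior: the event could be anywhere between the stations, uniformly.
samples = rng.uniform(0.0, 100.0, size=100_000)

# Likelihood: Gaussian noise around the travel times the model predicts.
predicted = np.abs(samples[:, None] - stations) / speed            # shape (N, 2)
log_like = -0.5 * np.sum(((observed - predicted) / noise_sd) ** 2, axis=1)
weights = np.exp(log_like - log_like.max())
weights /= weights.sum()

# Posterior: the model combined with the data, as a weighted set of samples.
posterior_mean = np.sum(weights * samples)
posterior_sd = np.sqrt(np.sum(weights * (samples - posterior_mean) ** 2))
print(f"inferred event location: {posterior_mean:.1f} km ± {posterior_sd:.1f} km")
```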

So, to summarize, I think that many of the conceptual building blocks needed for AGI or human-level intelligence are already here. But there are some missing pieces. One of them is a clear approach to how natural language can be understood to produce knowledge structures upon which reasoning processes can operate. The canonical example might be: How can an AGI read a chemistry textbook and then solve a bunch of chemistry exam problems—not multiple choice but real chemistry exam problems—and solve them for the right reasons, demonstrating the derivations and the arguments that produced the answers? And then, presumably if that’s done in a way that’s elegant and principled, the AGI should then be able to read a physics textbook and a biology textbook and a materials textbook, and so on.

MARTIN FORD: Or we might imagine an AGI system acquiring knowledge from, say, a history book and then applying what it’s learned to a simulation of contemporary geopolitics, or something like that, where it’s really moving knowledge and applying it in an entirely different domain?

STUART J. RUSSELL: Yes, I think that’s a good example because it relates to the ability of an AI system to then be able to manipulate the real world in a geopolitical sense or a financial sense.

If, for example, the AI is advising a CEO on corporate strategy, it might be able to effectively outplay all the other companies by devising some amazing product, marketing, and acquisition strategies, and so on.

So, I’d say that the ability to understand language, and then to operate with the results of that understanding, is one important breakthrough for AGI that still needs to happen.

Another AGI breakthrough still to happen is the ability to operate over long timescales. While AlphaZero is an amazingly good problem-solving system which can think 20, sometimes 30 steps into the future, that is still nothing compared to what the human brain does every moment. At the primitive level, humans act by sending motor control signals to our muscles, and just typing a paragraph of text amounts to several tens of millions of motor control commands. So those 20 or 30 steps by AlphaZero would get an AGI only a few milliseconds into the future. As we talked about earlier, AlphaZero would be totally useless for planning the activity of a robot.

MARTIN FORD: How do humans even solve this problem with so many calculations and decisions to be made as they navigate the world?

STUART J. RUSSELL: The only way that humans and robots can operate in the real world is to operate at multiple scales of abstraction. We don’t plan our lives in terms of exactly which muscles we are going to actuate in exactly which order. We instead plan our lives in terms of “OK, this afternoon I’m going to try to write another chapter of my book” and then: “It’s going to be about such and such.” Or things like, “Tomorrow I’m going to get on the plane and fly back to Paris.”

Those are our abstract actions. And then as we start to plan them in more detail, we break them down into finer steps. That’s common sense for humans. We do this all the time, but we actually don’t understand very well how to have AI systems do this. In particular, we don’t understand yet how to have AI systems construct those high-level actions in the first place. Behavior is surely organized hierarchically into these layers of abstraction, but where does the hierarchy come from? How do we create it and then use it?

If we can solve this problem for AI, if machines can start to construct their own behavioral hierarchies that allow them to operate successfully in complex environments over long timescales, that will be a huge breakthrough for AGI that takes us a long way towards a human-level functionality in the real world.
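A toy sketch of that hierarchical structure, assuming an invented library of task refinements: abstract actions are expanded into finer and finer steps until only primitive actions remain. The hard research problem pointed to above is where such a library comes from in the first place; here it is simply written down by hand.

```python
# Hand-written refinements of abstract actions into finer-grained steps.
# (In the research problem described above, the machine would have to
# construct this hierarchy itself.)
REFINEMENTS = {
    "fly_to_paris": ["pack_bag", "travel_to_airport", "board_plane"],
    "travel_to_airport": ["walk_to_car", "drive_to_airport", "park_car"],
    "pack_bag": ["fetch_bag", "fill_bag"],
}

def expand(task):
    """Recursively refine an abstract task into a flat sequence of primitives."""
    if task not in REFINEMENTS:       # no known refinement: treat it as primitive
        return [task]
    steps = []
    for subtask in REFINEMENTS[task]:
        steps.extend(expand(subtask))
    return steps

print(expand("fly_to_paris"))
# ['fetch_bag', 'fill_bag', 'walk_to_car', 'drive_to_airport', 'park_car', 'board_plane']
```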

MARTIN FORD: What is your prediction for when we might achieve AGI?

STUART J. RUSSELL: These kinds of breakthroughs have nothing to do with bigger datasets or faster machines, and so we can’t make any kind of quantitative prediction about when they’re going to occur.

I always tell the story of what happened in nuclear physics. The consensus view as expressed by Ernest Rutherford on September 11th, 1933, was that it would never be possible to extract atomic energy from atoms. So, his prediction was “never”, but what turned out to be the case was that the next morning Leo Szilard read Rutherford’s speech, became annoyed by it, and invented a nuclear chain reaction mediated by neutrons! Rutherford’s prediction was “never” and the truth was about 16 hours later. In a similar way, it feels quite futile for me to make a quantitative prediction about when these breakthroughs in AGI will arrive, but Rutherford’s story is a good one.

MARTIN FORD: Do you expect AGI to happen in your lifetime?

STUART J. RUSSELL: When pressed, I will sometimes say yes, I expect AGI to happen in my children’s lifetime. Of course, that’s me hedging a bit because we may have some life extension technologies in place by then, so that could stretch it out quite a bit.

But the fact that we understand enough about these breakthroughs to at least describe them, and that people certainly have inklings of what their solutions might be, suggests to me that we’re just waiting for a bit of inspiration.

Furthermore, a lot of very smart people are working on these problems, probably more than ever in the history of the field, mainly because of Google, Facebook, Baidu, and so on. Enormous resources are being put into AI now. There’s also enormous student interest in AI because it’s so exciting right now.

So, all those things lead one to believe that the rate of breakthroughs occurring is likely to be quite high. The breakthroughs we still need are comparable in magnitude to the dozen or so conceptual breakthroughs that happened over the last 60 years of AI.

So that is why most AI researchers have a feeling that AGI is something in the not-too-distant future. It’s not thousands of years in the future, and it’s probably not even hundreds of years in the future.

MARTIN FORD: What do you think will happen when the first AGI is created?

STUART J. RUSSELL: When it happens, it’s not going to be a single finishing line that we cross. It’s going to be along several dimensions. We’ll see machines exceeding human capacities, just as they have in arithmetic, and now chess, Go, and in video games. We’ll see various other dimensions of intelligence and classes of problems that fall, one after the other; and those will then have implications for what AI systems can do in the real world. AGI systems may, for example, have strategic reasoning tools that are superhuman, and we use those for military and corporate strategy, and so on. But those tools may precede the ability to read and understand complex text.

An early AGI system, by itself, still won’t be able to learn everything about how the world works or be able to control that world.

We’ll still need to provide a lot of the knowledge to those early AGI systems. These AGIs are not going to look like humans, though, and they won’t have even roughly the same spectrum of abilities as humans. These AGI systems are going to be very spiky in different directions.

MARTIN FORD: I want to talk more about the risks associated with AI and AGI. I know that’s an important focus of your recent work.

Let’s start with the economic risks of AI, which is the thing that, of course, I’ve written about in my previous book, Rise of the Robots. A lot of people believe that we are on the leading edge of something on the scale of a new industrial revolution. Something that’s going to be totally transformative in terms of the job market, the economy and so forth. Where do you fall on that? Is that overhyped, or would you line up with that assertion?

STUART J. RUSSELL: We’ve discussed how the timeline for breakthroughs in AI and AGI is hard to predict. Those are the breakthroughs that will enable an AI to do a lot of the jobs that humans do right now. It’s also quite hard to forecast which employment categories will be at risk from machine replacement, in what order, and on what timeline.

However, what I see in a lot of the discussions and presentations from people talking about this is that there’s probably an overestimate of what current AI technologies are able to do, and an underestimate of the difficulty of integrating what we know how to do into the existing, extremely complex operations of corporations and governments, and so on.

I do agree that a lot of jobs that have existed for the last few hundred years are repetitive, and the humans who are doing them are basically exchangeable. If it’s a job where you hire people by the hundred or by the thousand to do it, and you can identify what that person does as a particular task that is then repeated over and over again, those kinds of jobs are going to be susceptible. That’s because you could say that, in those jobs, we are using humans as robots. So, it’s not surprising that when we have real robots, they’re going to be able to do those jobs.

I also think that the current mindset among governments is: “Oh, well then. I guess we really need to start training people to be data scientists, because that’s the job of the future—or robot engineers.” This clearly isn’t the solution, because we don’t need a billion data scientists and robot engineers: we just need a few million. This might be a strategy for a small country like Singapore; or where I am currently, in Dubai, it might also be a viable strategy. But it’s not a viable strategy for any major country, because there are simply not going to be enough jobs in those areas. That’s not to say that there are no jobs now: there certainly are, and training more people to do them makes sense; but this simply is not a solution to the long-term problem.

There are really only two futures for the human economy that I see in the long run.

The first is that, effectively, most people are not doing anything that’s considered economically productive. They’re not involved in economic exchange of work for pay in any form. This is the vision of the universal basic income: that there is a sector of the economy that is largely automated and incredibly productive, and that productivity generates wealth, in the form of goods and services, which in one way or another ends up subsidizing the economic viability of everyone else. That to me does not seem like a very interesting world to live in, at least not by itself, without a lot of other things going on to make life worth living and to provide sufficient incentive for people to do all of the things that we do now—for example, going to school, learning and training, and becoming experts in various areas. It’s hard to see the motivation for acquiring a good education when it doesn’t have any economic function.

The second of the two futures I can see in the long run is that even though machines will be providing a lot of the goods and basic services, like transportation, there are still things that people can do which improve the quality of life for themselves and for others. There are people who are able to teach, to inspire people to live richer, more interesting, more varied and more fulfilling lives, whether that’s teaching people to appreciate literature or music, how to build, or even how to survive in the wilderness.

MARTIN FORD: Do you think we can navigate as individuals and as a species towards a positive future, once AI has changed our economy?

STUART J. RUSSELL: Yes, I really do, but I think that a positive future will require human intervention to help people live positive lives. We need to start actively navigating, right now, towards a future that can present the most constructive challenges and the most interesting experiences in life for people. A world that can build emotional resilience and nurture a generally constructive and positive attitude to one’s own life—and to the lives of others. At the moment, we are pretty terrible at doing that. So, we have to start changing that now.

I think that we’ll also need to fundamentally change our attitude about what science is for and what it can do for us. I have a cell phone in my pocket, and the human race probably spent on the order of a trillion dollars on the science and engineering that went into ultimately creating things like my cell phone. And yet we spend almost nothing on understanding how people can live interesting and fulfilling lives, and how we can help people around us do that. I think as a race that we will need to start acknowledging that if we help another person in the right way, it creates enormous value for them for the rest of their lives. Right now, we have almost no science base for how to do this, we have no degree programs in how to do it, we have very few journals about it, and those that are trying are not taken very seriously.

The future can have a perfectly functioning economy where people who are expert in living life well, and in helping other people, can provide those kinds of services. Those services may be coaching, they may be teaching, they may be consoling, or they may be collaborating, so that we can all really have a fantastic future.

It’s not a grim future at all: it’s a far better future than what we have at present; but it requires rethinking our education system, our science base, our economic structures.

We need now to understand how this will function from an economic point of view in terms of the future distribution of income. We want to avoid a situation where there are the super-rich who own the means of production—the robots and the AI systems—and then there are their servants, and then there is the rest of the world doing nothing. That’s sort of the worst possible outcome from an economic point of view.

So, I do think that there is a positive future that makes sense once AI has changed the human economy, but we need to get a better handle on what that’s going to look like now, so that we can construct a plan for getting there.

MARTIN FORD: You’ve worked on applying machine learning to medical data at both Berkeley and nearby UCSF. Do you think artificial intelligence will create a more positive future for humans through advances in healthcare and medicine?

STUART J. RUSSELL: I think so, yes, but I also think that medicine is an area where we know a great deal about human physiology—and so to me, knowledge-based or model-based approaches are more likely to succeed than data-driven machine learning systems.

I don’t think that deep learning is going to work for a lot of important medical applications. The idea that today we can just collect terabytes of data from millions of patients and then throw that data into a black-box learning algorithm doesn’t make sense to me. There may be some areas of medicine where data-driven machine learning works very well, of course. Genomic data is one area, as is predicting human susceptibility to various kinds of genetically related diseases. Also, I think, deep learning AI will be strong at predicting the potential efficacy of particular drugs.

But these examples are a long way from an AI being able to act like a doctor and being able to decide, perhaps, that a patient has a blocked ventricle in the brain that’s interfering with the circulation of cerebrospinal fluid. Really doing that is more like diagnosing which part of a car is not working. If you have no idea how cars work, then figuring out that it’s the fan belt that’s broken is going to be very, very difficult.

Of course, if you’re an expert car mechanic and you know how it all works, and you’ve got some symptoms to work with—maybe there’s a kind of flapping noise and the car’s overheating—then you generally can figure it out quickly. And it’s going to be the same with human physiology, except that there is significant effort that must be put into building these models of human physiology.

A lot of effort was already put into these models in the ’60s and ’70s, and they have helped AI systems in medicine progress to some degree. But today we have technology that can, in particular, represent the uncertainty in those models. The older mechanistic models are deterministic and have specific parameter values: they represent exactly one completely predictable, fictitious human.

Today’s probabilistic models, on the other hand, can represent an entire population, and they can accurately reflect the degree of uncertainty we might have about being able to predict, for example, exactly when someone is going to have a heart attack. It’s very hard to predict things like heart attacks on an individual level, but we can predict that there’s a certain probability per person, which might be increased during extreme exercise or stress, and that this probability would depend on various characteristics of the individual.

This more modern, probabilistic approach behaves much more reasonably than previous systems. Probabilistic systems enable us to combine the classical models of human physiology with observations and real-time data, to make strong diagnoses and plan treatments.

MARTIN FORD: I know you’ve focused a lot on the potential risks of weaponized AI. Could you talk more about that?

STUART J. RUSSELL: Yes, I think autonomous weapons are now creating the prospect of a new arms race. This arms race may already be leading towards the development of lethal autonomous weapons: weapons that can be given some mission description, such as identifying, selecting, and attacking human targets, which the weapon then has the ability to carry out by itself.

There are moral arguments that this will cross a fundamental line for artificial intelligence: that we are handing over the power to decide over life and death to a machine, and that this is a fundamental reduction in the way we value human life and the dignity of human life.

But I think a more practical argument is that a logical consequence of autonomy is scalability. Since no individual human supervision is required for each individual autonomous weapon, someone could launch as many weapons as they want. Five guys in a control room could launch 10,000,000 weapons and wipe out all males between the ages of 12 and 60 in some country. So, these can be weapons of mass destruction, and they have this property of scalability: someone could launch an attack with 10, or 1,000, or 1,000,000, or 10,000,000 weapons.

With nuclear weapons, any use at all would cross a major threshold, one which we’ve managed to avoid so far as a race, by the skin of our teeth, since 1945. But autonomous weapons don’t have such a threshold, and so things can escalate more smoothly. They are also easily proliferated, so once they are manufactured in very large numbers it’s quite likely they’ll be on the international arms market, accessible to people who have fewer scruples than, you know, the Western powers.

MARTIN FORD: There’s a lot of technology transfer between commercial applications and potential military applications. You can buy a drone on Amazon that could potentially be weaponized...

STUART J. RUSSELL: So, at the moment, you can buy a drone that’s remotely piloted, maybe with first-person vision. You could certainly attach a little bomb to it and deliver it and kill someone, but that’s a remotely piloted vehicle, which is different. It’s not scalable, because you can’t launch 10,000,000 of those unless you’ve got 10,000,000 pilots. Someone would need essentially a whole country trained to do that; and of course, if you had 10,000,000 people willing to do it, you could just give them machine guns and send them off to kill people. Thankfully we have an international system of control—of sanctions, and military preparedness, and so on—to try to prevent these things from happening. But we don’t have an international system of control that would work against autonomous weapons.

MARTIN FORD: Still, couldn’t a few people in a basement somewhere develop their own autonomous control system and then deploy it on commercially available drones? How would we be able control those kinds of homemade AI weapons?

STUART J. RUSSELL: Yes, something resembling the software that controls a self-driving car could conceivably be deployed to control a quadcopter that delivers a bomb. Then you might have something like a homemade autonomous weapon. It could be that under a treaty, there would be a verification mechanism that would require the cooperation of drone manufacturers and the people who make chips for self-driving cars and so on, so that anyone ordering large quantities would be noticed—in the same way that anyone ordering large quantities of precursor chemicals for chemical weapons is not going to get away with it because the corporation is required, by the chemical weapons treaty, to know its customer and to report any unusual attempts that are made to purchase large quantities of certain dangerous products.

I think it will be possible to have a fairly effective regime that could prevent very large diversions of civilian technology to create autonomous weapons. Bad things would still happen, and I think this may be inevitable, because in small numbers it will likely always be feasible for homemade autonomous weapons to be built. In small numbers, though, autonomous weapons don’t have a huge advantage over a piloted weapon. If you’re going to launch an attack with ten or twenty weapons, you might as well pilot them because you can probably find ten or twenty people to do that.

There are other risks of course with AI and warfare, such as where an AI system may accidentally escalate warfare when machines misinterpret some signal and start attacking each other. And the future risk of a cyber-infiltration means that you may think you have a robust defense based on autonomous weapons when in fact, all your weapons have been compromised and are going to turn on you instead when a conflict begins. So that all contributes to strategic uncertainty, which is not great at all.

MARTIN FORD: These are scary scenarios. You’ve also produced a short film called Slaughterbots, which is quite a terrifying video.

STUART J. RUSSELL: We made the video really just to illustrate these concepts, because I felt that, despite our best efforts to write about them and give presentations about them, somehow the message wasn’t getting through. People were still saying, “oh, autonomous weapons are science fiction.” They were still imagining it as Skynet and Terminators, as a technology that doesn’t exist. So, we were simply trying to point out that we’re not talking about spontaneously evil weapons, and we’re not talking about taking over the world—but we are also not talking about science fiction anymore.

These AI warfare technologies are feasible today, and they bring some new kinds of extreme risks. We’re talking about scalable weapons of mass destruction falling into the wrong hands. These weapons could inflict enormous damage on human populations. So, that’s autonomous weapons.

MARTIN FORD: In 2014, you published a letter, along with the late Stephen Hawking and the physicists Max Tegmark and Frank Wilczek, warning that we aren’t taking the risks associated with advanced AI seriously enough. It’s notable that you were the only computer scientist among the authors. Could you tell the story behind that letter and what led you to write it? (https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html)

STUART J. RUSSELL: So, it’s an interesting story. It started when I got a call from National Public Radio, who wanted to interview me about this movie called Transcendence. I was living in Paris at the time and the movie wasn’t out in Paris, so I hadn’t seen it yet.

I happened to have a stopover in Boston on the way back from a conference in Iceland, so I got off the plane in Boston and I went to the movie theatre to watch the movie. I’m sitting there towards the front of the theatre, and I don’t really know what’s going to happen in the movie at all, and then, “Oh, look! It’s showing the Berkeley computer science department. That’s kind of funny.” Johnny Depp is playing the AI professor: “Oh, that’s kind of interesting.” He’s giving a talk about AI, and then some anti-AI terrorist decides to shoot him. So, I’m sort of involuntarily shrinking down in my seat seeing this happening, because that could really be me at that time. Then the basic plot of the movie is that before he dies they manage to upload his brain into a big quantum computer, and the combination of those two things creates a super-intelligent entity that threatens to take over the world because it very rapidly develops all kinds of amazing new technologies.

So anyway, we wrote an article that was, at least superficially, a review of the movie, but it was really saying, “You know, although this is just a movie, the underlying message is real: which is that if—or when—we create machines that can have a dominant effect on the real world, then that can present a very serious problem for us: that we could, in fact, cede control over our futures to other entities besides humans.”

The problem is very straightforward: our intelligence is what gives us our ability to control the world; and so, intelligence represents power over the world. If something has a greater degree of intelligence, then it has more power.

We are already on the way to creating things that are much more powerful than us; but somehow, we have to make sure that they never, ever, have any power. So, when we describe the AI situation like that, people say, “Oh, I see. OK, there’s a problem.”

MARTIN FORD: And yet, a lot of prominent AI researchers are quite dismissive of these concerns...

STUART J. RUSSELL: Let me talk about these AI denialists. There are various arguments that people put forward as to why we shouldn’t pay any attention to the AI problem—almost too many arguments to count. I’ve collected somewhere between 25 and 30 distinct arguments, but they all share a single property, which is that they simply do not make any sense. They don’t really stand up to scrutiny. Just to give you one example, something you’ll often hear is, “well, you know, it’s absolutely not a problem because we’ll just be able to switch them off.” That is like saying that beating AlphaZero at Go is absolutely not a problem: you just put your stones in the right places, you know? It just doesn’t stand up to five seconds of scrutiny.

A lot of these AI denialist arguments, I think, reflect a kind of a knee-jerk defensive reaction. Perhaps some people think, “I’m an AI researcher. I feel threatened by this thought, and therefore I’m going to keep this thought out of my head and find some reason to keep it out of my head.” That’s one of my theories about why some otherwise very informed people will try to deny that AI is going to become a problem for humans.

This even extends to some mainstream people in the AI community who deny that AI will ever be successful, which is ironic because we’ve spent 60 years fending off philosophers, who have denied that the AI field will ever be successful. We’ve also spent those 60 years demonstrating and proving, one time after another, how things that the philosophers said would be impossible, can indeed happen—such as beating the world champion in chess.

Now, suddenly some people in the AI field are saying that AI is never going to succeed, and so there isn’t anything to worry about.

This is a completely pathological reaction, if you ask me. It seems prudent, just as with nuclear energy and atomic weapons, to assume that human ingenuity will, in fact, overcome the obstacles and achieve intelligence of a kind that is sufficient to present, at least potentially, the threat of ceding control. It seems prudent to prepare for that and to try to figure out how to design systems in such a way that it can’t happen. So that’s my goal: to help us prepare for the artificial intelligence threat.

MARTIN FORD: How should we address that threat?

STUART J. RUSSELL: The key to the problem is that we have made a slight mistake in the way that we define AI, and so I have reconstructed the definition of AI as follows.

First of all, if we want to build artificial intelligence, we’d better figure out what it means to be intelligent. This means that we must draw on thousands of years of tradition in philosophy, economics, and other disciplines. The idea is that a human being is intelligent to the extent that their actions can be expected to achieve their objectives. This is what is sometimes called rational behavior, and it contains within it various sub-kinds of intelligence, like the ability to reason, the ability to plan, the ability to perceive, and so on. Those are all capabilities required for acting intelligently in the real world.

The problem is that if we succeed in creating artificial intelligence and machines with those abilities, then unless their objectives happen to be perfectly aligned with those of humans, we’ve created something that’s extremely intelligent, but with objectives that are different from ours. And then, if that AI is more intelligent than us, it’s going to achieve its objectives—and we, probably, are not going to achieve ours!

The negative consequences for humans are without limit. The mistake is in the way we have transferred the notion of intelligence, a concept that makes sense for humans, over to machines.

We don’t want machines with our type of intelligence. We actually want machines whose actions can be expected to achieve our objectives, not their objectives.

The original idea we had for AI was that to make an intelligent machine, we should construct optimizers: things that choose actions really well when we give them an objective. Then off they go and achieve that objective. That’s probably a mistake. It’s worked up to now—but only because we haven’t made very intelligent machines, and the ones we have made we’ve only put in mini-worlds, like the simulated chessboard, the simulated Go board, and so on.

When the AI systems that humans have made so far get out into the real world, that’s when things can go wrong, and we saw an example of this with the flash crash. With the flash crash, there was a bunch of trading algorithms, some of them fairly simple and some of them fairly complicated AI-based decision-making and learning systems. Out there in the real world, during the flash crash, things went catastrophically wrong and those machines crashed the stock market. They eliminated more than a trillion dollars of value in equities in the space of a few minutes. The flash crash was a warning signal about our AI.

The right way to think about AI is that we should be making machines that act in ways that help us achieve our objectives, but where we absolutely do not put those objectives directly into the machine!

My vision is that AI must always be designed to try to help us achieve our objectives, but that AI systems should not be assumed to know what those objectives are.

If we make AI this way, then there is always an explicit uncertainty about the nature of the objectives that an AI is obliged to pursue. It turns out that this uncertainty actually is the margin of safety that we require.
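A rough way to write this distinction down (the notation here is illustrative and is not taken from the book or from Russell’s papers): in the standard model the machine is handed a fixed objective and simply optimizes it, while in the uncertain-objective model the machine treats the human objective as an unknown quantity that it must infer from observed human behavior.

```latex
% Standard model: the objective \theta is fixed and fully known to the machine.
a^{*} = \arg\max_{a} \; \mathbb{E}\left[ U_{\theta}(a) \right]

% Uncertain-objective model: \theta is the human's true objective, unknown to
% the machine; it keeps a distribution P(\theta \mid H), updated from observed
% human behavior H, and chooses actions that do well in expectation over it.
a^{*} = \arg\max_{a} \; \mathbb{E}_{\theta \sim P(\theta \mid H)}\left[ U_{\theta}(a) \right]
```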

I’ll give you an example to demonstrate this margin of safety that we really do need. Let’s go back to an old idea that we can—if we ever need to—just switch the machine off if we get into trouble. Well, of course, you know, if the machine has an objective like, “fetch the coffee,” then obviously a sufficiently intelligent machine realizes that if someone switches it off, then it’s not going to be able to fetch the coffee. If its life’s mission, if its objective, is to fetch the coffee, then logically it will take steps to prevent itself from being switched off. It will disable the Off switch. It will possibly neutralize anyone who might attempt to switch it off. So, you can imagine all these unanticipated consequences of a simple objective like “fetch the coffee,” when you have a sufficiently intelligent machine.

Now in my vision for AI, we instead design the machine so that although it still wants to “fetch the coffee” it understands that there are a lot of other things that human beings might care about, but it doesn’t really know what those are! In that situation, the AI understands that it might do something that the human doesn’t like—and if the human switches it off, that’s to prevent something that would make the human unhappy. Since in this vision the goal of the machine is to avoid making the human unhappy, even though the AI doesn’t know what that means, it actually has an incentive to allow itself to be switched off.

We can take this particular vision for AI and put it into mathematics, and show that the margin of safety (meaning, in this case, the incentive that the machine has to allow itself to be switched off) is directly related to the uncertainty it has about the human objective. As we eliminate that uncertainty, and the machine starts to believe that it knows, for sure, what the true objective really is, then that margin of safety begins to disappear again, and the machine will ultimately stop us from switching it off.

In this way, we can show, at least in a simplified mathematical framework, that when you design machines this way—with explicit uncertainty about the objective that they are to pursue—they can be provably beneficial, meaning that you are provably better off with this machine than without it.
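A toy numerical sketch of the switch-off incentive (purely illustrative: the Gaussian belief over the human’s utility, the specific numbers, and the rule that a rational human approves an action only when its utility is positive are all assumptions, not the formal model from Russell’s group):

```python
import numpy as np

# Toy sketch of the off-switch incentive (illustrative only; not the formal
# model from Russell and colleagues). The robot is unsure of the human's
# utility U for its proposed action and compares three options:
#   act      - take the action without consulting the human
#   defer    - propose the action; a rational human approves it only if U > 0,
#              otherwise the human switches the robot off (utility 0)
#   shutdown - switch itself off immediately (utility 0)

def expected_values(mean, std, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.normal(mean, std, n)        # robot's belief about the human's utility
    act = u.mean()                      # expected utility of acting unilaterally
    defer = np.maximum(u, 0.0).mean()   # the human filters out the bad cases
    shutdown = 0.0
    return act, defer, shutdown

# More uncertainty about the objective -> a larger incentive to defer to the
# human, i.e. to allow itself to be switched off. With zero uncertainty the
# incentive disappears, matching the argument in the text.
for std in (0.0, 0.5, 2.0):
    act, defer, shutdown = expected_values(mean=0.4, std=std)
    print(f"uncertainty={std:3.1f}  act={act:+.3f}  defer={defer:+.3f}  off={shutdown:+.3f}")
```

Running it, the value of deferring equals the value of acting when the robot is certain about the objective, and grows above it as the uncertainty grows, which is exactly the margin of safety described above.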

What I’ve shared here is an indication that there may be a way of conceiving of AI that is a little bit different from how we’ve been thinking about it so far, and that there are ways to build AI systems with much better properties in terms of safety and control.

MARTIN FORD: Related to these issues of AI safety and control, a lot of people worry about an arms race with other countries, especially China. Is that something we should take seriously, something we should be very concerned about?

STUART J. RUSSELL: Nick Bostrom and others have raised a concern that, if a party feels that strategic dominance in AI is a critical part of their national security and economic leadership, then that party will be driven to develop the capabilities of AI systems—as fast as possible, and yes, without worrying too much about the controllability issues.

At a high level, that sounds like a plausible argument. On the other hand, as we produce AI products that can operate out there in the real world, there will be a clear economic incentive to make sure that they remain under control.

To explore this kind of scenario, let’s think about a product that might come along fairly soon: a reasonably intelligent personal assistant that keeps track of your activities, conversations, relationships and so on, and kind of runs your life the way a good professional human assistant would. Now, if such a system does not have a good understanding of human preferences, and acts in unsafe ways like the ones we’ve already talked about, then people are simply not going to buy it. If it misunderstands these things, then it might book you into a $20,000-a-night hotel room, or it might cancel a meeting with the vice president because you’re supposed to go to the dentist.

In those kinds of situations, the AI is misunderstanding your preferences and, rather than being humble about its understanding of your preferences, it thinks that it knows what you want, and it is just plain wrong about it. I’ve cited in other forums the example of a domestic robot that doesn’t understand that the nutritional value of a cat is a lot less than the sentimental value of a cat, and so it just decides to cook the cat for dinner. If that happened, that would be the end of the domestic robot industry. No one is going to want a robot in their house that could make that kind of mistake.

Today, AI companies that are producing increasingly intelligent products have to solve at least a version of this problem in order for their products to be good AI systems.

We need to get the AI community to understand that AI that is not controllable and safe is just not good AI.

In the same way that a bridge that falls down is simply not a good bridge, AI that is not controllable and safe is simply not good AI. Civil engineers don’t go around saying, “Oh yeah, I design bridges that don’t fall down, you know, unlike the other guy, who designs bridges that fall down.” It’s just built into the meaning of the word “bridge” that it’s not supposed to fall down.

This should be built into what we mean when we define AI. We need to define AI in such a way that it remains under the control of the humans that it’s supposed to be working for, in any country. And we need to define AI so that it has, now and in the future, the property that we call corrigibility: that it can be switched off, and that it can be corrected if it’s doing something that we don’t like.

If we can get everyone in AI, around the world, to understand that these are just necessary characteristics of good AI, then I think we move a long way forward in making the future prospects of the field of AI much, much brighter.

There’s also no better way to kill the field of AI than to have a major control failure, just as the nuclear industry killed itself through Chernobyl and Fukushima. AI will kill itself if we fail to address the control issue.

MARTIN FORD: So, on balance, are you an optimist? Do you think that things are going to work out?

STUART J. RUSSELL: Yes, I do think that I’m an optimist. I think there’s a long way to go. We are just scratching the surface of this control problem, but the first scratching seems to be productive, and so I’m reasonably optimistic that there is a path of AI development that leads us to what we might describe as “provably beneficial AI systems.”

Of course, there is the risk that even if we do solve the control problem and even if we do build provably beneficial AI systems, there will be some parties who choose not to use them. The risk here is that one party or another chooses only to magnify the capabilities of AI without regard to the safety aspects.

This could be the Dr. Evil character type, the Austin Powers villain who wants to take over the world and accidentally releases an AI system that ends up being catastrophic for everyone. Or it could be a much more sociological risk, where it starts off as very nice for society to have capable, controllable AI but we then overuse it. In those risk scenarios, we head towards an enfeebled human society where we’ve moved too much of our knowledge and too much of our decision-making into machines, and we can never recover it. We could eventually lose our entire agency as humans along this societal path.

This societal picture is how the future is depicted in the WALL-E movie, where humanity is off on spaceships and being looked after by machines. Humanity gradually becomes fatter and lazier and stupider. That’s an old theme in science fiction and it’s very clearly illustrated in the WALL-E movie. That is a future that we need to be concerned about, assuming we successfully navigate all the other risks that we’ve been discussing.

As an optimist, I can also see a future where AI systems are well enough designed that they’re saying to humans, “Don’t use us. Get on and learn stuff yourself. Keep your own capabilities, propagate civilization through humans, not through machines.”

Of course, we might still ignore a helpful and well-designed AI if we prove to be too lazy and greedy as a race; and then we’ll pay the price. In that sense, this really might become more of a sociocultural problem, and I do think that we need to work, as a human race, to prepare and make sure this doesn’t happen.

STUART J. RUSSELL is a professor of electrical engineering and computer science at the University of California, Berkeley, and is widely recognized as one of the world’s leading contributors in the field of artificial intelligence. He is the co-author, along with Peter Norvig, of Artificial Intelligence: A Modern Approach, the leading AI textbook, currently in use at over 1,300 colleges and universities in 118 countries.

Stuart received his undergraduate degree in Physics from Wadham College, Oxford in 1982 and his PhD in Computer Science from Stanford in 1986. His research has covered many topics related to AI, such as machine learning, knowledge representation, and computer vision, and he has received numerous awards and distinctions, including the IJCAI Computers and Thought Award and election as a Fellow of the American Association for the Advancement of Science, the Association for the Advancement of Artificial Intelligence, and the Association for Computing Machinery.