Computational approaches for creating machines at scale: An interview with Daniela Rus.
Daniela Rus is a professor of electrical engineering and computer science, and director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. Rus is the first woman to serve as director of CSAIL and its predecessors—the AI Lab and the Lab for Computer Science.
Key Takeaways
Robotics, a field with roots as far back as ancient Greece, is undergoing a period of explosive growth, driven by improvements in computational power, better sensors and new materials.
Producing increasingly autonomous machines requires new computational approaches to designing and fabricating functional machines at scale.
MIT CSAIL, in partnership with Toyota Motor Corporation, has set out to create a car that will never be responsible for a collision. This project builds on MIT’s collaborative work with the National University of Singapore, which has demonstrated safe self-driving vehicles at low speeds in low-complexity environments.
David Beyer: Let’s start with your background.
Daniela Rus: I’m a roboticist. I started as a computer science and math major in college. Toward the end of my college career, I met John Hopcroft, who then became my Ph.D. thesis advisor. At one point, he delivered a very inspiring talk, in which he observed that many of the classical computer science problems have already been solved. Instead, it was now time for the grand applications of computer science, including, of course, robotics.
I continued on to Cornell, where I spent about five years working on my Ph.D. In those days, Cornell produced a lot of foundational work in robotics. Following my Ph.D., I accepted a job as an assistant professor at Dartmouth College, where I founded a robotics research lab. And in 2003, I made my way to MIT, where I work today. The main objective of my work is to advance the science of autonomy: how can machines operate without human input in the physical world? I’m especially interested in cases where multiple machines coordinate to accomplish something that no single machine could achieve alone.
DB: Can you briefly sketch out the origins of robotics as a field?
DR: Early robotic concepts date far back. The ancient Greeks engineered complex machines out of simple mechanisms, which could open temple doors or play musical instruments. In the 18th century, Swiss watchmakers designed automata, hard-coded mechanisms that could play musical instruments, write, and even paint.
In the early 1960s, George Devol, who is considered the father of industrial robotics, built a robot called Unimate. His work marked a major leap over previous automata. This robot was programmable to perform disparate tasks: it could swing a golf club or pour wine. Later, in the ‘70s, the Stanford Cart presented an early example of a mobile robot, the first of its kind to combine perception and action with planning. The robot took painstakingly long to traverse a small parking lot from one end to the other, yet its success marked the dawn of technological breakthroughs in robotics centered on machine perception, planning, and learning.
Over the past decade, the field writ large has experienced remarkable progress, driven by a number of important trends: computational power has been increasing at a breakneck pace; the hardware required to interact with the physical world—the sensors and motors themselves—has become smaller and more reliable; and an array of new materials continues to push the limits of design.
In parallel, the community has achieved breakthrough progress in the science and application of planning, control, and perception. Today’s robots have an incredible set of skills: the ability to make maps and localize, as well as smart decision-making and learning capabilities. Until recently, these twin threads of hardware and algorithmic progress were pursued somewhat independently. We’re now witnessing their convergence: the state of the art in each field is coalescing to deliver results that were merely a dream even just 10 years ago.
DB: What is your primary focus in robotics?
DR: My greatest interest is to advance the science of autonomy—in particular, systems that involve multiple robots working together. I want to make robots more capable and independent, and I want to see these advanced physical machines improve our lives.
There is a strong interplay between a robot’s physical body and its software capabilities. In some cases, we need to invent new robot bodies to deliver the capabilities that we want. In other cases, we repurpose existing robot bodies to do novel things.
I have been interested in how to make capable robots, faster. One such approach advances a universal robot cell, or module, that can be reused to make all kinds of different robots. This idea, in turn, suggests shape-shifting robots—modular, cellular robots with the ability to adapt their geometric structure to the task at hand, autonomously. If you build a robot designed solely for a single task, the robot will perform that task well, but will, by its very design, perform poorly on an unrelated or different task in a foreign environment. In contrast, if you design machines with the ability to contextually re-organize their internal geometry, you obtain the right robot body for the right application and environment.
An alternative solution to this problem is to make task-specific robots more quickly, by automating the design and fabrication of robots from high-level specifications. In other words, create a robot compiler. The general idea is to automatically convert user-defined functional specifications into physical one-of-a-kind machines that meet those specifications.
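To make the idea concrete, here is a minimal sketch of what such a compilation step might look like. The spec fields, module catalog, and sizing rule are hypothetical illustrations, not the actual system built at CSAIL.

```python
# Toy sketch of a "robot compiler": map a high-level functional spec
# onto a bill of modules. The catalog and sizing rule are hypothetical.
import math
from dataclasses import dataclass

@dataclass
class Spec:
    locomotion: str       # e.g., "wheeled", "walking", "gripping"
    payload_grams: float  # mass the robot must carry or manipulate

# Hypothetical module catalog: name -> (capability, payload per module in grams)
CATALOG = {
    "wheel_module":   ("wheeled",  500.0),
    "leg_module":     ("walking",  200.0),
    "gripper_module": ("gripping", 100.0),
}

def compile_robot(spec: Spec) -> dict:
    """Pick a module type that matches the requested capability, then
    size the module count so the combined payload meets the spec."""
    for name, (capability, payload_per_module) in CATALOG.items():
        if capability == spec.locomotion:
            count = max(1, math.ceil(spec.payload_grams / payload_per_module))
            return {name: count}
    raise ValueError(f"no module provides capability: {spec.locomotion}")

print(compile_robot(Spec(locomotion="gripping", payload_grams=350)))
# -> {'gripper_module': 4}
```

A real robot compiler would, of course, reason about geometry, actuation, and fabrication rather than a simple parts count; the sketch only captures the idea of turning a functional specification into a buildable design.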
DB: You’ve also spent time working on autonomous vehicles. Can you tell us about your work in that domain?
DR: I have been working on self-driving cars as part of a collaboration between MIT and the National University of Singapore for several years now. We are developing a system of autonomous golf carts and autonomous electric vehicles for mobility on demand. This project is being run under the auspices of the Singapore-MIT Alliance for Research and Technology (SMART).
Through our work, we have already demonstrated that self-driving cars, at low speeds in low-complexity environments, are in fact reliable. We are now extending these self-driving vehicles to an urban mobility system, similar in spirit to the now-popular shared bicycle programs. Bicycle programs, for their part, face some basic challenges. Over time, some stations become depleted of bikes while others overflow. Cities respond by hiring people and trucks to rebalance the bikes, in what amounts to a costly and inefficient exercise.
Imagine, instead, if the bikes could drive themselves to the next destination to match local demand. In our model, the self-driving car transports you to your destination and then coordinates with other cars to pick up the next person in line. The car then drives itself to the appropriate next spot. With sufficient investment, this idea has the potential to turn transportation into a utility, available to people in cities anywhere and anytime.
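As a rough illustration of the coordination step, the sketch below greedily sends the nearest idle vehicle to each waiting rider. The data and the nearest-first rule are illustrative assumptions, not the dispatch algorithm used in the SMART project.

```python
# Illustrative greedy dispatch: assign each waiting rider to the nearest
# idle vehicle. A real mobility-on-demand system would also rebalance
# empty vehicles toward predicted demand; this sketch only covers pickup.
import math

def dispatch(vehicles, riders):
    """vehicles, riders: dicts mapping id -> (x, y) position.
    Returns a list of (rider_id, vehicle_id) assignments."""
    idle = dict(vehicles)
    assignments = []
    for rider_id, rider_pos in riders.items():
        if not idle:
            break  # more riders than vehicles; the rest wait
        nearest = min(idle, key=lambda v: math.dist(idle[v], rider_pos))
        assignments.append((rider_id, nearest))
        del idle[nearest]  # vehicle is now busy
    return assignments

vehicles = {"cart_1": (0.0, 0.0), "cart_2": (5.0, 5.0)}
riders = {"alice": (4.0, 4.0), "bob": (1.0, 0.0)}
print(dispatch(vehicles, riders))
# -> [('alice', 'cart_2'), ('bob', 'cart_1')]
```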
In addition, we recently launched a collaboration between MIT CSAIL and Toyota to develop a safe car that will never be responsible for a collision, becoming, over time, a trusted partner for the human driver. I am very, very excited about these new research directions.
DB: What are some of the main challenges in making autonomous vehicles that can safely navigate cities?
DR: The major challenges hinge on environmental complexity, speed of movement, weather, and human interaction. The current machine perception and control algorithms are neither smart enough nor fast enough to respond to the extreme driving circumstances we encounter in heavy congestion and bad weather. Imagine traffic in New Delhi, the Philippines, or L.A. It’s treacherous for numerous reasons: congestion, erratic driver behavior, coordination through silent hand gestures among human drivers, inclement weather, heavy rain, snow, poor visibility, and so on. The self-driving car, as a problem, is not solved. It’s important to keep that in mind.
DB: Can you walk us through the self-driving car hardware and software makeup?
DR: The vehicles in the SMART project contain a minimalist hardware configuration. They use two forward-pointing laser scanners: one for mapping and localization, and another for obstacle detection. They also carry a forward-pointing camera for detecting moving obstacles (e.g., a pedestrian), as well as side- and rear-pointing laser scanners, an Inertial Measurement Unit (IMU), and wheel encoders.
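To give a sense of how the wheel encoders and IMU contribute to localization, here is a minimal dead-reckoning sketch that integrates encoder distance along the IMU heading. In the real vehicle this odometry estimate would be fused with laser-based localization against a map; the interfaces and numbers below are illustrative, not the SMART vehicles’ actual software.

```python
# Illustrative dead reckoning: integrate wheel-encoder distance along the
# IMU heading to update the vehicle's planar pose (x, y, heading).
import math

def dead_reckon(pose, encoder_distance_m, imu_heading_rad):
    """pose: (x, y, heading). Returns the pose after moving
    encoder_distance_m meters along imu_heading_rad."""
    x, y, _ = pose
    x += encoder_distance_m * math.cos(imu_heading_rad)
    y += encoder_distance_m * math.sin(imu_heading_rad)
    return (x, y, imu_heading_rad)

pose = (0.0, 0.0, 0.0)
for dist, heading in [(1.0, 0.0), (1.0, math.pi / 2)]:
    pose = dead_reckon(pose, dist, heading)
print(pose)  # -> roughly (1.0, 1.0, 1.57): one meter east, then one meter north
```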
DB: What machine learning techniques are critical to the next stages of autonomous car development?
DR: Deep learning is engendering a great deal of enthusiasm. Armed with the latest deep learning packages, we can begin to recognize objects in previously impossible ways. Machine learning presents an interesting challenge for driving because the car requires the utmost reliability and efficiency in how other cars, obstacles, and objects in the surrounding environment are recognized and taken into account. In other words, there is no room for error, especially at high speeds.
DB: What’s next—do cars take over driving altogether?
DR: A bedrock assumption in our work with Toyota is that driving is fun. We’d like to see a partnership between the car and its (human) driver. I would like for my car to learn and adapt to my preferences and normal state. In the future, the car might be able to determine that I’m having a difficult day based on how I speak, and then keep a close eye on my driving; if I am about to make a mistake, for example, by miscalculating the speed of the oncoming traffic during a left turn, the car could intercede and correct the mistake. This sort of override would operate in the same spirit as the anti-lock braking systems we have come to rely on.
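One way to picture that kind of intervention is a simple time-gap check before an unprotected left turn: if the oncoming car would arrive before the turn could be completed, the system vetoes the maneuver. The thresholds below are illustrative assumptions, not the actual safety logic of the MIT-Toyota project.

```python
# Illustrative "guardian" check for an unprotected left turn: veto the
# human's maneuver if the oncoming vehicle would close the gap too soon.
# All thresholds are made up for illustration.
def allow_left_turn(oncoming_distance_m, oncoming_speed_mps,
                    turn_duration_s=4.0, safety_margin_s=2.0):
    """Return True if the oncoming vehicle arrives after the turn is
    complete plus a safety margin, False if the turn should be vetoed."""
    if oncoming_speed_mps <= 0:
        return True  # oncoming car is stopped or receding
    time_to_arrival = oncoming_distance_m / oncoming_speed_mps
    return time_to_arrival > turn_duration_s + safety_margin_s

print(allow_left_turn(oncoming_distance_m=80, oncoming_speed_mps=15))   # False: gap too small, veto
print(allow_left_turn(oncoming_distance_m=150, oncoming_speed_mps=15))  # True: turn is safe
```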
I want to conclude by reminding everyone that an accident happens in the U.S. every five seconds. The costs, both in terms of human life and in economic terms, are simply staggering. We envision that the car of the future will possess a parallel autonomy system able to correct the mistakes of the human driver and prevent those deadly collisions. This car, in time, will learn a lot about its human operator, becoming a trusted partner without taking away the joy of driving.