University of Vermont

University Communications

INTERview: Joshua Bongard

The UVM computer scientist discusses a robotic breakthrough: curiosity

By Kevin Foley. Article published November 20, 2006.

Joshua Bongard (center), assistant professor of computer science, and former Cornell colleagues Victor Zykov and Hod Lipson designed the first robot capable of detecting its own shape and using this knowledge to efficiently adapt to damage. (Photo: Cornell University)

Smooth, it isn't. The star-shaped robot lurches, wheezes and flops through its ponderous perambulation, clacking laboriously but steadily across the table. But for this machine, developed by Joshua Bongard, assistant professor of computer science, the breakthrough is the journey, not the destination.

The machine, which Bongard worked on at Cornell University with then-colleagues Victor Zykov and Hod Lipson, is the first robot capable of detecting its own shape and using this knowledge to efficiently adapt to damage. The work was reported by the group in the Nov. 17 issue of Science.

Earlier this month, MIT Press published How the Body Shapes the Way We Think: A New View of Intelligence, which Bongard co-authored with lead author Rolf Pfeifer, and, in ways perhaps evocative of the book's title, the robot's breakthrough is its physical self-awareness and adaptability. The advance has been widely reported in the national media.

The machine, Bongard explains, starts out with no sense of how its parts are assembled. It measures the results of a limited number of small movements to develop plausible models of its shape and construction, then evaluates and refines these competing models through further movements and observation, eventually arriving at an accurate internal model of its body. The robot can then use this continuously updated self-model to detect damage and devise new ways of moving that compensate for it. The effort is a proof of concept for building more resilient robots for dangerous applications like planetary exploration. Bongard elaborates on the robot and its implications in conversation with the view.
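In outline, that self-modeling cycle can be sketched roughly as follows. This is only an illustrative sketch, not the team's code: the actual robot builds much richer models of its body (a simulation of itself, as Bongard notes later), whereas here each competing self-model is reduced to a small vector of guessed parameters, and the predict, sense, and scoring functions are toy stand-ins.

```python
import random

# Toy sketch of the self-modeling cycle: a handful of competing self-models
# are refined against a few physical trials, and each new trial is chosen
# to settle the models' disagreements.

TRUE_BODY = [0.7, -0.3, 0.5, 0.1]   # hidden "actual" morphology (toy)
NUM_MODELS, NUM_TRIALS = 10, 8

def predict(model, action):
    """What a candidate self-model expects to sense after an action."""
    return sum(m * a for m, a in zip(model, action))

def sense(action):
    """What the (toy) physical robot actually reports, with sensor noise."""
    return predict(TRUE_BODY, action) + random.gauss(0.0, 0.01)

def disagreement(models, action):
    """How much the competing models argue about this action's outcome."""
    preds = [predict(m, action) for m in models]
    mean = sum(preds) / len(preds)
    return sum((p - mean) ** 2 for p in preds)

def fit_error(model, history):
    """How badly a model explains everything sensed so far."""
    return sum((predict(model, a) - obs) ** 2 for a, obs in history)

def random_vector():
    return [random.uniform(-1.0, 1.0) for _ in range(len(TRUE_BODY))]

models = [random_vector() for _ in range(NUM_MODELS)]
history = []

for _ in range(NUM_TRIALS):
    # 1. Choose the next small movement to be the one the competing
    #    self-models disagree about most, so each physical trial is informative.
    action = max((random_vector() for _ in range(50)),
                 key=lambda a: disagreement(models, a))

    # 2. Perform it once on the robot and record what was sensed.
    history.append((action, sense(action)))

    # 3. Keep the models that best explain the data and mutate copies of them,
    #    a bare-bones stand-in for a real model-refinement step.
    models.sort(key=lambda m: fit_error(m, history))
    survivors = models[: NUM_MODELS // 2]
    models = survivors + [[w + random.gauss(0.0, 0.05) for w in m]
                          for m in survivors]

best_model = min(models, key=lambda m: fit_error(m, history))
print("inferred body:", [round(w, 2) for w in best_model])
print("actual body:  ", TRUE_BODY)
```

The detail worth noticing, mirrored from the description above, is step 1: each physical trial is chosen because the competing models disagree about it most, so the robot learns a great deal from very few movements.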

THE VIEW: What was significant about this project?

JOSHUA BONGARD: The most important thing here for us is that this is the first robot that can build up a description of its own body. So the robot can build up a sense of self; that hasn't been done before in robotics. The second interesting thing about this is that it then uses that self-model, that sense of self, to actually try out different ways of moving. We commanded this robot to learn how to move; we didn't tell the robot how to move. It tries internally, using this self-model: "What would happen if I tried hopping? What would happen if I crawled?" And so on. And eventually it comes up with a behavior that it thinks will actually work and then tries it out in reality; more often than not the robot starts moving.
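That "imagine it first, then try it" step can be sketched in the same toy terms. The simulate_gait scoring, the gait encoding, and the inferred_body values below are hypothetical stand-ins for illustration, not anything from the actual system.

```python
import random

# Sketch of the planning half of the cycle: the robot auditions candidate
# behaviors inside its learned self-model and only the most promising one
# is executed on the physical machine.

def simulate_gait(self_model, gait):
    """Predicted forward progress if this gait ran on the modeled body (toy)."""
    drive = sum(m * g for m, g in zip(self_model, gait))
    effort = 0.1 * sum(g * g for g in gait)
    return drive - effort

def random_gait(n):
    """A random candidate behavior: one actuation value per modeled part."""
    return [random.uniform(-1.0, 1.0) for _ in range(n)]

def plan_gait(self_model, candidates=200):
    """Imagine many gaits; return the one predicted to travel farthest."""
    gaits = [random_gait(len(self_model)) for _ in range(candidates)]
    return max(gaits, key=lambda g: simulate_gait(self_model, g))

# Suppose this is the self-model inferred by the estimation loop sketched earlier.
inferred_body = [0.68, -0.31, 0.52, 0.09]

best_gait = plan_gait(inferred_body)
print("gait to try in reality:", [round(g, 2) for g in best_gait])
print("predicted progress:", round(simulate_gait(inferred_body, best_gait), 2))
```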

How does that approach contrast with more traditional ideas about how to control a robot?

There are two existing approaches. In the first, the idea is to allow the robot to attempt hundreds or thousands of trials in the real world, and eventually it hits on a way of moving. In our case, we're dealing with a robot that is damaged. Potentially, for example, this could be a robot probe on a remote planet, and we don't want it thrashing around wildly because it might damage itself further or fall off a cliff. We want it to be very careful about what it does, and perform as few exploratory trials as possible. The second existing approach is to create by hand a model for the robot. The roboticist would tell the robot, you're made up of four legs, and you're put together in this way, and you can do this and you can't do that. That approach severely limits the intelligence or the adaptability of the robot. The robot in that situation can't very easily adapt and overcome unanticipated situations.

As you and your colleagues pursued this work at Cornell, was there a big, breakthrough moment where you guys hovered over the table and suddenly…

There was. It was actually near the end of the project, when we had figured out how to get the robot to learn about itself. We could see the robot had created a model of itself and had come up with a particular way of moving that it thought would work, but it hadn't quite yet tried it out in the real world. It came down to that moment, and we sort of crowded around the robot and watched as it tried out that behavior, and sure enough the robot actually started to crawl across the table. All three of us were there, and we all kind of went nuts when it happened.

What time of day did this happen at?

We were using a basement lab back at Cornell, and there were no windows, so you have no idea whether it's day or night. I can't even remember now what time it was… at that point in the research, we were so into it we weren't really conscious of what time it was.

Where does this go from here?

We basically developed this as a proof of concept for ideas for the next generation of planetary rovers. NASA is very interested in having a robot like this… We can't assume that the robot can easily communicate back with mission control on Earth and communicate what it's sensing and what it should do next. We want the robot to figure out on its own how it should go about exploring the surface of the planet. The other application would be for deploying these robots in a disaster site. A disaster site, like the surface of a remote planet, is a very unpredictable environment and there's a high likelihood that the robot may become damaged, so again we want the robot to quickly adapt and carry on with its mission.

How does your part in this fit into your larger intellectual interests?

There's a practical interest here, but for me in particular, what's more interesting is the conceptual side of things. This robot starts to suggest something about the nature of curiosity, in the sense that when it's learning about itself, it doesn't simply thrash around randomly. Each time, it tries out a new action in order to learn something new about its own body and its local environment. In a sense, at a very rudimentary level, this robot is curious.

It also suggests something about the nature of self-awareness. This robot starts with little awareness of its own body, and through interaction with the physical world it gains experience and builds up a sense of itself, a simulation of its own body; it can then come to understand what that body is and isn't capable of. Taking that a step further, perhaps someday we can start to use robots as tools to ask questions about the nature of human self-awareness and curiosity. Is there something going on in our brains similar to what's going on in the brain of this robot?