When Robots Grow Up
- By Joshua E. Brown
Want to build a really tough robot? Forget about Terminator. Instead, watch a tadpole turn into a frog. Then get out the Legos.
Or at least that’s not far off from what University of Vermont roboticist Josh Bongard and his students do.
In a first-of-its-kind experiment, published in the Proceedings of the National Academy of Sciences, Bongard created robots -- first on a computer and then out of Legos -- that, like tadpoles becoming frogs, change their body forms while learning how to walk. And, over generations, his simulated robots also evolve, spending less time in “infant” tadpole-like forms and more time in “adult” four-legged forms.
These evolving populations of robots were able to learn to walk more rapidly than ones with fixed body forms. And, by the end of the experiment, the changing robots had developed a more robust gait -- better able to deal with, say, being knocked with a stick -- than the ones that had learned to walk using upright legs from the beginning.
Another robot that Bongard built learned how to walk, and then had a leg removed -- and had to re-learn after this “injury.” This “resilient machine” was selected by Esquire Magazine as one of six ideas that “will change the world.” Bongard, the winner of a fellowship from Microsoft, was also selected by MIT’s Technology Review as one of the world’s leading innovators under 35 for his insights into artificial intelligence and robotics.
Robots are complex
So far, Bongard says, engineers have been largely unsuccessful at creating robots that can continually perform simple, yet adaptable, behaviors in unstructured or outdoor environments -- like clearing up a construction site or laying pavement for a new road.
In some ways, robots are too much like people for people to easily understand them, says Bongard, who co-authored a book called How The Body Shapes the Way We Think. “Robots have lots of moving parts. And their brains, like our brains, have lots of distributed materials: there’s neurons and there’s sensors and motors and they’re all turning on and off in parallel,” he says -- making it hard to know exactly how they’ll behave.
Which is why Bongard, an assistant professor in UVM’s College of Engineering and Mathematical Sciences, and other robotics experts have turned to evolution to design robots and develop their behaviors -- rather than trying to program the robots’ behavior directly.
To the light
Using a sophisticated computer simulation, Bongard unleashed a series of synthetic beasts that move about in a three-dimensional space. “It looks like a modern video game,” he says. Each creature -- or, rather, each generation of creatures -- then runs a software routine, called a genetic algorithm, that experiments with various motions until it develops a slither, shuffle, or walking gait that can get it to a light source without tipping over.
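Bongard’s simulations use full three-dimensional physics, but the core loop of a genetic algorithm is simple to sketch. The toy example below is not his system -- the target vector, population size, and mutation rate are all invented for illustration -- but it shows the mutate-select-repeat cycle that, in his work, tunes a robot’s gait:

```python
import random

# Stand-in goal: in Bongard's experiments the goal is reaching a light
# source; here we just evolve a parameter vector toward a fixed target.
TARGET = [0.8, 0.2, 0.5, 0.9]

def fitness(genome):
    # Higher is better: negative squared distance to the target behavior.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Jitter each gene slightly; mutation supplies the variation
    # that selection then filters.
    return [g + random.gauss(0, rate) for g in genome]

def evolve(pop_size=20, generations=100):
    population = [[random.random() for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]  # keep the best half
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # near 0 means near the target behavior
```

In the real experiments each fitness evaluation is an expensive physics simulation of a walking body, which is why the runs described below needed a supercomputing cluster.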
Some of the creatures begin flat to the ground, like tadpoles or, perhaps, snakes with legs; others have splayed legs, a bit like a lizard; and others run the full set of simulations with upright legs, like mammals.
And why do the generations of robots that progress from slithering to wide legs and, finally, to upright legs ultimately perform better, reaching the desired behavior faster?
“The snake and reptilian robots are, in essence, training wheels,” says Bongard, “they allow evolution to find motion patterns quicker, because those kinds of robots can’t fall over. So evolution only has to solve the movement problem, but not the balance problem, initially. Then gradually over time it’s able to tackle the balance problem after already solving the movement problem.”
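One way to caricature those “training wheels” is as a schedule that holds the body flat while movement is being solved, then tilts the legs upright so balance only becomes a problem later. The sketch below is hypothetical -- the linear schedule, the `infancy_fraction` knob, and the function name are all invented, and in the actual experiments the time spent in early body forms itself evolved rather than being fixed:

```python
def leg_angle(generation, total_generations, infancy_fraction=0.3):
    """Leg angle in degrees: 0 = flat/reptilian, 90 = upright/quadruped.

    The body stays flat for an initial "infancy" while gaits are found,
    then tilts upward linearly, so balance is tackled only after
    movement already works.
    """
    infancy = int(total_generations * infancy_fraction)
    if generation < infancy:
        return 0.0
    progress = (generation - infancy) / (total_generations - infancy)
    return min(90.0, 90.0 * progress)

print(leg_angle(10, 100))   # early generation: still flat
print(leg_angle(100, 100))  # final generation: fully upright
```

The same idea reappears physically later in the article, where a mechanical brace slowly tilts a Lego robot’s legs from horizontal to vertical.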
Sound anything like how a human infant first learns to roll, then crawl, then cruise along the coffee table and, finally, walk?
“Yes,” says Bongard. “We’re copying nature, we’re copying evolution, we’re copying neural science when we’re building artificial brains into these robots.” But the key point is that his robots don’t only evolve their artificial brain -- the neural network controller -- but rather do so in continuous interaction with a changing body plan. A tadpole can’t kick its legs, because it doesn’t have any yet; it’s learning some things legless and others with legs.
“One thing that has been left out all this time is the obvious fact that in nature it’s not that the animal’s body stays fixed and its brain gets better over time,” Bongard says. “In natural evolution animals’ bodies and brains are evolving together all the time.” A human infant, even if she knew how, couldn’t walk: her bones and joints aren’t up to the task until she starts to experience stress on the foot and ankle.
That hasn’t been done in robotics for an obvious reason: “It’s very hard to change a robot’s body,” Bongard says, “it’s much easier to change the programming inside its head.”
Still, Bongard gave it a try. After running 5,000 simulations, each taking 30 hours on the parallel processors in UVM’s Vermont Advanced Computing Center -- “it would have taken 50 or 100 years on a single machine,” Bongard says -- he took the task into the real world.
“We built a relatively simple robot, out of a couple of Lego Mindstorms kits, to demonstrate that you actually could do it,” he says. This physical robot is four-legged, as in the simulation, but the Lego creature wears a brace on its front and back legs. “The brace gradually tilts the robot,” as the controller searches for successful movement patterns, Bongard says, “so that the legs go from horizontal to vertical, from reptile to quadruped.
“While the brace is bending the legs, the controller is causing the robot to move around, so it’s able to move its legs, and bend its spine,” he says. “It’s squirming around like a reptile flat on the ground, and then it gradually stands up until, at the end of this movement pattern, it’s walking like a coyote.”
“It’s a very simple prototype,” he says, “but it works.”