Want to make robots run faster? Try letting AI take control
Quadrupedal robots are becoming a familiar sight, but engineers are still working out the full capabilities of these machines. Now, a group of researchers from MIT says one way to improve their functionality might be to use AI to help teach the bots how to walk and run.
Usually, when engineers are creating the software that controls the movement of legged robots, they write a set of rules about how the machine should respond to certain inputs. So, if a robot’s sensors detect x amount of force on leg y, it will respond by powering up motor a to exert torque b, and so on. Coding these parameters is complicated and time-consuming, but it gives researchers precise and predictable control over the robots.
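To give a flavor of what those hand-written rules look like, here’s a minimal sketch. Everything in it — the function name, the force thresholds, the torque values — is hypothetical, invented for illustration rather than taken from any real controller:

```python
# Illustrative sketch only: a hand-tuned control rule of the kind
# described above. All names, thresholds, and torque values here are
# hypothetical, not MIT's actual controller code.

def leg_torque(contact_force: float) -> float:
    """Map a sensed contact force (N) on one leg to a motor torque (N*m).

    Each branch is a rule an engineer wrote and tuned by hand; covering
    every terrain and failure case means writing many more of these.
    """
    if contact_force < 5.0:       # leg in the air: swing it forward gently
        return 0.5
    elif contact_force < 50.0:    # normal stance: push off the ground
        return 2.0
    else:                         # hard impact: back off to protect the motor
        return 1.0

print(leg_torque(30.0))  # normal stance branch
```

Multiply this by every leg, every sensor, and every terrain, and the tedium the researchers describe becomes clear.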
An alternative approach is to use machine learning — specifically, a method known as reinforcement learning that functions through trial and error. This works by giving your AI model a goal known as a “reward function” (e.g., move as fast as you can) and then letting it loose to work out how to achieve that outcome from scratch. This takes a long time, but it helps if you let the AI experiment in a virtual environment where you can speed up time. It’s why reinforcement learning, or RL, is a popular way to develop AI that plays video games.
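The trial-and-error idea can be sketched in a few lines. This toy example stands in for a real simulator: the “robot” is a single made-up gait parameter, the dynamics are invented, and the search is plain random sampling rather than a real RL algorithm — but the shape is the same: define a reward, try things, keep what scores best.

```python
import random

# Toy trial-and-error sketch. The reward function says "go as fast as
# you can" and nothing else -- which is exactly why the learned gait
# can end up fast but ungainly.

def reward(forward_velocity: float) -> float:
    # Speed is all that counts; no term penalizes an awkward gait.
    return forward_velocity

def rollout(gait_param: float) -> float:
    # Invented stand-in dynamics: velocity peaks at one parameter value.
    return 4.0 - (gait_param - 2.5) ** 2

random.seed(0)
best_param, best_reward = None, float("-inf")
for _ in range(10_000):  # trial and error, cheap because it's simulated
    candidate = random.uniform(0.0, 5.0)
    score = reward(rollout(candidate))
    if score > best_reward:
        best_param, best_reward = candidate, score

print(f"best gait parameter: {best_param:.2f}, reward: {best_reward:.2f}")
```

Real RL replaces the random search with gradient-based policy updates, but the reward-driven loop is the core of it.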
This is the technique that MIT’s engineers used, creating new software (known as a “controller”) for the university’s research quadruped, Mini Cheetah. Using reinforcement learning, they were able to achieve a new top speed for the robot of 3.9 m/s, or roughly 8.7 mph. You can watch what that looks like in the video below:
As you can see, Mini Cheetah’s new running gait is a little ungainly. In fact, it looks like a puppy scrabbling to accelerate on a wooden floor. But, according to MIT PhD student Gabriel Margolis (a co-author of the research along with postdoctoral fellow Ge Yang), this is because the AI isn’t optimizing for anything but speed.
“RL finds one way to run fast, but given an underspecified reward function, it has no reason to prefer a gait that is ‘natural-looking’ or preferred by humans,” Margolis tells The Verge over email. He says the model could certainly be instructed to develop a more flowing form of locomotion, but the whole point of the endeavor is to optimize for speed alone.
Margolis and Yang say a big advantage of developing controller software using AI is that it’s less time-consuming than messing about with all the physics. “Programming how a robot should act in every possible situation is simply very hard. The process is tedious because if a robot were to fail on a particular terrain, a human engineer would need to identify the cause of failure and manually adapt the robot controller,” they say.
By using a simulator, engineers can place the robot in any number of virtual environments — from solid pavement to slippery rubble — and let it work things out for itself. Indeed, the MIT group says its simulator was able to speed through 100 days’ worth of staggering, walking, and running in just three hours of real time.
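The numbers above work out to a substantial speedup over real time, which a quick back-of-the-envelope check makes concrete:

```python
# Back-of-the-envelope check of the figure quoted above:
# 100 days of simulated experience in 3 hours of wall-clock time.

simulated_hours = 100 * 24   # 100 days of staggering, walking, running
wall_clock_hours = 3
speedup = simulated_hours / wall_clock_hours
print(f"{speedup:.0f}x faster than real time")  # 800x
```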
Some companies that develop legged robots are already using these sorts of methods to design new controllers. Others, though, like Boston Dynamics, apparently rely on more traditional approaches. (This makes sense given the company’s interest in developing very specific movements — like the jumps, vaults, and flips seen in its choreographed videos.)
There are also faster legged robots out there. Boston Dynamics’ Cheetah bot currently holds the record for a quadruped, reaching speeds of 28.3 mph — faster than Usain Bolt. However, not only is Cheetah a much bigger and more powerful machine than MIT’s Mini Cheetah, but it achieved its record running on a treadmill and mounted to a lever for stability. Without these advantages, maybe AI would give the machine a run for its money.