Social learning is, according to experts in several fields, what makes humans such complex beings. Without the ability to learn by observing others and interacting with them, we would have no culture, as evolutionary biologists Kevin Laland and Will Hoppitt argue: “Culture is based on information that is socially learned and transmitted.” Nor is it an exclusive characteristic of our species: chimpanzees learn from their fellows how to use plant stems to collect termites, and whales sing in different dialects. Now, technological progress is making it possible for robots to join this list of beings with a capacity for social learning.
In 2017, researchers from the Computer Science and Artificial Intelligence Laboratory (CSAIL) of the Massachusetts Institute of Technology (MIT) made a breakthrough: robots that can learn from each other. PhD student Claudia Pérez-D’Arpino, a specialist in robotics and computer science at CSAIL, developed a system called C-LEARN that adopts a two-pronged learning approach.
First, a robot is programmed with a knowledge base that allows it to interact with different objects. This knowledge base helps it navigate the constraints of its environment, such as the need to turn a knob to open a door. Once the robot knows how to physically interact with objects, it can begin to learn more complex tasks. For this, a human programmer uses the C-LEARN software to move the limbs of a virtual representation of the robot and thus demonstrate to its real equivalent how to execute each task. Unlike previous methods for teaching machines, C-LEARN requires the programmer to demonstrate each action only once.
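The two-pronged approach described above can be sketched roughly in code. The following is a minimal, hypothetical Python illustration, not the actual C-LEARN software: the names (`KNOWLEDGE_BASE`, `learn_from_demonstration`) and the data layout are my own assumptions. The idea it captures is that the robot starts with prior knowledge of how object types may be manipulated, so a single demonstrated trajectory, recorded as keyframes, is enough to produce a constrained plan.

```python
# Minimal sketch of one-shot learning from demonstration with a prior
# knowledge base. Illustrative only -- not the C-LEARN API.

# Knowledge base: how the robot may interact with each object type.
KNOWLEDGE_BASE = {
    "door_knob": {"grasp": "cylindrical", "motion": "rotate"},
    "handle":   {"grasp": "power",       "motion": "pull"},
    "valve":    {"grasp": "cylindrical", "motion": "rotate"},
}

def learn_from_demonstration(keyframes):
    """Turn a single demonstrated trajectory into a constrained plan.

    Each keyframe names the object being touched and the pose of the
    virtual robot's end effector; the grasp and motion constraints come
    from prior knowledge, which is why one demonstration suffices.
    """
    plan = []
    for frame in keyframes:
        obj = frame["object"]
        if obj not in KNOWLEDGE_BASE:
            raise ValueError(f"no prior knowledge about {obj!r}")
        constraints = KNOWLEDGE_BASE[obj]
        plan.append({
            "target_pose": frame["pose"],
            "grasp": constraints["grasp"],
            "motion": constraints["motion"],
        })
    return plan

# A single demonstration: open a door by turning its knob.
demo = [{"object": "door_knob", "pose": (0.4, 0.1, 1.0)}]
plan = learn_from_demonstration(demo)
```

The point of the sketch is the division of labour: the demonstration supplies the *what* (which object, which pose), while the knowledge base supplies the *how* (which grasp, which motion), matching the article's description of combining new information with prior knowledge.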
A bomb-disposal robot
Pérez-D’Arpino used the software to teach complex tasks to Optimus, a small two-armed robot on wheels designed for bomb disposal. The researchers then tested whether Optimus could transmit that knowledge to other robots, specifically Atlas, a bipedal android 1.8 metres tall developed by the firm Boston Dynamics, which can perform impressive jumps and backflips and balance on one leg while being struck by projectiles.
As they had done previously with the virtual version of the robot, the researchers used C-LEARN to transfer Optimus’s knowledge base and learned tasks to a system with a virtual representation of Atlas. Atlas was thus able to integrate the first robot’s knowledge with its own information base to solve real-world challenges. The Boston Dynamics robot managed to execute tasks that had only been demonstrated to Optimus, and even performed them better, since it can keep its balance better than the small MIT robot.
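The transfer described above can be sketched as follows. This is a hedged illustration under my own assumptions (the `Robot` class, the `reach` parameter, and the numbers are all hypothetical, not from the MIT system): the learned plan stays abstract, in terms of goal poses and motion constraints, and each robot maps it onto its own body and capabilities, which is how a task demonstrated only to Optimus can also run on Atlas.

```python
# Illustrative sketch: replaying one abstract learned plan on robots
# with different bodies. Names and numbers are hypothetical.

class Robot:
    def __init__(self, name, reach):
        self.name = name
        self.reach = reach  # maximum reachable distance from base, metres

    def execute(self, plan):
        """Check each step is reachable for this body, then 'execute' it."""
        executed = []
        for step in plan:
            x, y, z = step["target_pose"]
            distance = (x**2 + y**2 + z**2) ** 0.5
            if distance > self.reach:
                raise RuntimeError(f"{self.name} cannot reach {step['target_pose']}")
            executed.append((self.name, step["motion"]))
        return executed

# The same abstract plan, learned once from a demonstration,
# executed by two robots with different embodiments.
plan = [{"target_pose": (0.4, 0.1, 1.0), "motion": "rotate"}]

optimus = Robot("Optimus", reach=1.2)
atlas = Robot("Atlas", reach=2.0)

result_optimus = optimus.execute(plan)
result_atlas = atlas.execute(plan)
```

The design choice the sketch highlights is that nothing robot-specific is stored in the plan itself; each robot’s own execution layer decides how (and whether) it can realise each step, which is what lets a more capable robot like Atlas perform the task better than the robot that was originally taught.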
“This approach is actually very similar to how humans learn in terms of seeing how something’s done and connecting it to what we already know about the world,” says Pérez-D’Arpino. “We can’t magically learn from a single demonstration, so we take new information and match it to previous knowledge about our environment.” The researcher adds that by combining the “intuition of learning” with the precision of algorithms for planning movement, this new field of research can help robots to perform new types of tasks that they couldn’t learn before, such as multi-step assembly.
Machines that teach other machines
The MIT experiments illustrate how humans can teach machines to teach other machines. This system of demonstrating tasks to a robot that can then transfer its abilities to other robots with different shapes and capacities could be only the first step towards independent social learning in machines. Is this the beginning of robot culture?
Pérez-D’Arpino clarifies that, at the moment, the social learning of robots still requires substantial human involvement, and that the machines can’t deviate from the steps they have learned. But her team is already working on projects to make robots more adaptable. “I think in the future this type of robotics will go out of factories and help in settings like hospitals and, ultimately, at home, where it can assist people in doing tasks that they can’t do,” says the researcher.
“Traditional programming of robots in real-world scenarios is difficult, tedious and requires a lot of domain knowledge,” adds Julie Shah, an MIT professor who co-directed the research. “It would be much more effective if we could train them more like how we train people: by giving them some basic knowledge and a single demonstration,” she says. “This is an exciting step towards teaching robots to perform complex multi-arm and multi-step tasks necessary for assembly manufacturing and ship or aircraft maintenance.”