For the First Time, a Robot Has Learned to Visualize Itself


Engineers at Columbia University have created a robot that learns to understand itself rather than the world around it.

Any athlete or fashion-conscious person will tell you that our perception of our own bodies is not always accurate or realistic, but that it is an important factor in how we function in the world. While you’re playing ball or getting dressed, your brain is constantly planning your movements so that you can move your body without colliding with anything, tripping over something, or falling.

We humans begin to construct our body models as infants, and now robots are doing the same. Today, Columbia Engineering announced the creation of a robot that, for the first time, can learn a model of its entire body from scratch without human assistance. In a recent paper published in Science Robotics, the researchers explain how their robot built a kinematic model of itself and then used that model to plan motions, complete tasks, and avoid obstacles in a variety of scenarios. When its body was damaged, it could detect the damage and compensate for it.

A Self-Modeling Robot from Columbia Engineering

By visually self-modeling its full-body morphology, a robot can adapt to a variety of motion planning and control tasks. Credit: Yinuo Qin and Jane Nisselson / Columbia Engineering.
The robot looks at its reflection with the same curiosity that a child would when playing in a maze of mirrors.

To collect data, the researchers placed a robotic arm inside a circle of five streaming video cameras. The robot watched itself through the surrounding cameras as it undulated freely, twisting and deforming as it tried to learn how its body moved in response to different motor commands, much like a child discovering itself for the first time in a hall of mirrors. After about three hours, the robot stopped: its internal deep neural network had finished learning the relationship between its motor commands and the amount of space its body took up in its surroundings.
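
The article describes the network as learning the relationship between motor commands and the space the body occupies. One way to make that idea concrete is as an implicit occupancy model: a network that takes a motor command plus a 3D query point and predicts whether that point lies inside the body. The sketch below is a minimal illustration of this framing in PyTorch, not the authors’ architecture or data pipeline; the joint count, network size, and the placeholder function standing in for the five-camera labels are all assumptions.

```python
# Minimal sketch (not the authors' code): an occupancy-style self-model that
# maps a motor command (joint angles) plus a 3D query point to the probability
# that the robot's body occupies that point. In the study the training labels
# came from five external cameras watching the arm; here a toy function stands
# in for that vision pipeline.
import torch
import torch.nn as nn

N_JOINTS = 4  # assumed number of actuated joints, for illustration only

class SelfModel(nn.Module):
    def __init__(self, n_joints: int = N_JOINTS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_joints + 3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),  # logit: "is this point inside the body?"
        )

    def forward(self, joint_angles, query_xyz):
        return self.net(torch.cat([joint_angles, query_xyz], dim=-1)).squeeze(-1)

def observed_occupancy(joint_angles, query_xyz):
    """Placeholder for labels derived from the camera rig: a fake 'body'
    modeled as a sphere whose centre swings with the first joint angle."""
    centre = torch.stack([joint_angles[:, 0].cos(),
                          joint_angles[:, 0].sin(),
                          torch.zeros_like(joint_angles[:, 0])], dim=-1)
    return ((query_xyz - centre).norm(dim=-1) < 0.3).float()

model = SelfModel()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):                     # the real robot trained for ~3 hours
    q = torch.rand(512, N_JOINTS) * 3.14     # random motor commands
    p = torch.rand(512, 3) * 2 - 1           # random query points in the workspace
    labels = observed_occupancy(q, p)        # "what the cameras saw"
    loss = loss_fn(model(q, p), labels)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```

Once trained, a model of this kind can be queried at any pose and any point in space, which is what makes it usable for motion planning and for the visualization described next.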

“We were particularly intrigued to discover how the robot imagined itself,” said Hod Lipson, a professor of mechanical engineering at Columbia University and director of the university’s Creative Machines Lab, where the research was conducted. “However, a neural network is a black box, and you can’t just open it up and look inside.” After the researchers experimented with a variety of visualization strategies, the robot’s self-image finally became clear. Lipson described it as “a kind of slowly flashing fog that appeared to engulf the robot’s three-dimensional body.” “As the robot moved, the fluttering cloud trailed gently behind it.” The self-model was accurate to within about 1% of the robot’s workspace.
An in-depth look at the study’s methodology. Credit: Columbia Engineering.
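
Lipson’s description of a flickering cloud suggests one simple way to visualize what such a model believes: query a dense grid of points in the workspace for a fixed motor command and keep the ones the model marks as occupied. The sketch below continues the illustrative model above; the grid resolution and threshold are arbitrary choices, not values from the paper.

```python
# Illustrative visualization: render the learned "self-image" as a point cloud
# by querying the (toy) self-model from the previous sketch over a 3D grid.
import torch

def self_image(model, joint_angles, resolution=40, threshold=0.5):
    """Return the (x, y, z) points the model believes the body occupies."""
    axis = torch.linspace(-1.0, 1.0, resolution)
    grid = torch.cartesian_prod(axis, axis, axis)       # (resolution**3, 3)
    q = joint_angles.expand(grid.shape[0], -1)          # same pose for every query point
    with torch.no_grad():
        prob = torch.sigmoid(model(q, grid))
    return grid[prob > threshold]

pose = torch.tensor([[0.5, 1.0, -0.3, 0.2]])            # one assumed motor command
cloud = self_image(model, pose)
print(f"{cloud.shape[0]} occupied points out of {40 ** 3} queried")
```

Plotting such a point cloud for a sequence of poses would give something like the trailing cloud the researchers describe.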

Self-modeling robots will pave the way for more self-reliant autonomous systems

Robots need to be able to model themselves without the help of engineers for several reasons. Not only does it save labor, but it also allows a robot to keep up with its own wear and tear, and to detect and compensate for damage. The authors argue that this capability is essential as we ask autonomous systems to become more self-reliant. A factory robot, for example, could notice that something is not moving right and either compensate or request human assistance.
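
The factory-robot example hints at how a self-model could be used for self-monitoring. One possible scheme, again building on the toy occupancy model above and not drawn from the paper: compare what the model predicts the body should look like for a given command with what external cameras actually report, and raise a flag when the mismatch grows too large. The function, threshold, and inputs below are illustrative assumptions.

```python
# Hypothetical self-check: flag possible damage or wear when observed occupancy
# disagrees with the self-model's prediction by more than a tolerance.
import torch

def self_check(model, joint_angles, observed_points, observed_labels, tolerance=0.1):
    """Return True if the body no longer matches the self-model.

    observed_points: (N, 3) points sampled in the workspace
    observed_labels: (N,) 1.0 where the cameras saw the body, else 0.0
    """
    with torch.no_grad():
        q = joint_angles.expand(observed_points.shape[0], -1)
        predicted = torch.sigmoid(model(q, observed_points)).round()
    mismatch = (predicted - observed_labels).abs().mean().item()
    return mismatch > tolerance

# If self_check(...) returns True, the robot could re-plan around the fault
# or request human assistance, as the authors suggest.
```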

The paper’s first author, Boyuan Chen, who led the research and is now an assistant professor at Duke University, said that “we humans certainly have a notion of self.” “Close your eyes and try to imagine how your own body would move if you performed a movement, such as extending your arms forward or taking a step backward.” Somewhere within our brains we carry such a self-model: a notion of the self that tells us how much space we occupy in our immediate surroundings and how that space changes as we move.

Developing self-awareness in robots

This work is the latest step in Lipson’s decades-long effort to find ways to endow robots with some form of self-awareness. Self-modeling, he argues, is a primitive form of self-awareness: a person, animal, or robot with a realistic model of itself can function better in its surroundings, make better decisions, and gain an evolutionary advantage.

The researchers are well aware of the constraints, risks, and controversies that come with giving machines greater autonomy through self-awareness. As Lipson points out, the level of self-awareness demonstrated in this study is “trivial compared to that of humans, but you have to start somewhere.” The technology must advance slowly and carefully in order to reap the benefits while minimizing the risks.
