JAN 01, 2018 6:47 AM PST

Robots Engineered to Learn Like Babies

In the first year of life, a baby's brain develops at a lightning-fast pace. Milestones like sitting up, grasping objects, crawling, and eventually walking are all the result of neurons firing and becoming active.
 
Babies don’t realize it, but they gain these skills by imagining their next move. The brain is a step ahead, and that’s how they achieve all these abilities in such a short time.
 
Researchers at UC Berkeley have used that brain function as a model for their new robot, Vestri. Robots are being developed for many different applications, but the usual way to teach one to move or pick up objects is to program hundreds of items and movements into its software. That approach limits what the robot can do: once it encounters an object that has not been entered into its database, it won't know what to do.
 
Vestri is being developed to think ahead, an ability the researchers call visual foresight. The same capability could one day let self-driving cars avoid accidents or road hazards by anticipating them, and robots that assist someone in the home or the workplace will need it too. Engineers have found that mimicking the way the human brain does this is the most efficient approach. Babies learn almost entirely through autonomous play; a baby sitting on the floor banging spoons and pans together might not seem like something roboticists would learn from, but it's exactly the example they needed to make robots more helpful.
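To make the idea of visual foresight concrete, here is a minimal Python sketch of planning by imagination: candidate actions are sampled, a learned video-prediction model imagines their outcomes, and the action sequence whose predicted final frame best matches a goal image wins. The function and parameter names here are illustrative assumptions, not the Berkeley team's actual code.

```python
import numpy as np

# Illustrative sketch of visual-foresight planning (hypothetical names, not the
# Berkeley implementation): imagine the outcome of many candidate action
# sequences with a learned video-prediction model, then pick the one whose
# predicted final frame looks most like the goal image.

def plan_action(current_frame, goal_frame, predict_frames,
                num_candidates=100, horizon=5, action_dim=4):
    best_actions, best_score = None, float("inf")
    for _ in range(num_candidates):
        # Sample a random candidate sequence of small arm motions.
        actions = np.random.uniform(-1.0, 1.0, size=(horizon, action_dim))
        # "Imagine" the future: the model predicts the frames these actions cause.
        predicted = predict_frames(current_frame, actions)
        # Score the imagined outcome by how close its last frame is to the goal.
        score = np.mean((predicted[-1] - goal_frame) ** 2)
        if score < best_score:
            best_actions, best_score = actions, score
    return best_actions
```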
 
For now, the goal is just to get the robot's software to anticipate a few seconds into the future. The robot receives images of its surroundings through cameras that serve as its “eyes.” The learning starts with moving objects around on a table, and it's vital that the robot is simply left on its own to figure out how to move some objects without knocking over others. Once it has spent time managing these tasks, the algorithms in its software build what is called a “predictive model,” which comes in handy the next time an unfamiliar item is encountered.
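A rough sketch of that self-supervised “play” loop might look like the code below. The camera, arm, and model interfaces are assumptions made for illustration: the robot records which frame it saw, the random action it tried, and the frame that resulted, and a predictive model is fit to guess the next frame from the current one plus the action, with no human labels involved.

```python
import numpy as np

# Hypothetical sketch of learning from autonomous play: record (frame, action,
# next frame) triples while the robot pokes at objects on its own, then fit a
# predictive model to guess the next frame. The camera, arm, and model
# interfaces are assumed for illustration only.

def collect_play_data(camera, arm, num_steps=1000):
    data = []
    frame = camera.read()
    for _ in range(num_steps):
        action = np.random.uniform(-1.0, 1.0, size=4)   # a small random push
        arm.execute(action)
        next_frame = camera.read()
        data.append((frame, action, next_frame))        # no human labels needed
        frame = next_frame
    return data

def train_predictive_model(model, data, epochs=10):
    for _ in range(epochs):
        for frame, action, next_frame in data:
            predicted = model.predict_next(frame, action)
            error = np.mean((predicted - next_frame) ** 2)
            model.update(error)                         # nudge the model toward the truth
    return model
```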
 
Sergey Levine, an assistant professor in Berkeley’s Department of Electrical Engineering and Computer Sciences, runs the lab that is working to produce a prototype that can think ahead in a situation it hasn’t been programmed to handle. He explained, “In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it. This can enable intelligent planning of highly flexible skills in complex real-world situations.”
 
The robot isn’t complete yet, but everything it has managed so far has been accomplished entirely through machine learning. The technique is called “dynamic neural advection” (DNA), a way of predicting how the pixels of a video will move from one frame to the next. By performing the same tasks over and over, without any human feedback to correct errors, the robot's software learns to predict what will happen next. Much like letting a baby cruise around the furniture, grasping at toys and finding its way, this approach to engineering robots makes the machines more human-like. Once the video-prediction piece works reliably, it could replace the practice of programmers entering thousands of items and images into a robot's database. The video below shows the prototype at work; check it out.
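The core idea of dynamic neural advection is to predict motion rather than paint new pixels from scratch. The toy example below is an assumption-laden illustration, not the published model: it builds the “next frame” by shifting every pixel of the current frame by a single predicted displacement, whereas the real DNA model uses a neural network to predict a transformation for each pixel.

```python
import numpy as np

# Toy illustration of the idea behind dynamic neural advection: instead of
# generating a new image, form the next frame by moving the pixels of the
# current frame. Here the "predicted" motion is one uniform shift; in the
# real model, a neural network predicts a per-pixel transformation.

def advect_frame(frame, dy, dx):
    """Form the next frame by shifting every pixel of `frame` by (dy, dx)."""
    h, w = frame.shape[:2]
    next_frame = np.zeros_like(frame)
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    new_ys = np.clip(ys + dy, 0, h - 1)
    new_xs = np.clip(xs + dx, 0, w - 1)
    next_frame[new_ys, new_xs] = frame[ys, xs]
    return next_frame

frame = np.random.rand(64, 64, 3)            # stand-in for a camera image
predicted_next = advect_frame(frame, 2, 0)   # pixels predicted to drift down two rows
```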
 
About the Author
I'm a writer living in the Boston area. My interests include cancer research, cardiology and neuroscience. I want to be part of using the Internet and social media to educate professionals and patients in a collaborative environment.