How This Humanoid Robot Learned to Make Coffee by Watching Videos

Figure, the company behind the coffee-making robot, has developed a full-body humanoid called Figure-01. The AI-powered robot can walk, talk, and interact with its surroundings, and it can learn new skills by watching videos. At Figure’s headquarters in San Francisco, the robot demonstrated making coffee with a Keurig machine. It had previously watched a video of a human making coffee on the same machine and learned to mimic the human’s actions, using its vision system to recognize the machine and its parts and its motion-planning system to coordinate its movements.

The whole process took less than a minute, and the robot needed no human assistance or guidance; it performed the task autonomously, using only the video as a reference. This is an example of end-to-end AI: a neural network takes video in and puts trajectories out.
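To make the "video in, trajectories out" idea concrete, here is a toy sketch of such a mapping. Everything about it is an illustrative assumption, not Figure's actual architecture: the frame shape, the joint count, the trajectory horizon, and the two random linear layers standing in for a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions only -- not Figure's real model.
FRAME_SHAPE = (8, 64, 64)   # a clip of 8 grayscale 64x64 frames
N_JOINTS = 16               # hypothetical actuator count
HORIZON = 50                # trajectory length in control steps

# Random weights stand in for a trained encoder/decoder.
W_enc = rng.standard_normal((int(np.prod(FRAME_SHAPE)), 128)) * 0.01
W_dec = rng.standard_normal((128, HORIZON * N_JOINTS)) * 0.01

def video_to_trajectory(frames: np.ndarray) -> np.ndarray:
    """Map a video clip to a (HORIZON, N_JOINTS) joint trajectory."""
    z = np.tanh(frames.reshape(-1) @ W_enc)        # encode clip to a latent vector
    traj = (z @ W_dec).reshape(HORIZON, N_JOINTS)  # decode latent to a trajectory
    return np.tanh(traj)                           # bound outputs like joint limits

clip = rng.standard_normal(FRAME_SHAPE)
trajectory = video_to_trajectory(clip)
print(trajectory.shape)  # (50, 16)
```

The point of the sketch is the interface, not the internals: there is no hand-written grasp or motion logic in between, just pixels going in one end and a time series of joint targets coming out the other.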

This demonstration showed that Figure-01 has added a new autonomous action to its skill library, one that can be transferred to any other Figure robot running the same system via swarm learning: if one robot learns something new, every other robot acquires the skill without having to watch the video itself. Sharing knowledge and skills this way makes the fleet's learning faster and more efficient.
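The fleet-transfer idea above can be sketched in a few lines. This is a minimal illustration under loud assumptions: the `Robot` class, the skill names, and the parameter dictionary are all hypothetical stand-ins for whatever representation Figure actually distributes.

```python
import copy

class Robot:
    """Toy robot that stores learned skills as named parameter sets.

    A 'skill' here is just a placeholder dict of policy parameters;
    the class and names are illustrative, not Figure's real system.
    """
    def __init__(self, name: str):
        self.name = name
        self.skills: dict = {}

    def learn_from_video(self, skill_name: str, params: dict) -> None:
        # One robot learns a new skill by watching a demonstration.
        self.skills[skill_name] = params

    def can_do(self, skill_name: str) -> bool:
        return skill_name in self.skills

def broadcast_skill(teacher: Robot, fleet: list, skill_name: str) -> None:
    """Copy one robot's learned policy to every other robot in the fleet."""
    params = teacher.skills[skill_name]
    for robot in fleet:
        if robot is not teacher:
            # Deep-copy so each robot holds its own parameters.
            robot.skills[skill_name] = copy.deepcopy(params)

fleet = [Robot(f"figure-{i:02d}") for i in range(3)]
fleet[0].learn_from_video("make_coffee", {"weights": [0.1, 0.2]})
broadcast_skill(fleet[0], fleet, "make_coffee")
print(all(r.can_do("make_coffee") for r in fleet))  # True
```

The key property the sketch captures is that only one robot pays the cost of learning; the rest receive the finished policy, which is why the article describes fleet-wide learning as faster and more efficient.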

According to Figure, Figure-01 can learn tasks ranging from peeling bananas to using power tools to making art, all by watching videos. The robot can learn not only what to do, but also why and how to do it, and how to adapt to different situations and contexts.

Figure plans to use video learning to make Figure-01 more versatile, creative, and collaborative. The company expects that the robot could soon find a cup in the kitchen, check that the Keurig is plugged in and has plenty of water, brew the coffee at the press of a button, and bring it to your desk without spilling it, combining video learning with its walking and language capabilities.

Read More @ CyberGuy
