Imagine ChatGPT embodied in a physical form: that is precisely what the startup Figure has built with its latest creation, Figure 01. In a recent demonstration, the robot showcased some truly remarkable abilities. When queried about its surroundings, it aced a visual recognition test, using its camera eyes to accurately describe items such as a red apple, a dish-filled drying rack, and even the person interacting with it.
However, where Figure 01 truly shines is in its capacity not only to perceive the world but also to act on it. In the demonstration, when the person asked for something to eat, the robot smoothly picked up the apple, the only edible item in sight, and handed it over.
How Figure 01 Integrates Visual Data and Transcribed Speech
Figure 01 operates by gathering visual data from its cameras along with transcribed speech, which it feeds into OpenAI’s language model. The model analyzes this information to grasp the full context, enabling the robot to generate intelligent responses and decide which motor skills to employ. Because Figure has provided few technical details, it is unclear whether any part of the demonstration was pre-programmed. Notably, the robot demonstrated multitasking when prompted by the demo person: it explained why it had handed over the apple while simultaneously identifying and disposing of trash items. Throughout this process, it maintained a conversational tone while articulating its reasoning behind the apple handoff.
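To make the described pipeline concrete, here is a minimal, hypothetical sketch of such a perception-to-action loop in Python. Figure has not released its implementation, so everything here is an assumption for illustration: the `Observation` structure, the stub standing in for a call to a hosted language model, and the small action vocabulary are all invented names, not Figure's or OpenAI's actual API.

```python
# Hypothetical sketch of a perception-to-action loop like the one described
# above. All names (Observation, stub_language_model, ACTIONS) are invented
# for illustration; Figure has not published implementation details.

from dataclasses import dataclass

@dataclass
class Observation:
    """What the robot perceives in one cycle."""
    scene_description: str   # e.g. produced by a vision model from the cameras
    transcribed_speech: str  # e.g. produced by a speech-to-text model

# A small, assumed vocabulary of motor skills the planner may select from.
ACTIONS = ["pick_up(apple)", "hand_over(apple)", "dispose(trash)", "speak"]

def stub_language_model(prompt: str) -> str:
    """Stand-in for a call to a hosted LLM. Returns a chosen motor skill
    and a spoken reply, separated by '|'."""
    if "something to eat" in prompt and "apple" in prompt:
        return "pick_up(apple)|Sure, here is the only edible item I can see."
    return "speak|I'm not sure how to help with that."

def decide(obs: Observation, model=stub_language_model) -> tuple[str, str]:
    """Fuse the visual and speech inputs into one prompt, then parse the
    model's choice of motor skill and its conversational reply."""
    prompt = f"Scene: {obs.scene_description}\nUser said: {obs.transcribed_speech}"
    action, _, reply = model(prompt).partition("|")
    assert action in ACTIONS, f"model chose an unknown skill: {action}"
    return action, reply

if __name__ == "__main__":
    obs = Observation(
        scene_description="a red apple on a plate, a drying rack with dishes",
        transcribed_speech="Can I have something to eat?",
    )
    action, reply = decide(obs)
    print(action, "->", reply)  # → pick_up(apple) -> Sure, here is the only edible item I can see.
```

The key design point the demo suggests is exactly this fusion step: vision and speech are combined into a single context before the language model picks both what to say and which skill to run.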