AI researchers from Stanford University and the Shanghai Qi Zhi Institute have introduced a quadruped robot, inspired by the agility and adaptability of dogs and powered by a novel vision-based algorithm. The algorithm, serving as the robodog's cognitive center, lets it clear unfamiliar obstacles while maintaining forward momentum with minimal exertion.
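The article does not detail the underlying model, but the description implies a neural network policy that maps onboard camera input and the robot's own body state to motor commands. Below is a minimal PyTorch sketch of such a vision-conditioned policy; the class name, input sizes, and layer widths are all illustrative assumptions, not details from the research.

```python
import torch
import torch.nn as nn

class VisionLocomotionPolicy(nn.Module):
    """Hypothetical sketch: depth image + proprioception -> joint targets.

    Assumed dimensions (illustrative only): a 64x64 depth image, a 30-D
    proprioceptive vector (joint angles/velocities, base orientation),
    and 12 actuated joints, as on a typical quadruped.
    """

    def __init__(self, proprio_dim: int = 30, num_joints: int = 12):
        super().__init__()
        # Small CNN encoder compresses the depth image into a feature vector.
        self.depth_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ELU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ELU(),
            nn.Flatten(),
            nn.LazyLinear(64), nn.ELU(),
        )
        # MLP head fuses vision features with the robot's body state.
        self.head = nn.Sequential(
            nn.Linear(64 + proprio_dim, 128), nn.ELU(),
            nn.Linear(128, num_joints),  # joint-position targets
        )

    def forward(self, depth: torch.Tensor, proprio: torch.Tensor) -> torch.Tensor:
        features = self.depth_encoder(depth)
        return self.head(torch.cat([features, proprio], dim=-1))

# One control step with dummy inputs.
policy = VisionLocomotionPolicy()
depth = torch.rand(1, 1, 64, 64)   # normalized depth frame
proprio = torch.zeros(1, 30)       # proprioceptive state
joint_targets = policy(depth, proprio)
```

In a deployed system, a policy like this would run in a fast onboard loop, consuming each new depth frame and emitting joint targets for the motor controllers.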
These battery-operated quadrupeds, designed to serve as the vanguard in disaster scenarios such as earthquakes, fires, and floods, use computer vision to size up obstructions and mimic the dexterity of real dogs to overcome them. The robodog's repertoire includes climbing tall obstacles, leaping across gaps, crawling under barriers, and squeezing through confined spaces before promptly moving on to the next challenge.
This breakthrough marks a substantial advance in the application of AI to disaster management and rescue operations, with the potential to save lives in future calamities.
Unlike robodogs that learn agility skills by imitating real animals from recorded motion data, these robots have a broader skill set and stronger vision capabilities, and they avoid the computational lag that slows existing methods. Theirs is the first open-source system to achieve these goals with a simple reward function, without relying on real-world reference data.
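The article does not spell out the reward, but a "simple reward system" in legged-robot reinforcement learning typically combines forward progress with a penalty on energy use. The sketch below is written under that assumption; the function name, weighting, and the specific penalty term are illustrative, not taken from the paper.

```python
import numpy as np

def parkour_reward(forward_velocity: float,
                   joint_torques: np.ndarray,
                   joint_velocities: np.ndarray,
                   energy_weight: float = 0.001) -> float:
    """Hypothetical 'simple reward': move forward, spend little energy.

    Note that nothing in this signal references animal motion data --
    the robot is rewarded only for forward progress and penalized for
    mechanical power, consistent with the article's description.
    """
    forward_reward = forward_velocity  # m/s along the robot's heading
    # Mechanical power: sum over joints of |torque * angular velocity|.
    energy_cost = float(np.sum(np.abs(joint_torques * joint_velocities)))
    return forward_reward - energy_weight * energy_cost

# Example: modest forward speed with low actuation effort on 12 joints.
r = parkour_reward(0.8, np.full(12, 2.0), np.full(12, 1.5))
```

Because no reference motions appear in the objective, behaviors such as climbing, leaping, and crawling have to emerge from trial and error rather than from copying an animal.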
After developing the algorithm, the research team ran extensive tests on physical robodogs to demonstrate the novel agility approach in particularly challenging environments. The robots relied exclusively on standard onboard computers, visual sensors, and power systems to perform these tasks.
In terms of measurable achievements, the upgraded robodogs climbed obstacles more than one-and-a-half times their own height, leapt across gaps longer than their own body, crawled under barriers three-quarters of their height, and tilted themselves to squeeze through openings narrower than their own width.
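Because the article reports these feats relative to body size rather than in absolute units, the short calculation below plugs in hypothetical dimensions for a small quadruped to make the scales concrete; the figures are illustrative, not measurements from the study.

```python
# Hypothetical robot dimensions (illustrative only; the article gives
# performance relative to body size, not absolute measurements).
robot_height_m = 0.40
robot_length_m = 0.50
robot_width_m = 0.30

print(f"Climbable obstacle : > {1.5 * robot_height_m:.2f} m (1.5x height)")
print(f"Crossable gap      : > {1.0 * robot_length_m:.2f} m (1x body length)")
print(f"Crawl-under barrier: {0.75 * robot_height_m:.2f} m (0.75x height)")
print(f"Squeeze-through gap: < {robot_width_m:.2f} m (narrower than width)")
```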
Looking ahead, the team plans to use advances in 3D vision and graphics to bring real-world data into their simulated training environments, a step aimed at improving the algorithm's autonomy in the real world.