Stanford's Computer Science department has developed an AI-assisted robot that dives to depths humans cannot. Its mobility and semi-autonomous features mark a further step towards submersible artificial intelligence.

The OceanOne is filled with compressible oil to offset the crushing pressure found 100 metres underwater, and AI-assisted navigation steers it clear of obstacles. Its operators remain on land, observing on screen everything the robot captures, using joysticks to drive it and guiding its hands through a feedback mechanism that relays tactile sensations. “It’s impossible to let a robot act alone in such an environment: it will fail,” says Professor Oussama Khatib, OceanOne’s creator. “The only way you can guarantee success is connecting a worker through a haptic device to the robot. You’re transmitting your goals to the robot, and the robot will touch and transmit exactly the same feelings back to your fingers. Virtually, it’s as if you’re diving there.”
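To give a flavour of the bilateral haptic link Khatib describes, here is a minimal, purely illustrative sketch in Python. It is not OceanOne's actual software; every interface name (HapticDevice, RobotArm and their methods) is a hypothetical stand-in, and the loop simply forwards the operator's commanded pose to the robot while reflecting measured contact forces back to the operator's fingers.

```python
# Illustrative sketch only: a generic bilateral teleoperation loop of the kind
# described above, NOT OceanOne's real control code. All names are hypothetical.

import time


class HapticDevice:
    """Operator-side haptic rig (hypothetical interface)."""

    def read_pose(self):
        # The operator's commanded hand position, e.g. (x, y, z) in metres.
        return (0.0, 0.0, 0.0)

    def apply_force(self, force):
        # Push back on the operator's fingers so they "feel" the contact.
        pass


class RobotArm:
    """Robot-side arm with fingertip force sensing (hypothetical interface)."""

    def move_towards(self, pose):
        # Drive the arm towards the commanded pose.
        pass

    def read_contact_force(self):
        # Force measured at the fingertips, e.g. (fx, fy, fz) in newtons.
        return (0.0, 0.0, 0.0)


def teleoperation_loop(haptic: HapticDevice, arm: RobotArm, rate_hz: float = 500.0):
    """Forward the operator's goals to the robot; relay contact forces back."""
    period = 1.0 / rate_hz
    while True:
        arm.move_towards(haptic.read_pose())          # operator -> robot
        haptic.apply_force(arm.read_contact_force())  # robot -> operator
        time.sleep(period)
```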

The 180kg OceanOne is propelled by eight battery-powered thrusters, while an electric tether provides power on longer trips. Its hands are fitted with pressure sensors that send haptic feedback to the claw-like rigs used to manoeuvre them. Other sensors provide real-time environmental data. Initially conceived for exploring the Red Sea’s coral reef, the 1.5-metre-long OceanOne ended up making its maiden voyage on an underwater archaeology mission: during a two-hour expedition in April, it reached a Louis XIV-era French warship 100 metres below the Mediterranean Sea and recovered a vase from the wreck. But Khatib believes OceanOne could be used for more than just treasure hunting. “It can use tools, it can fix underwater pipes,” he explains. “It is valuable for companies that are building structures underwater but cannot send people there.” cs.stanford.edu
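As a rough idea of how a pilot's motion command might be spread across eight thrusters, the sketch below uses a standard least-squares thruster-allocation approach common in underwater robotics. The thruster geometry, axis choices and force limits are invented for illustration and are not OceanOne's real configuration.

```python
# Hedged sketch of thruster allocation for an eight-thruster vehicle.
# The layout below is hypothetical, not OceanOne's actual geometry.

import numpy as np

# Each column maps one thruster's unit thrust onto the controlled body axes
# (surge, sway, heave, yaw) - an invented 4 x 8 layout for illustration.
ALLOCATION = np.array([
    [1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],      # surge (forward/back)
    [0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0],      # sway  (left/right)
    [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0],      # heave (up/down)
    [0.3, -0.3, 0.3, -0.3, 0.0, 0.0, 0.0, 0.0],    # yaw torque
])


def allocate_thrust(command: np.ndarray) -> np.ndarray:
    """Distribute a [surge, sway, heave, yaw] command over eight thrusters
    using the pseudoinverse of the allocation matrix."""
    thrusts = np.linalg.pinv(ALLOCATION) @ command
    # Clamp to an assumed per-thruster limit (illustrative value).
    return np.clip(thrusts, -50.0, 50.0)


if __name__ == "__main__":
    # Example: gentle forward motion combined with a slight descent.
    print(allocate_thrust(np.array([10.0, 0.0, -5.0, 0.0])))
```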

Source: http://www.wired.com