How Perceptual Computing Develops Science Nonfiction
Perceptual computing will soon make interacting with our devices even more intuitive and natural.
"Devices with human-like senses—the ability to see, hear and feel much as people do—have long been a subject of science fiction but are now within reach given recent innovations in computing power and camera technology," said Arvind Sodhani, Intel executive vice president and president of Intel Capital.
The technology behind perceptual computing focuses on next-generation natural user interfaces—touch, gesture, voice, emotion sensing, biometrics, and image recognition—that let computers work around us rather than forcing us to keep working around them. These interfaces deliver services more naturally than traditional input devices such as the keyboard and mouse.
"If you want a simple explanation of what we're doing, just look to Asimov," said Mooly Eden, the head of Intel's perceptual computing efforts. "Or Star Trek, Star Wars, and Avatar. The ideas have been in science fiction for years, and now they're becoming fact."
Though some products—such as Leap Motion and Microsoft Kinect—are already well known for taking advantage of the technology, perceptual computing isn’t about a specific platform. It’s an open-ended vision for what computers should be able to do. Perceptual computing can deliver more natural, intuitive, and immersive experiences such as eye tracking, voice recognition, face recognition, and gesture control.
Intel’s partnership with Nuance Communications Inc. has enabled the company to build a voice command system of its own. The Nuance software interprets natural-language commands rather than a fixed set of preset phrases. This puts Intel’s voice control system in line with services like Google Now and Apple’s Siri, but focused on the PC.
The Intel Perceptual Computing SDK allows developers to add perceptual computing usages to applications for second-, third-, and fourth-generation Intel Core processor-based Ultrabook devices, laptops, and desktop systems. The SDK works at a range of roughly six inches to three feet and supports perceptual computing usage modes including speech recognition; face analysis and tracking; hand, finger, and gesture tracking; and 2D and 3D object tracking. Intel’s Perceptual Computing SDK aims to provide many of the same features as Kinect in a smaller package and with a less Microsoft-centered toolset.
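Usage modes like those above typically follow a common application pattern: initialize a pipeline, enable the modes you need, then poll camera frames and react to recognized events. The sketch below illustrates that pattern in Python with entirely hypothetical names—the real Intel Perceptual Computing SDK is a C/C++/C# library, and `Pipeline`, `enable_gesture`, `GestureEvent`, and the confidence threshold here are illustrative stand-ins, not the actual API.

```python
# Illustrative sketch only: every name here (Pipeline, GestureEvent, the
# 0.8 confidence cutoff) is a hypothetical stand-in showing the typical
# enable-modes / poll-frames pattern, not the real Intel SDK interface.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class GestureEvent:
    label: str         # e.g. "thumb_up", "swipe_left"
    confidence: float  # detector confidence in [0.0, 1.0]


class Pipeline:
    """Stand-in for an SDK pipeline: enable usage modes, then poll frames."""

    def __init__(self, frame_source: Callable[[], List[GestureEvent]]):
        self._source = frame_source
        self._gesture_enabled = False
        self._handlers: List[Callable[[GestureEvent], None]] = []

    def enable_gesture(self) -> None:
        # Mirrors the idea of switching on one usage mode (gesture tracking).
        self._gesture_enabled = True

    def on_gesture(self, handler: Callable[[GestureEvent], None]) -> None:
        self._handlers.append(handler)

    def poll(self) -> None:
        """Process one 'frame': dispatch recognized gestures to handlers."""
        if not self._gesture_enabled:
            return
        for event in self._source():
            if event.confidence >= 0.8:  # drop low-confidence detections
                for handler in self._handlers:
                    handler(event)


# Usage: a fake frame source stands in for the close-range depth camera.
def fake_camera() -> List[GestureEvent]:
    return [GestureEvent("thumb_up", 0.95), GestureEvent("swipe_left", 0.40)]


seen: List[str] = []
pipe = Pipeline(fake_camera)
pipe.enable_gesture()
pipe.on_gesture(lambda e: seen.append(e.label))
pipe.poll()
print(seen)  # → ['thumb_up']  (the low-confidence swipe is filtered out)
```

The event-callback structure is the relevant design point: an application registers handlers for the modes it enables and lets the pipeline own the camera loop, which is how most natural-UI SDKs keep gesture logic decoupled from frame capture.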