What this means is that you can write Kinect-using gadgets that work from any web page. At the moment the software is pre-alpha; it provides both a low-level interface to the Kinect and a high-level gesture recognition API.
The high-level API provides robust hand detection but needs work on more general gesture recognition. The API can recognize the following:
Presence of hand (registration)
Removal of hand (unregistration)
Large swipe up/down/left/right
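To give an idea of how a page might consume gestures like these, here is a minimal sketch of a gesture-to-action router. Note that the event names used ("handRegister", "handUnregister", "swipeLeft" and so on) are purely illustrative assumptions, not the plugin's actual API:

```javascript
// Hypothetical sketch - the gesture names are assumptions, not the real API.
// A tiny dispatcher that routes recognized gestures to page callbacks.
function createGestureRouter() {
  const handlers = {};
  return {
    // Register a callback for a named gesture.
    on(gesture, fn) { handlers[gesture] = fn; },
    // Dispatch an incoming gesture event; returns false if unhandled.
    dispatch(event) {
      const fn = handlers[event.type];
      if (fn) { fn(event); return true; }
      return false;
    }
  };
}

// Usage: wire high-level gestures to page actions.
const router = createGestureRouter();
const log = [];
router.on("handRegister", () => log.push("hand present"));
router.on("swipeLeft", () => log.push("previous page"));
router.dispatch({ type: "handRegister" });
router.dispatch({ type: "swipeLeft" });
```

In a real page the dispatch calls would come from whatever events the plugin delivers rather than being invoked by hand.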
If you want to see the sort of thing it might be used for, take a look at the video below:
In my opinion it looks good, but I foresee lots of arm ache and perhaps even a new ailment to displace carpal tunnel syndrome as the number one computer-use hazard.
The code is open source and you can get it from GitHub.