What this means is that you can write Kinect-using gadgets that work from any web page. At the moment the software is pre-alpha; it provides a low-level interface to the Kinect and a high-level gesture recognition API.
The high-level API provides robust hand detection, but more general gesture recognition still needs work. The API can recognize the following:
Presence of hand (registration)
Removal of hand (unregistration)
Large swipe up/down/left/right
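The article doesn't document the project's actual JavaScript interface, but an event-driven API along these lines is the typical shape for this kind of gesture library. Everything below — the `HandTracker` class and the `onHand`/`onSwipe` names — is an illustrative sketch, not the project's real API.

```typescript
// Hypothetical sketch of an event-driven hand-gesture API.
// All names here are illustrative assumptions, not the project's real interface.
type Swipe = "up" | "down" | "left" | "right";
type HandEvent = "register" | "unregister";

class HandTracker {
  private swipeHandlers: Array<(dir: Swipe) => void> = [];
  private handHandlers: Array<(ev: HandEvent) => void> = [];

  // Subscribe to hand registration/unregistration.
  onHand(handler: (ev: HandEvent) => void): void {
    this.handHandlers.push(handler);
  }
  // Subscribe to large swipe gestures.
  onSwipe(handler: (dir: Swipe) => void): void {
    this.swipeHandlers.push(handler);
  }
  // In a real library these would be driven by Kinect depth data;
  // they are exposed here so the sketch can be exercised directly.
  emitHand(ev: HandEvent): void {
    this.handHandlers.forEach((h) => h(ev));
  }
  emitSwipe(dir: Swipe): void {
    this.swipeHandlers.forEach((h) => h(dir));
  }
}

// Usage: log the events the API recognizes — registration, a swipe, unregistration.
const tracker = new HandTracker();
const log: string[] = [];
tracker.onHand((ev) => log.push(`hand ${ev}`));
tracker.onSwipe((dir) => log.push(`swipe ${dir}`));
tracker.emitHand("register");
tracker.emitSwipe("left");
tracker.emitHand("unregister");
```

A web-page gadget would wire the swipe handler to UI actions (page turns, menu navigation) instead of a log.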
If you want to see the sort of thing it might be used for, take a look at the video below:
In my opinion it looks good, but I foresee a lot of arm ache, and perhaps even a new ailment to displace carpal tunnel syndrome as the number one computer-use hazard.
The code is open source and you can get it from GitHub.