The Kinect is a great input device, but do you really need it? If you have a video projector and a camera then, with a little software, you can input 3D data in a much more flexible way.
The Kinect uses a structured light approach to depth measurement. It projects a carefully worked out pattern of infrared dots onto the scene and measures the displacement of the dots using a custom infrared camera. The advantage of this approach is that the depth measurement is built into the hardware. The disadvantage is that, because so much of it is in hardware, you can't modify how it works.
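The geometry behind structured light is essentially stereo triangulation: the projector and camera form a baseline, and the depth of each projected dot follows from its observed displacement. A minimal sketch of the idea, using a pinhole-camera model with made-up, roughly Kinect-like numbers (the function name and all values are illustrative assumptions, not the Kinect's actual parameters):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth in metres from a dot's observed displacement.

    disparity_px -- horizontal shift of the dot in pixels
    focal_px     -- camera focal length expressed in pixels
    baseline_m   -- projector-to-camera separation in metres
    """
    if disparity_px <= 0:
        raise ValueError("dot not matched or at infinity")
    # Standard triangulation for a rectified projector-camera pair:
    # depth is inversely proportional to disparity.
    return focal_px * baseline_m / disparity_px

# A dot shifted by 20 px, assuming a 580 px focal length and a
# 7.5 cm baseline, triangulates to a little over 2 m away.
z = depth_from_disparity(20.0, 580.0, 0.075)
```

Because depth varies inversely with disparity, accuracy falls off with distance, which is why the choice of pattern, baseline, and camera resolution matters so much in these systems.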
Now a research team from Japan's AIST, Kagoshima University, and Hiroshima City University has taken what you might call an "unbundled" approach to the measurement problem. They use a standard video projector to create a software-generated structured light grid, which is then read back using a standard video camera. The generated grid is more like a "texture" than the dot-like pattern used by the Kinect.
The advantage of the technique is that it can be tuned for higher speed and higher accuracy. The system demonstrated in the video claims 3 mm accuracy and, used with a high-speed camera, can record moving objects in full 3D.
With a small adjustment to the projection optics, the system can also be used in conjunction with other optical equipment such as a microscope, making it possible to capture 3D microscopic models in real time.
To see how it works in practice take a look at this video created by DigInfo TV:
Although the video shows only a single projector and camera, the team have plans to use multiple projectors and cameras to capture full 360 degree 3D models in a single operation.
So should you get a projector and a camera to replace that out-of-date Kinect?
Probably not at the moment. Off-the-shelf projectors work with visible light rather than infrared, and this is clearly a problem for domestic use. The system does, however, have big advantages in more specialized setups for motion capture or even 3D micrography. Its ability to capture fast-moving 3D objects means you could use it to study the deformation of bodies as they move.