Yesterday I finished building a frame to hold my laptop and Kinect sensor. The frame manages the cords and holds everything together so I can easily scan 3D environments.
The implications of this technology will be huge. What's interesting is that in only three months, much better software will be available from Google on a tablet.
It’s nice to adopt the technology while it is still in its infancy. Google will have it in smartphones by the end of next year, and soon it will be standard rather than thought of as new.
The implications this technology (giving smartphones eyes) will have on society will make it the biggest improvement to the smartphone since the original iPhone. There are thousands of use cases that will dramatically improve the quality of life for everyone.
It’s cool to look at this technology from a usability perspective. Currently, creating and navigating 3D spaces is a challenge for every adopter category beyond innovators. The key change on the software side is that it’s about to become easy to navigate and view 3D spaces.
I aim to be on the cutting edge of usability improvements in 3D space navigation in the coming months.
It’s also going to revolutionize game design. I’m making a game where you are trapped as me in a 3D-scanned version of my house and have to escape.