Yesterday at the E3 video game conference, Microsoft presented Project Natal – a prototype device for the Xbox 360 that combines a camera, depth sensor, and microphone, allowing users to interact directly (without any hardware controller) with specially designed video games.
All in all, a very neat vision of what COULD be. If and when this ever gets released, I have serious doubts that interacting via Natal will ever be as smooth as this promo indicates. That is, of course, kind of the point with these kinds of visioning pieces – to ignore the specifics and paint as exciting and optimal a picture of the near future as possible.
I always think about the Knowledge Navigator piece created by Hugh Dubberly while at Apple as the gold standard example of this.
Regardless of what actually gets built, it is clear that there are new, unique, and exciting interaction problems that will need to be solved:
- How do you indicate focus on a screen when there is more than one thing that can be interacted with?
- How do you denote whose actions should be recognized and acted upon?
- Will people get tired – how long can certain types of interactions be maintained?
- What is the setup and initiation process like for direct interaction with games?
And the list goes on and on. All of these can of course be solved, but the question is how to address them simply enough that the overhead doesn’t outweigh any “wow” factor gained when interacting with these specially designed games.
I suspect that for now, direct motion interaction will be contained to very specific moments (certain specially designed games and system functions) and that 95% of everything else will require the standard Xbox controllers. This means there will be lots of transitions from one method of interacting to another.
Some would argue that one of the main reasons Apple has been successful with the iPhone is because they didn’t compromise by allowing mixed input. In interaction design, it is almost always a compromise to have to consider variable input. Once you go down this road, you need to begin to address things that are not core to the users’ goals, like indicating which input method is acceptable when. It also requires users to learn two entirely different methods of interacting with one product.
All that aside, I am excited by what this may mean for the video game industry and human-computer interaction in general. AND it’s an excuse to bring home yet another game system – it’s all “research” (and tax deductible?) for us in the product development industry!