Augmented reality (AR) is a field of computer research which deals with the combination of real-world and computer-generated data (virtual reality), where computer graphics objects are blended into real footage in real time. The term is believed to have been coined in 1990 by Thomas Caudell, an employee of Boeing at the time. – via Wikipedia
While there are plenty of examples of augmented reality out there right now, most are either not geared toward the consumer market, exist purely for entertainment purposes, or both. GE’s Smart Grid advertising campaign is one example.
With the release of the iPhone 3GS, with its built-in compass and video camera, it looks like useful AR applications will finally be available en masse to end users.
Nearest Tube, developed by Acrossair, is an exciting example of such an application.
What’s interesting to me about this is that there is no input interface to speak of. The environment (as displayed through the filter of the video camera) and the user’s direction (as determined by the internal compass) are the only user inputs.
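The post doesn’t describe how Acrossair actually implements this, but the core idea behind a compass-driven overlay can be sketched simply: compute the bearing from the user’s position to a landmark, compare it to the compass heading, and map that angular offset onto the camera’s field of view. The function names, field-of-view value, and screen width below are illustrative assumptions, not Acrossair’s API.

```python
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing (degrees) from the user to a landmark."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def screen_x(heading, bearing, fov=60, width=320):
    """Horizontal pixel position of a landmark in the camera view,
    or None if it falls outside the (assumed) field of view."""
    # Signed angular offset between where the user is pointing
    # and where the landmark lies, normalized to -180..180 degrees.
    offset = (bearing - heading + 180) % 360 - 180
    if abs(offset) > fov / 2:
        return None
    return width / 2 + (offset / (fov / 2)) * (width / 2)
```

A landmark due east of the user (`bearing_to` returns 90) would sit at the center of the screen when the compass reads 90, drift toward the edge as the user turns away, and disappear once the offset exceeds half the field of view. The “interface” is just the user turning their body.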
Microsoft’s Project Natal is another example of how “interfaceless” interactions (enabled by sensors) have the potential to dramatically change not only how users interact with applications, but also our roles as designers of these interactions.
While the interaction patterns we create (and the deliverables we use to capture them) will likely need to evolve greatly, the underlying design principles and the problem-solving process we use to create experiences that meet user (and business) goals will not.