Apple has been leading the way for mobile devices since the first iPhone was announced. Accept it! However, this doesn’t mean that other competitors can’t be innovative in this area. Unfortunately, up to this point Google, Microsoft, Blackberry, etc. haven’t been thinking outside of the box. In essence they are just copying what is on iOS devices, and doing a poor job of it at that.
Over the past few years, and most recently with iOS 5, there has been a lot of discussion about cloud technologies and having seamless access to one’s content. In addition, there is always talk about form factor, physical hard drive capacity, how many megapixels this camera has versus that one, and the potential of voice interaction with these devices.
All of these features are great, but aside from maybe voice interaction, they are all nice-to-haves. They are a convenience. What is missing is the answer to the question: “Where is the next evolution in device interaction?” Even voice at this juncture is in its infancy. Siri is great and I use it all the time, but there are still too many dependencies that any company has to deal with: the processing has to be done in the cloud, and there are all the different languages, accents, dialects, connotations, and so on.
If I were Apple, Google, Microsoft, Blackberry, Nokia, etc., I would be focusing on multi-touch gestures. After just three short years, engineers have started to rest on their laurels instead of pushing this technology forward and exploring its possibilities. The first place I would start in my research is by becoming an expert in sign language. I would hire a team of the best historians and contemporary experts on the subject and have them work side-by-side with my UX and engineering teams.
When you look at sign language as a communication tool, it holds such beauty and elegance. It is the calligraphy of gesture communication. The potential here, as an example, is to develop a new gesture “language,” much like our spoken and computer languages, but with consistency. The power of this would be unlimited, and it would be easy to use and interpret. In current usage, a multi-touch gesture is always a singular expression, but in sign language a single gesture has the ability to express an emotion or an entire concept. It is multi-dimensional. Unlike the computational and processing dependencies listed above that come with voice interaction, all legacy and future multi-touch hardware has the ability to implement a new era of gestures.
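To make the idea concrete, here is a toy sketch of what a consistent gesture “vocabulary” might look like. This is purely illustrative and not any real platform API: each hypothetical gesture (shape, finger count, direction) maps to a semantic token, and a sequence of gestures composes into a command, the way signs compose into sentences.

```python
# A toy gesture "vocabulary": each (shape, fingers, direction) tuple is a
# hypothetical gesture, mapped to a semantic meaning. All names here are
# invented for illustration.
GESTURE_VOCABULARY = {
    ("swipe", 2, "left"): "back",
    ("swipe", 2, "right"): "forward",
    ("pinch", 2, "in"): "close",
    ("spread", 2, "out"): "open",
    ("hold", 3, "still"): "select-all",
}

def interpret(gestures):
    """Translate a sequence of gestures into semantic tokens,
    ignoring anything outside the shared vocabulary."""
    return [GESTURE_VOCABULARY[g] for g in gestures if g in GESTURE_VOCABULARY]

# A two-finger spread followed by a three-finger hold reads as a
# two-token "sentence": open, then select everything.
print(interpret([("spread", 2, "out"), ("hold", 3, "still")]))
```

The point of the sketch is the consistency: once the vocabulary is shared across devices, any multi-touch hardware can interpret the same gesture sentence, with no cloud round-trip required.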
Gestures. They are a sign of the times to come.