Sunday, May 27, 2012

I want the computer from Minority Report

“Remember, remember the Fifth of November?”


Oops. Wrong movie. I meant this one:

Fun movie. And a really cool computer UI:


So how close is this to reality? Maybe we’re almost there (but even if we don’t NEED the rave-style glow-finger gloves, we might still WANT them).

Nintendo started the popularization of motion control with the Wii-mote. Sony has jumped on the bandwagon as well with its karaoke-mic-styled PlayStation Move controller. And Microsoft of all people took the next step with its Kinect system of full-body optical recognition. For almost a year, this system has been available for Windows 7 computers (see http://bit.ly/tIg1U0 for the M$ vision for Kinect).

A couple of other recent developments raise some interesting possibilities. One company (http://on.mktw.net/Jh6918) claims to have developed a system that will track all 10 fingers with pinpoint accuracy. Soon, all the techniques the iPhone has taught us could be available in thin air.

An odder approach is to put a sensor on a person’s shoe so that hand gestures can be captured and used to control a smartphone in your pocket – what they call “eyes-free interaction.” (http://cnet.co/Je1JJs)

“How ‘bout the power to kill a yak, from 200 yards away … with mind bullets! That’s telekinesis, Kyle!”
Of course, all of this optical recognition technology is nice, but it will be obsolete once we can simply control our devices with our minds. Some, like Peter Bentley, worry that we will go beyond using mind control to help people with conditions such as quadriplegia and start putting it into daily use (http://huff.to/LjYww4). While he may be right about the current state of the technology (I don’t think I’d like a buggy computer chip surgically implanted in my brain), as things advance even further, popular demand may trump all of these concerns. And then we might truly see Homo cyborgis.

Monday, May 14, 2012

Read the code, find the way, do the work - Simple!

In the past few weeks, through our posts, we have introduced you to some ideas about how people may work with computers in the future. Today we discuss how these interface changes are helping humans at large and making their lives easier.

Have you ever seen a huge warehouse where workers go in and bring new orders out to the delivery dock? It takes time and effort for them to get to the required items and bring them out. An order may consist of more than one item, and many workers may be required to complete it. Alternatively, if only one person is working on your order, the time to complete it might be unacceptably long. As technology evolved, small pickup vans were used in warehouses to reduce the workload. Though this significantly reduced the physical effort required, time was still an issue for long and complex orders. Many stores strove to fix this delivery issue, but there was no good way to speed up order completion. Increasing the number of workers did not help, as it increased both costs and management complexity.
To fix this problem, the most practical solution has been to bring more machines into the process.
In doing this, the big questions are:
1. Do we have to direct the machine every time to get the required material?
2. Will the machine just help find the item while the worker still picks it up?
3. Will the machine be as big as a human?
The answer to all these questions is now a simple “No”.
We started this blog with “learn to talk to machines.” The next big step is making the machines talk to each other as well.
What languages do the machines speak?
One important method is called client-server communication. This is a kind of boss-subordinate relationship: the server computer is the boss, and the clients are the machines programmed to do what the boss says.
Another important aspect is the barcode. With this form of coded communication, the robots navigate their path to a destination and back.
Kiva robots work in a warehouse with a central computer that keeps track of all the robots, while the robots read barcodes on the floor to navigate. Now a worker can work on multiple orders at once. The robots have increased the accuracy of order delivery, reduced completion time, and made parallel order processing possible. Watch the robots in action!
Machines can work tirelessly; they don’t need lunch or coffee breaks, just enough time to recharge.
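To make the client-server idea concrete, here is a minimal sketch in Python. It is purely illustrative, not Kiva’s actual software: the robot names, the grid of floor barcodes, and the simple row-then-column path planner are all made up for the example. A central server (the “boss”) hands each order to an idle robot (a “client”), and the robot walks the barcode grid cell by cell to its destination.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class Robot:
    name: str
    pos: tuple  # current (row, col) barcode cell on the warehouse floor


class CentralServer:
    """The 'boss': tracks all robots and assigns each order to an idle one."""

    def __init__(self, robots):
        self.idle = deque(robots)

    def dispatch(self, order_cell):
        robot = self.idle.popleft()            # pick the next idle robot
        path = plan_path(robot.pos, order_cell)
        robot.pos = order_cell                 # robot drives the planned path
        self.idle.append(robot)                # robot reports back as idle
        return robot.name, path


def plan_path(start, goal):
    """Walk the barcode grid: move row-wise first, then column-wise,
    'reading' one floor barcode per cell visited."""
    path = [start]
    r, c = start
    while (r, c) != goal:
        if r != goal[0]:
            r += 1 if goal[0] > r else -1
        else:
            c += 1 if goal[1] > c else -1
        path.append((r, c))
    return path


server = CentralServer([Robot("R1", (0, 0)), Robot("R2", (0, 4))])
name, path = server.dispatch((2, 3))
print(name, path)  # R1 visits (0,0) → (1,0) → (2,0) → (2,1) → (2,2) → (2,3)
```

Because the server keeps a queue of idle robots, several orders can be dispatched back to back to different robots — which is exactly the parallel order processing described above.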

Thursday, May 3, 2012

Transforming through technology: innovation in medicine.


Many people may say that human contact is essential, especially when treating the sick. Would you be willing to be cured by a robot? Would you take a pill that can run a battery of complicated exams? In “Medicine’s future? There’s an app for that,” Daniel Kraft explains the future of medicine, claiming robots and other digital interfaces will soon replace doctors and traditional ways of healing.

The question is: as technology keeps developing, are we losing the essence of humanity by attenuating our contact with each other? The future of medical students includes studying anatomy and surgery with machines. Jack Choi, for example, just created a virtual dissection table. But what about the importance of preparing students with real organisms? To what extent is the body shown on the virtual dissection table similar enough to a real human body?

Wednesday, April 25, 2012

Controlling things via thought? Don't laugh, it's already here!

For many people, controlling things using only thought is associated more with fantasy than reality. However, improvements in consumer electronics, combined with brain research, are currently making the dream come true.

In recent years, some prototypes performed well but were a bit too intrusive, because they required surgery to implant a chip in the user's brain.
This video about the BrainGate system is a good illustration of this impressive but invasive technology in a medical context: http://www.youtube.com/watch?v=TJJPbpHoPWo

But big improvements have been made, and an article from The Economist shows how this is possible: basically, connecting a human brain to a computer via electrodes sitting on the scalp.
These outstanding improvements are not so geographically distant either, because one of the most promising recent prototypes comes from a team at the University of Zaragoza, which has announced commercialization in 2014!

Another constraint to assess in this domain is headset design, which was quite rudimentary in the first trials, with a tangle of inelegant wires.
This is changing too: Emotiv sells special headsets that enable users, to some extent, to interact with movies or games.
And it doesn't stop there: Neurosky proposes a wide range of potential applications, for instance trying to unlock the secrets of sports performance.
On the whole, the domains of application seem virtually limitless...

Wednesday, April 18, 2012

...communication evolution


…created for calls, then developed into a multi-connection platform with the most advanced computing abilities and application functionality: behold the SMARTPHONE.
Research published at ASYMCO.COM forecasts that, within five years, all phones will be smartphones. But how will smartphones and their usage evolve? The “human interface” is definitely playing a key role in defining how they will look…
Microsoft is currently working on a new interface for Windows 8 phones, based on gesture recognition and advanced motion control (VR-Zone.com).
Other companies (such as Hillcrest Labs) are moving in the same direction, creating the “next-generation user experience” through multi-platform devices for gesture recognition (infohq.com).
As reported in the article at ECNMAG.COM, this new smartphone generation will change our habits and allow us to complete a larger number of tasks.
But do you really believe that the final result of this evolutionary process will be a “totally humanized phone,” as proposed by Nokia’s HumanForm research concept, able not only to capture visible movements but also to understand and translate feelings through emotional interactions?

Recalculating Route: On the Road to New Ways of Communicating with Machines

So where did we start? Lost in the mists of time are the days of highly complex and non-intuitive interfaces such as punch cards.
By the earliest days of the personal computer in the 1970s, the basic input and output devices were a pair of already-familiar repurposed devices: the typewriter and the television. With the addition of either a mouse (invented in the 1960s by Douglas Engelbart at SRI and refined at Xerox PARC) or the touchpad, these are probably the same basic interfaces you are using now.

The scenic route (or were they wrong turns?).
 

A number of not-ready-for-primetime technologies have attempted to challenge the keyboard-monitor-mouse triumvirate over the years. Some versions of commercial handwriting recognition and voice recognition have been around for more than a decade. Both are coming closer and closer to becoming everyday interfaces (see Siri, the voice-recognition feature in the iPhone's iOS). Others have had only very niche applications, such as eye-tracking devices that allow quadriplegics to operate a computer.

Please enter your destination. Or: where are we going? So, what is the future of the human interface? The purpose of this blog is to discuss developments, trends, and potential new applications of human interface technology. The question isn't when I will get my jet pack and my flying car. It is when I will get to view the world like the Terminator.