Wednesday, 25 January 2012

Computer/Human Interface Systems.



“Breakthroughs in computer / human interface technologies will multiply the speed of
personal computing, eventually approaching the speed of thought.”

       There’s no mistaking the significant influence of personal computers and the Internet on our 
modern way of life. Many of us have so quickly adapted to regular use of search engines and web 
surfing that it’s difficult to imagine life without the Internet.

       The Internet allows us to research products and companies, share ideas with the public, research 
nutritional supplements, find articles on historical figures, and do a million other things that simply 
weren’t possible a mere two decades ago.


       And yet our interface with the Internet remains the lowly personal computer. With its clumsy 
interface devices (keyboard and mouse, primarily), the personal computer is a makeshift bridge 
between the ideas of human beings and the world of information found on the Internet. These 
interface devices simply cannot keep pace with the speed of thought of which the human brain 
is capable.
Consider this: a person with an idea who wishes to communicate that idea to others must translate 
that idea into words, then break those words into individual letters, then direct her fingers to punch 
physical buttons (the keyboard) corresponding to each of those letters, all in the correct sequence. 
Not surprisingly, typing speed becomes a major limiting factor here: most people can only type 
around sixty words per minute. Even a fast typist can barely achieve 120 words per minute. Yet 
the spoken word approaches 300 words per minute, and the speed of “thought” is obviously many 
times faster than that.
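To put those numbers in perspective, here is a rough back-of-the-envelope calculation (a sketch of my own, assuming roughly five characters per word and one byte per character, both no more than convenient approximations) of the effective data rate of each channel:

# Rough comparison of human "output bandwidth" for typing versus speech.
# Assumptions: ~5 characters per word, 1 byte per character, and the
# sustained rates quoted above.

CHARS_PER_WORD = 5
BYTES_PER_CHAR = 1

def bytes_per_second(words_per_minute):
    """Convert a words-per-minute rate into an approximate bytes-per-second rate."""
    chars_per_minute = words_per_minute * CHARS_PER_WORD
    return chars_per_minute * BYTES_PER_CHAR / 60.0

for label, wpm in [("average typist", 60), ("fast typist", 120), ("spoken word", 300)]:
    print(f"{label:14s}: {wpm:3d} wpm is roughly {bytes_per_second(wpm):.0f} bytes per second")

# average typist:  60 wpm is roughly 5 bytes per second
# fast typist   : 120 wpm is roughly 10 bytes per second
# spoken word   : 300 wpm is roughly 25 bytes per second

Even speech, in other words, moves only a couple of dozen bytes per second out of a human mind and into a machine.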
      Pushing thoughts through a computer keyboard is sort of like trying to put out a raging fire with a 
garden hose: there’s simply not enough bandwidth to move things through quickly enough. As a 
result, today’s computer / human interface devices are significant obstacles to breakthroughs in 
communicative efficiency.
      The computer mouse is also severely limited. I like to think of the mouse as a clumsy translator 
of intention: if you look at your computer screen, and you intend to open a folder, you have to 
move your hand from your keyboard to your mouse, slide the mouse to a new location on your 
desk, watch the mouse pointer move across the screen in an approximate mirror of the mouse 
movement on your desk, then click a button twice. That’s a far cry from the idea of simply looking
at the icon and intending it to open, which would of course be the desired level of computer / 
human interface as I’ll discuss below.
Today’s interface devices are little more than rudimentary translation tools that allow us to access 
the world of personal computers and the Internet in a clumsy, inefficient way. Still, the Internet is 
so valuable that even these clumsy devices grant us immeasurable benefits, but a new generation 
of computer/human interface devices would greatly multiply those benefits and open up a whole 
new world of possibilities for exploiting the power of information and knowledge for the benefit of 
humanity. Let’s take a closer look at those emerging technologies now.

Emerging Computer / Human Interface Technologies
        The idea of eliminating the gap between human thought and computer responsiveness is an 
obvious one, and a number of companies are working hard on promising technologies. One of the 
most obvious such technologies is voice recognition software that allows the computer to type as 
you speak, or allows users to control software applications by issuing voice commands.
The most advanced and accurate software in this category is Dragon Naturally Speaking, and 
I’ve spent a considerable number of hours with this software. Its accuracy is impressive, and the 
technology is far ahead of voice recognition technology from a mere decade ago, but it’s still not 
at the point where people can walk up to their computer and start issuing voice commands without 
a whole lot of setup, training, and fine tuning of microphones and sound levels. For many people, 
that’s just way too much configuration.
       This situation is no doubt recognized by the developers of Dragon Naturally Speaking. Nevertheless, 
widespread, intuitive use of voice recognition technology still appears to be years away.
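To make the general idea concrete, here is a minimal sketch of a voice-command loop. It uses the open-source Python SpeechRecognition package rather than Dragon's own programming interface (which I haven't examined), and the tiny command vocabulary is purely my own invention:

# Minimal voice-command loop (illustrative only). Assumes the open-source
# Python "SpeechRecognition" package and a working microphone; the command
# vocabulary is invented for this example and has nothing to do with Dragon.
import speech_recognition as sr

COMMANDS = {"open file": "OPEN", "save file": "SAVE", "close window": "CLOSE"}

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # the kind of tuning step described above
    print("Listening for a command...")
    audio = recognizer.listen(source)

try:
    phrase = recognizer.recognize_google(audio).lower()
    print("Heard:", phrase, "->", COMMANDS.get(phrase, "UNKNOWN"))
except sr.UnknownValueError:
    print("Could not understand the audio.")

Even this toy example hints at why setup matters so much: without the ambient-noise adjustment and a decent microphone, recognition accuracy falls apart quickly.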
Hand-controlled computers
      Another recent technology that represents a clever approach to computer / human interfaces is the 
iGesture Pad by a company called Fingerworks (http://www.FingerWorks.com). With the iGesture 
Pad, users place their hands on a touch sensitive pad (about the size of a mouse pad), then 
move their fingers in certain patterns (gestures) that are interpreted as application commands. 
For example, placing your fingers on the pad in a tight group, then rapidly opening and spreading 
your fingers is interpreted as an Open command.
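As a rough illustration of how such a gesture might be detected in software (this is my own simplified sketch, not FingerWorks' actual method), imagine comparing how far the fingertips sit from the centre of the hand in two successive samples from the pad:

# Simplified gesture-detection sketch (not FingerWorks' code): if the
# fingertips move apart quickly between two samples, treat it as "Open".
from math import hypot

def finger_spread(points):
    """Average distance of each fingertip from the centroid of the hand."""
    cx = sum(x for x, y in points) / len(points)
    cy = sum(y for x, y in points) / len(points)
    return sum(hypot(x - cx, y - cy) for x, y in points) / len(points)

def interpret(prev_points, curr_points, threshold=1.5):
    """Return 'OPEN' if the hand spread grows by more than `threshold` times."""
    before, after = finger_spread(prev_points), finger_spread(curr_points)
    return "OPEN" if after > before * threshold else None

# Fingertips start tightly grouped, then spread out rapidly:
tight  = [(10, 10), (11, 10), (10, 11), (11, 11), (10.5, 10.5)]
spread = [(5, 5), (16, 5), (5, 16), (16, 16), (10.5, 10.5)]
print(interpret(tight, spread))   # -> OPEN

A real system obviously has to distinguish dozens of gestures and reject accidental movements, but the principle is the same.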
      This technology represents a leap in intuitive interface devices, and it promises a whole new 
dimension of control versus the one-dimensional mouse click, but it’s still a somewhat clumsy 
translation of intention through physical limbs.
       For more intuitive control of software interfaces, what’s needed is a device that tracks eye 
movements and accurately translates them into mouse movements: so you could just look at 
an icon on the screen and the mouse would instantly move there. Interestingly, some of the 
best technology in this area comes from companies building systems for people with physical 
disabilities. For people who can’t move their limbs, computer control through alternate means is 
absolutely essential.

Head movement tracking technology
     One approach to this is tracking the movement of a person’s head and translating that into mouse 
movements. One device, the  HeadMouse (http://www.orin.com/access/hme/index.htm), does 
exactly that. You stick a reflective dot on your forehead, put the sensor on top of your monitor, 
then move your head to move your mouse. I haven’t tried the technology, so I can’t say how well it 
works, but the company (Origin Instruments) has a reputation for providing assistive technologies 
to physically disabled persons, and the HeadMouse is their latest technology.
Another company called Madentec (http://www.Madentec.com) offers a similar technology called 
Tracker One. Place a dot on your forehead, then you can control the mouse simply by moving 
your head.
      In terms of affordable head tracking products for widespread use, a company called NaturalPoint 
(http://www.NaturalPoint.com) seems to have the best head tracking technology at the present: 
a product called SmartNav, priced at a mere $199, allows for hands-free mouse control via head 
movement. Add a foot switch and you can click with your feet. I’ve used this product myself, and 
while it definitely presents a learning curve for new users, it works as promised.
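The underlying mapping is simple in principle. Here is a toy sketch of my own (not NaturalPoint's software) that turns frame-to-frame movement of the tracked dot into cursor movement, with a gain factor playing the role of mouse sensitivity:

# Toy head-tracking sketch (illustrative only): convert the change in the
# tracked dot's position between frames into a cursor movement, scaled by
# a sensitivity ("gain") factor, and keep the cursor inside the screen.

SCREEN_W, SCREEN_H = 1920, 1080
GAIN = 8.0  # how far the cursor moves per unit of head movement

def update_cursor(cursor, prev_dot, curr_dot):
    dx = (curr_dot[0] - prev_dot[0]) * GAIN
    dy = (curr_dot[1] - prev_dot[1]) * GAIN
    x = min(max(cursor[0] + dx, 0), SCREEN_W - 1)
    y = min(max(cursor[1] + dy, 0), SCREEN_H - 1)
    return (x, y)

cursor = (960, 540)                                   # start at the centre of the screen
cursor = update_cursor(cursor, (100, 80), (105, 78))  # a small head turn
print(cursor)                                         # -> (1000.0, 524.0)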
Tracking eye movements
While tracking head movement is in many ways better than tracking mouse movement, a more 
intuitive approach, it seems, would be to track actual eye movements. A company called LC 
Technologies, Inc. is doing precisely that with their EyeGaze systems (http://www.lctinc.com/PRODUCTS.htm). 
By mounting one or two cameras under your monitor and calibrating the 
software to your screen dimensions, you can control your mouse by simply looking at the desired 
position on the screen.
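The calibration step is the interesting part. Here is a deliberately simplified sketch of my own (not LC Technologies' method) that fits a linear mapping from raw gaze readings to screen pixels, using one known calibration point at each edge of the screen:

# Simplified gaze-calibration sketch (illustrative only): during calibration the
# user looks at known screen positions while the camera records raw gaze values;
# a linear fit per axis then converts later readings into screen coordinates.

def fit_axis(raw_a, raw_b, px_a, px_b):
    """Return (scale, offset) so that pixel = raw * scale + offset."""
    scale = (px_b - px_a) / (raw_b - raw_a)
    return scale, px_a - raw_a * scale

# Calibration: the user looked at the left/right and top/bottom edges of a 1920x1080 screen.
x_scale, x_off = fit_axis(raw_a=0.12, raw_b=0.88, px_a=0, px_b=1919)
y_scale, y_off = fit_axis(raw_a=0.20, raw_b=0.80, px_a=0, px_b=1079)

def gaze_to_pixel(raw_x, raw_y):
    return (raw_x * x_scale + x_off, raw_y * y_scale + y_off)

print(gaze_to_pixel(0.50, 0.50))   # roughly the centre of the screen

Real systems use many calibration points and far more sophisticated models, but the goal is the same: turn "where the eye is pointing" into "which pixel the user means."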
Once again, this technology was originally developed for people with physical disabilities, yet the 
potential application of it is far greater. In time, I believe that eye tracking systems will become the 
preferred method of cursor control for users of personal computers.
Eye tracking technology is quickly emerging as a technology with high potential for widespread 
adoption by the computing public. Companies such as Tobii Technology (http://www.tobii.se), 
Seeing Machines (http://www.SeeingMachines.com), SensoMotoric Instruments (http://www.smi.de), 
Arrington Research (http://www.ArringtonResearch.com), and EyeTech Digital Systems 
(http://www.eyetechds.com) all offer eye tracking technology with potential for computer / human 
interface applications. The two most promising technologies in this list, in terms of widespread 
consumer-level use, appear to be Tobii Technology and EyeTech Digital Systems.

Mind control for your PC
     Moving to the next level of computer / human interface technology, the ability to control your 
computer with your thoughts alone seems to be an obvious goal. The technology is called Brain 
Computer Interface technology, or BCI.
Although the idea of brain-controlled computers has been around for a while, it received a spike 
of popularity in 2003 with the announcement that nerve-sensing circuitry was implanted in a 
monkey’s brain, allowing it to control a robotic arm by merely thinking. This Washington Post
article gives a fascinating account of the breakthrough and training required by the monkey to 
learn how to use the brain implant:
http://www.washingtonpost.com/ac2/wp-dyn/A17434-2003Oct12?language=printer
The lead researchers in the monkey experiment are now involved in a commercial venture 
to develop the technology for use in humans. The company, Cyberkinetics Inc. 
(http://www.cyberkineticsinc.com), hopes to someday implant circuits in the brains of disabled humans, then 
allow those people to control robotic arms, wheelchairs, computers or other devices through 
nothing more than brain behavior.
      A key obstacle to widespread use is, of course, the requirement that circuitry be surgically implanted 
in the brain. If the technology can take a quantum leap and work its magic without needing the 
surgery -- by wearing a sensing helmet, for example -- it will suddenly be a lot more interesting to 
the population at large, and not just those with severe physical disabilities.
Imagine the limitless applications of direct brain control. People could easily manipulate cursors 
on the screen or control electromechanical devices. They could direct software applications, 
enter text on virtual keyboards, or even drive vehicles on public roads. Today, all these tasks are 
accomplished by our brains moving our limbs, but the limbs, technically speaking, don’t have to 
be part of the chain of command.
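It's worth being concrete about what "control through brain behavior" means in software terms. In the simplest research systems, a decoder turns a stream of neural or EEG measurements into a small set of commands. The sketch below is a deliberately crude illustration of that idea, a threshold decoder running on made-up numbers, and is not a description of Cyberkinetics' actual system:

# Crude brain-computer-interface decoding sketch (illustrative only, with
# made-up numbers): average a short window of signal samples from two
# channels and map the dominant channel to a cursor command.

def decode(window_left, window_right, threshold=0.5):
    """Return 'LEFT', 'RIGHT', or 'REST' from two channels of signal samples."""
    left = sum(window_left) / len(window_left)
    right = sum(window_right) / len(window_right)
    if left > threshold and left > right:
        return "LEFT"
    if right > threshold and right > left:
        return "RIGHT"
    return "REST"

print(decode([0.1, 0.2, 0.1], [0.8, 0.9, 0.7]))  # -> RIGHT
print(decode([0.1, 0.1, 0.2], [0.2, 0.1, 0.1]))  # -> REST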
Tactile feedback
      Another promising area of computer / human interface technology is being explored by companies 
like Immersion Corporation (http://www.Immersion.com), which offers tactile feedback hardware 
that allows users to “feel” their computer interfaces.
Slide on Immersion’s CyberGlove, and your computer can track and translate detailed hand and 
finger movements. Add their CyberTouch accessory, and tiny force feedback generators mounted 
on the glove deliver the sensation of touch or vibration to your fingers. With proper software 
translation, these technologies give users the ability to manipulate virtual objects using their 
hands. It’s an intuitive way to manipulate objects in virtual space, since nearly all humans have 
the natural ability to perform complex hand movements with practically no training whatsoever.
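As a rough sketch of the control flow involved (my own illustration, not Immersion's actual SDK), think of each frame as reading fingertip positions from the glove, testing them against a virtual object, and requesting a feedback pulse on any finger that touches it:

# Illustrative data-glove loop (not Immersion's SDK): read fingertip
# positions, test them against a virtual object, and request a vibration
# pulse on any finger that is touching it.
from math import dist  # Python 3.8+

def touching(fingertip, obj_centre, obj_radius):
    """True if a fingertip position lies on or inside a spherical object."""
    return dist(fingertip, obj_centre) <= obj_radius

def process_frame(fingertips, obj_centre, obj_radius):
    pulses = []
    for finger_id, tip in enumerate(fingertips):
        if touching(tip, obj_centre, obj_radius):
            pulses.append(finger_id)   # in a real system: send a vibration command
    return pulses

# One thumb tip inside a virtual ball, four fingers outside it:
fingertips = [(0.0, 0.0, 0.0), (0.3, 0.1, 0.0), (0.4, 0.1, 0.0),
              (0.5, 0.1, 0.0), (0.6, 0.1, 0.0)]
print(process_frame(fingertips, obj_centre=(0.0, 0.0, 0.05), obj_radius=0.1))  # -> [0]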
Another company exploring the world of tactile feedback technologies is SensAble Technologies 
(http://www.sensable.com). Their PHANTOM devices allow users to construct and “feel” 
three-dimensional objects in virtual space. Their consumer-level products include a utility for gamers 
that translates computer game events into tactile feedback (vibrations, hitting objects, gun recoil, 
etc.).
On a consumer level, Logitech makes a device called the IFeel Mouse that vibrates or thumps 
when your mouse cursor passes over certain on-screen features. Clickable icons, for example, 
feel like “bumps” as you mouse over them. The edges of windows can also deliver subtle feedback. 
The mouse sells for around $40, but it hasn’t seen much success in the marketplace. Reviews 
from users reveal that the vibrating mouse is considered more annoying than helpful, so don’t 
expect to see this technology taking over the world of computer mice.
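The logic behind such a device is simple enough to sketch. Something like the following decides when a pulse should fire as the cursor moves; the pulse itself is just a print statement here, since the real haptic call lives in Logitech's driver, which I'm not attempting to reproduce:

# Illustrative "tactile mouse" logic (the pulse is a stand-in, not Logitech's
# driver API): fire a short vibration when the cursor enters a clickable
# region, so icons feel like "bumps" as you mouse over them.

ICONS = {
    "Recycle Bin": (40, 40, 104, 104),     # (left, top, right, bottom) in pixels
    "My Documents": (40, 140, 104, 204),
}

_last_region = None

def region_under(x, y):
    """Return the name of the clickable icon under the cursor, if any."""
    for name, (left, top, right, bottom) in ICONS.items():
        if left <= x <= right and top <= y <= bottom:
            return name
    return None

def on_cursor_move(x, y):
    """Fire one pulse each time the cursor enters a clickable region."""
    global _last_region
    region = region_under(x, y)
    if region is not None and region != _last_region:
        print("pulse! entered", region)   # stand-in for a real haptic command
    _last_region = region

for position in [(10, 10), (50, 60), (60, 70), (50, 160)]:
    on_cursor_move(*position)
# pulse! entered Recycle Bin
# pulse! entered My Documents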
But tactile feedback has potential for making human / computer interfaces more intuitive and 
efficient, even if today’s tactile technologies are clunky first attempts. The more senses we can 
directly involve in our control of computers, the broader the bandwidth of information and intention 
between human beings and machines.
Three-dimensional displays
The long-promised 3D computer monitor finally seems to be close to reality. Manipulating complex 
windows, documents and virtual objects on a two-dimensional display -- as is standard today -- is 
rather limiting. With a 3D monitor, we could work in layers or position documents and objects in 
3D space rather than squeezing them down to a tiny toolbar at the bottom of one screen.
For human beings, 3D space is intuitive. We get it without training. That’s because we live in a world 
of 3D objects and space, and our perception is hard-wired to understand spatial relationships. 
That’s why gamers who play first-person shooters like Quake can mentally retrace their way 
through enormous maps (levels) in their heads, eyes closed, without even trying: the human brain 
was built to remember and navigate 3D space.
Recent breakthroughs in 3D displays promise to make computing more intuitive and powerful. 
Companies like LightSpace Technologies (http://www.lightspacetech.com) are already selling 
desktop 3D display monitors that display true 3D images without the need for special glasses.
The trouble is, Windows and Mac operating systems weren’t written with 3D displays in mind, so 
there’s no capability to stack windows or view the depth of objects. It’s a classic chicken-and-egg 
conundrum: who’s going to buy 3D displays if the software can’t support them, and why would 
software makers write 3D layering logic if nobody owns the displays?
In time, thanks to the “cool” factor of 3D displays, the technology will receive enough 
attention to warrant the necessary R&D investment by operating system developers like Microsoft 
and Apple. No doubt, future generations will conduct all their computing with the aid of 3D displays, 
and the very idea of 2D displays will seem as outdated as black & white movies do to us today.
Another new 3D display device is the Perspecta Spatial 3D globe, seen at:
http://www.actuality-systems.com/index.php/actuality. This device displays 3D objects or 
animations inside a globe. Users can walk around the globe and view the objects from any angle. 
It’s a rather expensive item, of course, so early applications for this product focus on medical and 
research tasks. In time, however, the technology will drop in price, bringing it within reach of more 
consumers.
       In the category of the more familiar, a German company called SeeReal Technologies 
(http://www.SeeReal.com) offers a 20” LCD 3D display that uses eye tracking combined with unique 
left/right display technology to create a true 3D image on a flat panel monitor without the need for 
special viewing glasses. These monitors are typically used in the CAD/CAM industry where the 
visualization of 3D objects is especially helpful. The lack of support for 3D space in the Windows 
operating system, however, makes these monitors useless for everyday users... at least for the 
moment.
What would 3D displays do for us?
So what should a flat panel 3D display actually do for a typical Windows or Mac user? At the most 
basic level, operating systems would need to support fundamental 3D features like:
• Layering of windows: Background windows would appear further away, while 
foreground windows appear closer (a rough sketch of this idea follows the list).
• Pop out elements: Certain elements of a document or page could appear to 
“pop out” of the screen a half inch or so. This might be used similarly to bolding
or italicizing.
• Floating cursors: the mouse cursor appears to float above the screen and then, 
when clicked, it actually buries itself in the button being clicked, then quickly 
returns to its hover status.
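To make the first item concrete, here is a small sketch of my own showing windows that carry a depth value a 3D-aware window manager could use; no current operating system actually exposes an interface like this:

# Illustrative window-layering sketch (not a real OS API): each window carries
# a depth value; focusing a window brings it to depth 0 (closest to the viewer)
# and pushes the others one layer back.
from dataclasses import dataclass

@dataclass
class Window:
    title: str
    depth: int   # 0 = closest to the viewer; larger numbers sit further back

windows = [Window("Browser", 0), Window("Editor", 1), Window("Mail", 2)]

def focus(title):
    """Bring the named window to the front and push the rest one layer back."""
    for w in windows:
        w.depth = 0 if w.title == title else w.depth + 1

focus("Mail")
for w in sorted(windows, key=lambda w: w.depth):
    print(w.title, "depth =", w.depth)
# Mail depth = 0
# Browser depth = 1
# Editor depth = 2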

Note, however, that a 3D flat panel monitor is not the same as a true 3D display system: you can’t 
walk to the side of the monitor and see the windows behind it. It’s still essentially a 2D system in 
that it can’t display true volumetric shapes and objects that are viewable from multiple angles.
Tabletop 3D displays
      For that, we’ll ultimately need a tabletop 3D display system that lies flat on your desk (like an LCD 
monitor lying down) and projects 3D images into the space above the panel. This would be a true 
volumetric 3D display system, and it’s here that the technology truly represents a breakthrough. 
Program application windows could literally be stacked from the rear to the front, and if you 
peeked around the side of the display, you could see a side view of all the windows at once.
With proper software control, objects or documents could be placed in true 3D space: desktop 
icons, for example, could be lined up along the very back row. Games could display true 3D 
scenes as if you’re actually in them, and CAD engineers would have the ability to observe their 
designs in true 3D space.
Better yet, if coupled with a motion tracking glove or similar technology, users could use their 
hands to grasp, move, resize or otherwise manipulate elements in 3D space. This, of course, 
opens up an unlimited universe of possibilities for computer / human interaction.
Closing the gap
This brief tour of computer / human interface technologies is really only a glimpse of what’s 
possible. It’s all about closing the gap between human intention and computing systems. Today, 
the gap is very large: a typical keyboard and mouse setup is essentially a two-channel interface 
system. But tomorrow, the gap could be very small: add a head tracking system, hand-sensing 
glove, foot pedal switches, voice recognition system, 3D display and a brainwave-sensing helmet, 
and you’ve created layers of multi-channel interface technologies that allow infinite expression.
In time, as this technology is developed and adopted by mainstream users, the gap will continue 
to shrink. This has enormous positive implications in the workplace, medicine, science, education, 
social interaction, entertainment and many other areas, which is why it earns such a lengthy 
discussion in this report. And it’s not technology that’s “way out there,” either: it’s technology that’s 
emerging now and will continue to be developed in the years ahead.
