The researchers say the technology demonstrates how robots could help people extend their range of abilities and perform more than one task at a time.
 At the college's Brain and Behavior Lab, engineers have taken a 
			robotic arm and devised a system for it to be used as an extension 
			of the human body. Instead of following a set of computer commands, 
			the robot arm is guided by a tracker that follows the direction of 
			the eyes, with an algorithm translating the path of the user's gaze 
			into commands that control the robotic arm.
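The article does not detail the team's algorithm, but the pipeline it describes (track the gaze, detect where the user is looking, send the arm there) can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in rather than the lab's actual software: the tracker and arm interfaces, the dwell threshold, and the flat-workspace mapping are all assumptions.

import math
import time

DWELL_SECONDS = 0.5     # how long the gaze must settle before the arm moves (assumed)
FIXATION_RADIUS = 20.0  # jitter tolerance between gaze samples, in pixels (assumed)

def is_fixating(p, q, radius=FIXATION_RADIUS):
    """True if two gaze samples are close enough to count as one fixation."""
    return math.dist(p, q) <= radius

def gaze_to_arm_target(gaze_xy, scale=0.001, surface_z=0.05):
    """Map a 2D gaze point (tracker pixels) onto a 3D arm target (metres),
    assuming a flat work surface at a fixed height."""
    x, y = gaze_xy
    return (x * scale, y * scale, surface_z)

def control_loop(tracker, arm):
    """Send the arm to wherever the user's gaze dwells long enough."""
    anchor, dwell_start = None, None
    while True:
        sample = tracker.read_gaze()  # hypothetical tracker API: returns (x, y)
        if anchor is not None and is_fixating(sample, anchor):
            if time.monotonic() - dwell_start >= DWELL_SECONDS:
                arm.move_to(gaze_to_arm_target(anchor))  # hypothetical arm API
                anchor, dwell_start = None, None         # wait for next fixation
        else:
            anchor, dwell_start = sample, time.monotonic()

The dwell-time check is one simple way to separate deliberate fixations from the constant small movements the eyes make; the team's actual intention-decoding algorithm is not described in the article.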
 
On Wednesday (October 14), in what the team says was quite possibly a world first, a researcher used the technology to paint a picture while simultaneously eating a croissant and drinking coffee. Postgraduate student Sabine Dziemian said the intuitive computer program meant that, even when her hands were otherwise occupied, she could still accurately control the robotic arm.
 
			
			 
			"In general it's very intuitive because I don't have to think about 
			commands or something like this. I simply think about where I want 
			to draw or which color I want to take. And by thinking, a person 
			usually looks at that color. So I also then look at that color and 
			the robot goes there because it detects my eye movements and where 
			I'm looking, and it has the co-ordinates exactly so it goes there 
			directly. So I don't have to think a lot about this when I'm 
			controlling it," said Dziemian.
The resulting painting is, admittedly, rudimentary. But the exercise demonstrates how the technology could be integrated into everyday life, effectively giving users an extra pair of hands.
 
 Led by Dr. Aldo Faisal from the Departments of Computing and 
			Bioengineering, the researchers developed sophisticated computer 
			software to decode the eye movements of the user into actions.
 
 "Six years ago we started to look at eye movements. It's a very 
			natural, intuitive means by which we can operate devices. And so 
			over the course of the years we developed systems that decode our 
			intention of action from our eye movements. So you can imagine, for 
			example, when you want to grab a cup; you will look at that cup 
			before you grab it. And you will look in a specific way so you can 
			judge where it is and how wide you have to shape your grip. And so 
			we're developing algorithms that decode this intention from eye 
			movement and we're then translating them into action," Faisal told 
			Reuters.
 
The technology could have a massive impact on the lives of people suffering from debilitating conditions like multiple sclerosis, amyotrophic lateral sclerosis, or Parkinson's disease. Faisal said the next step is to 'augment' the body so that everyone can multi-task with the aid of eye-controlled robotics.
 
			"Now we're not just talking about restorations of the body, but 
			really about augmentation of the body. So, we are developing 
			technology that is not only helpful in restoring the ability of 
			people to move, but really technology that can give even able-bodied 
			people an extra pair of hands; and extra pair of arms," he said 
			"Imagine, for example, that you can paint and eat and drink at the 
			same time, imagine holding a baby and preparing its food while you 
			do it all simultaneously. So there are whole new ways we can think 
			about interacting with the world." 
For Dziemian, who took part in much of the research, the software translated her eye movements into actions with very little effort, even while she was eating and drinking.
 "I think the level of concentration is not very high because it's 
			something very intuitive. I didn't need a lot of time to learn how 
			to use it. Actually, using it one time was enough to know how to 
			control it completely," she said.
 
Faisal said their system is non-invasive, in contrast to other areas of research focused on implanting technology directly into people's brains. He said the "very invasive, very expensive, (and) very risky operations that people have to undergo" would be unnecessary with their system.
 
 "We are following a non-invasive approach where you don't have to 
			put technology into the head, but you can just, you know, you can 
			just take it on and off like a pair of glasses. That's the level of 
			technology that we want to offer to people because we think it's 
			much more acceptable, it's lower risk and, if we can operate 
			technology with the same level of intuition, we think it will have a 
			better sense, better opportunities for success."
 
			
			 
			
 The researchers are now looking for partners to commercialize the 
			technology, while working on making the software even more intuitive 
			so that it becomes a seamless interface between man and machine.
 
[© 2015 Thomson Reuters. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.]