September 28, 2009

The Reality of Robot Surrogates


How far are we from sending robots into the world in our stead?

Imagine a world where you're stronger, younger, better looking, and don't age. Well, you do, but your robot surrogate—which you control with your mind from a recliner at home while it does your bidding in the world—doesn't.

It's a bit like The Matrix, but instead of a computer-generated avatar in a graphics-based illusion, in Surrogates—which opens Friday and stars Bruce Willis—you have a real titanium-and-fluid copy impersonating your flesh and blood and running around under your mental control. Other recent films have used similar concepts to ponder issues like outsourced virtual labor (Sleep Dealer) and incarceration (Gamer).

The real technology behind such fantastical fiction is grounded both in far-out research and practical robotics. So how far away is a world of mind-controlled personal automatons?

"We're getting there, but it will be quite a while before we have anything that looks like Bruce Willis," says Trevor Blackwell, the founder and CEO of Anybots, a robotics company in Mountain View, Calif., that builds "telepresence" robots controlled remotely like the ones in Surrogates.

Telepresence is action at a distance, or the projection of presence where you physically aren't. Technically, phoning in to your weekly staff meeting is a form of telepresence. So is joysticking a robot up to a suspected IED in Iraq so a soldier can investigate the scene while sitting in the (relative) safety of an armored vehicle.

Researchers are testing brain-machine interfaces on rats and monkeys that would let the animals directly control a robot, but so far the telepresence interfaces at work in the real world are physical. Through wireless Internet connections, video cameras, joysticks, and sometimes audio, humans move robots around at the office, in the operating room, underwater, on the battlefield, and on Mars.

A recent study by NextGen Research, a market research firm, projects that in the next five years, telepresence will become a significant feature of the US $1.16 billion personal robotics market, meaning robots for you or your home.

According to the study's project manager, Larry Fisher, telepresence "makes the most sense" for security and surveillance robots that would be used to check up on pets or family members from far away. Such robots could also allow health-care professionals to monitor elderly people taking medication at home to ensure the dosage and routine are correct.

Right now, most commercial teleoperated robots are just mobile webcams with speakers, according to NextGen. They can be programmed to roam a set path, or they can be controlled over the Internet by an operator. iRobot, the maker of the Roomba floor cleaner, canceled its telepresence robot, ConnectR, in January, choosing to wait until such a robot would be easier to use. But plenty of companies, such as Meccano/Erector and WowWee, are marketing personal telepresence bots.

Blackwell's Anybots, for example, has developed an office stand-in called QA. It's a Wi-Fi enabled, vaguely body-shaped wheeled robot with an ET-looking head that has cameras for eyes and a display in its chest that shows an image of the person it's standing in for. You can slap on virtual-reality goggles, sensor gloves, and a backpack of electronics to link to it over the Internet for an immersive telepresence experience. Or you can just connect to the robot through your laptop's browser.

For the rest of the article, go to IEEE Spectrum

Original article posted by Anne-Marie Corley // September 2009

September 26, 2009

Honda's U3-X Personal Mobility Device is the Segway of unicycles

Yeah, we've seen a self-balancing unicycle before, but the brand new U3-X from Honda takes it to another level. A creepy-sterile, awesomely futuristic Honda level, to be precise. What makes the U3-X particularly interesting is that it has the regular large wheel of a unicycle, but that wheel is actually made up of several small wheels in a series, which can rotate independently, meaning the device can go forward, backward, side-to-side and diagonally, all controlled with a simple lean. Honda credits its ASIMO research for this multi-directional capability, though we're not sure we see it -- ASIMO is a biped, after all -- but far be it from us to discredit an excuse to keep up the good work on the ASIMO front. Right now the "experimental model" of the U3-X gets a single hour of battery life and weighs under 22 pounds, with a seat and foot rests that fold into the device for extra portability. No word, of course, on when the thing might make it to market, but Honda plans to show it off next month at the Tokyo Motor Show. A devastatingly short video of the U3-X in action is after the break.
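A rough way to picture the drive system: the main wheel handles fore-aft motion, the ring of small wheels handles lateral motion, and combining the two gives diagonal travel. Here is a minimal sketch of that mixing in Python; the function name and gain are our own illustrative assumptions, not Honda's control scheme.

    # Toy model of the omnidirectional wheel: the main wheel drives
    # fore-aft motion, the ring of small wheels drives lateral motion.
    # Combining the two yields diagonal travel. All gains are invented.

    def wheel_commands(lean_x, lean_y, gain=1.0):
        """Map a rider's lean (unitless, -1..1) to wheel speeds.

        lean_y: forward/backward lean -> main wheel speed
        lean_x: sideways lean         -> small-wheel ring speed
        """
        main_wheel_speed = gain * lean_y
        ring_speed = gain * lean_x
        return main_wheel_speed, ring_speed

    # Leaning forward and to the right produces both components at
    # once, i.e. a diagonal: (0.5, 0.5).
    print(wheel_commands(lean_x=0.5, lean_y=0.5))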

September 24, 2009

Stimulating Sight: Retinal Implant Could Help Restore Useful Level Of Vision To Certain Groups Of Blind People


The retinal implant receives visual data from a camera mounted on a pair of glasses. The coil sends the images to a chip attached to the side of the eyeball, which processes the data and sends it to electrodes implanted below the retina. (Credit: Courtesy of Shawn Kelly)
Inspired by the success of cochlear implants that can restore hearing to some deaf people, researchers at MIT are working on a retinal implant that could one day help blind people regain a useful level of vision.

The eye implant is designed for people who have lost their vision from retinitis pigmentosa or age-related macular degeneration, two of the leading causes of blindness. The retinal prosthesis would take over the function of lost retinal cells by electrically stimulating the nerve cells that normally carry visual input from the retina to the brain.

Such a chip would not restore normal vision but it could help blind people more easily navigate a room or walk down a sidewalk.

"Anything that could help them see a little better and let them identify objects and move around a room would be an enormous help," says Shawn Kelly, a researcher in MIT's Research Laboratory for Electronics and member of the Boston Retinal Implant Project.

The research team, which includes scientists, engineers and ophthalmologists from Massachusetts Eye and Ear Infirmary, the Boston VA Medical Center and Cornell as well as MIT, has been working on the retinal implant for 20 years. The research is funded by the VA Center for Innovative Visual Rehabilitation, the National Institutes of Health, the National Science Foundation, the Catalyst Foundation and the MOSIS microchip fabrication service.

Led by John Wyatt, MIT professor of electrical engineering, the team recently reported a new prototype that they hope to start testing in blind patients within the next three years.

Electrical stimulation

Patients who received the implant would wear a pair of glasses with a camera that sends images to a microchip attached to the eyeball. The glasses also contain a coil that wirelessly transmits power to receiving coils surrounding the eyeball.

When the microchip receives visual information, it activates electrodes that stimulate nerve cells in the areas of the retina corresponding to the features of the visual scene. The electrodes directly activate the nerve cells that carry visual signals to the brain, bypassing the damaged layers of the retina.
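In software terms, the chip's task resembles downsampling each camera frame onto a coarse electrode grid and converting local brightness into stimulation levels. The sketch below illustrates that idea only; the grid size, current range and function names are illustrative assumptions, not the Boston group's design.

    # Toy sketch: map a camera frame onto a small electrode array.
    # Real implants involve far more signal processing; the grid size
    # and current range here are arbitrary illustrative choices.

    def frame_to_electrodes(frame, grid=(4, 4), max_current_uA=100):
        """Average image blocks down to one value per electrode and
        scale brightness (0-255) to a stimulation current."""
        rows = len(frame) // grid[0]
        cols = len(frame[0]) // grid[1]
        currents = []
        for gr in range(grid[0]):
            row = []
            for gc in range(grid[1]):
                block = [frame[r][c]
                         for r in range(gr * rows, (gr + 1) * rows)
                         for c in range(gc * cols, (gc + 1) * cols)]
                brightness = sum(block) / len(block)
                row.append(max_current_uA * brightness / 255)
            currents.append(row)
        return currents  # one stimulation level per electrode

    # 8x8 synthetic frame: bright on the left, dark on the right.
    frame = [[255] * 4 + [0] * 4 for _ in range(8)]
    for row in frame_to_electrodes(frame):
        print([round(x) for x in row])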

One question that remains is what kind of vision this direct electrical stimulation actually produces. About 10 years ago, the research team started to answer that by attaching electrodes to the retinas of six blind patients for several hours.

When the electrodes were activated, patients reported seeing a small number of "clouds" or "drops of blood" in their field of vision, and the number of clouds or blood drops they reported corresponded to the number of electrodes that were stimulated. When there was no stimulus, patients accurately reported seeing nothing. Those tests confirmed that retinal stimulation can produce some kind of organized vision in blind patients, though further testing is needed to determine how useful that vision can be.

After those initial tests, with grants from the Boston Veterans Administration Medical Center and the National Institutes of Health, the researchers started to build an implantable chip, which would allow them to do more long-term tests. Their goal is to produce a chip that can be implanted for at least 10 years.

One of the biggest challenges the researchers face is designing a surgical procedure and implant that won't damage the eye. In their initial prototypes, the electrodes were attached directly atop the retina from inside the eye, which carries more risk of damaging the delicate retina. In the latest version, described in the October issue of IEEE Transactions on Biomedical Engineering, the implant is attached to the outside of the eye, and the electrodes are implanted behind the retina.

That subretinal location, which reduces the risk of tearing the retina and requires a less invasive surgical procedure, is one of the key differences between the MIT implant and retinal prostheses being developed by other research groups.

Another feature of the new MIT prototype is that the chip is now contained in a hermetically sealed titanium case. Previous versions were encased in silicone, which would eventually allow water to seep in and damage the circuitry.

While they have not yet begun any long-term tests on humans, the researchers have tested the device in Yucatan miniature pigs, which have roughly the same size eyeballs as humans. Those tests are only meant to determine whether the implants remain functional and safe; they are not designed to observe whether the pigs respond to stimuli delivered to their optic nerves.

So far, the prototypes have been successfully implanted in pigs for up to 10 months, but further safety refinements need to be made before clinical trials in humans can begin.

Wyatt and Kelly say they hope that once human trials begin and blind patients can offer feedback on what they're seeing, they will learn much more about how to configure the algorithm implemented by the chip to produce useful vision.

Patients have told them that what they would like most is the ability to recognize faces. "If they can recognize faces of people in a room, that brings them into the social environment as opposed to sitting there waiting for someone to talk to them," says Kelly.


Journal reference:

  1. Shire, D. B.; Kelly, S. K.; Chen, J.; Doyle, P.; Gingerich, M. D.; Cogan, S. F.; Drohan, W. A.; Mendoza, O.; Theogarajan, L.; Wyatt, J. L.; Rizzo, J. F. Development and Implantation of a Minimally Invasive Wireless Subretinal Neurostimulator. IEEE Transactions on Biomedical Engineering, October 2009. DOI: 10.1109/TBME.2009.2021401
Adapted from materials provided by Massachusetts Institute of Technology. Original article written by Anne Trafton, MIT News Office.

September 23, 2009

Video surveillance system that reasons like a human brain

BRS Labs has announced a video-surveillance technology called Behavioral Analytics, which leverages cognitive reasoning to process visual data on a level similar to the human brain.

It is impossible for humans to monitor the tens of millions of cameras deployed throughout the world, a fact long recognized by the international security community. Security video is either used for forensic analysis after an incident has occurred, or it employs a limited-capability technology known as Video Analytics: video-motion and object-classification software that attempts to watch video streams and then sends an alarm on specific pre-programmed events. The problem is that this legacy approach generates so many false alarms that it is effectively useless in the real world.

BRS Labs has created a technology it calls Behavioral Analytics. It uses cognitive reasoning, much like the human brain, to process visual data and to identify criminal and terrorist activities. Built on a framework of cognitive learning engines and computer vision, AISight provides an automated and scalable surveillance solution that analyzes behavioral patterns, activities and scene content without the need for human training, setup, or programming.

The system learns autonomously, and builds cognitive “memories” while continuously monitoring a scene through the “eyes” of a CCTV security camera. It sees and then registers the context of what constitutes normal behavior, and the software distinguishes and alerts on abnormal behavior without requiring any special programming, definition of rules or virtual trip lines.
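The core idea, learning a statistical model of "normal" scene activity and alerting on low-probability observations, can be sketched in a few lines. This is a generic anomaly detector for illustration only, not BRS Labs' AISight; the pattern keys are invented.

    # Generic learn-normal-then-alert anomaly detection, not BRS Labs'
    # actual technology. Observations could be motion vectors, object
    # tracks, etc., discretized into pattern keys.
    from collections import Counter

    class BehaviorModel:
        def __init__(self, alert_threshold=0.01):
            self.counts = Counter()
            self.total = 0
            self.alert_threshold = alert_threshold

        def observe(self, pattern):
            """Learn continuously from the scene."""
            self.counts[pattern] += 1
            self.total += 1

        def is_abnormal(self, pattern):
            """Alert when a pattern has rarely or never been seen."""
            if self.total == 0:
                return False  # nothing learned yet
            prob = self.counts[pattern] / self.total
            return prob < self.alert_threshold

    model = BehaviorModel()
    for _ in range(1000):
        model.observe("car_moving_in_lane")       # normal traffic
    print(model.is_abnormal("person_on_tracks"))  # True: never seen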

AISight is currently fielded across a wide variety of global critical infrastructure assets, protecting major international hotels, banking institutions, seaports, nuclear facilities, airports and dense urban areas plagued by criminal activity.

Original article by Help Net Security

September 21, 2009

Robots get smarter by asking for help


To the right: a robot recharging itself

ASKING someone for help is second nature for humans, and now it could help robots overcome one of the thorniest problems in artificial intelligence.

That's the thinking behind a project at Willow Garage, a robotics company in Palo Alto, California. Researchers there are training a robot to ask humans to identify objects it doesn't recognise. If successful, it could be an important step in developing machines capable of operating with consistent autonomy.

Object recognition has long troubled AI researchers. While computers can be taught to recognise simple objects, such as pens or mugs, they often make mistakes when the lighting conditions or viewing angle change. This makes it difficult to create robots that can navigate safely around buildings and interact with objects, a problem Willow Garage encountered when building its Personal Robot 2 (PR2).

Where AI struggles, humans excel, finding this sort of recognition task almost effortless. So Alex Sorokin, a computer scientist at the University of Illinois at Urbana-Champaign, who collaborates with Willow Garage, decided to take advantage of this by building a system that allows PR2 to ask humans for help.

The system uses Amazon's Mechanical Turk, an online marketplace which pairs up workers with employers that have simple tasks they need completing. The robot takes a photo of the object it doesn't recognise and sends it to Mechanical Turk. Workers can then use Sorokin's software to draw an outline around an object in the image and attach a name to it, getting paid between 3 and 15 cents for each image they process.
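The workflow is simple to express in code. The sketch below shows the shape of the loop using FakeTurk, a hypothetical stand-in client; it is not Sorokin's actual software, and the reward value simply mirrors the 3-15 cent range quoted above.

    # Sketch of the ask-a-human labeling loop. FakeTurk is a
    # hypothetical stand-in for a Mechanical Turk client wrapper,
    # not a real API and not Sorokin's software.
    import time

    class FakeTurk:
        """Toy stand-in that 'labels' every image instantly."""
        def create_task(self, title, image, reward_cents):
            return "task-1"
        def get_result(self, task_id):
            return {"outline": [(10, 10), (50, 10), (50, 40)],
                    "label": "mug"}

    def label_unknown_object(turk, image, reward_cents=5):
        task_id = turk.create_task(
            title="Outline and name the object in this photo",
            image=image,
            reward_cents=reward_cents,
        )
        while True:  # labelled images came back within minutes in tests
            result = turk.get_result(task_id)
            if result is not None:
                return result["outline"], result["label"]
            time.sleep(10)

    outline, label = label_unknown_object(FakeTurk(), image=b"...jpeg...")
    print(label)  # the robot adds this label to its object model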

In initial tests, the robot moved through Willow Garage's offices, sending images to be processed every few seconds. Labelled images started coming back a few minutes later. The accuracy rate was only 80 per cent, but Sorokin says this can be improved by paying other workers to verify that the responses are valid.

Sorokin believes his system will help robots learn about new environments. A cleaning robot, for example, could spend its first week in a new building taking pictures and having people label them, helping it to build up a model of the space and the objects it contained. If it got stuck, it could always ask for help again.

"This is a fantastic idea," says John Leonard, a roboticist at the Massachusetts Institute of Technology. Potentially this could allow robots to operate for long periods without direct intervention from a human operator, he adds.

The next step for the programmers is to enable PR2 to make sense of the human responses and then act upon them, Sorokin says.

September 17, 2009

The Eyeborg Project (Eye Socket Camera)

(Not The Movie Eyeborgs)

Eyeborg Phase II from eyeborg on Vimeo.



The Eyeborg Project is Rob Spence's (a filmmaker) and Kosta Grammatis's (a former SpaceX avionics systems engineer) project to embed a video camera and transmitter in a prosthetic eye that will then record the world from a perspective never seen before. The only thing I'd be concerned about is it getting hacked into, since it has a wireless transmitter.

Check it out at eyeborgproject.com

Check out their blog -->here<--

If the video loads too slowly check it out at youtube -->here<--

September 16, 2009

Cyborg crickets could chirp at the smell of survivors


To the right: Could modified insects be joining rescue workers in the search for survivors in the future? (Image: KPA/Zuma/Rex Features)

IF you're trapped under rubble after an earthquake, wondering if you'll see daylight again, the last thing you need is an insect buzzing around your face. But that insect could save your life, if a scheme funded by the Pentagon comes off.

The project aims to co-opt the way some insects communicate to give early warning of chemical attacks on the battlefield - the equivalent of the "canary in a coal mine". The researchers behind it say the technology could be put to good use in civilian life, from locating disaster victims to monitoring for pollution and gas leaks, or acting as smoke detectors.

Pentagon-backed researchers have already created insect cyborgs by implanting them with electrodes to control their wing muscles. The latest plan is to create living communication networks by implanting a package of electronics in crickets, cicadas or katydids - all of which communicate via wing-beats. The implants will cause the insects in these OrthopterNets to modulate their calls in the presence of certain chemicals.

"We could do this by adjusting the muscle tension or some other parameter that affects the sound-producing movements. The insect itself might not even notice the modulation," says Ben Epstein of OpCoast, who came up with the idea during a visit to China, where he heard cicadas changing calls in response to each other. The firm, which is based in Point Pleasant Beach, New Jersey, has been awarded a six-month contract to develop a mobile communications network for insects.

As well as a biochemical sensor and a device for modulating the wing muscles, the electronics package would contain an acoustic sensor designed to respond to the altered calls of other insects. This should ensure the "alarm" signal is passed quickly across the network and is ultimately picked up by ground-based transceivers.
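The relay behaviour amounts to an alarm flooding through an acoustic proximity graph: any insect that hears an altered call repeats it, until a ground transceiver picks it up. The toy simulation below illustrates the propagation only; the positions are invented, and the 1 km hearing range is borrowed from the katydid figure mentioned later in the article.

    # Toy flood simulation of an alarm spreading through an insect
    # network. Positions are invented; the 1 km hearing range echoes
    # the note that some katydids can be heard a kilometre away.
    import math
    from collections import deque

    def propagate_alarm(positions, first_detector, hearing_range_m=1000):
        """Return the set of insects that end up relaying the alarm."""
        alarmed = {first_detector}
        queue = deque([first_detector])
        while queue:
            i = queue.popleft()
            for j, pos in enumerate(positions):
                if j not in alarmed:
                    if math.dist(positions[i], pos) <= hearing_range_m:
                        alarmed.add(j)   # j hears i's altered call
                        queue.append(j)  # ...and relays it in turn
        return alarmed

    # Five crickets spaced 800 m apart in a line: the alarm hops from
    # one end to the other even though the ends are 3.2 km apart.
    line = [(i * 800, 0) for i in range(5)]
    print(propagate_alarm(line, first_detector=0))  # {0, 1, 2, 3, 4}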

The Pentagon's priority is for the insects to detect chemical and biological agents on the battlefield, but Epstein says they could be modified to respond to the scent of humans and thus be used to find survivors of earthquakes and other disasters.

The real challenge will be to miniaturise the electronics. "Given a big enough insect it wouldn't be a problem," says Epstein. But the company is looking at ubiquitous species such as crickets, which tend to be smaller. Each network is likely to use hundreds or thousands of insects, though they could be spread far apart: some katydids can be heard a kilometre away.

Are OrthopterNets feasible? "I don't see why not," says Peter Barnard, director of science at the Royal Entomological Society in London. "Although insects might appear to be limited by the anatomy of their sound-producing organs, we know that they can produce different signals for different purposes." Since there is already evidence of modulation within quite broad bandwidths of frequencies for communication, it might be possible to modify and exploit these abilities, he says.

Originally posted in New Scientist

September 14, 2009

Smart implants may alleviate neurological conditions


Brain implants have been tried as a treatment for epilepsy, but they could tackle a range of other conditions (Image: Medtronic)

SMART implants in the brains of people with neurological disorders could eventually help develop treatments for Parkinson's disease, depression and obsessive compulsive disorder.

Last week, a team from Medtronic of Minneapolis, Minnesota, reported on their design for a neurostimulator at the Engineering in Medicine and Biology Society meeting in Minneapolis. The devices use electrodes to deliver deep stimulation to specific parts of the brain.
Neurostimulators are already approved to treat conditions such as Parkinson's disease, essential tremor, and dystonia, as well as obsessive compulsive disorder. But existing devices deliver stimulation on a set schedule, not in response to abnormal brain activity. The Medtronic researchers think a device that reacts to brain signals could be more effective, plus the battery would last longer, an important consideration for implantable devices.

Tim Denison, a Medtronic engineer working on the device, says that the neurostimulator will initially be useful for studying brain signals as patients go about their day. Eventually, the data collected will show whether the sensors would be useful for detecting and preventing attacks.

Human trials are years away, but elsewhere, NeuroPace, a start-up firm in Mountain View, California, is finishing clinical trials of its RNS smart implant device in 240 people with epilepsy, the results of which will be available in December, says Martha Morrell, chief medical officer at NeuroPace. An earlier feasibility study in 65 patients provided preliminary evidence that the devices did reduce seizures.

The NeuroPace device is implanted within the skull where it monitors electrical activity via electrodes implanted deep in the brain. If it spots the "signature" of a seizure, it will deliver brief and mild electrical stimulation to suppress it. Mark George, a neurologist at the Medical University of South Carolina in Charleston, says heart pacemakers developed in a similar way, as researchers learned to make them detect and react to signals from the heart. "I think it's absolutely inevitable that we'll develop a smarter, more intelligent way to figure out how and when to stimulate," George says.
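The "smart" part is a closed loop: monitor the signal, detect a signature, and stimulate only then. The sketch below shows one simple version, threshold detection on windowed signal power; it is a generic illustration, not Medtronic's or NeuroPace's algorithm.

    # Schematic closed-loop stimulation: deliver a pulse only when the
    # monitored signal crosses a threshold. A generic illustration,
    # not Medtronic's or NeuroPace's detection algorithm.
    import random

    def detect_signature(window, threshold):
        """Crude detector: mean squared amplitude over a window."""
        power = sum(x * x for x in window) / len(window)
        return power > threshold

    def closed_loop(samples, window_size=32, threshold=2.0):
        stimulation_times = []
        for start in range(0, len(samples) - window_size + 1, window_size):
            window = samples[start:start + window_size]
            if detect_signature(window, threshold):
                stimulation_times.append(start)  # brief pulse here
        return stimulation_times

    quiet = [random.gauss(0, 1) for _ in range(256)]        # normal activity
    seizure_like = [random.gauss(0, 4) for _ in range(64)]  # high amplitude
    print(closed_loop(quiet + seizure_like))  # pulses only after sample 256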

Posted in New Scientist, 12 September 2009, by Kurt Kleiner

September 13, 2009

Japanese scientists aim to create robot-insects


A live male silkmoth is used in an experiment to create insect-machine hybrids in Tokyo. Researchers from Tokyo University's Research Centre for Advanced Science and Technology motivate the insect to steer the vehicle left or right by using female odour.

Police release a swarm of robot-moths to sniff out a distant drug stash. Rescue robot-bees dodge through earthquake rubble to find survivors.
These may sound like science-fiction scenarios, but they are the visions of Japanese scientists who hope to understand and then rebuild the brains of insects and programme them for specific tasks.

Ryohei Kanzaki, a professor at Tokyo University's Research Centre for Advanced Science and Technology, has studied insect brains for three decades and become a pioneer in the field of insect-machine hybrids.
His original and ultimate goal is to understand human brains and restore connections damaged by diseases and accidents -- but to get there he has taken a very close look at insects' "micro-brains".

The human brain has about 100 billion neurons, or nerve cells, that transmit signals and prompt the body to react to stimuli. Insects have far fewer, about 100,000 inside the two-millimetre-wide (0.08 inch) brain of a silkmoth.

But size isn't everything, as Kanzaki points out.

Insects' tiny brains can control complex aerobatics such as catching another bug while flying, proof that they are "an excellent bundle of software" finely honed by hundreds of millions of years of evolution, he said.
For example, male silkmoths can track down females from more than a kilometre (half a mile) away by sensing their odour, or pheromone.
Kanzaki hopes to artificially recreate insect brains.
"Supposing a brain is a jigsaw-puzzle picture, we would be able to reproduce the whole picture if we knew how each piece is shaped and where it should go," he told AFP.
"It will be possible to recreate an insect brain with electronic circuits in the future. This would lead to controlling a real brain by modifying its circuits," he said.
Kanzaki's team has already made some progress on this front.
In an example of 'rewriting' insect brain circuits, Kanzaki's team has succeeded in genetically modifying a male silkmoth so that it reacts to light instead of odour, or to the odour of a different kind of moth.
Such modifications could pave the way to creating a robo-bug which could in future sense illegal drugs several kilometres away, as well as landmines, people buried under rubble, or toxic gas, the professor said.

All this may appear very futuristic -- but then so do the insect-robot hybrid machines the team has been working on since the 1990s.
In one experiment, a live male moth is strapped onto what looks like a battery-driven toy car, its back glued securely to the frame while its legs move across a free-spinning ball.
Researchers motivate the insect to turn left or right by using female odour.
The team found that the moth can steer the car and quickly adapt to changes in the way the vehicle operates -- for example by introducing a steering bias to the left or right similar to the effect of a flat tyre.
In another, more advanced, test, the team severed a moth's head and mounted it onto the front of a similar vehicle.
They then directed similar odour stimuli to the contraption which the insect's still-functioning antennae and brain picked up.
Researchers recorded the motor commands issued by nerve cells in the brain, which were transmitted to steer the vehicle in real time.
The researchers also observed which neuron responds to which stimulus, making them visible using fluorescent markers and 3-D imaging.
The team has so far obtained data on 1,200 neurons, one of the world's best collections on a single species.
Kanzaki said that animals, like humans, are proving to be highly adaptable to changing conditions and environments.
"Humans walk only at some five kilometres per hour but can drive a car that travels at 100 kilometres per hour. It's amazing that we can accelerate, brake and avoid obstacles in what originally seem like impossible conditions," he said.
"Our brain turns the car into an extension of our body," he said, adding that "an insect brain may be able to drive a car like we can. I think they have the potential.
"It isn't interesting to make a robo-worm that crawls as slowly as the real one. We want to design a machine which is far more powerful than the living body."
(c) 2009 AFP

September 11, 2009

The Next Hacking Frontier: Your Brain?

Hackers who commandeer your computer are bad enough. Now scientists worry that someday, they’ll try to take over your brain.

In the past year, researchers have developed technology that makes it possible to use thoughts to operate a computer, maneuver a wheelchair or even use Twitter — all without lifting a finger. But as neural devices become more complicated — and go wireless — some scientists say the risks of “brain hacking” should be taken seriously.
“Neural devices are innovating at an extremely rapid rate and hold tremendous promise for the future,” said computer security expert Tadayoshi Kohno of the University of Washington. “But if we don’t start paying attention to security, we’re worried that we might find ourselves in five or 10 years saying we’ve made a big mistake.”
Hackers tap into personal computers all the time — but what would happen if they focused their nefarious energy on neural devices, such as the deep-brain stimulators currently used to treat Parkinson’s and depression, or electrode systems for controlling prosthetic limbs? According to Kohno and his colleagues, who published their concerns July 1 in Neurosurgical Focus, most current devices carry few security risks. But as neural engineering becomes more complex and more widespread, the potential for security breaches will mushroom.
For example, the next generation of implantable devices to control prosthetic limbs will likely include wireless controls that allow physicians to remotely adjust settings on the machine. If neural engineers don’t build in security features such as encryption and access control, an attacker could hijack the device and take over the robotic limb.
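One of the missing safeguards named here, access control on a wireless command channel, can be as simple as authenticating every command with a shared-secret MAC. Below is a minimal sketch using Python's standard library; the command format and key-provisioning story are our own assumptions, and a real medical device would also need replay protection and key management.

    # Minimal sketch of authenticating wireless device commands with a
    # shared-secret HMAC. The command format is invented; real
    # medical-device security would also need replay protection and
    # key management.
    import hashlib
    import hmac

    SECRET_KEY = b"device-specific secret provisioned at implant time"

    def sign_command(command: bytes) -> bytes:
        tag = hmac.new(SECRET_KEY, command, hashlib.sha256).hexdigest()
        return command + b"|" + tag.encode()

    def verify_command(packet: bytes):
        """Return the command if its tag checks out, else None."""
        command, _, tag = packet.rpartition(b"|")
        expected = hmac.new(SECRET_KEY, command,
                            hashlib.sha256).hexdigest().encode()
        # compare_digest avoids timing side channels
        return command if hmac.compare_digest(tag, expected) else None

    packet = sign_command(b"set_stimulation_level:3")
    print(verify_command(packet))                             # accepted
    print(verify_command(b"set_stimulation_level:9|forged"))  # None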
“It’s very hard to design complex systems that don’t have bugs,” Kohno said. “As these medical devices start to become more and more complicated, it gets easier and easier for people to overlook a bug that could become a very serious risk. It might border on science fiction today, but so did going to the moon 50 years ago.”

Some might question why anyone would want to hack into someone else’s brain, but the researchers say there’s a precedent for using computers to cause neurological harm. In November 2007 and March 2008, malicious programmers vandalized epilepsy support websites by putting up flashing animations, which caused seizures in some photo-sensitive patients.
“It happened on two separate occasions,” said computer science graduate student Tamara Denning, a co-author on the paper. “It’s evidence that people will be malicious and try to compromise peoples’ health using computers, especially if neural devices become more widespread.”
In some cases, patients might even want to hack into their own neural device. Unlike devices to control prosthetic limbs, which still use wires, many deep brain stimulators already rely on wireless signals. Hacking into these devices could enable patients to “self-prescribe” elevated moods or pain relief by increasing the activity of the brain’s reward centers.
Despite the risks, Kohno said, most new devices aren’t created with security in mind. Neural engineers carefully consider the safety and reliability of new equipment, and neuroethicists focus on whether a new device fits ethical guidelines. But until now, few groups have considered how neural devices might be hijacked to perform unintended actions. This is the first time an academic paper has addressed the topic of “neurosecurity,” a term the group coined to describe their field.

“The security and privacy issues somehow seem to slip by,” Kohno said. “I would not be surprised if most people working in this space have never thought about security.”
Kevin Otto, a bioengineer who studies brain-machine interfaces at Purdue University, said he was initially skeptical of the research. “When I first picked up the paper, I don’t know if I agreed that it was an issue. But the paper gives a very compelling argument that this is important, and that this is the time to have neural engineers collaborate with security developers.”
It’s never too early to start thinking about security issues, said neural engineer Justin Williams of the University of Wisconsin, who was not involved in the research. But he stressed that the kinds of devices available today are not susceptible to attack, and that fear of future risks shouldn’t impede progress in the field. “These kinds of security issues have to proceed in lockstep with the technology,” Williams said.
History provides plenty of examples of why it’s important to think about security before it becomes a problem, Kohno said. Perhaps the best example is the internet, which was originally conceived as a research project and didn’t take security into account.
“Because the internet was not originally designed with security in mind,” the researchers wrote, “it is incredibly challenging — if not impossible — to retrofit the existing internet infrastructure to meet all of today’s security goals.” Kohno and his colleagues hope to avoid such problems in the neural device world, by getting the community to discuss potential security problems before they become a reality.
“The first thing is to ask ourselves is, ‘Could there be a security and privacy problem?’” Kohno said. “Asking ‘Is there a problem?’ gets you 90 percent there, and that’s the most important thing.”

Originally posted in Wired magazine

September 10, 2009

iCub, the Toddler Robot


(PhysOrg.com) -- A little humanoid robot called iCub is learning how to think for itself, bringing the world of science fiction to reality. The major goal of the "RobotCub" project is to study how humans learn and think, using a robot with the size and brain of a toddler, but the study is also expected to have practical applications in the near future.

The robot, with its cute white face and big eyes, is designed to learn from experience and adapt to changes in its environment, just like a human child. As iCub learns, the scientists behind it hope to learn about the development of cognition in humans. According to research director Peter Ford Dominey, the goal is to understand more about the ability of humans to cooperate, work together, and understand what others want us to do.

Human intelligence develops through interaction with the environment and other human beings, and mental processes are strongly connected to the physical body and its actions. The central hypothesis of the project is therefore that the best way to model the human mind is to create a humanoid that is controlled by realistic algorithms and allowed to explore the world like a real child.
Scientists are working on several versions of iCub in laboratories throughout Europe, attempting to perfect the robot's "brain", but the birthplace of iCub is the Italian Institute of Technology (IIT) in Genoa, Italy, where the RobotCub project began in 2004 under the leadership of Giulio Sandini.

The iCub robot stands just over three feet tall, about the size of a three-year-old child. Its face has just a hint of a nose and mouth, and its big eyes allow it to see and track objects in its environment. Its body consists of many electronic circuits built into an articulated trunk and limbs that give it a wide range of movements. Sensors allow the robot to feel, and some iCubs can speak. In a recent experiment in Lyon, France, iCub demonstrated that it could change roles in a game. iCub watched two humans play the "game", in which one lifted up a box to reveal a toy, and the second lifted up the toy and put it down again. The first person then replaced the box over the toy. Having watched the game, iCub could take the part of either "player".

This game may sound simple enough, but such learning capabilities put iCub at the forefront of robotics research. It also raises the question of what consciousness is. If iCub understands that someone has a goal, is that consciousness? asks Dominey.

As well as the scientific advancements expected from iCub studies, the robots may well have practical uses in the future. Suggestions include playing games with hospital physiotherapy patients to help in their recovery, and in the longer term, perhaps even in the next decade, iCub could become a helper in the home, making its own decisions on what needs to be done.
The five year project is supported by the European Commission. The software is open-source and the developers are open to forming further collaborations with laboratories around the world.

September 9th, 2009 by Lin Edwards for Physorg.com

September 8, 2009

Better Vision, With a Telescope Inside the Eye


A tiny telescope, already approved for use in Europe, can be implanted in one eye to help people with an advanced form of macular degeneration. The device takes the place of the natural lens.
A TINY glass telescope, the size of a pea, has been successfully implanted in the eyes of people with severely damaged retinas, helping them to read, watch television and better see familiar faces.
The new device is for people with an irreversible, advanced form of macular degeneration in which a blind spot develops in the central vision of both eyes.
In a brief, outpatient procedure, a corneal specialist implants the mini-telescope in one eye in place of its natural lens. The telescope magnifies images on the retina, extending them so they fall on healthy cells outside the damaged macula, said Allen W. Hill, chief executive of VisionCare Ophthalmic Technologies in Saratoga, Calif., the implant’s maker.
In March, an advisory panel to the Food and Drug Administration unanimously recommended approval of the device. VisionCare says it expects the F.D.A. to give its O.K. later this year. The device has already been approved for use in Europe.
The implanted telescope holds much promise for patients, typically elderly, who suffer from end-stage, age-related macular degeneration, or A.M.D., said Janet P. Szlyk, a member of the advisory panel. Dr. Szlyk is executive director of the Chicago Lighthouse for People Who Are Blind or Visually Impaired, a social services agency.
The device does not cure the disease, but it does improve visual acuity, she said. For example, a person who might usually see a blur when looking at a friend’s face might, with the help of the magnified image, see a blur only in the area of the person’s nose or mouth.
“People can use it to recognize faces in a social setting,” she said. “That’s a huge advance.”
The telescope is implanted in one eye for jobs like reading and facial recognition. The other eye, unaltered, is used for peripheral vision during other activities like walking. After implantation, extensive therapy is crucial, she said, to learn to deal with the different abilities of the eyes.
Ruth A. Boocks, 86, of Alpharetta, Ga., who received an implant of the device in March 2003 during clinical trials, said her brain learned to adapt quickly. Mrs. Boocks uses her new visual abilities in various ways — for instance, to read e-mail and the messages that scroll across the bottom of the screen when she’s watching television. “My goal was to read to the bottom of the eye charts,” she said. “But I didn’t quite make it.” (She has gotten to the third line from the bottom.)
“I feel like a young woman,” she added. “It’s opened a lot of opportunities for me.”
Henry L. Hudson, a retina specialist in Tucson, Ariz., and lead author of two papers on the telescope published in peer-reviewed journals, said the device was not for everyone with A.M.D. “Maybe only 20 out of every 100 candidates will get the telescope,” he said. “They may not be eligible because of the shape of their eyes,” or they may have another problem, like maintaining balance, that precludes their selection, he added.
After F.D.A. approval, VisionCare will apply to Medicare to cover the device, Mr. Hill said. “We anticipate that it will be seen as a covered benefit for the improvement of visual acuity,” he said.
The price of the device has not been set. Current tools for ameliorating low-vision problems, like glasses fitted with telescopes or reading machines, are typically not covered by insurance.
Dr. Bruce P. Rosenthal is chief of low-vision programs at Lighthouse International in New York City, where telescopes mounted on eyeglass frames, for instance, might be prescribed for people with A.M.D. to help them watch a sports event. He said that patients might be as well served by these glasses as by the new implants, and that he hoped long-term studies would compare the benefits of the two approaches.
“Even though studies on the implants have reported minimal complications, there can be complications when you are inserting anything in the eye,” he said. “Even routine cataract surgery can lead to loss of vision.”
Dr. Rosenthal said the implanted telescope might be beneficial for some patients, “especially if they don’t want other people to know they are visually impaired.” Telescopes mounted on eyeglasses bulge outward, often extending an inch or so beyond the frames.
But he is concerned that people using implants might have trouble with balance. “There is a potential for falling when a person has a big image from one eye and a normal-sized image from the other,” he said.
DURING trials of the device, there was no increase in the incidence of falls among participants, Dr. Hudson said. More than 200 patients received implants in the study, and the effects have been tracked in the group for the past five years.
“The vast majority of the patients have been able to adapt to the new state,” using one eye for ambulating and the other for reading, facial recognition and similar chores, he said. “The average patient goes from legally blind to being able to read large-print books.”


Published July 18, 2009, in The New York Times

September 7, 2009

Implantable Device Offers Continuous Cancer Monitoring

Surgical removal of a tissue sample is now the standard for diagnosing cancer. Such procedures, known as biopsies, are accurate but offer only a snapshot of the tumor at a single moment in time.
Monitoring a tumor for weeks or months after the biopsy and tracking its growth and how it responds to treatment would be much more valuable, says Michael J. Cima, Ph.D., who has developed the first implantable device that can do just that. Dr. Cima, professor of materials science and engineering at the Massachusetts Institute of Technology (MIT) and a member of the MIT-Harvard Center of Cancer Nanotechnology Excellence (CCNE), and his colleagues reported in the journal Biosensors and Bioelectronics that their device successfully tracked a tumor marker in mice for 1 month. Fellow MIT CCNE investigators Robert Langer, Ph.D., Al Charest, Ph.D., M.Sc., and Ralph Weissleder, M.D., Ph.D., also contributed to this work.
Such implants could one day provide up-to-the-minute information about what a tumor is doing—whether it is growing or shrinking, how it is responding to treatment, and whether it has metastasized or is about to do so. “What this does is basically take the lab and put it in the patient,” said Dr. Cima.
The devices, which could be implanted at the time of biopsy, also could be tailored to monitor chemotherapy agents, allowing doctors to determine whether cancer drugs are reaching the tumors. They also can be designed to measure acidity (pH) or oxygen levels, which reveal tumor metabolism and how it is responding to therapy.
The cylindrical, 5-millimeter implant is made of high-density polyethylene encased in a polycarbonate membrane with 10-nanometer-diameter pores. Magnetic nanoparticles coated with antibodies specific to the target molecules are loaded into the device. Target molecules enter the implant through the polycarbonate membrane, binding to the nanoparticles and causing them to clump together. That clumping can be detected by magnetic resonance imaging (MRI) because the aggregated nanoparticles produce a marked change in the MRI signal associated with the implanted device. The researchers observed measurable changes within 1 day of implantation.
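As a back-of-the-envelope illustration of the readout principle, one can model the MRI signal change as rising with the fraction of nanoparticles that have bound target and aggregated, and call the marker "detected" once that change clears the scanner's noise floor. Every constant in the sketch below is invented; it shows thresholded detection, not the actual device physics.

    # Back-of-the-envelope model of the implant's readout: target
    # binding aggregates nanoparticles, which shifts the MRI signal.
    # The saturation-binding form and every constant are invented,
    # purely to illustrate thresholded detection.

    def signal_change(concentration, k_half=1.0, max_change=0.30):
        """Fraction of full MRI signal change at a given target
        concentration (arbitrary units), saturating as free
        antibody sites run out."""
        bound_fraction = concentration / (concentration + k_half)
        return max_change * bound_fraction

    def detected(concentration, noise_floor=0.05):
        return signal_change(concentration) > noise_floor

    for c in (0.0, 0.1, 0.5, 2.0):
        print(c, round(signal_change(c), 3), detected(c))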
In the published work, the investigators transplanted human tumors into test mice and then used the implants to track levels of human chorionic gonadotropin, a hormone produced by the human tumor cells. Dr. Cima said he believes an implant to test for pH levels could be commercially available in a few years, followed by devices to test for complex chemicals such as other hormones and drugs.
This work, which is detailed in the paper “Implantable diagnostic device for cancer monitoring,” was supported by the NCI Alliance for Nanotechnology in Cancer, a comprehensive initiative designed to accelerate the application of nanotechnology to the prevention, diagnosis, and treatment of cancer. An abstract is available at the journal’s Web site.
Provided by National Cancer Institute

September 6, 2009

'Plasmobot': Scientists To Design First Robot Using Mould

Scientists at the University of the West of England are to design the first ever biological robot using mould.

Researchers have received a Leverhulme Trust grant worth £228,000 to develop the amorphous non-silicon biological robot, plasmobot, using plasmodium, the vegetative stage of the slime mould Physarum polycephalum, a commonly occurring mould which lives in forests, gardens and most damp places in the UK. The Leverhulme Trust-funded research project aims to design the first ever fully biological (no silicon components) amorphous massively parallel robot.

This project is at the forefront of research into unconventional computing. Professor Andy Adamatzky, who is leading the project, says their previous research has already demonstrated the mould's computational abilities.

Professor Adamatzky explains, “Most people’s idea of a computer is a piece of hardware with software designed to carry out specific tasks. This mould, or plasmodium, is a naturally occurring substance with its own embedded intelligence. It propagates and searches for sources of nutrients and when it finds such sources it branches out in a series of veins of protoplasm. The plasmodium is capable of solving complex computational tasks, such as the shortest path between points and other logical calculations. Through previous experiments we have already demonstrated the ability of this mould to transport objects. By feeding it oat flakes, it grows tubes which oscillate and make it move in a certain direction carrying objects with it. We can also use light or chemical stimuli to make it grow in a certain direction.
“This new plasmodium robot, called plasmobot, will sense objects, span them in the shortest and best way possible, and transport tiny objects along pre-programmed directions. The robots will have parallel inputs and outputs, a network of sensors and the number crunching power of super computers. The plasmobot will be controlled by spatial gradients of light, electro-magnetic fields and the characteristics of the substrate on which it is placed. It will be a fully controllable and programmable amorphous intelligent robot with an embedded massively parallel computer.”
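Expressed conventionally, the shortest-path task the plasmodium is said to solve is the classic graph problem below. This is a standard Dijkstra search shown for comparison, not a model of how the mould computes; the oat-flake graph is invented.

    # The shortest-path problem the plasmodium is said to solve,
    # written as a standard Dijkstra search. This illustrates the
    # computation, not how the mould performs it.
    import heapq

    def shortest_path(graph, start, goal):
        """graph: {node: [(neighbour, distance), ...]}"""
        frontier = [(0, start, [start])]
        seen = set()
        while frontier:
            dist, node, path = heapq.heappop(frontier)
            if node == goal:
                return dist, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, d in graph.get(node, []):
                if nxt not in seen:
                    heapq.heappush(frontier, (dist + d, nxt, path + [nxt]))
        return None

    # Oat flakes as nodes, protoplasm tubes as candidate edges.
    oats = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)],
            "C": [("D", 1)], "D": []}
    print(shortest_path(oats, "A", "D"))  # (4, ['A', 'B', 'C', 'D'])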
This research will lay the groundwork for further investigations into the ways in which this mould can be harnessed for its powerful computational abilities.
Professor Adamatzky says that there are long term potential benefits from harnessing this power, “We are at the very early stages of our understanding of how the potential of the plasmodium can be applied, but in years to come we may be able to use the ability of the mould for example to deliver a small quantity of a chemical substance to a target, using light to help to propel it, or the movement could be used to help assemble micro-components of machines. In the very distant future we may be able to harness the power of plasmodia within the human body, for example to enable drugs to be delivered to certain parts of the human body. It might also be possible for thousands of tiny computers made of plasmodia to live on our skin and carry out routine tasks freeing up our brain for other things. Many scientists see this as a potential development of amorphous computing, but it is purely theoretical at the moment.”
Professor Adamatzky recently edited ‘Artificial Life Models in Hardware’, published by Springer and aimed at students and researchers of robotics. The book focuses on the design and real-world implementation of artificial life robotic devices and covers a range of hopping, climbing and swimming robots, neural networks, and slime mould and chemical brains.

Original article posted at Science Daily

September 4, 2009

Go to hospital to see computing's future


Innovation is our regular column that highlights emerging technological ideas and where they may lead.
If you want to know how people will interact with machines in the future, head for a hospital.
That's the impression I got from a new report about the future of human-computer interaction from IT analysts Gartner, based in Stamford, Connecticut.
Gartner's now-classic chart, shown right, traces the rollercoaster of expectations ridden by new technologies: rocketing from obscurity to a peak of overblown hype, then falling into a "trough of disillusionment" before finally becoming mainstream as a tech's true worth is found.

Enlightened climb

Speech recognition, currently climbing the slope of enlightenment towards the plateau of productivity, is a good example of how healthcare helps new technology.
Some homeworkers are now hooked, and the technology is appearing in cellphones and voicemail systems. But its maturity owes as much to the rehabilitation industry as the software industry.
Today's true power users of voice recognition are people who are physically unable to use keyboard or mouse. For them, it is as much a medical device as an office aide. They have not only supported public and private research over the years, but also provided a market for the technology when it was far from perfect.

Guided by eyes

Eye tracking, climbing the hype peak as you read this, is also an everyday reality for many people for whom conventional interfaces are difficult.
Without that spur to innovation it is unlikely that more mainstream uses for eye tracking, from making computer games spring baddies on you when you least expect it to having billboards track passers-by, would be so advanced.
Slumped at the bottom of the trough of disillusionment, virtual reality seems too familiar an idea to be labelled "emerging". But it, too, is relatively well established in the clinic, where the high installation costs can be justified.
Psychologists have long used it to recreate scary scenarios while treating phobias. More recently it has shown promise for phantom limb pain and schizophrenia diagnosis. Many US soldiers returning from Iraq and Afghanistan are being treated using virtual experiences.
Gartner forecasts 10 more years before virtual reality reaches the mainstream – a prediction some readers may remember from the 1980s – but it is likely to become mainstream for psychology much earlier than that.
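For reference, the stages of Gartner's chart and where this column places each interface technology can be jotted down as simple data (the stage names are Gartner's standard labels):

    # The hype-cycle stages in order, with the placements this
    # article gives for each interface technology.
    HYPE_CYCLE = [
        "technology trigger",
        "peak of inflated expectations",
        "trough of disillusionment",
        "slope of enlightenment",
        "plateau of productivity",
    ]

    PLACEMENTS = {
        "eye tracking": "peak of inflated expectations",
        "virtual reality": "trough of disillusionment",
        "speech recognition": "slope of enlightenment",
    }

    for tech, stage in PLACEMENTS.items():
        print(f"{tech}: stage {HYPE_CYCLE.index(stage) + 1} ({stage})")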

Mind control

Haptics is another technology with consumer potential that's already being used in clinical contexts: for remote surgery and training, and for interpreting complex scan output.
And the computer interface technology that's likely to be the most significant of all can also be experienced properly only in a hospital so far. It's not hard to imagine who looks forward most eagerly to the latest developments in mind control of computers.

A handful of people already know what exerting such control can offer. Without lifting a finger they are able to send email, play video games (see video), control wheelchairs or prosthetic arms, update Twitter and even have their thoughts read aloud (see video).
Similarly, victims of accidents or injury provide the first hints of the kind of "upgrades" the otherwise healthy may in future choose to make to their bodies.

Seal of approval

Hospitals may not only be providing a preview of future interfaces, though – they may also be ensuring that they hit the big time with fewer design glitches.
Despite some conspicuous success in the smartphone arena, touch interface technology could still do with some improvement, and it's often less use than older but better-understood interfaces.
The technological nursery of the healthcare market could prevent many ergonomic and design wrinkles from making it to mass deployment in future.
Not only will the mainstream gadget industry have some tried-and-tested examples to draw on, but designs will have benefited from the safety and usability requirements demanded of medical devices by regulators like the US Food and Drug Administration.

Original article posted in New Scientist on 31 August 2009 by Tom Simonite

September 3, 2009

We Are All Mutants: Measurement Of Mutation Rate In Humans By Direct Sequencing




An international team of 16 scientists today reports the first direct measurement of the general rate of genetic mutation at individual DNA letters in humans. The team sequenced the same piece of DNA - 10,000,000 or so letters or 'nucleotides' from the Y chromosome - from two men separated by 13 generations, and counted the number of differences. Among all these nucleotides, they found only four mutations.

In 1935 one of the founders of modern genetics, J. B. S. Haldane, studied men in London with the blood disease haemophilia and estimated that there would be one in 50,000 incidence of mutations causing haemophilia in the gene affected - the equivalent of a mutation rate of perhaps one in 25 million nucleotides across the genome. Others have measured rates at a few further specific genes or compared DNA from humans and chimpanzees to produce general estimates of the mutation rate expressed more directly in nucleotides of DNA.
Remarkably, the new research, recently published in Current Biology, shows that these early estimates were spot on - in total, we all carry 100-200 new mutations in our DNA. This is equivalent to one mutation in each 15 to 30 million nucleotides. Fortunately, most of these are harmless and have no apparent effect on our health or appearance.
"The amount of data we generated would have been unimaginable just a few years ago," says Dr Yali Xue from the Wellcome Trust Sanger Institute and one of the project's leaders. "But finding this tiny number of mutations was more difficult than finding an ant's egg in the emperor's rice store."
Team member Qiuju Wang recruited a family from China who had lived in the same village for centuries. The team studied two distant male-line relatives - separated by thirteen generations - whose common ancestor lived two hundred years ago.
To establish the rate of mutation, the team examined an area of the Y chromosome. The Y chromosome is unique in that, apart from rare mutations, it is passed unchanged from father to son; so mutations accumulate slowly over the generations.
Despite many generations of separation, researchers found only 12 differences among all the DNA letters examined. The two Y chromosomes were still identical at 10,149,073 of the 10,149,085 letters examined. Of the 12 differences, eight had arisen in the cell lines used for the work. Only four were true mutations that had occurred naturally through the generations.
We have known for a long time that mutations occur occasionally in each of us, but have had to guess exactly how often. Now, thanks to advances in the technology for reading DNA, this new research has been possible.
Understanding mutation rates is key to many aspects of human evolution and medical research: mutation is the ultimate source of all our genetic variation and provides a molecular clock for measuring evolutionary timescales. Mutations can also lead directly to diseases like cancer. With better measurements of mutation rates, we could improve the calibration of the evolutionary clock, or test ways to reduce mutations, for example.
Even with the latest DNA sequencing technology, the researchers had to design a special strategy to search for the vanishingly rare mutations. They used next-generation sequencing to establish the order of letters on the two Y chromosomes and then compared these to the Y chromosome reference sequence.
Having identified 23 candidate SNPs - or single letter changes in the DNA - they amplified the regions containing these candidates and checked the sequences using the standard Sanger method. A total of four naturally occurring mutations were confirmed. Knowing this number of mutations, the length of the area that they had searched and the number of generations separating the individuals, the team were able to calculate the rate of mutation.
"These four mutations gave us the exact mutation rate - one in 30 million nucleotides each generation - that we had expected," says the study's coordinator, Chris Tyler-Smith, also from The Wellcome Trust Sanger Institute. "This was reassuring because the methods we used - harnessing next-generation sequencing technology - had not previously been tested for this kind of research. New mutations are responsible for an array of genetic diseases. The ability to reliably measure rates of DNA mutation means we can begin to ask how mutation rates vary between different regions of the genome and perhaps also between different individuals."
This work was supported by the Joint Project from the NSFC and The Royal Society, and the Wellcome Trust.
Posted by Science Daily