Showing posts with label Brain to Machine Interfacing. Show all posts

June 18, 2012

New energy source for future medical implants: brain glucose


The Matrix was right: humans will act as batteries

Brain power: harvesting power from the cerebrospinal fluid within the subarachnoid space. Inset at right: a micrograph of a prototype, showing the metal layers of the anode (central electrode) and cathode contact (outer ring) patterned on a silicon wafer. (Credit: Karolinska Institutet/Stanford University)

MIT engineers have developed a fuel cell that runs on glucose for powering highly efficient brain implants of the future that can help paralyzed patients move their arms and legs again — batteries included.

The fuel cell strips electrons from glucose molecules to create a small electric current.

The researchers, led by Rahul Sarpeshkar, an associate professor of electrical engineering and computer science at MIT, fabricated the fuel cell on a silicon chip, allowing it to be integrated with other circuits that would be needed for a brain implant.

In the 1970s, scientists showed they could power a pacemaker with a glucose fuel cell, but the idea was abandoned in favor of lithium-ion batteries, which could provide significantly more power per unit area than glucose fuel cells.

These glucose fuel cells also used enzymes that proved to be impractical for long-term implantation in the body, since they eventually ceased to function efficiently.

How to generate hundreds of microwatts from sugar

A silicon wafer with glucose fuel cells of varying sizes; the largest is 64 by 64 mm. (credit: Sarpeshkar Lab)

The new fuel cell is fabricated from silicon, using the same technology used to make semiconductor electronic chips, with no biological components.

A platinum catalyst strips electrons from glucose, mimicking the activity of cellular enzymes that break down glucose to generate ATP, the cell’s energy currency. (Platinum has a proven record of long-term biocompatibility within the body.)

So far, the fuel cell can generate up to hundreds of microwatts — enough to power an ultra-low-power and clinically useful neural implant.
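As a rough sanity check on what "hundreds of microwatts" implies, output scales with electrode area. The areal power density in the sketch below is an assumed illustrative figure, not one reported by the MIT team:

```python
# Back-of-the-envelope output estimate for a square glucose fuel cell.
# The areal power density is an ASSUMED illustrative value, not a
# figure reported in the MIT study.
def cell_power_uW(power_density_uW_per_cm2, side_mm):
    """Total output in microwatts of a square cell with the given side."""
    area_cm2 = (side_mm / 10.0) ** 2
    return power_density_uW_per_cm2 * area_cm2

# An assumed 12 uW/cm^2 across a 64 mm x 64 mm cell (the largest size
# on the wafer) would already supply hundreds of microwatts:
print(cell_power_uW(12, 64))
```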

Benjamin Rapoport, a former graduate student in the Sarpeshkar lab and the first author on the new MIT study, calculated that in theory, the glucose fuel cell could get all the sugar it needs from the cerebrospinal fluid (CSF) that bathes the brain and protects it from banging into the skull.

There are very few cells in the CSF, so it’s highly unlikely that an implant located there would provoke an immune response, the researchers say.

Structure of the glucose fuel cell and the oxygen and glucose concentration gradients crucially associated with its cathode and anode half-cell reactions (credit: Benjamin I. Rapoport, Jakub T. Kedzierski, Rahul Sarpeshkar/PLoS One)

There is also significant glucose in the CSF, which does not generally get used by the body. Since only a small fraction of the available power is utilized by the glucose fuel cell, the impact on the brain’s function would likely be small.

Implantable medical devices

“It will be a few more years into the future before you see people with spinal-cord injuries receive such implantable systems in the context of standard medical care, but those are the sorts of devices you could envision powering from a glucose-based fuel cell,” says Rapoport.

Karim Oweiss, an associate professor of electrical engineering, computer science and neuroscience at Michigan State University, says the work is a good step toward developing implantable medical devices that don’t require external power sources.

“It’s a proof of concept that they can generate enough power to meet the requirements,” says Oweiss, adding that the next step will be to demonstrate that it can work in a living animal.

A team of researchers at Brown University, Massachusetts General Hospital and other institutions recently demonstrated that paralyzed patients could use a brain-machine interface to move a robotic arm; those implants have to be plugged into a wall outlet.

Ultra-low-power bioelectronics

Sarpeshkar’s group is a leader in the field of ultra-low-power electronics, having pioneered such designs for cochlear implants and brain implants. “The glucose fuel cell, when combined with such ultra-low-power electronics, can enable brain implants or other implants to be completely self-powered,” says Sarpeshkar, author of the book Ultra Low Power Bioelectronics.

The book discusses how the combination of ultra-low-power and energy-harvesting design can enable self-powered devices for medical, bio-inspired and portable applications.

Sarpeshkar’s group has worked on all aspects of implantable brain-machine interfaces and neural prosthetics, including recording from nerves, stimulating nerves, decoding nerve signals and communicating wirelessly with implants.

One such neural prosthetic is designed to record electrical activity from hundreds of neurons in the brain’s motor cortex, which is responsible for controlling movement. That data is amplified and converted into a digital signal so that computers — or in the Sarpeshkar team’s work, brain-implanted microchips — can analyze it and determine which patterns of brain activity produce movement.
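The decoding step can be illustrated with a toy linear model: fit a mapping from firing rates to movement velocity by least squares. This is a generic textbook sketch on simulated data, not the Sarpeshkar team's actual decoder:

```python
import numpy as np

# Toy sketch of the decoding step: fit a linear map from firing rates
# to 2-D movement velocity by least squares. Simulated data only --
# a generic approach, not the team's actual algorithm.
rng = np.random.default_rng(0)

n_neurons, n_samples = 100, 500
tuning = rng.normal(size=(2, n_neurons))               # hidden tuning
rates = rng.poisson(5.0, size=(n_samples, n_neurons)).astype(float)
velocity = rates @ tuning.T                            # simulated arm velocity

# Solve velocity ~= rates @ W for the decoder weights W
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)
pred = rates @ W
print(np.allclose(pred, velocity, atol=1e-6))          # recovers the toy map
```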

The fabrication of the glucose fuel cell was done in collaboration with Jakub Kedzierski at MIT’s Lincoln Laboratory. “This collaboration with Lincoln Lab helped make a long-term goal of mine — to create glucose-powered bioelectronics — a reality,” Sarpeshkar says.

Although he has begun working on bringing ultra-low-power and medical technology to market, he cautions that glucose-powered implantable medical devices are still many years away.

Ref.: Benjamin I. Rapoport, Jakub T. Kedzierski, Rahul Sarpeshkar, A Glucose Fuel Cell for Implantable Brain-Machine Interfaces, PLoS ONE, 2012, DOI: 10.1371/journal.pone.0038436 (open access)

April 11, 2012

Interview with Professor Piotr J. Durka About the BCI Appliance Developed at University of Warsaw


At CeBIT 2012, the University of Warsaw presented a wireless brain-computer interface (BCI) system called the BCI Appliance. The device is a tablet-sized box with just one button, running entirely on open-source software. Neurogadget.com was able to put some questions about the BCI Appliance to the leader of the project via email. Welcome to an exclusive interview with Piotr J. Durka, professor at the University of Warsaw, Department of Physics.

Read the rest of the article at Brain Machine Interfacing

July 28, 2011

The Walk Again Project

Over the past decade, neuroscientists at the Duke University Center for Neuroengineering (DUCN) have developed the field of brain-machine interface (BMI) into one of the most exciting—and promising—areas of basic and applied research in modern neuroscience. By creating a way to link living brain tissue to a variety of artificial tools, BMIs have made it possible for non-human primates to use the electrical activity produced by hundreds of neurons, located in multiple regions of their brains, to directly control the movements of a variety of robotic devices, including prosthetic arms and legs.

As a result, BMI research raises the hope that in the not-too-distant future, patients suffering from a variety of neurological disorders that lead to devastating levels of paralysis may be able to recover their mobility by harnessing their own brain impulses to directly control sophisticated neuroprostheses.
The Walk Again Project, an international consortium of leading research centers around the world, represents a new paradigm for scientific collaboration among the world’s academic institutions, bringing together a global network of scientific and technological experts, distributed across all the continents, to achieve a key humanitarian goal.

The project’s central goal is to develop and implement the first BMI capable of restoring full mobility to patients suffering from a severe degree of paralysis. This lofty goal will be achieved by building a neuroprosthetic device that uses a BMI as its core, allowing the patients to capture and use their own voluntary brain activity to control the movements of a full-body prosthetic device. This “wearable robot,” also known as an “exoskeleton,” will be designed to sustain and carry the patient’s body according to his or her mental will.

In addition to developing new technologies aimed at improving the quality of life of millions of people worldwide, the Walk Again Project also innovates by creating a completely new paradigm for global scientific collaboration among leading academic institutions. Under this model, a worldwide network of leading scientific and technological experts, distributed across all the continents, comes together in a major non-profit effort to make a fellow human being walk again, drawing on their collective expertise. These world-renowned scholars will contribute key intellectual assets, provide a base for the project’s continued fundraising and capitalization, and set clear goals for fundamental advances toward restoring full mobility to patients in need.

Walk again Project Homepage

July 27, 2011

Comprehensive list of BCI Labs Worldwide

You can find a comprehensive listing of companies and labs doing research on brain-computer interfaces at Now Possible. Researchers, executives and other professionals may also want to join their BCI group on LinkedIn.

You can find it at Now Possible

July 25, 2011

Scientists differentiate brain activity associated with grasping

Quickly grabbing a cup of coffee is an everyday action for most of us. For people with severe paralysis, however, this task is unfeasible - yet not "unthinkable". Interfaces between the brain and a computer can, in principle, detect these "thoughts" and transform them into steering commands. Scientists from Freiburg have now found a way to distinguish between different types of grasping on the basis of the accompanying brain activity.

In the current issue of the journal "NeuroImage", Tobias Pistohl and colleagues from the Bernstein Center Freiburg and the University Medical Centre describe how they succeeded in differentiating the brain activity associated with a precise grip and a grip of the whole hand. Ultimately, the scientists aim to develop a neuroprosthesis: a device that receives commands directly from the brain, and which can be used by paralysed people to control the arm of a robot - or even their own limbs.

Until now, one big problem concerning arm movements had remained unresolved: in our daily lives, it is important to handle different objects in different ways - a feather and a brick, for example. The researchers from Freiburg have now identified aspects of the brain's activity that distinguish a precise grip from a grip with the whole hand.

To this end, Pistohl and his collaborators made use of signals that are measured on the surface of the brain. The big advantage of this approach is that no electrodes have to be implanted directly into this delicate organ. At the same time, the obtained signals are much more precise than those that can be measured on the skull's surface.

The scientists conducted a simple experiment with patients who were not paralysed, but who had electrodes implanted beneath their skulls for medical reasons. The task was to grab a cup, either with a precise grip formed by the thumb and index finger, or with the whole hand. At the same time, a computer recorded the electrical changes at the electrodes. Indeed, the scientists were able to find signals in the brain's activity that differed depending on the type of grasp, and a computer was able to attribute these signals to the different hand positions with great reliability. The next challenge will be to identify these kinds of signals in paralysed patients as well - with the aim of eventually putting a more independent life back within their reach.
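The classification idea can be sketched with a toy nearest-centroid rule over simulated electrode features. This is illustrative only; the Freiburg group's actual decoder is more sophisticated:

```python
import numpy as np

# Minimal sketch of grasp-type classification: a nearest-centroid
# rule over simulated electrode features. Illustrative only.
rng = np.random.default_rng(1)

def trials(mean, n=50, channels=8):
    """Simulated feature vectors for one grasp type."""
    return rng.normal(mean, 1.0, size=(n, channels))

precision = trials(0.0)        # thumb-and-index grip
whole_hand = trials(2.0)       # whole-hand grip

centroids = {"precision": precision.mean(axis=0),
             "whole_hand": whole_hand.mean(axis=0)}

def classify(trial):
    return min(centroids, key=lambda k: np.linalg.norm(trial - centroids[k]))

accuracy = np.mean([classify(t) == "precision" for t in precision])
print(accuracy)                # near-perfect on this well-separated toy data
```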

Source: Bernstein Center Freiburg

June 2, 2011

Archinoetics: Helping Patients with Locked-In Syndrome

The Challenge

Imagine waking up one day and not being able to move any part of your body. You try to call for help but discover you cannot speak. You attempt to look around the room to see if anyone is nearby, but your eyes won’t move either. Your mind is working perfectly, but you are trapped in your own body.
People with this rare, horrific condition, known as “locked-in syndrome”, can become this way almost instantly from a brainstem stroke or severe trauma, or more gradually from amyotrophic lateral sclerosis (ALS, otherwise known as Lou Gehrig’s disease). In some cases, people retain control over their eyes, allowing them to use assistive technologies like eye-tracking devices to communicate through a computer. Unfortunately, for those who lose eye control, communication is significantly more challenging and solutions are severely limited.

Our Approach

There’s no success more gratifying than helping a locked-in man communicate with his family for the first time in years.
Archinoetics developed brain-machine interfaces (BMIs, also known as brain-computer interfaces or BCIs) that enable people to interact with and control machines through our custom-designed functional brain imaging system. These systems, which use functional near-infrared imaging (fNIR), monitor the brain’s activity in real time to detect what type of mental task a person is performing. By giving the subject a choice of a few tasks to select from, the person can create a signal that our software interprets, allowing them to manipulate basic computer interfaces.

In our research lab, testing the system on healthy people, everything appeared to function perfectly. The real test came when we visited a man who, because he was locked-in, had not been able to communicate with his family in years. The Archinoetics team looked on anxiously as the sensors were placed on his head and the computer started receiving data. As with many studies involving human subjects, our first tests did not work. But over the course of several days, we worked through a number of challenges and were able to help this man answer several yes-or-no questions that his family wanted to ask him.
For the first time in years, he communicated!
After our initial success, we continued to improve our software for communication to improve its reliability and the speed with which someone could communicate. In parallel, we also worked on some fun applications to give locked-in people a way to entertain themselves while practicing the required mental tasks that allow them to control the system for communications. The applications included video games and painting applications. A screenshot of the video game appears here and shows a dolphin that the person controls in an attempt to eat the fish that swim by. The painting application is discussed more below.
Brain Painting
Archinoetics has developed a BCI called “brain painting”. This application allows someone to paint through consciously modifying the level of activity in a region of his or her brain. Typically this means either “singing in your head” or repeating nonsense syllables in your head (such as “la la la”). The first activity activates the language area, thereby raising the signal measured by OTIS, whereas the second activity lowers the signal. In addition to being a fun creative tool, brain painting also helps people learn the skills necessary to use a BCI effectively for communication.
For Brain Painting, Archinoetics worked closely with the late artist, Peggy Chun, whose tropical watercolor paintings made her a household name in Hawaii. Peggy was diagnosed with ALS in 2002, but never let the disease stop her from painting. As she became paralyzed, she switched to painting with her left hand, then by holding the paintbrush in her teeth. Even when she was only able to move her eyes, Peggy used an eye-tracking system to communicate and paint. At Archinoetics, we helped Peggy become the world’s first ‘brain painter’ (see her most famous brain painting on the laptop screen in the photo, entitled ‘Navajo Nightfall’). Sadly, Peggy passed away in 2008, but her memory and spirit live on in her beautiful paintings.
To view or purchase Peggy’s artwork, please visit her website at www.peggychun.com.

Support

This research is in collaboration with the University of Virginia and Georgia Tech University, and has received support from the National Science Foundation under Grant No. 0705804 and Grant No. 0512003.
This article from the Archinoetics website

May 22, 2011

Bionic hand for 'elective amputation' patient

An Austrian man has voluntarily had his hand amputated so he can be fitted with a bionic limb.

The patient, called "Milo", aged 26, lost the use of his right hand in a motorcycle accident a decade ago.
After his stump heals in several weeks' time, he will be fitted with a bionic hand which will be controlled by nerve signals in his own arm.

The surgery is the second such elective amputation to be performed by Viennese surgeon Professor Oskar Aszmann.

The patient, a Serbian national who has lived in Austria since childhood, suffered injuries to a leg and shoulder when he skidded off his motorcycle and smashed into a lamppost in 2001 while on holiday in Serbia.
Milo used a hybrid hand before deciding on the operation
While the leg healed, what is called a "brachial plexus" injury to his right shoulder left his right arm paralysed. Nerve tissue transplanted from his leg by Professor Aszmann restored movement to his arm but not to his hand.

A further operation involving the transplantation of muscle and nerve tissue into his forearm also failed to restore movement to the hand, but it did at least boost the electric signals being delivered from his brain to his forearm, signals that could be used to drive a bionic hand.

Then three years ago, Milo was asked whether he wanted to consider elective amputation.
"The operation will change my life. I live 10 years with this hand and it cannot be (made) better. The only way is to cut this down and I get a new arm," Milo told BBC News prior to his surgery at Vienna's General Hospital.

Read the rest of the original article at BBC News

May 14, 2011

Imec BCI to be unveiled in Feb 2011

At the heart of the system is Imec’s 8-channel ASIC (application-specific integrated circuit), which consumes ultra-low power (only 200 µW) and features a high common-mode rejection ratio (CMRR) of 120 dB and low noise (input-referred noise of 55 nV/√Hz).
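A noise density like this translates into total RMS noise once a recording bandwidth is chosen. The 100 Hz band below is an assumed example for EEG, not an Imec specification:

```python
import math

# Convert input-referred noise density to total RMS noise over a band.
# The 100 Hz bandwidth is an ASSUMED example, not an Imec spec.
noise_density_nV = 55.0              # nV per sqrt(Hz), from the article
bandwidth_hz = 100.0                 # assumed band of interest
rms_uV = noise_density_nV * math.sqrt(bandwidth_hz) / 1000.0
print(rms_uV)  # 0.55 uV RMS, well below typical scalp EEG amplitudes
```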

A brand new wireless brain-reading headset debuts this week at the Medical Design and Manufacturing conference and exhibition in Anaheim, California.


The Belgian Imec and Dutch Holst Centre say their device will enable continuous ambulatory monitoring. It could improve safety in the future (no more falling asleep while driving a car) and also make video games more enjoyable by adjusting the action and the environment to the player’s cognitive state (much like the Emotiv EPOC). In future medical applications, the headset might function as a warning system for epileptic patients, and might even enable people with motor disabilities to communicate by typing text with their thoughts.

The prototype headset doesn’t look any worse than other consumer brain-computer interfaces on the market; it has a unique, futuristic shape. The flexible, magnetic dry electrodes could prove practical in use - hopefully they won’t cause too much discomfort on the skin.

The electronics, including the integrated circuit, radio, and controller chips are integrated in a small wireless EEG system of 25 x 35 x 5 millimeters, that can easily be embedded in headsets, helmets or other accessories.

This wireless EEG system has been integrated in a prototype EEG headset. The prototype headset can be easily adapted to the head of the user by extending a plastic bridge near the back of the head and by moving the part that contains the electronics upwards or downwards. On top of that, a spring suspension, guaranteeing improved robustness, and a magnetized pivoting mechanism can be used for fine adaptation to the head. The magnetic connection of the electrodes allows quick and easy replacement making it a hygienic solution. Gel injection is still possible if required for certain applications. Today the system relies on commercial off-the-shelf Ag/AgCl electrodes, which may lead to certain level of discomfort. According to Imec, in a few years, research on dry electrodes will result in increased comfort and higher signal quality.
According to Imec’s press release, industry can get access to this technology by joining the Human++ program as a research partner, or through licensing agreements for further product development.

Will it be real competition for existing market leaders such as Emotiv and NeuroSky? Time will tell.

March 1, 2011

Punk rock skeleton demos mind control system

Who says punk is dead? In the video above, a skeleton with a mohawk is helping to visualise how a new neural implant device reads brain signals and interprets them to control a prosthetic arm. The yellow spikes radiating from the skeleton's head represent the firing of motor neurons in the brain. Each neuron is tuned to recognise a different direction in space, so as the arm moves, the spikes change to reflect the changing direction. By adding together the output of all the neurons, the direction of the arm's movement - represented by the blue arrow - can be predicted.
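The summation described above is the classic population-vector idea, and it fits in a few lines: each neuron fires in proportion to the cosine of the angle between its preferred direction and the movement, and the rate-weighted sum of preferred directions points along the arm's movement. The numbers below are illustrative:

```python
import math

# Population-vector sketch: rate-weighted sum of preferred directions
# recovers the movement direction. Illustrative numbers only.
n = 360
preferred = [2 * math.pi * i / n for i in range(n)]
movement = math.radians(37)                    # true movement direction

def rate(pref):
    """Half-wave-rectified cosine tuning curve."""
    return max(0.0, math.cos(movement - pref))

vx = sum(rate(p) * math.cos(p) for p in preferred)
vy = sum(rate(p) * math.sin(p) for p in preferred)
decoded = math.degrees(math.atan2(vy, vx))
print(round(decoded, 1))  # ~37.0: the population recovers the direction
```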
Mind control devices are quite the rage these days, with systems designed to control everything from iPad apps, to prosthetic limbs, to cars. This system, developed by Daniel Moran of Washington University in St. Louis, uses a grid of disc-shaped electrodes, inserted between the brain and the skull, to read electrical activity in the brain. It's more precise than electrodes placed outside the skull, and less invasive than probes inserted into the brain itself.
With further refinements, the system could give amputees better control over prosthetic limbs without overly invasive surgical implants.

Original article from New Scientist magazine

May 8, 2010

Army of smartphone chips could emulate the human brain

IF YOU have a smartphone, you probably have a slice of Steve Furber's brain in your pocket. By the time you read this, his 1-billion-neuron silicon brain will be in production at a microchip plant in Taiwan.
Computer engineers have long wanted to copy the compact power of biological brains. But the best mimics so far have been impractical, being simulations running on supercomputers.
Furber, a computer scientist at the University of Manchester, UK, says that if we want to use computers with even a fraction of a brain's flexibility, we need to start with affordable, practical, low-power components.
"We're using bog-standard, off-the-shelf processors of fairly modest performance," he says.
Furber won't come close to copying every property of real neurons, says Henry Markram, head of Blue Brain, a project that aims to simulate a brain with unsurpassed accuracy on an IBM Blue Gene supercomputer at the Swiss Federal Institute of Technology in Lausanne. "It's a worthy aim, but brain-inspired chips can only produce brain-like functions," he says.
That's good enough for Furber, who wants to start teaching his brain-like computer about the world as soon as possible. His first goal is to teach it how to control a robotic arm, before working towards a design to control a humanoid. A robot controller with even a dash of brain-like properties should be much better at tasks like image recognition, navigation and decision-making, says Furber.
"Robots offer a natural, sensory environment for testing brain-like computers," says Furber. "You can instantly tell if it is being useful."
Called Spinnaker - for Spiking Neural Network Architecture - the brain is based on a processor created in 1987 by Furber and colleagues at Acorn Computers in Cambridge, UK, makers of the seminal BBC Microcomputer.
Although the chip was made for a follow-up computer that flopped, the ARM design at its heart lived on, becoming the most common "embedded" processor in devices like e-book readers and smartphones.
But coaxing any computer into behaving like a brain is tough. Both real neurons and computer circuits communicate using electrical signals, but in biology the "wires" carrying them do not have fixed roles as in electronics. The importance of a particular neural connection, or synapse, varies as the network learns by balancing the influence of the different signals being received. This synaptic "weighting" must be dynamic in a silicon brain, too.
The chips under construction in Taiwan contain 20 ARM processor cores, each modelling 1000 neurons. With 20,000 neurons per chip, 50,000 chips will be needed to reach the target of 1 billion neurons.
A memory chip next to each processor stores the changing synaptic weights as simple numbers that represent the importance of a given connection at any moment. Initially, those will be loaded from a PC, but as the system gets bigger and smarter, says Furber, "the only computer able to compute them will be the machine itself".
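The bookkeeping described above can be sketched in a few lines: each synapse is just a number in a table, consulted whenever a spike arrives. This is purely illustrative and far simpler than Spinnaker's real neuron models:

```python
# Sketch of synaptic-weight bookkeeping: weights are plain numbers in
# a table, consulted on every incoming spike. Illustrative only.
weights = {("n0", "n2"): 0.6, ("n1", "n2"): 0.5}   # synaptic weight table
threshold, leak = 1.0, 0.9

def step(potential, spikes, target="n2"):
    """One timestep of a leaky threshold neuron receiving spikes."""
    potential *= leak                               # passive decay
    for src in spikes:
        potential += weights.get((src, target), 0.0)
    fired = potential >= threshold
    return (0.0 if fired else potential), fired     # reset on firing

v, fired = step(0.0, ["n0", "n1"])                  # 0.6 + 0.5 = 1.1
print(fired)  # True: the combined weighted input crosses threshold
```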
Another brain-like behaviour his chips need to master is to communicate coordinated "spikes" of voltage. A computer has no trouble matching the speed at which individual neurons spike - about 10 times per second - but neurons work in very much larger, parallel groups than silicon logic gates.
In a brain there is no top-down control to coordinate their actions because the basic nature of individual neurons means that they work together in an emergent, bottom-up way.
Spinnaker cannot mimic that property, so it relies on a miniature controller to direct spike traffic, similar to one of the routers in the internet's backbone. "We can route to more than 4 billion neurons," says Furber, "many more than we need."
While the Manchester team await the arrival of their chips, they have built a cut-down version with just 50 neurons and have put the prototype through its paces in the lab. They have created a virtual environment in which the silicon brain controls a Pac-Man-like program that learns to hunt for a virtual doughnut.
"It shows that our four years designing the system haven't been wasted," says Furber. He hopes to have a 10,000-processor version working later this year.
As they attempt to coax brain-like behaviour from phone chips, others are working with hardware which may have greater potential.
The Defense Advanced Research Projects Agency, the Pentagon's research arm, is funding a project called Synapse. Wei Lu of the University of Michigan at Ann Arbor is working on a way of providing synaptic weights with memristors, first made in 2008 (New Scientist, 3 May 2008, p 26).
Handily, their most basic nature is brain-like: at any one moment a memristor's resistance depends on the last voltage placed across it. This rudimentary "memory" means that simple networks of memristors form weighted connections like those of neurons. This memory remains without drawing power, unlike the memory chips needed in Spinnaker. "Memristors are pretty neat," says Lu.
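That voltage-history dependence can be caricatured in a few lines of code. The drift rule below is purely illustrative, not a model of any real device:

```python
# Caricature of a memristor: conductance drifts with applied voltage
# and persists at zero bias (nonvolatile). Illustrative dynamics only.
class Memristor:
    def __init__(self, g=0.5):
        self.g = g                             # normalized conductance

    def apply(self, volts, dt=0.01):
        """Drift conductance with the applied voltage; clamp to [0, 1]."""
        self.g = min(1.0, max(0.0, self.g + volts * dt))
        return self.g

m = Memristor()
for _ in range(20):
    m.apply(1.0)                               # potentiate the 'synapse'
high = m.g
m.apply(0.0)                                   # zero bias: state retained
print(round(high, 3), round(m.g, 3))  # 0.7 0.7
```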
Their downside, though, is that they are untested. "Synapse is an extremely ambitious project," says Furber. "But ambition is what drives this field. No one knows the right way to go."

Original article posted on
 

October 10, 2009

Brain-to-brain communication demonstrated

Brain-to-brain ("B2B") communication has been achieved for the first time by Dr. Christopher James of the University of Southampton.



While attached to an EEG amplifier, the first person generated and transmitted a series of binary digits by imagining moving their left arm for zero and their right arm for one. The data was sent via the Internet to another PC. The second person was also attached to an EEG amplifier, and their PC flashed an LED lamp at two different frequencies, one for zero and the other for one.

The pattern of the flashing LEDs was too subtle to be consciously detected by the second person, but was picked up by electrodes detecting visual cortex activity. The PC deciphered whether a zero or a one was transmitted, with an end-to-end bandwidth of about 0.14 bit/sec.
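A quick calculation puts that bandwidth in perspective; the 8-bit ASCII encoding here is just an illustrative choice, not part of the experiment:

```python
# What ~0.14 bit/sec means in practice. The 8-bit ASCII encoding is
# an illustrative choice, not part of the experiment.
rate_bps = 0.14
message = "HELLO"
bits = len(message) * 8                  # 8 bits per ASCII character
seconds = bits / rate_bps
print(round(seconds / 60, 1))  # ~4.8 minutes to send five characters
```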

"B2B could be of benefit such as helping people with severe debilitating muscle wasting diseases, or with the so-called 'locked-in' syndrome, to communicate and it also has applications for gaming," said James.

Possible extensions of the research include two-way and multiuser B2B communication with faster, broader-bandwidth transmission by using more complex signal generation and pattern recognition. - Ed.

Source: University of Southampton news release


October 4, 2009

Burst of Technology Helps Blind to See


Barbara Campbell is part of a worldwide experiment testing whether electrodes implanted in the eye can restore sight.

Blindness first began creeping up on Barbara Campbell when she was a teenager, and by her late 30s, her eye disease had stolen what was left of her sight.

Reliant on a talking computer for reading and a cane for navigating New York City, where she lives and works, Ms. Campbell, now 56, would have been thrilled to see something. Anything.

Now, as part of a striking experiment, she can. So far, she can detect burners on her stove when making a grilled cheese, her mirror frame, and whether her computer monitor is on.

She is beginning an intensive three-year research project involving electrodes surgically implanted in her eye, a camera on the bridge of her nose and a video processor strapped to her waist.

The project, involving patients in the United States, Mexico and Europe, is part of a burst of recent research aimed at one of science’s most-sought-after holy grails: making the blind see.

Some of the 37 other participants further along in the project can differentiate plates from cups, tell grass from sidewalk, sort white socks from dark, distinguish doors and windows, identify large letters of the alphabet, and see where people are, albeit not details about them.

Linda Morfoot, 65, of Long Beach, Calif., blind for 12 years, says she can now toss a ball into a basketball hoop, follow her nine grandchildren as they run around her living room and “see where the preacher is” in church.

“For someone who’s been totally blind, this is really remarkable,” said Andrew P. Mariani, a program director at the National Eye Institute. “They’re able to get some sort of vision.”

Scientists involved in the project, the artificial retina, say they have plans to develop the technology to allow people to read, write and recognize faces.

Advances in technology, genetics, brain science and biology are making a goal that long seemed out of reach — restoring sight — more feasible.

“For a long time, scientists and clinicians were very conservative, but you have to at some point get out of the laboratory and focus on getting clinical trials in actual humans,” said Timothy J. Schoen, director of science and preclinical development for the Foundation Fighting Blindness. Now “there’s a real push,” he said, because “we’ve got a lot of blind people walking around, and we’ve got to try to help them.”

More than 3.3 million Americans 40 and over, or about one in 28, are blind or have vision so poor that even with glasses, medicine or surgery, everyday tasks are difficult, according to the National Eye Institute, a federal agency. That number is expected to double in the next 30 years. Worldwide, about 160 million people are similarly affected.

“With an aging population, it’s obviously going to be an increasing problem,” said Michael D. Oberdorfer, who runs the visual neuroscience program for the National Eye Institute, which finances several sight-restoration projects, including the artificial retina. Wide-ranging research is important, he said, because different methods could help different causes of blindness.

The approaches include gene therapy, which has produced improved vision in people who are blind from one rare congenital disease. Stem cell research is considered promising, although far from producing results, and other studies involve a light-responding protein and retinal transplants.

Others are implanting electrodes in monkeys’ brains to see if directly stimulating visual areas might allow even people with no eye function to see.

And recently, Sharron Kay Thornton, 60, from Smithdale, Miss., blinded by a skin condition, regained sight in one eye after doctors at the University of Miami Miller School of Medicine extracted a tooth (her eyetooth, actually), shaved it down and used it as a base for a plastic lens replacing her cornea.

It was the first time the procedure, modified osteo-odonto-keratoprosthesis, was performed in this country. The surgeon, Dr. Victor L. Perez, said it could help people with severely scarred corneas from chemical or combat injuries.

Other techniques focus on delaying blindness, including one involving a capsule implanted in the eye to release proteins that slow the decay of light-responding cells. And with BrainPort, a camera worn by a blind person captures images and transmits signals to electrodes slipped onto the tongue, causing tingling sensations that a person can learn to decipher as the location and movement of objects.

Ms. Campbell’s artificial retina works similarly, except it produces the sensation of sight, not tingling on the tongue. Developed by Dr. Mark S. Humayun, a retinal surgeon at the University of Southern California, it drew on cochlear implants for the deaf and is partly financed by a cochlear implant maker.

It is so far being used in people with retinitis pigmentosa, in which photoreceptor cells, which take in light, deteriorate.

Gerald J. Chader, chief scientific officer at the University of Southern California’s Doheny Retinal Institute, where Dr. Humayun works, said it should also work for severe cases of age-related macular degeneration, the major cause of vision loss in older people.

Go here to read the rest of the original article from The New York Times.

September 24, 2009

Stimulating Sight: Retinal Implant Could Help Restore Useful Level Of Vision To Certain Groups Of Blind People


The retinal implant receives visual data from a camera mounted on a pair of glasses. The coil sends the images to a chip attached to the side of the eyeball, which processes the data and sends it to electrodes implanted below the retina. (Credit: Courtesy of Shawn Kelly)
Inspired by the success of cochlear implants that can restore hearing to some deaf people, researchers at MIT are working on a retinal implant that could one day help blind people regain a useful level of vision.

The eye implant is designed for people who have lost their vision from retinitis pigmentosa or age-related macular degeneration, two of the leading causes of blindness. The retinal prosthesis would take over the function of lost retinal cells by electrically stimulating the nerve cells that normally carry visual input from the retina to the brain.

Such a chip would not restore normal vision but it could help blind people more easily navigate a room or walk down a sidewalk.

"Anything that could help them see a little better and let them identify objects and move around a room would be an enormous help," says Shawn Kelly, a researcher in MIT's Research Laboratory of Electronics and member of the Boston Retinal Implant Project.

The research team, which includes scientists, engineers and ophthalmologists from Massachusetts Eye and Ear Infirmary, the Boston VA Medical Center and Cornell as well as MIT, has been working on the retinal implant for 20 years. The research is funded by the VA Center for Innovative Visual Rehabilitation, the National Institutes of Health, the National Science Foundation, the Catalyst Foundation and the MOSIS microchip fabrication service.

Led by John Wyatt, MIT professor of electrical engineering, the team recently reported a new prototype that they hope to start testing in blind patients within the next three years.

Electrical stimulation

Patients who received the implant would wear a pair of glasses with a camera that sends images to a microchip attached to the eyeball. The glasses also contain a coil that wirelessly transmits power to receiving coils surrounding the eyeball.

When the microchip receives visual information, it activates electrodes that stimulate nerve cells in the areas of the retina corresponding to the features of the visual scene. The electrodes directly activate the nerve cells whose signals travel along the optic nerve to the brain, bypassing the damaged layers of the retina.
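The pipeline described above, a camera image downsampled to a coarse electrode grid with brightness mapped to stimulation strength, can be sketched as follows. Everything here is an illustrative assumption (the 10x10 grid, the current range, the function names), not the actual MIT design.

```python
# Hypothetical sketch: map a grayscale camera frame onto a coarse
# electrode array, scaling local brightness to a stimulation current.
# Grid size and current limits are invented for illustration.

def image_to_stimulation(image, grid=10, max_current_ua=100):
    """Downsample a grayscale image (rows of 0-255 values) to a
    grid x grid electrode array of stimulation currents (microamps)."""
    h, w = len(image), len(image[0])
    bh, bw = h // grid, w // grid
    currents = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            # Average the pixel block that this electrode covers
            block = [image[y][x]
                     for y in range(gy * bh, (gy + 1) * bh)
                     for x in range(gx * bw, (gx + 1) * bw)]
            mean = sum(block) / len(block)
            row.append(round(mean / 255 * max_current_ua, 1))
        currents.append(row)
    return currents

# A uniformly bright 20x20 test frame drives every electrode at full current
frame = [[255] * 20 for _ in range(20)]
stim = image_to_stimulation(frame, grid=10)
```

The coarse grid is the point: with tens of electrodes rather than millions of photoreceptors, each electrode must summarize a whole patch of the scene, which is why the resulting vision is navigational rather than detailed.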

One question that remains is what kind of vision this direct electrical stimulation actually produces. About 10 years ago, the research team started to answer that by attaching electrodes to the retinas of six blind patients for several hours.

When the electrodes were activated, patients reported seeing a small number of "clouds" or "drops of blood" in their field of vision, and the number of clouds or blood drops they reported corresponded to the number of electrodes that were stimulated. When there was no stimulus, patients accurately reported seeing nothing. Those tests confirmed that retinal stimulation can produce some kind of organized vision in blind patients, though further testing is needed to determine how useful that vision can be.

After those initial tests, with grants from the Boston Veteran's Administration Medical Center and the National Institutes of Health, the researchers started to build an implantable chip, which would allow them to do more long-term tests. Their goal is to produce a chip that can be implanted for at least 10 years.

One of the biggest challenges the researchers face is designing a surgical procedure and implant that won't damage the eye. In their initial prototypes, the electrodes were attached directly atop the retina from inside the eye, which carries more risk of damaging the delicate retina. In the latest version, described in the October issue of IEEE Transactions on Biomedical Engineering, the implant is attached to the outside of the eye, and the electrodes are implanted behind the retina.

That subretinal location, which reduces the risk of tearing the retina and requires a less invasive surgical procedure, is one of the key differences between the MIT implant and retinal prostheses being developed by other research groups.

Another feature of the new MIT prototype is that the chip is now contained in a hermetically sealed titanium case. Previous versions were encased in silicone, which would eventually allow water to seep in and damage the circuitry.

While they have not yet begun any long-term tests on humans, the researchers have tested the device in Yucatan miniature pigs, which have roughly the same size eyeballs as humans. Those tests are only meant to determine whether the implants remain functional and safe and are not designed to observe whether the pigs respond to stimuli to their optic nerves.

So far, the prototypes have been successfully implanted in pigs for up to 10 months, but further safety refinements need to be made before clinical trials in humans can begin.

Wyatt and Kelly say they hope that once human trials begin and blind patients can offer feedback on what they're seeing, they will learn much more about how to configure the algorithm implemented by the chip to produce useful vision.

Patients have told them that what they would like most is the ability to recognize faces. "If they can recognize faces of people in a room, that brings them into the social environment as opposed to sitting there waiting for someone to talk to them," says Kelly.


Journal reference:

  1. Shire, D. B.; Kelly, S. K.; Chen , J.; Doyle , P.; Gingerich, M. D.; Cogan, S. F.; Drohan, W. A.; Mendoza, O.; Theogarajan, L.; Wyatt, J. L.; Rizzo, J. F. Development and Implantation of a Minimally Invasive Wireless Subretinal Neurostimulator. IEEE Transactions on Biomedical Engineering, October 2009 DOI: 10.1109/TBME.2009.2021401
Adapted from materials provided by Massachusetts Institute of Technology. Original article written by Anne Trafton, MIT News Office.

September 14, 2009

Smart implants may alleviate neurological conditions


Brain implants have been tried as a treatment for epilepsy, but they could tackle a range of other conditions (Image: Medtronic)

SMART implants in the brains of people with neurological disorders could eventually help develop treatments for people with Parkinson's disease, depression and obsessive compulsive disorder.

Last week, a team from Medtronic of Minneapolis, Minnesota, reported on their design for a neurostimulator at the Engineering in Medicine and Biology Society meeting in Minneapolis. The devices use electrodes to deliver deep stimulation to specific parts of the brain.
Neurostimulators are already approved to treat conditions such as Parkinson's disease, essential tremor, and dystonia, as well as obsessive compulsive disorder. But existing devices deliver stimulation on a set schedule, not in response to abnormal brain activity. The Medtronic researchers think a device that reacts to brain signals could be more effective, plus the battery would last longer, an important consideration for implantable devices.

Tim Denison, a Medtronic engineer working on the device, says that the neurostimulator will initially be useful for studying brain signals as patients go about their day. Eventually, the data collected will show whether the sensors would be useful for detecting and preventing attacks.

Human trials are years away, but elsewhere NeuroPace, a start-up firm in Mountain View, California, is finishing clinical trials using its RNS smart implant device in 240 people with epilepsy, the results of which will be available in December, says Martha Morrell, chief medical officer at NeuroPace. An earlier feasibility study on 65 patients provided preliminary evidence that the devices did reduce seizures.

The NeuroPace device is implanted within the skull where it monitors electrical activity via electrodes implanted deep in the brain. If it spots the "signature" of a seizure, it will deliver brief and mild electrical stimulation to suppress it. Mark George, a neurologist at the Medical University of South Carolina in Charleston, says heart pacemakers developed in a similar way, as researchers learned to make them detect and react to signals from the heart. "I think it's absolutely inevitable that we'll develop a smarter, more intelligent way to figure out how and when to stimulate," George says.
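The closed-loop idea behind such a responsive implant, monitor a short window of the signal and stimulate only when a seizure-like signature appears, can be reduced to a toy sketch. The window length, the power threshold, and the function names below are made-up illustrations, not NeuroPace parameters or detection algorithms.

```python
# Toy closed-loop detector: compute signal power over short windows and
# flag windows whose power exceeds a threshold, the moments where a
# responsive device would deliver brief stimulation. All values invented.

def window_power(samples):
    """Mean squared amplitude of one window of recorded samples."""
    return sum(s * s for s in samples) / len(samples)

def responsive_stimulator(signal, window=4, threshold=1.0):
    """Return (start_index, stimulate?) for each non-overlapping window."""
    events = []
    for i in range(0, len(signal) - window + 1, window):
        power = window_power(signal[i:i + window])
        events.append((i, power > threshold))
    return events

# Quiet baseline followed by a burst of high-amplitude activity
eeg = [0.1, -0.1, 0.2, -0.2, 3.0, -3.0, 2.5, -2.8]
events = responsive_stimulator(eeg, window=4, threshold=1.0)
```

This is also why the article notes the battery advantage: a device that stimulates only on detected events spends far less energy than one firing on a fixed schedule.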

Posted in newscientist 12 September 2009 by Kurt Kleiner

September 11, 2009

The Next Hacking Frontier: Your Brain?

Hackers who commandeer your computer are bad enough. Now scientists worry that someday, they’ll try to take over your brain.

In the past year, researchers have developed technology that makes it possible to use thoughts to operate a computer, maneuver a wheelchair or even use Twitter — all without lifting a finger. But as neural devices become more complicated — and go wireless — some scientists say the risks of “brain hacking” should be taken seriously.
“Neural devices are innovating at an extremely rapid rate and hold tremendous promise for the future,” said computer security expert Tadayoshi Kohno of the University of Washington. “But if we don’t start paying attention to security, we’re worried that we might find ourselves in five or 10 years saying we’ve made a big mistake.”
Hackers tap into personal computers all the time — but what would happen if they focused their nefarious energy on neural devices, such as the deep-brain stimulators currently used to treat Parkinson’s and depression, or electrode systems for controlling prosthetic limbs? According to Kohno and his colleagues, who published their concerns July 1 in Neurosurgical Focus, most current devices carry few security risks. But as neural engineering becomes more complex and more widespread, the potential for security breaches will mushroom.
For example, the next generation of implantable devices to control prosthetic limbs will likely include wireless controls that allow physicians to remotely adjust settings on the machine. If neural engineers don’t build in security features such as encryption and access control, an attacker could hijack the device and take over the robotic limb.
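The "encryption and access control" Kohno calls for can be illustrated with a minimal sketch: the implant accepts a wireless command only if it carries a valid message authentication code computed over a shared key. The key, the command format, and the function names are all invented for illustration; real medical-device security would also need key provisioning, replay protection, and more.

```python
# Minimal sketch of command authentication for a wireless implant,
# using an HMAC over a clinician-provisioned shared key. The key and
# command format here are hypothetical.
import hmac
import hashlib

SHARED_KEY = b"clinician-provisioned-secret"  # installed at implant time

def sign_command(command: bytes, key: bytes = SHARED_KEY) -> bytes:
    """The clinician's programmer attaches this tag to each command."""
    return hmac.new(key, command, hashlib.sha256).digest()

def accept_command(command: bytes, tag: bytes, key: bytes = SHARED_KEY) -> bool:
    """The implant recomputes the tag and compares in constant time."""
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

cmd = b"set_stimulation amplitude=2mA"
tag = sign_command(cmd)
ok = accept_command(cmd, tag)                 # legitimate command
forged = accept_command(cmd, b"\x00" * 32)    # attacker without the key
```

Without a check like this, anyone within radio range who reverse-engineers the protocol could issue the same `set_stimulation` command as the physician, which is exactly the hijacking scenario the paper warns about.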
“It’s very hard to design complex systems that don’t have bugs,” Kohno said. “As these medical devices start to become more and more complicated, it gets easier and easier for people to overlook a bug that could become a very serious risk. It might border on science fiction today, but so did going to the moon 50 years ago.”

Some might question why anyone would want to hack into someone else’s brain, but the researchers say there’s a precedent for using computers to cause neurological harm. In November 2007 and March 2008, malicious programmers vandalized epilepsy support websites by putting up flashing animations, which caused seizures in some photo-sensitive patients.
“It happened on two separate occasions,” said computer science graduate student Tamara Denning, a co-author on the paper. “It’s evidence that people will be malicious and try to compromise peoples’ health using computers, especially if neural devices become more widespread.”
In some cases, patients might even want to hack into their own neural device. Unlike devices to control prosthetic limbs, which still use wires, many deep brain stimulators already rely on wireless signals. Hacking into these devices could enable patients to “self-prescribe” elevated moods or pain relief by increasing the activity of the brain’s reward centers.
Despite the risks, Kohno said, most new devices aren’t created with security in mind. Neural engineers carefully consider the safety and reliability of new equipment, and neuroethicists focus on whether a new device fits ethical guidelines. But until now, few groups have considered how neural devices might be hijacked to perform unintended actions. This is the first time an academic paper has addressed the topic of “neurosecurity,” a term the group coined to describe their field.

“The security and privacy issues somehow seem to slip by,” Kohno said. “I would not be surprised if most people working in this space have never thought about security.”
Kevin Otto, a bioengineer who studies brain-machine interfaces at Purdue University, said he was initially skeptical of the research. “When I first picked up the paper, I don’t know if I agreed that it was an issue. But the paper gives a very compelling argument that this is important, and that this is the time to have neural engineers collaborate with security developers.”
It’s never too early to start thinking about security issues, said neural engineer Justin Williams of the University of Wisconsin, who was not involved in the research. But he stressed that the kinds of devices available today are not susceptible to attack, and that fear of future risks shouldn’t impede progress in the field. “These kinds of security issues have to proceed in lockstep with the technology,” Williams said.
History provides plenty of examples of why it’s important to think about security before it becomes a problem, Kohno said. Perhaps the best example is the internet, which was originally conceived as a research project and didn’t take security into account.
“Because the internet was not originally designed with security in mind,” the researchers wrote, “it is incredibly challenging — if not impossible — to retrofit the existing internet infrastructure to meet all of today’s security goals.” Kohno and his colleagues hope to avoid such problems in the neural device world, by getting the community to discuss potential security problems before they become a reality.
“The first thing is to ask ourselves is, ‘Could there be a security and privacy problem?’” Kohno said. “Asking ‘Is there a problem?’ gets you 90 percent there, and that’s the most important thing.”

Originally posted in Wired magazine

September 4, 2009

Go to hospital to see computing's future


Innovation is our regular column that highlights emerging technological ideas and where they may lead.
If you want to know how people will interact with machines in the future, head for a hospital.
That's the impression I got from a new report about the future of human-computer interaction from IT analysts Gartner, based in Stamford, Connecticut.
Gartner's now-classic chart, shown right, shows the rollercoaster of expectations ridden by new technologies: rocketing from obscurity to a peak of overblown hype, then falling into a "trough of disillusionment" before finally becoming mainstream as a tech's true worth is found.

Enlightened climb

Speech recognition, currently climbing the slope of enlightenment towards the plateau of productivity, is a good example of how healthcare helps new technology.
Some homeworkers are now hooked, and the technology is appearing in cellphones and voicemail systems. But its maturity owes as much to the rehabilitation industry as the software industry.
Today's true power users of voice recognition are people who are physically unable to use keyboard or mouse. For them, it is as much a medical device as an office aide. They have not only supported public and private research over the years, but also provided a market for the technology when it was far from perfect.

Guided by eyes

Eye tracking, climbing the hype peak as you read this, is also an everyday reality for many people for whom conventional interfaces are difficult.
Without that spur to innovation, it is unlikely that more mainstream uses for eye tracking, from computer games that spring baddies on you when you least expect it to billboards that track passers-by, would be so advanced.
Slumped at the bottom of the trough of disillusionment, virtual reality seems too familiar an idea to be labelled "emerging". But it, too, is relatively well established in the clinic, where the high installation costs can be justified.
Psychologists have long used it to recreate scary scenarios while treating phobias. More recently it has shown promise for phantom limb pain and schizophrenia diagnosis. Many US soldiers returning from Iraq and Afghanistan are being treated using virtual experiences.
Gartner forecasts 10 more years before virtual reality reaches the mainstream – a prediction some readers may remember from the 1980s – but it is likely to become mainstream for psychology much earlier than that.

Mind control

Haptics is another technology with consumer potential that's already being used in clinical contexts: for remote surgery and training, and for interpreting complex scan output.
And the computer interface technology that's likely to be the most significant of all can also be experienced properly only in a hospital so far. It's not hard to imagine who looks forward most eagerly to the latest developments in mind control of computers.

A handful of people already know what exerting such control can offer. Without lifting a finger they are able to send email, play video games, control wheelchairs or prosthetic arms, update Twitter and even have their thoughts read aloud.
Similarly, victims of accidents or injury provide the first hints of the kind of "upgrades" the otherwise healthy may in future choose to make to their bodies.

Seal of approval

Hospitals may not only be providing a preview of future interfaces, though – they may also be ensuring that they hit the big time with fewer design glitches.
Despite some conspicuous success in the smartphone arena, touch interface technology could still do with some improvement, and it's often less use than older but better-understood interfaces.
The technological nursery of the healthcare market could prevent many ergonomic and design wrinkles from making it to mass deployment in future.
Not only will the mainstream gadget industry have some tried-and-tested examples to draw on, but designs will have benefited from the safety and usability requirements demanded of medical devices by regulators like the US Food and Drug Administration.

Original article posted in New Scientist on 31 August 2009 by Tom Simonite

August 29, 2009

Brain develops motor memory for prosthetics, study finds

Originally published July 21, 2009.


Signals from the brain's motor cortex were translated by a "decoder" into deliberate movements of a computer cursor, a kind of brain cybernetics. The task involved moving the cursor from a central starting point to a nearby target. UC Berkeley researchers have learned that the brain is capable of developing a motor memory of the task, much like it masters other physical skills such as riding a bike or playing tennis, but with the added distinction that the control is of a device separate from one's own body. Credit: Illustration by John Blanchard

"Practice makes perfect" is the maxim drummed into students struggling to learn a new motor skill - be it riding a bike or developing a killer backhand in tennis. Stunning new research now reveals that the brain can also achieve this motor memory with a prosthetic device (brain cybernetics), providing hope that physically disabled people can one day master control of artificial limbs with greater ease.

In this study, to be published July 21 in an open-access journal, macaque monkeys using brain signals learned how to move a computer cursor to various targets. What the researchers learned was that the brain could develop a mental map of a solution to achieve the task with high proficiency, and that it adhered to that neural pattern without deviation, much like a driver sticks to a given route commuting to work.

The study, conducted by scientists at the University of California, Berkeley, addresses a fundamental question about whether the brain can establish a stable, neural map of a motor task to make control of an artificial limb more intuitive.

"When your own body performs repeatedly, the movements become almost automatic," said study principal investigator Jose Carmena, a UC Berkeley assistant professor with joint appointments in the Department of Electrical Engineering and Computer Sciences, the Helen Wills Neuroscience Institute, and the Program in Cognitive Science. "The profound part of our study is that this is all happening with something that is not part of one's own body. We have demonstrated that the brain is able to form a motor memory to control a disembodied device in a way that mirrors how it controls its own body. That has never been shown before."

Researchers in the field of brain-machine interfaces, including Carmena, have made significant strides in recent years in the effort to improve the lives of people with physical disabilities. An April 2009 survey by the Christopher and Dana Reeve Foundation found that nearly 1.3 million people in the United States suffer from some form of paralysis caused by spinal cord injury. When other causes of restricted movement are considered, such as stroke, multiple sclerosis and cerebral palsy, the number of Americans affected jumps to 5.6 million, the survey found.

Already, researchers have demonstrated that rodents, non-human primates and humans are able to control robotic devices or computer cursors in real time using only their brain signals. But what had not been clear before was whether such a skill had been consolidated as a motor memory. The new study suggests that the brain is capable of creating a stable, mental representation of a disembodied device so that it can be controlled with little effort.

To demonstrate this, Carmena and Karunesh Ganguly, a post-doctoral fellow in Carmena's laboratory, used a mathematical model, or "decoder," that remained static during the length of the study, and they paired it with a stable group of neurons in the brain. The decoder, analogous to a simplified spinal cord, translated the signals from the brain's motor cortex into movement of the cursor.
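A decoder of this kind is, at its simplest, a fixed linear map from recorded firing rates to cursor velocity. The sketch below is a toy version of that idea; the 2x3 weight matrix and the rates are arbitrary illustrations, not the study's actual decoder, which was fit to recorded cortical data.

```python
# Toy fixed linear decoder: each cursor velocity component is a constant
# weighted sum of neural firing rates. Weights here are invented; in the
# study the decoder stayed static while the brain learned to drive it.

def decode_velocity(firing_rates, weights):
    """Return one output per weight row: the dot product of that row
    with the current vector of firing rates."""
    return [sum(w * r for w, r in zip(row, firing_rates)) for row in weights]

# Three recorded neurons driving a 2-D cursor
W = [[0.5, -0.2, 0.1],   # contribution of each neuron to vx
     [0.0, 0.3, -0.4]]   # contribution of each neuron to vy

rates = [10.0, 5.0, 2.0]  # spikes/s in the current time bin
vx, vy = decode_velocity(rates, W)
```

Keeping the weights frozen is the experiment's key design choice: because the mapping never changes, any improvement in cursor control must come from the brain reorganizing its own activity, which is what lets the researchers interpret the stabilized neural pattern as a motor memory.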

It took about four to five days of practice for the monkeys to master precise control of the cursor. Once they did, they completed the task easily and quickly for the next two weeks.

As the tasks were being completed, the researchers were able to monitor the changes in activity of individual neurons involved in controlling the cursor. They could tell which cells were firing when the cursor moved in specific directions. The researchers noticed that when the animals became proficient at the task, the neural patterns involved in the "solution" stabilized.

"The solution adopted is what the brain returned to repeatedly," said Carmena.

That stability is one of three major features scientists associate with motor memory, and it is all too familiar to music teachers and athletic coaches who try to help their students "unlearn" improper form or techniques, as once a motor memory has been consolidated, it can be difficult to change.

Other characteristics of motor memory include the ability for it to be rapidly recalled upon demand and its resistance to interference when new skills are learned. All three elements were demonstrated in the UC Berkeley study.

In the weeks after they achieved proficiency, the primates exhibited rapid recall by immediately completing their learned task on the first try. "They did it from the get-go; there was no need to retrain them," said Carmena.

Real-life examples of resistance to interference, the third feature of motor memory, include people who return to an automatic transmission car after learning how to drive stick-shift. In the study, the researchers presented a new decoder - marked by a different colored cursor - two weeks after the monkeys showed mastery of the first decoder.

As the monkeys were mastering the new decoder, the researchers would suddenly switch back to the original decoder and saw that the monkeys could immediately perform the task without missing a beat. The monkeys could easily switch back and forth between the two decoders, showing a level of neural plasticity never before associated with the control of a prosthetic device, the researchers said.

"This is a study that says that maybe one day, we can really think of the ultimate neuroprosthetic device that humans can use to perform many different tasks in a more natural way," said Carmena.

Yet, the researchers acknowledged that prosthetic devices will not match what millions of years of evolution have accomplished to enable animal brains to control body movement. The complexity of wiring one's brain to properly control the body is made clear whenever one watches an infant's haphazard attempts to find its own hands and feet.

"Nevertheless, beyond its clinical applications, which are very clear, this line of research sheds light on how the brain assembles and organizes neurons, and how it forms a motor memory to control the prosthetic device," Carmena said. "These are important, fundamental questions about how the brain learns in general."

Source: University of California - Berkeley (news : web)
found at physorg.com

June 7, 2009

Electronic Memory Chips That Can Bend And Twist


A potential solution for a brain chip?

A flexible memory switch that operates on less than 10 volts, maintains its memory when power is lost, and still functions after being flexed more than 4,000 times has been developed by National Institute of Standards and Technology (NIST) researchers. The switch can be built out of inexpensive, readily available materials, and it...

Read the original article here.