
June 18, 2012

New energy source for future medical implants: brain glucose


The Matrix was right: humans will act as batteries

Brain power: harvesting power from the cerebrospinal fluid within the subarachnoid space. Inset: a micrograph of a prototype, showing the metal layers of the anode (central electrode) and cathode contact (outer ring) patterned on a silicon wafer. (Credit: Karolinska Institutet/Stanford University)

MIT engineers have developed a fuel cell that runs on glucose for powering highly efficient brain implants of the future that can help paralyzed patients move their arms and legs again — batteries included.

The fuel cell strips electrons from glucose molecules to create a small electric current.

The researchers, led by Rahul Sarpeshkar, an associate professor of electrical engineering and computer science at MIT, fabricated the fuel cell on a silicon chip, allowing it to be integrated with other circuits that would be needed for a brain implant.

In the 1970s, scientists showed they could power a pacemaker with a glucose fuel cell, but the idea was abandoned in favor of lithium-ion batteries, which could provide significantly more power per unit area than glucose fuel cells.

These glucose fuel cells also used enzymes that proved to be impractical for long-term implantation in the body, since they eventually ceased to function efficiently.

How to generate hundreds of microwatts from sugar

A silicon wafer with glucose fuel cells of varying sizes; the largest is 64 by 64 mm. (credit: Sarpeshkar Lab)

The new fuel cell is fabricated from silicon, using the same technology used to make semiconductor electronic chips, with no biological components.

A platinum catalyst strips electrons from glucose, mimicking the activity of cellular enzymes that break down glucose to generate ATP, the cell’s energy currency. (Platinum has a proven record of long-term biocompatibility within the body.)

So far, the fuel cell can generate up to hundreds of microwatts — enough to power an ultra-low-power and clinically useful neural implant.
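For a rough sense of scale, here is a minimal back-of-the-envelope sketch in Python. The power density, electrode area, and implant power budget are illustrative assumptions, not figures from the MIT device; the point is only how "hundreds of microwatts" compares with the draw of an ultra-low-power implant.

# Back-of-the-envelope power estimate. All three numbers below are
# illustrative assumptions, not measurements from the MIT fuel cell.

power_density_uW_per_cm2 = 100.0   # assumed output per unit electrode area
electrode_area_cm2 = 2.0           # assumed total anode area of an implanted cell
implant_budget_uW = 10.0           # assumed draw of an ultra-low-power neural implant

available_uW = power_density_uW_per_cm2 * electrode_area_cm2
print(f"Available power: {available_uW:.0f} microwatts")
print(f"Margin over implant budget: {available_uW / implant_budget_uW:.1f}x")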

Benjamin Rapoport, a former graduate student in the Sarpeshkar lab and the first author on the new MIT study, calculated that in theory, the glucose fuel cell could get all the sugar it needs from the cerebrospinal fluid (CSF) that bathes the brain and protects it from banging into the skull.

There are very few cells in the CSF, so it’s highly unlikely that an implant located there would provoke an immune response, the researchers say.

Structure of the glucose fuel cell and the oxygen and glucose concentration gradients crucially associated with its cathode and anode half-cell reactions (credit: Benjamin I. Rapoport, Jakub T. Kedzierski, Rahul Sarpeshkar/PLoS One)

There is also significant glucose in the CSF, which does not generally get used by the body. Since only a small fraction of the available power is utilized by the glucose fuel cell, the impact on the brain’s function would likely be small.

Implantable medical devices

“It will be a few more years into the future before you see people with spinal-cord injuries receive such implantable systems in the context of standard medical care, but those are the sorts of devices you could envision powering from a glucose-based fuel cell,” says Rapoport.

Karim Oweiss, an associate professor of electrical engineering, computer science and neuroscience at Michigan State University, says the work is a good step toward developing implantable medical devices that don’t require external power sources.

“It’s a proof of concept that they can generate enough power to meet the requirements,” says Oweiss, adding that the next step will be to demonstrate that it can work in a living animal.

A team of researchers at Brown University, Massachusetts General Hospital and other institutions recently demonstrated that paralyzed patients could use a brain-machine interface to move a robotic arm; those implants have to be plugged into a wall outlet.

Ultra-low-power bioelectronics

Sarpeshkar’s group is a leader in the field of ultra-low-power electronics, having pioneered such designs for cochlear implants and brain implants. “The glucose fuel cell, when combined with such ultra-low-power electronics, can enable brain implants or other implants to be completely self-powered,” says Sarpeshkar, author of the book Ultra Low Power Bioelectronics.

The book discusses how the combination of ultra-low-power and energy-harvesting design can enable self-powered devices for medical, bio-inspired and portable applications.

Sarpeshkar’s group has worked on all aspects of implantable brain-machine interfaces and neural prosthetics, including recording from nerves, stimulating nerves, decoding nerve signals and communicating wirelessly with implants.

One such neural prosthetic is designed to record electrical activity from hundreds of neurons in the brain’s motor cortex, which is responsible for controlling movement. That data is amplified and converted into a digital signal so that computers — or in the Sarpeshkar team’s work, brain-implanted microchips — can analyze it and determine which patterns of brain activity produce movement.
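To make that recording chain concrete, here is a minimal sketch in Python (NumPy) of the step described above: digitized voltage traces from many electrodes are reduced to spike counts per time bin, the kind of feature matrix a movement decoder works from. The channel count, sampling rate, threshold rule, and bin size are illustrative assumptions, and the data are random placeholders.

import numpy as np

# Sketch of the recording-to-features step: digitized traces -> threshold
# crossings -> binned spike counts. Numbers are illustrative assumptions.
rng = np.random.default_rng(0)
fs = 30_000                      # assumed sampling rate, Hz
n_channels, n_samples = 96, fs   # one second of data from 96 electrodes
traces = rng.normal(0.0, 1.0, size=(n_channels, n_samples))  # stand-in for amplified, digitized signals

threshold = -3.5 * traces.std(axis=1, keepdims=True)  # per-channel spike threshold
spikes = traces < threshold                           # threshold crossings as putative spikes

bin_size = fs // 10                                   # 100 ms bins
rates = spikes.reshape(n_channels, -1, bin_size).sum(axis=2)  # spike counts per bin
print(rates.shape)   # (channels, bins) feature matrix for a movement decoder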

The fabrication of the glucose fuel cell was done in collaboration with Jakub Kedzierski at MIT’s Lincoln Laboratory. “This collaboration with Lincoln Lab helped make a long-term goal of mine — to create glucose-powered bioelectronics — a reality,” Sarpeshkar says.

Although he has begun working on bringing ultra-low-power and medical technology to market, he cautions that glucose-powered implantable medical devices are still many years away.

Ref.: Benjamin I. Rapoport, Jakub T. Kedzierski, Rahul Sarpeshkar, A Glucose Fuel Cell for Implantable Brain-Machine Interfaces, PLoS ONE, 2012, DOI: 10.1371/journal.pone.0038436 (open access)

August 14, 2011

“I Would Hope That Saner Minds Would Prevail” Deus Ex: Human Revolution Lead Writer Mary DeMarle on the Ethics of Transhumanism

Among gamers, Deus Ex is something of a legendary fusion of disparate gaming styles. Among science fiction buffs, Deus Ex is lauded for managing to take two awesome genres, William Gibson-esque cyberpunk and Robert Anton Wilson-level conspiracy theories, and jam them together into an immanentizing of the eschaton unlike anything you’ve seen since Doktor Sleepless. And among transhumanists, Deus Ex brought up every issue of humanity’s fusion with technology one could imagine. It is a rich video game.
So when Square Enix decided to pick up the reins from Eidos and create a new installment in the series, Deus Ex: Human Revolution (DX:HR), I was quite excited. The first indication that DX:HR was not going to be a crummy exploitation of the original's success (see: Deus Ex 2: Invisible War) was the teaser trailer, shown above. Normally, a teaser trailer is just music and a slow build to a logo or single image that lets you know the game is coming out. Instead, the development team decided to demonstrate that it was taking the philosophy of the game seriously.
What philosophy? you might ask. Why transhumanism, of course. Nick Bostrom, chair of the Future of Humanity Institute at Oxford, centers the birth of transhumanism in the Renaissance and the Age of the Enlightenment in his article “A History of Transhumanist Thought” [pdf]. The visuals of the teaser harken to Renaissance imagery (such as the Da Vinci style drawings) and the teaser ends with a Nietzschean quote “Who we are is but a stepping stone to what we can become.” Later trailers would reference Icarus and Daedalus (who also happened to be the names of AI constructs in the original game), addressing the all-too-common fear that by pursuing technology, we are pursuing our own destruction. This narrative thread has become the central point of conflict in DX:HR. Even its viral ad campaign has been told through two lenses: that of Sarif Industries, maker of prosthetic bodies that change lives, and that of Purity First, a protest group that opposes human augmentation. The question is: upon which part of our shared humanity do we step as we climb to greater heights?

Read the rest of the original article here

July 28, 2011

The Walk Again Project

Over the past decade, neuroscientists at the Duke University Center for Neuroengineering (DUCN) have developed the field of brain-machine interface (BMI) into one of the most exciting—and promising—areas of basic and applied research in modern neuroscience. By creating a way to link living brain tissue to a variety of artificial tools, BMIs have made it possible for non-human primates to use the electrical activity produced by hundreds of neurons, located in multiple regions of their brains, to directly control the movements of a variety of robotic devices, including prosthetic arms and legs.

As a result, BMI research raises the hope that in the not-too-distant future, patients suffering from a variety of neurological disorders that lead to devastating levels of paralysis may be able to recover their mobility by harnessing their own brain impulses to directly control sophisticated neuroprostheses.
The Walk Again Project, an international consortium of leading research centers around the world, represents a new paradigm for scientific collaboration among the world's academic institutions, bringing together a global network of scientific and technological experts, distributed among all the continents, to achieve a key humanitarian goal.

The project’s central goal is to develop and implement the first BMI capable of restoring full mobility to patients suffering from a severe degree of paralysis. This lofty goal will be achieved by building a neuroprosthetic device that uses a BMI as its core, allowing the patients to capture and use their own voluntary brain activity to control the movements of a full-body prosthetic device. This “wearable robot,” also known as an “exoskeleton,” will be designed to sustain and carry the patient’s body according to his or her mental will.

In addition to proposing to develop new technologies that aim at improving the quality of life of millions of people worldwide, the Walk Again Project also innovates by creating a completely new paradigm for global scientific collaboration among leading academic institutions worldwide. According to this model, a worldwide network of leading scientific and technological experts, distributed among all the continents, comes together to participate in a major, non-profit effort to make a fellow human being walk again, based on their collective expertise. These world-renowned scholars will contribute key intellectual assets as well as provide a base for continued fundraising and capitalization of the project, setting clear goals to establish fundamental advances toward restoring full mobility for patients in need.

Walk Again Project homepage

July 27, 2011

Comprehensive list of BCI Labs Worldwide

You can find a comprehensive listing of companies and labs doing research on Brain-Computer Interfaces at Now Possible. Researchers, executives, and other professionals may also want to join their BCI group on LinkedIn.

You can find it at Now Possible

July 25, 2011

Scientists differentiate brain activity associated with grasping

Quickly grabbing a cup of coffee is an everyday action for most of us. For people with severe paralysis however, this task is unfeasible - yet not "unthinkable". Because of this, interfaces between the brain and a computer can in principle detect these "thoughts" and transform them into steering commands. Scientists from Freiburg now have found a way to distinguish between different types of grasping on the basis of the accompanying brain activity.

In the current issue of the journal "NeuroImage", Tobias Pistohl and colleagues from the Bernstein Center Freiburg and the University Medical Centre describe how they succeeded in differentiating the brain activity associated with a precise grip and a grip of the whole hand. Ultimately, the scientists aim to develop a neuroprosthesis: a device that receives commands directly from the brain, and which can be used by paralysed people to control the arm of a robot - or even their own limbs.

One big problem concerning arm movements had so far remained unresolved. In our daily lives, it is important to handle different objects in different ways, for example a feather and a brick. The researchers from Freiburg have now identified aspects of the brain's activity that distinguish a precise grip from a grip with the whole hand.

To this end, Pistohl and his collaborators made use of signals that are measured on the surface of the brain. The big advantage of this approach is that no electrodes have to be implanted directly into this delicate organ. At the same time, the obtained signals are much more precise than those that can be measured on the skull's surface.

The scientists conducted a simple experiment with patients who were not paralysed, but who had electrodes implanted under their skull for medical reasons. The task was to grab a cup, either with a precise grip formed by the thumb and the index finger, or with the whole hand. At the same time, a computer recorded the electrical changes at the electrodes. The scientists were indeed able to find signals in the brain's activity that differed depending on the type of grasp, and a computer could attribute these signals to the different hand positions with great reliability. Now, the next challenge will be to identify these kinds of signals in paralysed patients as well - with the aim of eventually putting a more independent life back within their reach.
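As an illustration of that final classification step, the sketch below (Python with scikit-learn) cross-validates a linear discriminant classifier on per-trial feature vectors labeled by grasp type. The features here are random placeholders and the classifier choice is an assumption; the Freiburg group's actual features and decoding method may differ.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Illustrative sketch: classify precision vs. whole-hand grasps from
# per-trial features of the brain-surface recordings. Placeholder data.
rng = np.random.default_rng(0)
n_trials, n_features = 80, 20            # e.g. band power on 20 electrodes
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)    # 0 = precision grip, 1 = whole-hand grip

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)  # chance level ~0.5 on random data
print(f"Cross-validated accuracy: {scores.mean():.2f}")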

Source: Bernstein Center Freiburg

July 18, 2011

Soft memory device opens door to new biocompatible electronics

A memory device with the physical properties of Jell-O that functions well in wet environments (credit: Michael Dickey, North Carolina State University)


North Carolina State University researchers have developed a soft memory device design that functions well in wet environments and has memristor-like characteristics, opening the door to new types of smart biocompatible electronic devices.
A memristor (“memory resistor”) is an electronic device that changes its resistive state depending on the current or voltage history through the device.
The ability to function in wet environments and the biocompatibility of the gels mean that this technology holds promise for interfacing electronics with biological systems, such as cells, enzymes, or tissue, and for medical monitoring.
(Credit: Michael Dickey, North Carolina State University)
The device is made using a liquid alloy of gallium and indium metals set into water-based gels. When the alloy electrode is exposed to a positive charge, it creates an oxidized skin that makes it resistive to electricity (a "0" state).
When the electrode is exposed to a negative charge, the oxidized skin disappears, and it becomes conductive to electricity (a "1" state).
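That write/read behaviour can be captured in a toy state model, sketched below in Python. The resistance values are arbitrary placeholders rather than measurements from the NC State device.

# Toy model of the behaviour described above: a positive voltage oxidizes the
# liquid-metal electrode (high resistance, "0"); a negative voltage strips the
# oxide (low resistance, "1"). Resistance values are illustrative only.

R_OXIDIZED, R_CONDUCTIVE = 1e6, 1e2   # ohms, placeholder values

class GelMemristorModel:
    def __init__(self):
        self.oxidized = True          # start in the resistive "0" state

    def write(self, voltage):
        if voltage > 0:
            self.oxidized = True      # grow oxide skin -> "0"
        elif voltage < 0:
            self.oxidized = False     # remove oxide skin -> "1"

    def read(self):
        return 0 if self.oxidized else 1

    @property
    def resistance(self):
        return R_OXIDIZED if self.oxidized else R_CONDUCTIVE

cell = GelMemristorModel()
cell.write(-1.0)
print(cell.read(), cell.resistance)   # 1 100.0
cell.write(+1.0)
print(cell.read(), cell.resistance)   # 0 1000000.0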
Ref.: Orlin D. Velev, et al., Towards All-Soft Matter Circuits: Prototypes of Quasi-Liquid Devices with Memristor Characteristics, Advanced Materials, 2011; [DOI: 10.1002/adma.201101257]

Original article by the editor of KurzweilAI.net

July 12, 2011

Grow a new eye

“I am attempting to recreate my eye with the help of a miniature camera implant in my prosthetic artificial eye. The intraocular installation of an eye-cam will substitute for the field of vision of my left eye that I lost in 2005 from a car accident.”
So says Tanya Marie Vlach, who lost her left eye in a car accident. After receiving "hundreds of international engineering proposals, support from my one-eyed community, and thousands of media inquiries," she writes, "I've been plotting new strategies to tell my story, both my personal one and the one of my sci-fi alter ego, into a transmedia platform, which will include: a graphic novel, an experimental documentary, a web series, a game, and a live performance."
And she wants to build a "bionic camera eye" as a Kickstarter project, described here. (Also see Grow a new eye by Tanya Vlach.)
Specifications:
  • SD at least, 720p HD at best
  • MPEG-4 / H.264 Recording
  • Built in Wireless Transmitter
  • Bluetooth Wireless Method
  • Remote Trigger
  • Mini A/V out
  • Firewire / USB / Mini HDMI
  • Optical 3X
  • Inductors: (Power Source)
Wish List:
  • Wireless
  • Sensors that respond to blinking, enabling the camera to take still photos, zoom, focus, and turn on and off.
  • Dilating pupil with change of light.
  • Infrared / Ultraviolet
  • Geo-tagging
  • Facial Recognition
  • Water Tight
  • Verisimilitude
Sounds like a great project. Thanks to Ehren Wells for the tip!


Grow a new eye from Tanya Vlach on Vimeo.


Original article by Amara D. Angelica, July 11, 2011
 

May 22, 2011

Bionic hand for 'elective amputation' patient

An Austrian man has voluntarily had his hand amputated so he can be fitted with a bionic limb.

The patient, called "Milo", aged 26, lost the use of his right hand in a motorcycle accident a decade ago.
After his stump heals in several weeks' time, he will be fitted with a bionic hand which will be controlled by nerve signals in his own arm.

The surgery is the second such elective amputation to be performed by Viennese surgeon Professor Oskar Aszmann.

The patient, a Serbian national who has lived in Austria since childhood, suffered injuries to a leg and shoulder when he skidded off his motorcycle and smashed into a lamppost in 2001 while on holiday in Serbia.
Milo used a hybrid hand before deciding on the operation
While the leg healed, what is called a "brachial plexus" injury to his right shoulder left his right arm paralysed. Nerve tissue transplanted from his leg by Professor Aszmann restored movement to his arm but not to his hand.

A further operation involving the transplantation of muscle and nerve tissue into his forearm also failed to restore movement to the hand, but it did at least boost the electric signals being delivered from his brain to his forearm, signals that could be used to drive a bionic hand.

Then three years ago, Milo was asked whether he wanted to consider elective amputation.
"The operation will change my life. I live 10 years with this hand and it cannot be (made) better. The only way is to cut this down and I get a new arm," Milo told BBC News prior to his surgery at Vienna's General Hospital.

Read the rest of the original article here at BBC News

March 1, 2011

Punk rock skeleton demos mind control system

Who says punk is dead? In the video above, a skeleton with a mohawk is helping to visualise how a new neural implant device reads brain signals and interprets them to control a prosthetic arm. The yellow spikes radiating from the skeleton's head represent the firing of motor neurons in the brain. Each neuron is tuned to recognise a different direction in space, so as the arm moves, the spikes change to reflect the changing direction. By adding together the output of all the neurons, the direction of the arm's movement - represented by the blue arrow - can be predicted.
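What the skeleton is visualising is essentially population-vector decoding. The sketch below (Python/NumPy) shows the idea under simple assumptions: each unit is cosine-tuned to a preferred direction, and summing preferred directions weighted by firing rate recovers the movement direction (the blue arrow). The tuning model and numbers are illustrative and are not Moran's actual decoder.

import numpy as np

# Population-vector sketch: weighted sum of preferred directions estimates
# the arm's movement direction. All values are illustrative.
rng = np.random.default_rng(1)
n_units = 64
preferred = rng.normal(size=(n_units, 2))
preferred /= np.linalg.norm(preferred, axis=1, keepdims=True)   # unit preferred directions

true_direction = np.array([np.cos(0.6), np.sin(0.6)])           # actual arm movement
rates = np.clip(preferred @ true_direction, 0, None)            # simplified cosine tuning
rates += rng.normal(0, 0.05, size=n_units)                      # recording noise

population_vector = (rates[:, None] * preferred).sum(axis=0)    # weighted sum of preferred directions
estimate = population_vector / np.linalg.norm(population_vector)
print(np.degrees(np.arctan2(estimate[1], estimate[0])), "vs", np.degrees(0.6))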
Mind control devices are all the rage these days, with systems designed to control everything from iPad apps, to prosthetic limbs, to cars. This system, developed by Daniel Moran of Washington University in St. Louis, uses a grid of disc-shaped electrodes, inserted between the brain and the skull, to read electrical activity in the brain. It's more precise than electrodes placed outside the skull, and less invasive than probes inserted into the brain itself.
With further refinements, the system could give amputees better control over prosthetic limbs without overly invasive surgical implants.

Original article from New Scientist magazine

January 26, 2011

Plastic artificial retina is a hit with nerve cells

Light-sensitive plastic might be key to repairing damaged retinas. Creating neuro-prosthetic devices such as retinal implants is tricky because biological tissue doesn't mix well with electronics. Metals and inorganic semiconductor materials can adversely affect the health or function of nerve cells, says Fabio Benfenati at the Italian Institute of Technology in Milan. And over time the body's natural defences can be incredibly hostile and corrosive to such materials.
The emergence of flexible, organic semiconductor materials now offers an alternative. To test them, Benfenati and colleagues seeded nerve cells onto the surface of a light-sensitive semiconducting polymer similar to those used in some solar cells. The cells grew into extensive networks containing thousands of neurons. "We have proved that the materials are highly biocompatible," says Benfenati.
What's more, the presence of the cells did not interfere with the optical properties of the polymer. The team were able to use the neuron-coated polymer as an electrode in a light-driven electrolytic cell.

Artificial colour vision

When short pulses of light were aimed at specific sections of the polymer, only local neurons fired, suggesting the material has the spatial selectivity needed for artificial retinas, says Benfenati.
"It's very elegant science," says Robert Greenberg, whose company Second Sight is close to receiving clinical approval for its retinal prosthesis. But Greenberg questions whether the electrical currents generated would be sufficient to stimulate nerve cells in the eye.
It's still too early to tell, says Benfenati. But he thinks the new material is worth further study, because of another benefit. It can be tuned to respond only to specific wavelengths of light, raising the prospect of creating artificial colour vision, he says.

December 13, 2009

Stanford researchers develop the next generation of retinal implants

A team of Stanford researchers has developed a new generation of retinal implants that aims to provide higher resolution and make artificial vision more natural.

This could be a boon to the several million people in the United States who are blind or visually impaired as a result of retinal degeneration. Every year, 50,000 people in the United States become blind, according to the National Federation of the Blind. But only a couple of dozen Americans have retinal implants.

The team, consisting of ophthalmology Associate Professor Daniel Palanker, electrical engineering Assistant Professor Peter Peumans and neurobiology Assistant Professor Stephen Baccus of Stanford, and biophysics Assistant Professor Alexander Sher of the University of California-Santa Cruz, presented their research Dec. 9 at the International Electron Devices Meeting in Baltimore.

Retinal implants are arrays of electrodes, placed at the back of the eye, which partially restore vision to people with diseases that cause their light-sensing photoreceptors to die. Typically, a camera embedded in glasses collects images and sends them to a computer that converts the images to electrical signals, which are then transmitted to the implant and interpreted by the brain. There are several private companies and universities working on different versions, but most people with implants can only make out fuzzy borders between light and dark areas.

Analogous to high-definition TV

The Stanford implant would allow patients to make out the shape of objects and see meaningful images. "A good analogy is high-def TV," Baccus said. "If you only have a few pixels of stimulation, you're not going to see much. One clear advantage of our implant is high resolution." The Stanford implant has approximately 1,000 electrodes, compared to 60 electrodes commonly found in fully implantable systems.

What's more, patients would not have to move their heads to see, as they do with older implants. Although we don't notice it, images fade when we do not move our eyes, and we make several tiny eye movements each second to prevent fading. With older retinal implants, the camera moves when the head moves, but not when the eyes move.

The Stanford implant, on the other hand, retains the natural link between eye movements and vision, Palanker said. A patient would wear a video camera that transmits images to a processor, which displays the images on an LCD screen on the inside of patient's goggles. The LCD display transmits infrared light pulses that project the image to photovoltaic cells implanted underneath the retina. The photovoltaic cells convert light signals into electrical impulses that in turn stimulate retinal neurons above them.
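A minimal sketch of that image path in Python, under stated assumptions: a camera frame is average-pooled down to the resolution of the implant's photodiode grid, and each cell's brightness becomes a stimulation level. The 32 x 32 grid (about 1,000 sites, in line with the electrode count quoted above) and the linear brightness-to-stimulation mapping are illustrative choices, not the Stanford design.

import numpy as np

# Sketch: camera frame -> electrode-grid resolution -> per-site stimulation
# levels. Grid size and mapping are illustrative assumptions.
def frame_to_stimulation(frame, grid=(32, 32), max_level=255):
    h, w = frame.shape
    gh, gw = grid
    trimmed = frame[: h - h % gh, : w - w % gw]       # crop so blocks divide evenly
    blocks = trimmed.reshape(gh, trimmed.shape[0] // gh, gw, trimmed.shape[1] // gw)
    pooled = blocks.mean(axis=(1, 3))                 # average-pool to the grid
    return pooled / max_level                         # normalized stimulation levels

camera_frame = np.random.default_rng(0).integers(0, 256, size=(480, 640)).astype(float)
levels = frame_to_stimulation(camera_frame)
print(levels.shape)                                   # (32, 32) stimulation pattern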

As patients move their eyes, the light falls on a different part of the implant, just as visible light falls on different parts of the retina. "The Palanker group has developed a device that actually allows patients to see infrared light on the implant and visible light through the normal optics of the eye," Baccus said.

"It's a sophisticated approach," said Shelley Fried, a research scientist working on the Boston Project. "It should definitely be helpful."

This is also the first flexible implant, and it makes use of a material commonly used in computer chips and solar cells. Peumans and his team at the Stanford Nanofabrication Facility engineered a silicon implant with tiny bridges that allow it to fold over the shape of the eye. "The advantage of having it flexible is that relatively large implants can be placed under the retina without being deformed, and the whole image would stay in focus," Palanker said. A set of flexible implants can cover an even larger portion of the retina, allowing patients to see the entire visual field presented on the display.

"It's really a very interesting idea," Fried said. "The ability to get all the electrodes to sit perfectly on the retina would be a very nice advantage." He said that a spring technology allows their device to conform to the contour of the eye, maintaining close contact between electrodes and neurons.

The tiny crevices between the bridges serve a useful function. Distant retinal cells migrate to the implant and fill in the spaces between the electrodes. Previously, one major challenge was to get cells close enough to the device to receive signals, Fried said. "If we can find a way to bring the retinal neurons closer to the electrode, that would have a huge advantage," he said.

Implanted under the retina

The Stanford device is implanted under the retina, at the earliest possible stage in the visual pathway. "In many degenerative diseases where the photoreceptors are lost, you lose the first and second cells in the pathway," Baccus said. "Ideally you want to talk to the next cell that's still there." The goal is to preserve the complex circuitry of the retina so that images appear more natural.

"With most of the current devices, we are replicating only very few elements of normal retinal signaling," Fried said.

To further enhance the naturalness of restored vision, Baccus and Palanker are developing software that performs functions that the retina normally performs. For example, cells in the retina tend to enhance the appearance of edges, or boundaries between objects. What's more, objects that we focus on are seen in better detail than objects that appear at the corners of our eyes.

The researchers hope to incorporate these features in the next generation of retinal implants. Baccus envisions a day when patients will be able to adjust their implants to see objects better, just like an optometrist adjusts the lens while we read a letter chart.

Palanker and his team will test the ability of animals with retinal diseases similar to those in humans to use the implant to discriminate visual patterns.

One of the major challenges is to understand how the retina works, especially after it is damaged. "We operate on the assumption that the photoreceptors are gone, but otherwise it's a normal retina," Baccus said. "This is almost certainly not true."

Future devices should learn, patient by patient, the new language needed to communicate with the altered circuitry of the damaged retina, he said. Even if the retinal circuitry were unaltered, the brain would still have to learn how to interpret the signals. By mimicking normal vision, retinal implants may overcome these obstacles and bring enhanced vision to blind patients.

Provided by Stanford University

October 14, 2009

One step closer to an artificial nerve cell

Scientists at Karolinska Institutet and Linköping University (Sweden) are well on the way to creating the first artificial nerve cell that can communicate specifically with nerve cells in the body using neurotransmitters. The technology has been published in an article in Nature Materials.

The methods that are currently used to stimulate nerve signals in the nervous system are based on electrical stimulation. Examples of this are cochlear implants, which are surgically inserted into the cochlea in the inner ear, and electrodes that are used directly in the brain. One problem with this method is that all cell types in the vicinity of the electrode are activated, which gives undesired effects.

Scientists have now used an electrically conducting plastic to create a new type of "delivery electrode" that instead releases the neurotransmitters that brain cells use to communicate naturally. The advantage of this is that only neighbouring cells that have receptors for the specific neurotransmitter, and that are thus sensitive to this substance, will be activated.

The scientists demonstrate in the article in Nature Materials that the delivery electrode can be used to control the hearing function in the brains of guinea pigs.

"The ability to deliver exact doses of neurotransmitters opens completely new possibilities for correcting the signalling systems that are faulty in a number of neurological disease conditions", says Professor Agneta Richter-Dahlfors who has led the work, together with Professor Barbara Canlon.

The scientists intend to continue with the development of a small unit that can be implanted into the body. It will be possible to program the unit such that the release of neurotransmitters takes place as often or as seldom as required in order to treat the individual patient. Research projects that are already under way are targeted towards hearing, epilepsy and Parkinson's disease.

The research is being carried out in collaboration between the research groups of Professor Agneta Richter-Dahlfors and Professor Barbara Canlon, together with Professor Magnus Berggren's group at Linköping University. The work falls under the auspices of the Center of Excellence in Organic Bioelectronics, financed by the Swedish Foundation for Strategic Research and led by Magnus Berggren and Agneta Richter-Dahlfors.

More information:

Daniel T. Simon, Sindhulakshmi Kurup, Karin C. Larsson, Ryusuke Hori, Klas Tybrandt, Michel Goiny, Edwin W. H. Jager, Magnus Berggren, Barbara Canlon and Agneta Richter-Dahlfors
Organic electronics for precise delivery of neurotransmitters to modulate mammalian sensory function
Nature Materials, Advance Online Publication, 5 June 2009.

Provided by Karolinska Institutet

October 6, 2009

Understanding A Cell's Split Personality Aids Synthetic Circuits


In this colony, the bacteria lighting up in green are those being "turned on," while those in red remain "off."
As scientists work toward making genetically altered bacteria create living "circuits" to produce a myriad of useful proteins and chemicals, they have logically assumed that the single-celled organisms would always respond to an external command in the same way.

Alas, some bacteria apparently have an individualistic streak that makes them zig when the others zag.

A new set of experiments by Duke University bioengineers has uncovered the existence of "bistability," in which an individual cell has the potential to live in either of two states, depending on which state it was in when stimulated.

Taking into account the effects of this phenomenon should greatly enhance the future efficiency of synthetic circuits, said biomedical engineer Lingchong You of Duke's Pratt School of Engineering and the Duke Institute for Genome Sciences & Policy.

In principle, re-programmed bacteria in a synthetic circuit can be useful for producing proteins, enzymes or chemicals in a coordinated way, or even delivering different types of drugs or selectively killing cancer cells, the scientists said.

Researchers in this new field of synthetic biology "program" populations of genetically altered bacteria to direct their actions in much the same way that a computer program directs a computer. In this analogy, the genetic alteration is the software, the cell the computer. The Duke researchers found that not only does the software drive the computer's actions, but the computer in turn influences the running of the software.

"In the past, synthetic biologists have often assumed that the components of the circuit would act in a predictable fashion every time and that the cells carrying the circuit would just serve as a passive reactor," You said. "In essence, they have taken a circuit-centric view for the design and optimization process. This notion is helpful in making the design process more convenient."

But it's not that simple, say You and his graduate student Cheemeng Tan, who published the results of their latest experiments early online in the journal Nature Chemical Biology.

"We found that there can be unintended consequences that haven't been appreciated before," said You. "In a population of identical cells, some can act one way while others act in another. However, this process appears to occur in a predictable manner, which allows us to take into account this effect when we design circuits."

Bistability is not unique to biology. In electrical engineering, for example, bistability describes the functioning of a toggle switch, a hinged switch that can assume either one of two positions – on or off.
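Bistability is easy to reproduce in a toy model. The sketch below (Python) integrates a generic self-activating gene, not the Duke circuit: with these parameters the system has two stable steady states, so identical cells end up "on" or "off" depending on where they started, which is the toggle-switch behaviour described above.

# Generic bistability illustration (not the Duke circuit): a protein X that
# activates its own production, dX/dt = basal + vmax*X^n/(K^n + X^n) - d*X.
# With these parameters there are two stable steady states.

def simulate(x0, steps=20000, dt=0.01,
             basal=0.05, vmax=2.0, K=1.0, n=4, d=1.0):
    x = x0
    for _ in range(steps):
        dx = basal + vmax * x**n / (K**n + x**n) - d * x
        x += dt * dx
    return x

print(simulate(x0=0.1))   # settles near the low ("off") state
print(simulate(x0=1.5))   # settles near the high ("on") state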

"The prevailing wisdom underestimated the complexity of these synthetic circuits by assuming that the genetic changes would not affect the operation of the cell itself, as if the cell were a passive chassis," said Tan. "The expression of the genetic alteration can drastically impact the cell, and therefore the circuit.

"We now know that when the circuit is activated, it affects the cell, which in turn acts as an additional feedback loop influencing the circuit," Tan said. "The consequences of this interplay have been theorized but not demonstrated experimentally."

The scientists conducted their experiments using a genetically altered colony of the bacteria Escherichia coli (E.coli) in a simple synthetic circuit. When the colony of bacteria was stimulated by external cues, some of the cells went to the "on" position and grew more slowly, while the rest went to the "off" position and grew faster.

"It is as if the colony received the command not to expand too fast when the circuit is on," Tan explained. "Now that we know that this occurs, we used computer modeling to predict how many of the cells will go to the 'on' or 'off' state, which turns out to be consistent with experimental measurements"

The experiments were supported by the National Science Foundation, the National Institutes of Health and a David and Lucille Packard Fellowship. Duke's Philippe Marguet was also a member of the research team.


Adapted from materials provided by Duke University, via EurekAlert!, a service of AAAS.

October 4, 2009

Burst of Technology Helps Blind to See


Barbara Campbell is part of a worldwide experiment testing whether electrodes implanted in the eye can restore sight.

Blindness first began creeping up on Barbara Campbell when she was a teenager, and by her late 30s, her eye disease had stolen what was left of her sight.

Reliant on a talking computer for reading and a cane for navigating New York City, where she lives and works, Ms. Campbell, now 56, would have been thrilled to see something. Anything.

Now, as part of a striking experiment, she can. So far, she can detect burners on her stove when making a grilled cheese, her mirror frame, and whether her computer monitor is on.

She is beginning an intensive three-year research project involving electrodes surgically implanted in her eye, a camera on the bridge of her nose and a video processor strapped to her waist.

The project, involving patients in the United States, Mexico and Europe, is part of a burst of recent research aimed at one of science’s most-sought-after holy grails: making the blind see.

Some of the 37 other participants further along in the project can differentiate plates from cups, tell grass from sidewalk, sort white socks from dark, distinguish doors and windows, identify large letters of the alphabet, and see where people are, albeit not details about them.

Linda Morfoot, 65, of Long Beach, Calif., blind for 12 years, says she can now toss a ball into a basketball hoop, follow her nine grandchildren as they run around her living room and “see where the preacher is” in church.

“For someone who’s been totally blind, this is really remarkable,” said Andrew P. Mariani, a program director at the National Eye Institute. “They’re able to get some sort of vision.”

Scientists involved in the project, the artificial retina, say they have plans to develop the technology to allow people to read, write and recognize faces.

Advances in technology, genetics, brain science and biology are making a goal that long seemed out of reach — restoring sight — more feasible.

“For a long time, scientists and clinicians were very conservative, but you have to at some point get out of the laboratory and focus on getting clinical trials in actual humans,” said Timothy J. Schoen, director of science and preclinical development for the Foundation Fighting Blindness. Now “there’s a real push,” he said, because “we’ve got a lot of blind people walking around, and we’ve got to try to help them.”

More than 3.3 million Americans 40 and over, or about one in 28, are blind or have vision so poor that even with glasses, medicine or surgery, everyday tasks are difficult, according to the National Eye Institute, a federal agency. That number is expected to double in the next 30 years. Worldwide, about 160 million people are similarly affected.

“With an aging population, it’s obviously going to be an increasing problem,” said Michael D. Oberdorfer, who runs the visual neuroscience program for the National Eye Institute, which finances several sight-restoration projects, including the artificial retina. Wide-ranging research is important, he said, because different methods could help different causes of blindness.

The approaches include gene therapy, which has produced improved vision in people who are blind from one rare congenital disease. Stem cell research is considered promising, although far from producing results, and other studies involve a light-responding protein and retinal transplants.

Others are implanting electrodes in monkeys’ brains to see if directly stimulating visual areas might allow even people with no eye function to see.

And recently, Sharron Kay Thornton, 60, from Smithdale, Miss., blinded by a skin condition, regained sight in one eye after doctors at the University of Miami Miller School of Medicine extracted a tooth (her eyetooth, actually), shaved it down and used it as a base for a plastic lens replacing her cornea.

It was the first time the procedure, modified osteo-odonto-keratoprosthesis, was performed in this country. The surgeon, Dr. Victor L. Perez, said it could help people with severely scarred corneas from chemical or combat injuries.

Other techniques focus on delaying blindness, including one involving a capsule implanted in the eye to release proteins that slow the decay of light-responding cells. And with BrainPort, a camera worn by a blind person captures images and transmits signals to electrodes slipped onto the tongue, causing tingling sensations that a person can learn to decipher as the location and movement of objects.

Ms. Campbell’s artificial retina works similarly, except it produces the sensation of sight, not tingling on the tongue. Developed by Dr. Mark S. Humayun, a retinal surgeon at the University of Southern California, it drew on cochlear implants for the deaf and is partly financed by a cochlear implant maker.

It is so far being used in people with retinitis pigmentosa, in which photoreceptor cells, which take in light, deteriorate.

Gerald J. Chader, chief scientific officer at the University of Southern California’s Doheny Retinal Institute, where Dr. Humayun works, said it should also work for severe cases of age-related macular degeneration, the major cause of vision loss in older people.

Go here to read the rest of the original article from The New York Times

October 3, 2009

It's tempting to call them lords of the flies. For the first time, researchers have controlled the movements of free-flying insects from afar, as if they were tiny remote-controlled aircraft.

Green beetles

The Berkeley team implanted electrodes into the brain and muscles of two species: green June beetles called Cotinus texana from the southern US, and the much larger African species Mecynorrhina torquata. Both responded to stimulation in much the same way, but the weight of the electronics and their battery meant that only Mecynorrhina – which can grow to the size of a human palm – was strong enough to fly freely under radio control.

A particular series of electrical pulses to the brain causes the beetle to take off. No further stimulation is needed to maintain the flight. Though the average length of flights during trials was just 45 seconds, one lasted for more than 30 minutes. A single pulse causes a beetle to land again.

The insects' flight can also be directed. Pulses sent to the brain trigger a descent, on average by 60 centimetres. The beetles can be steered by stimulating the wing muscle on the opposite side from the direction they are required to turn, though this works only three-quarters of the time. After each manoeuvre, the beetles quickly right themselves and continue flying parallel to the ground.

Brain insights

Tyson Hedrick, a biomechanist at the University of North Carolina, Chapel Hill, who was not involved in the research, says he is surprised at the level of control achieved, because the controlling impulses were delivered to comparatively large regions of the insect brain.

Precisely stimulating individual neurons or circuits may harness the beetles more precisely, he told New Scientist, but don't expect aerial acrobatics. "It's not entirely clear how much control a beetle has over its own flight," Hedrick says. "If you've ever seen a beetle flying in the wild, they're not the most graceful insects."

The research may be more successful in revealing just how the brain, nerves and muscles of insects coordinate flight and other behaviours than at bringing six-legged cyborg spies into service, Hedrick adds. "It may end up helping biologists more than it will help DARPA."

Brain-recording backpacks

It's a view echoed by Reid Harrison, an electrical engineer at the University of Utah, Salt Lake City, who has designed brain-recording backpacks for insects. "I'm sceptical about their ability to do surveillance for the following reason: no one has solved the power issue."

Batteries, solar cells and piezoelectrics that harvest energy from movement cannot provide enough power to run electrodes and radio transmitters for very long, Harrison says. "Maybe we'll have some advances in those technologies in the near future, but based on what you can get off the shelf now it's not even close."

Journal reference: Frontiers in Integrative Neuroscience, DOI: 10.3389/neuro.07.024.2009

Original article by Ewen Callaway for New Scientist

A Startup That Builds Biological Parts

Ginkgo BioWorks aims to push synthetic biology to the factory level.

In a warehouse building in Boston, wedged between a cruise-ship drydock and Au Bon Pain's corporate headquarters, sits Ginkgo BioWorks, a new synthetic-biology startup that aims to make biological engineering easier than baking bread. Founded by five MIT scientists, the company offers to assemble biological parts--such as strings of specific genes--for industry and academic scientists.

Biological parts: Ginkgo BioWorks, a synthetic-biology startup, is automating the process of building biological machines. Shown here is a liquid-handling robot that can prepare hundreds of reactions.
Credit: Ginkgo BioWorks

"Think of it as rapid prototyping in biology--we make the part, test it, and then expand on it," says Reshma Shetty, one of the company's cofounders. "You can spend more time thinking about the design, rather than doing the grunt work of making DNA." A very simple project, such as assembling two pieces of DNA, might cost $100, with prices increasing from there.

Synthetic biology is the quest to systematically design and build novel organisms that perform useful functions, such as producing chemicals, using genetic-engineering tools. The field is often considered the next step beyond metabolic engineering because it aims to completely overhaul existing systems to create new functionality rather than improve an existing process with a number of genetic tweaks.

Scientists have so far created microbes that can produce drugs and biofuels, and interest among industrial chemical makers is growing. While companies already exist to synthesize pieces of DNA, Ginkgo assembles synthesized pieces of DNA to create functional genetic pathways. (Assembling specific genes into long pieces of DNA is much cheaper than synthesizing that long piece from scratch.)

Ginkgo will build on technology developed by Tom Knight, a research scientist at MIT and one of the company's cofounders, who started out his scientific career as an engineer. "I'm interested in transitioning biology from being sort of a craft, where every time you do something it's done slightly differently, often in ad hoc ways, to an engineering discipline with standardized methods of arranging information and standardized sets of parts that you can assemble to do things," says Knight.

Scientists generally create biological parts by stitching together genes with specific functions, using specialized enzymes to cut and sew the DNA. The finished part is then inserted into bacteria, where it can perform its designated task. Currently, this process is mostly done by a lab technician or graduate student; consequently, the process is slow, and the resulting construct isn't optimized for use in other projects. Knight developed a standardized way of putting together pieces of DNA, called the BioBricks standard, in which each piece of DNA is tagged on both sides with DNA connectors that allow pieces to be easily interchanged.

"If your part obeys those rules, we can use identical reactions every time to assemble those fragments into larger constructs," says Knight. "That allows us to standardize and automate the process of assembly. If we want to put 100 different versions of a system together, we can do that straightforwardly, whereas it would be a tedious job to do with manual techniques." The most complicated part that Ginkgo has built to date is a piece of DNA with 15 genes and a total of 30,000 DNA letters. The part was made for a private partner, and its function has not been divulged.

Assembling parts is only part of the challenge in building biological machines. Different genes can have unanticipated effects on each other, interfering with the ultimate function. "One of the things we'll be able to do is to assemble hundreds or thousands of versions of a specific pathway with slight variations," says Knight. Scientists can then determine which version works best.
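The standardized-interface idea lends itself to a toy model. In the Python sketch below, every part carries the same prefix and suffix connectors, so a single pairwise operation composes any two parts into a new part with the same interface, and generating many variants of a pathway is just a loop over alternative parts. The connector and part sequences are placeholders, not the actual BioBricks sites.

# Toy model of standardized, BioBricks-style assembly. PREFIX, SUFFIX, SCAR
# and the part sequences are placeholder strings, not the real standard.

PREFIX, SUFFIX, SCAR = "gaattc", "ctgcag", "tactag"

def make_part(insert):
    """Wrap a raw sequence in the standard prefix/suffix interface."""
    return PREFIX + insert + SUFFIX

def assemble(part_a, part_b):
    """Join two standard parts; the result is itself a standard part."""
    core_a = part_a[len(PREFIX):-len(SUFFIX)]
    core_b = part_b[len(PREFIX):-len(SUFFIX)]
    return make_part(core_a + SCAR + core_b)

gene = make_part("atgaaacgctaa")                      # placeholder coding sequence
promoter_cores = ["ttgaca", "ttgacg", "tttaca"]       # placeholder promoter variants

# The same operation builds every variant of a promoter-gene device.
variants = [assemble(make_part(core), gene) for core in promoter_cores]
print(len(variants), "variants,", len(variants[0]), "bases each")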

So far, Knight says, the greatest interest has come from manufacturing companies making chemicals for cosmetics, perfumes, and flavorings. "Many of them are trying to replace a dirty chemical process with an environmentally friendly, biologically based process," he says.

Ginkgo is one of just a handful of synthetic-biology companies. Codon Devices, a well-funded startup that synthesized DNA, ceased operations earlier this year. "The challenge now is not to synthesize genes; there are a few companies that do that," says Shetty. "It's to build pathways that can make specific chemicals, such as fuels." And unlike Codon, Ginkgo is starting small. The company is funded by seed money and a $150,000 loan from Lifetech Boston, a program to attract biotech to Boston. Its lab space is populated with banks of PCR machines, which amplify DNA, and liquid-handling robots, mostly bought on eBay or from other biotech firms that have gone out of business. And the company already has a commercial product--a kit sold through New England Biolabs that allows scientists to put together parts on their own.

"If successful, they will be providing a very important service for synthetic biology," says Chris Voigt, a synthetic biologist at the University of California, San Francisco. "There isn't anybody else who would be characterizing and providing parts to the community. I think that this type of research needs to occur outside of the academic community--at either a company or a nonprofit institute."

Original article by Emily Singer for MIT Technology Review

October 2, 2009

Locust flight simulator helps robot insects evolve


Right: Smoke signals help robots fly better (Image: Simon Walker, Animal Flight Group, Oxford University)

A LOCUST flight simulator could be the key to perfecting the ultimate surveillance machine: an artificial flying insect. The simulator can model the way wings of varying shapes and surface features beat, as well as how they change their shape during flight.

The device was created using extremely high-speed flash photography to track the way smoke particles flow over a locust's wings in a wind tunnel - a technique called particle flow velocimetry. This allowed researchers at the University of Oxford to build a computer model of the insect's wing motion. They then built software that mimicked not only this motion, but also how wing surface features, such as structural veins and corrugations, and the wings' deformation as they flap, change aerodynamic performance.

The work has shown that wings' surface structures are crucial to efficient lift generation, says lead researcher Adrian Thomas (Science, DOI: 10.1126/science.1175928).

The simulator could be a big step forward for the many teams around the world who are designing robotic insects, mainly for military purposes, though Thomas expects them to have a massive role as toys, too. "Imagine sitting in your living room doing aerial combat with radio-controlled dragonflies. Everybody would love that," he says.


Until now, modelling insect wings involved building physical replicas from rigid materials and estimating how they might move from observations of insect flight. Thomas hopes the simulator will take the guesswork out of the process, especially as every flying insect has uniquely shaped wings and wing beat patterns.

Building miniature aircraft is of great interest to the armed forces. In the UK, for example, the Ministry of Defence wants to create a device that can fly in front of a convoy and detect explosives on the road ahead. In the US, the Pentagon's research arm DARPA is funding development of a "nano air vehicle" (NAV) for surveillance that it states must weigh no more than 10 grams and have only a 7.5-centimetre wingspan.

Last month, DARPA contractor AeroVironment of Monrovia, California, demonstrated the first two-winged robot capable of hovering flight (see video at http://bit.ly/18LR8U). It achieved a stable take-off and hovered for 20 seconds. Other DARPA-funded projects by Micropropulsion and Daedalus Flight Systems are also thought to have achieved hovering robotic flight this year.

"Getting stable hover at the 10-gram size scale with beating wings is an engineering breakthrough, requiring much new understanding and invention," says Ronald Fearing, a micromechanics and flight researcher at the University of California, Berkeley. "The next step will be to get the flight efficiency up so hover can work for several minutes."

But how can such machines be made more efficient? Better batteries and lighter materials will help, but most important will be improving wing structure so the aircraft more accurately imitate - or even improve upon - the way insects fly.

So how do insects fly? For a long time no one really knew. In 1919, German aeronautical engineer Wilhelm Hoff calculated that a pollen-laden bumblebee should not have enough lift to get airborne according to the rules of aerodynamics as understood at the time.

It wasn't until 1981 that Tony Maxworthy of the University of Southern California hit on a possible reason: his working model of a fly's wings, immersed in oil, showed large vortices were spinning off the leading edge of the wing as it beat (Annual Review of Fluid Mechanics, vol 13, p 329). Within the vortices air is moving at high velocity, and is therefore at low pressure, hinting at a lift-creating mechanism unlike that of conventional aircraft, in which an angled wing travelling forward deflects air downwards, creating an opposing upward force.

In 1996 Thomas was a member of Charles Ellington's team at the University of Cambridge, which identified the mechanism by which bugs created high lift forces - using a model of a hawkmoth. "We found a leading-edge vortex that was stable over the whole of the downstroke," says Thomas.

The nature of the leading-edge vortex is dependent on the size of the wings, their number, the pattern described by the beating wing and the wing structure.

This work has laid the foundations for researchers such as Robert Wood and his team at Harvard University, who are investigating ways to make insect wings (Bioinspiration and Biomimetics, DOI: 10.1088/1748-3182/4/3/036002). They have developed a new way to build flexible wings from moulds using microchip manufacturing techniques. Using elastic polymers and elegant, vein-like supporting structures, the researchers can build wings with variable camber, and with different corrugations embossed in them, in an attempt to mimic the in-flight aerodynamics and deformation of real insect wings.

Thomas is also focusing on the way insect wings deform in flight. "If we use a wing model with all the complex curves, twists and corrugations of the real insect it is 50 per cent more efficient than a model with rigid flat-plate wings, for the same lift generation. That would be a huge saving in power for a micro air vehicle," he says.

Although the Oxford team's simulator is geared for locust wings at present, the researchers are adjusting the software to model the hoverfly - with other insect types to follow.

"What we've shown is that modern aerodynamics really can accurately model insect flight," Thomas says. "That old myth about aerodynamics not being able to model bumblebee flight really is dead now."

Original article written by Paul Marks for New Scientist

September 28, 2009

The Reality of Robot Surrogates


How far are we from sending robots into the world in our stead?

Imagine a world where you're stronger, younger, better looking, and don't age. Well, you do, but your robot surrogate—which you control with your mind from a recliner at home while it does your bidding in the world—doesn't.

It's a bit like The Matrix, but instead of a computer-generated avatar in a graphics-based illusion, in Surrogates—which opens Friday and stars Bruce Willis—you have a real titanium-and-fluid copy impersonating your flesh and blood and running around under your mental control. Other recent films have used similar concepts to ponder issues like outsourced virtual labor (Sleep Dealer) and incarceration (Gamer).

The real technology behind such fantastical fiction is grounded both in far-out research and practical robotics. So how far away is a world of mind-controlled personal automatons?

"We're getting there, but it will be quite a while before we have anything that looks like Bruce Willis," says Trevor Blackwell, the founder and CEO of Anybots, a robotics company in Mountain View, Calif., that builds "telepresence" robots controlled remotely like the ones in Surrogates.

Telepresence is action at a distance, or the projection of presence where you physically aren't. Technically, phoning in to your weekly staff meeting is a form of telepresence. So is joysticking a robot up to a suspected IED in Iraq so a soldier can investigate the scene while sitting in the (relative) safety of an armored vehicle.

Researchers are testing brain-machine interfaces on rats and monkeys that would let the animals directly control a robot, but so far the telepresence interfaces at work in the real world are physical. Through wireless Internet connections, video cameras, joysticks, and sometimes audio, humans move robots around at the office, in the operating room, underwater, on the battlefield, and on Mars.
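As a minimal sketch of what such a physical interface looks like in software, the loop below packs joystick-style drive commands and streams them to a remote robot over UDP. The protocol, message format, and robot address are all assumptions for illustration, not any vendor's actual API.

```python
# Hypothetical teleoperation sketch: an operator-side loop streaming
# simple velocity commands to a remote robot over UDP.

import json
import socket
import time

ROBOT_ADDR = ("192.168.1.50", 9999)  # assumed robot IP and port

def send_drive_command(sock, forward, turn):
    """Encode a simple drive command and push it to the robot."""
    msg = json.dumps({"forward": forward, "turn": turn, "t": time.time()})
    sock.sendto(msg.encode("utf-8"), ROBOT_ADDR)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Drive gently forward for one second, ten commands per second,
    # standing in for a stream of joystick readings.
    for _ in range(10):
        send_drive_command(sock, forward=0.2, turn=0.0)
        time.sleep(0.1)
    send_drive_command(sock, forward=0.0, turn=0.0)  # stop
```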

A recent study by NextGen Research, a market research firm, projects that in the next five years, telepresence will become a significant feature of the US $1.16 billion personal robotics market, meaning robots for you or your home.

According to the study's project manager, Larry Fisher, telepresence "makes the most sense" for security and surveillance robots that would be used to check up on pets or family members from far away. Such robots could also allow health-care professionals to monitor elderly people taking medication at home to ensure the dosage and routine are correct.

Right now, most commercial teleoperated robots are just mobile webcams with speakers, according to NextGen. They can be programmed to roam a set path, or they can be controlled over the Internet by an operator. iRobot, the maker of the Roomba floor cleaner, canceled its telepresence robot, ConnectR, in January, choosing to wait until such a robot would be easier to use. But plenty of companies, such as Meccano/Erector and WowWee, are marketing personal telepresence bots.

Blackwell's Anybots, for example, has developed an office stand-in called QA. It's a Wi-Fi enabled, vaguely body-shaped wheeled robot with an ET-looking head that has cameras for eyes and a display in its chest that shows an image of the person it's standing in for. You can slap on virtual-reality goggles, sensor gloves, and a backpack of electronics to link to it over the Internet for an immersive telepresence experience. Or you can just connect to the robot through your laptop's browser.

For the rest of the article, go to IEEE Spectrum

Original article posted by Anne-Marie Corley // September 2009

September 24, 2009

Stimulating Sight: Retinal Implant Could Help Restore Useful Level Of Vision To Certain Groups Of Blind People


The retinal implant receives visual data from a camera mounted on a pair of glasses. The coil sends the images to a chip attached to the side of the eyeball, which processes the data and sends it to electrodes implanted below the retina. (Credit: Courtesy of Shawn Kelly)

Inspired by the success of cochlear implants that can restore hearing to some deaf people, researchers at MIT are working on a retinal implant that could one day help blind people regain a useful level of vision.

The eye implant is designed for people who have lost their vision from retinitis pigmentosa or age-related macular degeneration, two of the leading causes of blindness. The retinal prosthesis would take over the function of lost retinal cells by electrically stimulating the nerve cells that normally carry visual input from the retina to the brain.

Such a chip would not restore normal vision but it could help blind people more easily navigate a room or walk down a sidewalk.

"Anything that could help them see a little better and let them identify objects and move around a room would be an enormous help," says Shawn Kelly, a researcher in MIT's Research Laboratory for Electronics and member of the Boston Retinal Implant Project.

The research team, which includes scientists, engineers and ophthalmologists from Massachusetts Eye and Ear Infirmary, the Boston VA Medical Center and Cornell as well as MIT, has been working on the retinal implant for 20 years. The research is funded by the VA Center for Innovative Visual Rehabilitation, the National Institutes of Health, the National Science Foundation, the Catalyst Foundation and the MOSIS microchip fabrication service.

Led by John Wyatt, MIT professor of electrical engineering, the team recently reported a new prototype that they hope to start testing in blind patients within the next three years.

Electrical stimulation

Patients who received the implant would wear a pair of glasses with a camera that sends images to a microchip attached to the eyeball. The glasses also contain a coil that wirelessly transmits power to receiving coils surrounding the eyeball.

When the microchip receives visual information, it activates electrodes that stimulate nerve cells in the areas of the retina corresponding to the features of the visual scene. The electrodes directly activate surviving retinal nerve cells, whose signals travel along the optic nerve to the brain, bypassing the damaged layers of the retina.
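As a purely illustrative sketch of this camera-to-electrode pipeline, the code below downsamples a grayscale frame into a small grid of brightness values and thresholds each cell to decide which electrodes to pulse. The grid size, threshold, and use of NumPy are assumptions; this is not the Boston Retinal Implant Project's actual algorithm.

```python
# Illustrative sketch: reduce a camera frame to an on/off stimulation
# pattern for a small electrode grid. Parameters are assumptions.

import numpy as np

def frame_to_stimulation(frame, grid=(4, 4), threshold=0.5):
    """Downsample a grayscale frame (2-D array, values 0..1) into a grid
    of average brightnesses, then threshold each cell to decide which
    electrodes to pulse."""
    h, w = frame.shape
    gh, gw = grid
    cells = frame[: h - h % gh, : w - w % gw]   # trim so blocks tile evenly
    cells = cells.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    return cells > threshold                     # boolean stimulation pattern

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_frame = rng.random((64, 64))            # stand-in camera image
    print(frame_to_stimulation(fake_frame).astype(int))
```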

One question that remains is what kind of vision this direct electrical stimulation actually produces. About 10 years ago, the research team started to answer that by attaching electrodes to the retinas of six blind patients for several hours.

When the electrodes were activated, patients reported seeing a small number of "clouds" or "drops of blood" in their field of vision, and the number of clouds or blood drops they reported corresponded to the number of electrodes that were stimulated. When there was no stimulus, patients accurately reported seeing nothing. Those tests confirmed that retinal stimulation can produce some kind of organized vision in blind patients, though further testing is needed to determine how useful that vision can be.

After those initial tests, with grants from the Boston VA Medical Center and the National Institutes of Health, the researchers started to build an implantable chip, which would allow them to do more long-term tests. Their goal is to produce a chip that can be implanted for at least 10 years.

One of the biggest challenges the researchers face is designing a surgical procedure and implant that won't damage the eye. In their initial prototypes, the electrodes were attached directly atop the retina from inside the eye, which carries more risk of damaging the delicate retina. In the latest version, described in the October issue of IEEE Transactions on Biomedical Engineering, the implant is attached to the outside of the eye, and the electrodes are implanted behind the retina.

That subretinal location, which reduces the risk of tearing the retina and requires a less invasive surgical procedure, is one of the key differences between the MIT implant and retinal prostheses being developed by other research groups.

Another feature of the new MIT prototype is that the chip is now contained in a hermetically sealed titanium case. Previous versions were encased in silicone, which would eventually allow water to seep in and damage the circuitry.

While they have not yet begun any long-term tests on humans, the researchers have tested the device in Yucatan miniature pigs, which have roughly the same size eyeballs as humans. Those tests are only meant to determine whether the implants remain functional and safe and are not designed to observe whether the pigs respond to stimuli to their optic nerves.

So far, the prototypes have been successfully implanted in pigs for up to 10 months, but further safety refinements need to be made before clinical trials in humans can begin.

Wyatt and Kelly say they hope that once human trials begin and blind patients can offer feedback on what they're seeing, they will learn much more about how to configure the algorithm implemented by the chip to produce useful vision.

Patients have told them that what they would like most is the ability to recognize faces. "If they can recognize faces of people in a room, that brings them into the social environment as opposed to sitting there waiting for someone to talk to them," says Kelly.


Journal reference:

  1. Shire, D. B.; Kelly, S. K.; Chen, J.; Doyle, P.; Gingerich, M. D.; Cogan, S. F.; Drohan, W. A.; Mendoza, O.; Theogarajan, L.; Wyatt, J. L.; Rizzo, J. F. Development and Implantation of a Minimally Invasive Wireless Subretinal Neurostimulator. IEEE Transactions on Biomedical Engineering, October 2009. DOI: 10.1109/TBME.2009.2021401
Adapted from materials provided by Massachusetts Institute of Technology. Original article written by Anne Trafton, MIT News Office.

September 17, 2009

The Eyeborg Project (Eye Socket Camera)

(Not The Movie Eyeborgs)

Eyeborg Phase II from eyeborg on Vimeo.



This is the project of Rob Spence (a filmmaker) and Kosta Grammatis (a former SpaceX avionics systems engineer) to embed a video camera and transmitter in a prosthetic eye that will record the world from a perspective never seen before. The only thing I'd be concerned about is it getting hacked, since it has a wireless transmitter.

Check it out at Eyeborgproject.com

Check out their blog -->here<--

If the video loads too slowly check it out at youtube -->here<--