
April 11, 2012

Interview with Professor Piotr J. Durka About the BCI Appliance Developed at the University of Warsaw


At CeBIT 2012, the University of Warsaw presented a wireless brain-computer interface (BCI) system called the BCI Appliance. The device is a tablet-sized box with just one button, running entirely on open-source software. Neurogadget.com had the opportunity to put some questions about the BCI Appliance to the project's leader via email. Welcome to an exclusive interview with Piotr J. Durka, professor at the University of Warsaw, Department of Physics.

Read the rest of the article at Brain Machine Interfacing

August 3, 2011

Trends that’ll change the world in 10 years

Sensor networks, 3D printers, virtual humans and other technologies under development will drastically change our world in the decade to come, according to Cisco chief futurist and chief technologist Dave Evans.
Virtual species

Virtual humans, both physical (robots) and online avatars, will be added to the workforce. By 2020, robots will be physically superior to humans. The Blue Brain Project, for instance, which runs on IBM supercomputers, is a 10-year mission to create a human brain using hardware and software.

“They believe that within a decade they’ll start to see consciousness emerge with this brain,” Evans says. By 2025, the robot population will surpass the number of humans in the developed world. By 2032, robots will be mentally superior to humans. And by 2035, robots could completely replace humans in the workforce. Beyond that is the creation of sophisticated avatars.

Evans points to IBM's Watson as a template for the virtual human. Watson was able to answer a question by returning a single, accurate result. A patient may one day consult a virtual human instead of running a WebMD search, and hospitals could augment patient care with virtual humans. Augmented reality and gesture-based computing will enter our classrooms, medical facilities and communications, and transform them as well.

The Internet Of Things

We have passed the threshold where more things than people are connected to the Net. The transition to IPv6 supports practically limitless connectivity. By 2020, there will be more than six Net-linked devices for every person on Earth. Currently, most of us are connected to the Net full-time through three or more devices, such as a PC, a phone and a TV. Next up are sensor networks, using low-power sensors that "collect, transmit, analyze and distribute data on a massive scale," says Evans.

An ‘Internet of things’ means that everything from electronic dust motes to “connected shoes” to household appliances can be connected to a network and assigned an IP address. Sensors are being embedded in shoes, asthma inhalers, and surgery devices. There’s even a tree in Sweden wired with sensors that tweets its mood and thoughts, with a bit of translation help from an interpretive engine developed by Ericsson (@connectedtree or #ectree).

Quantum networking

Connectivity will continue to evolve, Evans predicts, and the networks of tomorrow will be orders of magnitude faster than they are today: he expects network connectivity to improve roughly three-million-fold over the next 10 years.

Multi-terabit networks using lasers are being explored, and early work is under way on a concept called "quantum networking," based on quantum physics. This involves "quantum entanglement," in which two particles are entangled and can then be separated by any distance, with a change to one instantly reflected in the other. Production, though, is not imminent.

Zettabyte Era

By 2015, one zettabyte of data will flow over the Internet. One zettabyte equals a stack of books reaching from Earth to Pluto 20 times. "This is the same as every person on Earth tweeting for 100 years, or 125 million years of your favourite one-hour TV show," says Evans. Our love of high-definition video accounts for much of the increase. By Cisco's count, 91% of Internet data in 2015 will be video.
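
Those comparisons are easy to sanity-check. Here is a back-of-envelope sketch in Python, assuming roughly 1 GB per hour-long standard-definition episode and 140-byte tweets (the article does not state its own assumptions):

```python
# Back-of-envelope check of the comparisons above. Assumed, not from
# the article: ~1 GB per hour-long episode, 140-byte tweets, 7 billion
# people.
ZETTABYTE = 10**21                     # bytes

# "125 million years of your favourite one-hour TV show"
hours_of_video = ZETTABYTE / 10**9     # at ~1 GB per hour
years_of_tv = hours_of_video / (24 * 365)
print(f"{years_of_tv:.2e} years of one-hour shows")   # ~1.1e8: same ballpark

# "every person on Earth tweeting for 100 years"
people, years, tweet_bytes = 7 * 10**9, 100, 140
tweets_each = ZETTABYTE / (people * tweet_bytes)
print(f"{tweets_each / (years * 365):.0f} tweets per person per day")
# ~28,000 per day: the comparison assumes essentially nonstop tweeting
```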

And what’s more, he said, the data itself is becoming richer, with every surface — from tables to signs — becoming a digital display, and images evolving from megapixel, to gigapixel, to terapixel definition. So, the so-called “zettaflood” will require vastly improved networks to move more data, and not drop the ball (or the packets) of our beloved video.

Adaptive technology

Technology is finally adapting to us. Evans cites image recognition, puzzle resolution, augmented reality and gesture-based computing as key examples of such technologies.

A technology called 3D printing will allow us to instantly manufacture any physical item, from food to bicycles, using printer technology. Through 3D printing, people in the future will download things as easily as they download music.

“3D printing is the process of joining materials to make objects from 3D model data, usually layer upon layer,” says Evans, adding: “It is not far that we will be able to print human organs.” In March, Dr Anthony Atala from Wake Forest Institute for Regenerative Medicine printed a proof-of-concept kidney mold onstage at TED. It was not living tissue, but the point was well-made.

A better you

“We think nothing of using pacemakers,” Evans points out. In the next 10 years, medical technologies will grow vastly more sophisticated as computing power becomes available in smaller forms. Devices like nanobots and the ability to grow replacement organs from our own tissues will be the norm. “The ultimate integration may be brain-machine interfaces that eventually allow people with spinal cord injuries to live normal lives,” he says.

Today we have mind-controlled video games and wheelchairs, software by Intel that can scan the brain and tell what you are thinking, and tools that can actually predict what you are going to do before you do it.

Cloud computing

By 2020, one-third of all data will live in or pass through the cloud. IT spending on innovation and cloud computing could top $1 trillion by 2014.

Right now, the voice search on an Android phone sends the query to Google's cloud to decipher it and return results. "We'll see more intelligence built into communication. Things like contextual and location-based information," says Evans.

With an always-connected device, the network can be more granular with presence information, tapping into a personal sensor to learn that a person is asleep and route an incoming call to voicemail. Or it could know that the person is traveling at 60 mph in a car, and that this is not the time for a video call.
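
As a sketch of what such presence-aware routing might look like, here is a minimal Python example. The Presence fields, thresholds and function names are all hypothetical, invented for illustration:

```python
# Hypothetical sketch of presence-aware call routing as described above.
# The Presence fields, thresholds and names are illustrative, not a
# real network API.
from dataclasses import dataclass

@dataclass
class Presence:
    asleep: bool       # e.g. reported by a personal/wearable sensor
    speed_mph: float   # e.g. inferred from GPS

def route_incoming_call(presence: Presence, video: bool) -> str:
    if presence.asleep:
        return "voicemail"        # don't wake the callee
    if video and presence.speed_mph > 10:
        return "audio-only"       # 60 mph in a car is no time for video
    return "ring"

print(route_incoming_call(Presence(asleep=False, speed_mph=60.0), video=True))
# -> audio-only
```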

Power of Power

How are all networked devices going to be powered, and who or what is going to power them? The answer, says Evans, lies in small things. Solar arrays will become increasingly important.

Technologies to make this more economically pragmatic are on their way. Sandia National Laboratories has produced solar cells that use 100 times less material with the same efficiency, MIT has developed technology that lets windows generate power without blocking the view, and inkjet-printed solar cells cut production waste by 90 per cent at significantly lower cost. Anything that generates or needs energy, Evans says, will be connected to or managed by an intelligent network.

World Is Flat

The ability of people to connect with each other all around the world, within seconds, via social media isn't just a social phenomenon, Evans says; it's a flattening out of who has access to technology. He cites the example of Wael Ghonim, the Middle East-based Google engineer whose Facebook page, "We are all Khaled Saeed," was a spark in the Egyptian uprising and one of the key events of the Arab Spring.

A smaller world also means faster information dissemination. The capture, dissemination and consumption of events are going from “near time” to “real time.” This in turn will drive more rapid influence among cultures.

Self-designed evolution

March 2010: Retina implant restores vision to blind patients.

April 2010: Trial of an artificial pancreas starts.

June 2011: Spinning heart (no pulse, no clogs and no breakdowns) developed.

Stephen Hawking says, “Humans are entering a stage of self-designed evolution.”

Taking the medical technology idea to the next level, healthy humans will be given the tools to augment themselves. While the early use of these technologies will be to repair unhealthy tissue or fix the consequences of brain injury, eventually designer enhancements will be available to all.

Ultimately, humans will use so much technology to mend, improve or enhance our bodies that we will become cyborgs. Futurist Ray Kurzweil is pioneering this idea with a concept he calls the singularity, the point at which man and machine merge and become a new species. (Kurzweil says this will happen by 2045.)


—Compiled by Beena Kuruvilla

July 28, 2011

The Walk Again Project

Over the past decade, neuroscientists at the Duke University Center for Neuroengineering (DUCN) have developed the field of brain-machine interface (BMI) into one of the most exciting—and promising—areas of basic and applied research in modern neuroscience. By creating a way to link living brain tissue to a variety of artificial tools, BMIs have made it possible for non-human primates to use the electrical activity produced by hundreds of neurons, located in multiple regions of their brains, to directly control the movements of a variety of robotic devices, including prosthetic arms and legs.

As a result, BMI research raises the hope that in the not-too-distant future, patients suffering from a variety of neurological disorders that lead to devastating levels of paralysis may be able to recover their mobility by harnessing their own brain impulses to directly control sophisticated neuroprostheses.
The Walk Again Project, an international consortium of leading research centers, represents a new paradigm for scientific collaboration among the world's academic institutions, bringing together a global network of scientific and technological experts, distributed across all the continents, to achieve a key humanitarian goal.

The project’s central goal is to develop and implement the first BMI capable of restoring full mobility to patients suffering from a severe degree of paralysis. This lofty goal will be achieved by building a neuroprosthetic device that uses a BMI as its core, allowing the patients to capture and use their own voluntary brain activity to control the movements of a full-body prosthetic device. This “wearable robot,” also known as an “exoskeleton,” will be designed to sustain and carry the patient’s body according to his or her mental will.

In addition to developing new technologies aimed at improving the quality of life of millions of people worldwide, the Walk Again Project also innovates by creating a completely new paradigm for global scientific collaboration among leading academic institutions. Under this model, a worldwide network of scientific and technological experts comes together in a major non-profit effort to make a fellow human being walk again, based on their collective expertise. These world-renowned scholars will contribute key intellectual assets as well as provide a base for continued fundraising capitalization of the project, setting clear goals to establish fundamental advances toward restoring full mobility for patients in need.

Walk Again Project Homepage

June 2, 2011

Archinoetics: Helping Patients with Locked-In Syndrome

The Challenge

Imagine waking up one day and not being able to move any part of your body. You try to call for help but discover you cannot speak. You attempt to look around the room to see if anyone is nearby, but your eyes won’t move either. Your mind is working perfectly, but you are trapped in your own body.
People with this rare, horrific condition, known as "locked-in syndrome," can become this way almost instantly after a brainstem stroke or severe trauma, or more gradually from amyotrophic lateral sclerosis (ALS, otherwise known as Lou Gehrig's disease). In some cases, people retain control over their eyes, allowing them to use assistive technologies like eye-tracking devices to communicate through a computer. Unfortunately, for those who lose eye control, communication is significantly more challenging and solutions are severely limited.

Our Approach

There’s no success more gratifying than helping a locked-in man communicate with his family for the first time in years.
Archinoetics developed brain-machine interfaces (BMIs, also known as brain-computer interfaces or BCIs) that enable people to interact with and control machines through our custom-designed functional brain imaging system. These systems, which use functional near-infrared imaging (fNIR), monitor the brain's activity in real time to detect what type of mental task a person is performing. By giving the subject a choice of a few tasks to select from, the person can create a signal that our software interprets, allowing them to manipulate basic computer interfaces.

In our research lab, testing the system on healthy people, everything appeared to function perfectly. The real test came when we visited a man who, because he was locked-in, had not been able to communicate with his family in years. The Archinoetics team looked on anxiously as the sensors were placed on his head and the computer started receiving data. As with many studies involving human subjects, our first tests did not work. But over the course of several days, we worked through a number of challenges and were able to help this man answer several yes-or-no questions that his family wanted to ask him.
For the first time in years, he communicated!
After our initial success, we continued to improve our communication software, making it more reliable and faster to use. In parallel, we worked on some fun applications that give locked-in people a way to entertain themselves while practicing the mental tasks required to control the system for communication. These applications included video games and painting programs. A screenshot of the video game appears here; it shows a dolphin that the person controls in an attempt to eat the fish that swim by. The painting application is discussed below.
Brain Painting
Archinoetics has developed a BCI application called "brain painting." It allows someone to paint by consciously modifying the level of activity in a region of his or her brain. Typically this means either "singing in your head" or repeating nonsense syllables in your head (such as "la la la"). The first activity engages the language area, raising the signal measured by OTIS, whereas the second lowers it. In addition to being a fun creative tool, brain painting helps people learn the skills necessary to use a BCI effectively for communication.
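
To make that up/down scheme concrete, here is a minimal sketch in Python of how such a two-state fNIR signal might be mapped to commands. The baseline, margin and sample values are invented for illustration; this is not Archinoetics' actual software:

```python
# Illustrative sketch of the up/down control described above: sustained
# "singing in your head" raises the measured signal, nonsense syllables
# lower it. Baseline, margin and the sample values are invented.

def classify(window, baseline=1.0, margin=0.15):
    """Map a window of fNIR samples to an up/down/idle command."""
    level = sum(window) / len(window)
    if level > baseline + margin:
        return "up"       # e.g. raise the brush in brain painting
    if level < baseline - margin:
        return "down"     # e.g. lower the brush
    return "idle"

print(classify([1.30, 1.25, 1.40]))  # -> up
print(classify([0.80, 0.75, 0.70]))  # -> down
```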
For brain painting, Archinoetics worked closely with the late artist Peggy Chun, whose tropical watercolor paintings made her a household name in Hawaii. Peggy was diagnosed with ALS in 2002, but never let the disease stop her from painting. As she became paralyzed, she switched to painting with her left hand, then to holding the paintbrush in her teeth. Even when she was only able to move her eyes, Peggy used an eye-tracking system to communicate and paint. At Archinoetics, we helped Peggy become the world's first "brain painter" (see her most famous brain painting, entitled "Navajo Nightfall," on the laptop screen in the photo). Sadly, Peggy passed away in 2008, but her memory and spirit live on in her beautiful paintings.
To view or purchase Peggy’s artwork, please visit her website at www.peggychun.com.

Support

This research is a collaboration with the University of Virginia and Georgia Tech, and has received support from the National Science Foundation under Grants No. 0705804 and No. 0512003.
This article is from the Archinoetics website.

March 1, 2011

Punk rock skeleton demos mind control system

Who says punk is dead? In the video above, a skeleton with a mohawk is helping to visualise how a new neural implant device reads brain signals and interprets them to control a prosthetic arm. The yellow spikes radiating from the skeleton's head represent the firing of motor neurons in the brain. Each neuron is tuned to recognise a different direction in space, so as the arm moves, the spikes change to reflect the changing direction. By adding together the output of all the neurons, the direction of the arm's movement - represented by the blue arrow - can be predicted.
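
The decoding idea the skeleton illustrates is the classic population-vector scheme. Below is a minimal Python sketch of it, using textbook cosine tuning; all the numbers are illustrative, and this is not Moran's actual code:

```python
# Minimal population-vector decoder, as the skeleton video illustrates:
# each neuron prefers one direction, and summing preferred-direction
# vectors weighted by firing rate recovers the arm's direction. Cosine
# tuning and all numbers here are textbook illustration.
import math, random

random.seed(1)
N, BASE, MOD = 100, 10.0, 8.0
preferred = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def firing_rate(pref, movement):
    # Cosine tuning: peak rate when movement matches the preferred direction.
    return BASE + MOD * math.cos(movement - pref)

movement = math.radians(70)
rates = [firing_rate(p, movement) for p in preferred]

# Sum preferred-direction unit vectors, weighted by rate above baseline.
x = sum((r - BASE) * math.cos(p) for r, p in zip(rates, preferred))
y = sum((r - BASE) * math.sin(p) for r, p in zip(rates, preferred))
print(f"decoded direction: {math.degrees(math.atan2(y, x)):.1f} deg")  # ~70
```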
Mind control devices are quite the rage these days, with systems designed to control everything from iPad apps, to prosthetic limbs, to cars. This system, developed by Daniel Moran of Washington University in St. Louis, uses a grid of disc-shaped electrodes, inserted between the brain and the skull, to read electrical activity in the brain. It's more precise than electrodes placed outside the skull, and less invasive than probes inserted into the brain itself.
With further refinements, the system could give amputees better control over prosthetic limbs without overly invasive surgical implants.

Original article from New Scientist magazine

January 26, 2011

Plastic artificial retina is a hit with nerve cells

Light-sensitive plastic might be key to repairing damaged retinas. Creating neuro-prosthetic devices such as retinal implants is tricky because biological tissue doesn't mix well with electronics. Metals and inorganic semiconductor materials can adversely affect the health or function of nerve cells, says Fabio Benfenati at the Italian Institute of Technology in Milan. And over time the body's natural defences can be incredibly hostile and corrosive to such materials.
The emergence of flexible, organic semiconductor materials now offers an alternative. To test them, Benfenati and colleagues seeded nerve cells onto the surface of a light-sensitive semiconducting polymer similar to those used in some solar cells. The cells grew into extensive networks containing thousands of neurons. "We have proved that the materials are highly biocompatible," says Benfenati.
What's more, the presence of the cells did not interfere with the optical properties of the polymer. The team were able to use the neuron-coated polymer as an electrode in a light-driven electrolytic cell.

Artificial colour vision

When short pulses of light were aimed at specific sections of the polymer, only local neurons fired, suggesting the material has the spatial selectivity needed for artificial retinas, says Benfenati.
"It's very elegant science," says Robert Greenberg, whose company Second Sight is close to receiving clinical approval for its retinal prosthesis. But Greenberg questions whether the electrical currents generated would be sufficient to stimulate nerve cells in the eye.
It's still too early to tell, says Benfenati. But he thinks the new material is worth further study, because of another benefit. It can be tuned to respond only to specific wavelengths of light, raising the prospect of creating artificial colour vision, he says.

November 5, 2010

Awesome Interactive Anatomy Software

I just found this really amazing site that allows you to see and interact with the human anatomy in 3D. It's called Visible Body.
If you want to check out the free trial, go to Interactive Anatomy

If you are looking for the deluxe version of this software, the Real Anatomy Software was my hands-down favorite.

There is even a free download at humananatomycourse.com, which has the ultimate home-study course for human anatomy and physiology.

December 13, 2009

Stanford researchers develop the next generation of retinal implants

A team of Stanford researchers has developed a new generation of retinal implants that aims to provide higher resolution and make artificial vision more natural.

This could be a boon to the several million people in the United States who are blind or visually impaired as a result of retinal degeneration. Every year, 50,000 people in the United States become blind, according to the National Federation of the Blind. But only a couple of dozen Americans have retinal implants.

The team, consisting of ophthalmology Associate Professor Daniel Palanker, electrical engineering Assistant Professor Peter Peumans and neurobiology Assistant Professor Stephen Baccus of Stanford, and biophysics Assistant Professor Alexander Sher of the University of California-Santa Cruz, presented their research Dec. 9 at the International Electron Devices Meeting in Baltimore.

Retinal implants are arrays of electrodes, placed at the back of the eye, which partially restore vision to people with diseases that cause their light-sensing photoreceptors to die. Typically, a camera embedded in glasses collects images and sends them to a computer that converts the images to electrical signals, which are then transmitted to the implant and interpreted by the brain. There are several private companies and universities working on different versions, but most people with implants can only make out fuzzy borders between light and dark areas.

Analogous to high-definition TV

The Stanford implant would allow patients to make out the shape of objects and see meaningful images. "A good analogy is high-def TV," Baccus said. "If you only have a few pixels of stimulation, you're not going to see much. One clear advantage of our implant is high resolution." The Stanford implant has approximately 1,000 electrodes, compared to 60 electrodes commonly found in fully implantable systems.

What's more, patients would not have to move their heads to see, as they do with older implants. Although we don't notice it, images fade when we do not move our eyes, and we make several tiny eye movements each second to prevent fading. With older retinal implants, the camera moves when the head moves, but not when the eyes move.

The Stanford implant, on the other hand, retains the natural link between eye movements and vision, Palanker said. A patient would wear a video camera that transmits images to a processor, which displays the images on an LCD screen on the inside of patient's goggles. The LCD display transmits infrared light pulses that project the image to photovoltaic cells implanted underneath the retina. The photovoltaic cells convert light signals into electrical impulses that in turn stimulate retinal neurons above them.
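
As a rough illustration of that chain, the sketch below downsamples a camera frame onto an implant-sized grid and maps brightness to infrared pulse intensity. The 32 x 32 grid (roughly the 1,000 pixels mentioned above) and the linear brightness-to-pulse mapping are assumptions for the example, not the Stanford design:

```python
# Rough illustration of the processing chain: downsample a camera frame
# to an implant-sized grid, then map brightness to IR pulse intensity.
# Grid size and the linear mapping are assumptions for this sketch.

def downsample(frame, grid=(32, 32)):
    """Average-pool a 2-D list of 0-255 brightness values onto the grid."""
    rh, cw = len(frame) // grid[0], len(frame[0]) // grid[1]
    return [[sum(frame[i][j]
                 for i in range(gi * rh, (gi + 1) * rh)
                 for j in range(gj * cw, (gj + 1) * cw)) / (rh * cw)
             for gj in range(grid[1])]
            for gi in range(grid[0])]

def to_ir_pulses(grid, max_intensity=1.0):
    # Brighter regions -> stronger IR pulses -> stronger stimulation.
    return [[max_intensity * v / 255.0 for v in row] for row in grid]

frame = [[(i + j) % 256 for j in range(64)] for i in range(64)]
pulses = to_ir_pulses(downsample(frame))
print(len(pulses), len(pulses[0]))   # 32 32
```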

As patients move their eyes, the light falls on a different part of the implant, just as visible light falls on different parts of the retina. "The Palanker group has developed a device that actually allows patients to see infrared light on the implant and visible light through the normal optics of the eye," Baccus said.

"It's a sophisticated approach," said Shelley Fried, a research scientist working on the Boston Project. "It should definitely be helpful."

This is also the first flexible implant, and it makes use of a material commonly used in computer chips and solar cells. Peumans and his team at the Stanford Nanofabrication Facility engineered a silicon implant with tiny bridges that allow it to fold over the shape of the eye. "The advantage of having it flexible is that relatively large implants can be placed under the retina without being deformed, and the whole image would stay in focus," Palanker said. A set of flexible implants can cover an even larger portion of the retina, allowing patients to see the entire visual field presented on the display.

"It's really a very interesting idea," Fried said. "The ability to get all the electrodes to sit perfectly on the retina would be a very nice advantage." He said that a spring technology allows their device to conform to the contour of the eye, maintaining close contact between electrodes and neurons.

The tiny crevices between the bridges serve a useful function. Distant retinal cells migrate to the implant and fill in the spaces between the electrodes. Previously, one major challenge was to get cells close enough to the device to receive signals, Fried said. "If we can find a way to bring the retinal neurons closer to the electrode, that would have a huge advantage," he said.

Implanted under the retina

The Stanford device is implanted under the retina, at the earliest possible stage in the visual pathway. "In many degenerative diseases where the photoreceptors are lost, you lose the first and second cells in the pathway," Baccus said. "Ideally you want to talk to the next cell that's still there." The goal is to preserve the complex circuitry of the retina so that images appear more natural.

"With most of the current devices, we are replicating only very few elements of normal retinal signaling," Fried said.

To further enhance the naturalness of restored vision, Baccus and Palanker are developing software that performs functions that the retina normally performs. For example, cells in the retina tend to enhance the appearance of edges, or boundaries between objects. What's more, objects that we focus on are seen in better detail than objects that appear at the corners of our eyes.

The researchers hope to incorporate these features in the next generation of retinal implants. Baccus envisions a day when patients will be able to adjust their implants to see objects better, just like an optometrist adjusts the lens while we read a letter chart.

Palanker and his team will test the ability of animals with retinal diseases similar to those in humans to use the implant to discriminate visual patterns.

One of the major challenges is to understand how the retina works, especially after it is damaged. "We operate on the assumption that the photoreceptors are gone, but otherwise it's a normal retina," Baccus said. "This is almost certainly not true."

Future devices should learn, patient by patient, the new language needed to communicate with the altered circuitry of the damaged retina, he said. Even if the retinal circuitry were unaltered, the brain would still have to learn how to interpret the signals. By mimicking normal vision, retinal implants may overcome these obstacles and bring enhanced vision to blind patients.

Provided by Stanford University

October 14, 2009

One step closer to an artificial nerve cell

Scientists at Karolinska Institutet and Linköping University (Sweden) are well on the way to creating the first artificial nerve cell that can communicate specifically with nerve cells in the body using neurotransmitters. The technology has been published in an article in Nature Materials.

The methods that are currently used to stimulate nerve signals in the nervous system are based on electrical stimulation. Examples of this are cochlear implants, which are surgically inserted into the cochlea in the inner ear, and electrodes that are used directly in the brain. One problem with this method is that all cell types in the vicinity of the electrode are activated, which gives undesired effects.

Scientists have now used an electrically conducting plastic to create a new type of "delivery electrode" that instead releases the neurotransmitters that brain cells use to communicate naturally. The advantage of this is that only neighbouring cells that have receptors for the specific neurotransmitter, and that are thus sensitive to this substance, will be activated.

The scientists demonstrate in the article in Nature Materials that the delivery electrode can be used to control the hearing function in the brains of guinea pigs.

"The ability to deliver exact doses of neurotransmitters opens completely new possibilities for correcting the signalling systems that are faulty in a number of neurological disease conditions", says Professor Agneta Richter-Dahlfors who has led the work, together with Professor Barbara Canlon.

The scientists intend to continue with the development of a small unit that can be implanted into the body. It will be possible to program the unit such that the release of neurotransmitters takes place as often or as seldom as required in order to treat the individual patient. Research projects that are already under way are targeted towards hearing, epilepsy and Parkinson's disease.

The research is being carried out in collaboration between the research groups of Professor Agneta Richter-Dahlfors and Professor Barbara Canlon, together with Professor Magnus Berggren's group at Linköping University. The work falls under the auspices of the Center of Excellence in Organic Bioelectronics, financed by the Swedish Foundation for Strategic Research and led by Magnus Berggren and Agneta Richter-Dahlfors.

More information:

Daniel T. Simon, Sindhulakshmi Kurup, Karin C. Larsson, Ryusuke Hori, Klas Tybrandt, Michel Goiny, Edwin W. H. Jager, Magnus Berggren, Barbara Canlon and Agneta Richter-Dahlfors. Organic electronics for precise delivery of neurotransmitters to modulate mammalian sensory function. Nature Materials, advance online publication, 5 June 2009.

Provided by Karolinska Institutet

October 10, 2009

Brain-to-brain communication demonstrated

Brain-to-brain ("B2B") communication has been achieved for the first time by Dr. Christopher James of the University of Southampton.



While attached to an EEG amplifier, the first person generated and transmitted a series of binary digits by imagining moving their left arm for zero and their right arm for one. That data was sent via the Internet to another PC. The second person was also attached to an EEG amplifier, and their PC flashed an LED lamp at two different frequencies, one for zero and the other for one.

The pattern of the flashing LEDs was too subtle to be noticed by the second person, but was picked up by electrodes detecting visual cortex activity. The PC deciphered whether a zero or a one was being transmitted, for an end-to-end bandwidth of about 0.14 bits per second.
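
On the receiving side, the decoding amounts to asking which flicker frequency dominates the visual-cortex signal. Here is a toy Python reconstruction of that step; the frequencies, sample rate and noise level are invented, and the real system's signal processing was surely more involved:

```python
# Toy reconstruction of the receiving side: the LED flickers at one of
# two frequencies, the visual cortex entrains to it, and the PC checks
# which frequency dominates the recorded signal. All numbers invented.
import math, random

random.seed(0)
FS, DUR = 256, 2.0        # sample rate (Hz) and seconds per bit
F0, F1 = 10.0, 15.0       # flicker frequency for bit 0 / bit 1

def eeg_for_bit(bit):
    """Simulate a noisy visual-cortex response to the flicker."""
    f = F1 if bit else F0
    return [math.sin(2 * math.pi * f * t / FS) + random.gauss(0, 1.0)
            for t in range(int(FS * DUR))]

def power_at(signal, f):
    # Correlate with sine and cosine at f: a single-bin Fourier transform.
    s = sum(x * math.sin(2 * math.pi * f * t / FS) for t, x in enumerate(signal))
    c = sum(x * math.cos(2 * math.pi * f * t / FS) for t, x in enumerate(signal))
    return s * s + c * c

def decode(signal):
    return 1 if power_at(signal, F1) > power_at(signal, F0) else 0

bits = [1, 0, 1, 1, 0]
print([decode(eeg_for_bit(b)) for b in bits])   # should recover [1, 0, 1, 1, 0]
```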

"B2B could be of benefit such as helping people with severe debilitating muscle wasting diseases, or with the so-called 'locked-in' syndrome, to communicate and it also has applications for gaming," said James.

Possible extensions of the research include two-way and multiuser B2B communication with faster, broader-bandwidth transmission by using more complex signal generation and pattern recognition. - Ed.

Source: University of Southampton news release


October 7, 2009

Nissan's robot cars mimic fish to avoid crashing


Nissan has developed a mini robotic car that can move autonomously in groups while avoiding crashing into obstacles (including other cars).

The Eporo, Nissan says, is the first robot car designed to move in a group by sharing its position and other information. The aim is to incorporate the technology into passenger cars to reduce accidents and traffic jams.

Although a group of Eporos may look like a gang of cybernetic Jawas, Nissan says the cars' design was inspired by the way fish move in schools.

An evolution of the bumblebee-inspired BR23C robot car unveiled last year, the Eporo uses Nissan's collision avoidance technology to travel in groups. Check out BR23C trying to get away from a Japanese lady in this video.

Eporo can dodge obstacles just like fish.

The automaker studied how large schools of fish can move without colliding. It says Eporo imitates three rules of fish movement: avoiding crashes, traveling side by side, and keeping close to other members of the school.

The robots use laser range finders and ultra-wideband radio to determine distance to obstacles. They also communicate with each other to form the most efficient group formation to maneuver through tight spots.
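
The three rules Nissan describes are essentially the classic "boids" flocking rules, which are easy to sketch in code. The weights, radii and two-dimensional setup below are invented for illustration and have nothing to do with Nissan's actual control software:

```python
# A minimal sketch of the three schooling rules the article lists:
# avoid collisions, travel side by side, keep close to the group.
# Classic "boids" scheme with invented weights and radii.
import random

random.seed(2)
N, AVOID_R = 10, 1.0
pos = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(N)]
vel = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

def step(dt=0.1, w_avoid=1.5, w_align=0.5, w_cohere=0.1):
    for i in range(N):
        # Rule 3: steer toward the group's centre (cohesion).
        ax = w_cohere * (sum(p[0] for p in pos) / N - pos[i][0])
        ay = w_cohere * (sum(p[1] for p in pos) / N - pos[i][1])
        # Rule 2: match the group's average heading (side-by-side travel).
        ax += w_align * (sum(v[0] for v in vel) / N - vel[i][0])
        ay += w_align * (sum(v[1] for v in vel) / N - vel[i][1])
        # Rule 1: push away from any neighbour that is too close.
        for j in range(N):
            if j != i:
                dx = pos[i][0] - pos[j][0]
                dy = pos[i][1] - pos[j][1]
                d2 = dx * dx + dy * dy
                if 0 < d2 < AVOID_R * AVOID_R:
                    ax += w_avoid * dx / d2
                    ay += w_avoid * dy / d2
        vel[i][0] += ax * dt
        vel[i][1] += ay * dt
    for p, v in zip(pos, vel):
        p[0] += v[0] * dt
        p[1] += v[1] * dt

for _ in range(200):
    step()
closest = min(((pos[i][0] - pos[j][0]) ** 2 + (pos[i][1] - pos[j][1]) ** 2) ** 0.5
              for i in range(N) for j in range(i + 1, N))
print(f"closest pair after 200 steps: {closest:.2f}")
```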

Eporo stands for "Episode O (Zero) Robot." That zinger of a mouthful means zero episodes, as in zero accidents and zero emissions.

Nissan intends to show off Eporo at the Ceatec trade show next week in Tokyo.

Original article by Tim Hornyak for Crave

October 6, 2009

Understanding A Cell's Split Personality Aids Synthetic Circuits


In this colony, the bacteria lighting up in green are those being "turned on," while those in red remain "off."
As scientists work toward making genetically altered bacteria create living "circuits" to produce a myriad of useful proteins and chemicals, they have logically assumed that the single-celled organisms would always respond to an external command in the same way.

Alas, some bacteria apparently have an individualistic streak that makes them zig when the others zag.

A new set of experiments by Duke University bioengineers has uncovered the existence of "bistability," in which an individual cell has the potential to live in either of two states, depending on which state it was in when stimulated.

Taking into account the effects of this phenomenon should greatly enhance the future efficiency of synthetic circuits, said biomedical engineer Lingchong You of Duke's Pratt School of Engineering and the Duke Institute for Genome Sciences & Policy.

In principle, re-programmed bacteria in a synthetic circuit can be useful for producing proteins, enzymes or chemicals in a coordinated way, or even delivering different types of drugs or selectively killing cancer cells, the scientists said.

Researchers in this new field of synthetic biology "program" populations of genetically altered bacteria to direct their actions in much the same way that a computer program directs a computer. In this analogy, the genetic alteration is the software, the cell the computer. The Duke researchers found that not only does the software drive the computer's actions, but the computer in turn influences the running of the software.

"In the past, synthetic biologists have often assumed that the components of the circuit would act in a predictable fashion every time and that the cells carrying the circuit would just serve as a passive reactor," You said. "In essence, they have taken a circuit-centric view for the design and optimization process. This notion is helpful in making the design process more convenient."

But it's not that simple, say You and his graduate student Cheemeng Tan, who published the results of their latest experiments early online in the journal Nature Chemical Biology.

"We found that there can be unintended consequences that haven't been appreciated before," said You. "In a population of identical cells, some can act one way while others act in another. However, this process appears to occur in a predictable manner, which allows us to take into account this effect when we design circuits."

Bistability is not unique to biology. In electrical engineering, for example, bistability describes the functioning of a toggle switch, a hinged switch that can assume either one of two positions – on or off.

"The prevailing wisdom underestimated the complexity of these synthetic circuits by assuming that the genetic changes would not affect the operation of the cell itself, as if the cell were a passive chassis," said Tan. "The expression of the genetic alteration can drastically impact the cell, and therefore the circuit.

"We now know that when the circuit is activated, it affects the cell, which in turn acts as an additional feedback loop influencing the circuit," Tan said. "The consequences of this interplay have been theorized but not demonstrated experimentally."

The scientists conducted their experiments using a genetically altered colony of the bacterium Escherichia coli (E. coli) in a simple synthetic circuit. When the colony of bacteria was stimulated by external cues, some of the cells went to the "on" position and grew more slowly, while the rest went to the "off" position and grew faster.

"It is as if the colony received the command not to expand too fast when the circuit is on," Tan explained. "Now that we know that this occurs, we used computer modeling to predict how many of the cells will go to the 'on' or 'off' state, which turns out to be consistent with experimental measurements"

The experiments were supported by the National Science Foundation, the National Institutes of Health and a David and Lucille Packard Fellowship. Duke's Philippe Marguet was also a member of the research team.


Adapted from materials provided by Duke University, via EurekAlert!, a service of AAAS.

October 3, 2009

It's tempting to call them lords of the flies. For the first time, researchers have controlled the movements of free-flying insects from afar, as if they were tiny remote-controlled aircraft.

Green beetles

The Berkeley team implanted electrodes into the brain and muscles of two species: green June beetles called Cotinus texana from the southern US, and the much larger African species Mecynorrhina torquata. Both responded to stimulation in much the same way, but the weight of the electronics and their battery meant that only Mecynorrhina – which can grow to the size of a human palm – was strong enough to fly freely under radio control.

A particular series of electrical pulses to the brain causes the beetle to take off. No further stimulation is needed to maintain the flight. Though the average length of flights during trials was just 45 seconds, one lasted for more than 30 minutes. A single pulse causes a beetle to land again.

The insects' flight can also be directed. Pulses sent to the brain trigger a descent, on average by 60 centimetres. The beetles can be steered by stimulating the wing muscle on the opposite side from the direction they are required to turn, though this works only three-quarters of the time. After each manoeuvre, the beetles quickly right themselves and continue flying parallel to the ground.
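
Taken together, the stimulation scheme reads like a tiny command protocol. The sketch below is purely illustrative; the pulse counts and frequencies are invented, not the Berkeley team's parameters:

```python
# The control scheme as described above, written as a tiny command
# protocol. The targets match the article (brain vs. wing muscle), but
# the pulse counts and frequencies are invented for illustration.

def command(action):
    if action == "takeoff":
        return {"target": "brain", "pulses": 100, "hz": 100}  # pulse train starts flight
    if action == "land":
        return {"target": "brain", "pulses": 1, "hz": 0}      # a single pulse lands
    if action in ("left", "right"):
        # Steering stimulates the wing muscle OPPOSITE the turn direction.
        muscle = "right_wing_muscle" if action == "left" else "left_wing_muscle"
        return {"target": muscle, "pulses": 20, "hz": 100}
    raise ValueError(f"unknown action: {action}")

for a in ("takeoff", "left", "land"):
    print(a, "->", command(a))
```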

Brain insights

Tyson Hedrick, a biomechanist at the University of North Carolina, Chapel Hill, who was not involved in the research, says he is surprised at the level of control achieved, because the controlling impulses were delivered to comparatively large regions of the insect brain.

Precisely stimulating individual neurons or circuits may harness the beetles more precisely, he told New Scientist, but don't expect aerial acrobatics. "It's not entirely clear how much control a beetle has over its own flight," Hedrick says. "If you've ever seen a beetle flying in the wild, they're not the most graceful insects."

The research may be more successful in revealing just how the brain, nerves and muscles of insects coordinate flight and other behaviours than at bringing six-legged cyborg spies into service, Hedrick adds. "It may end up helping biologists more than it will help DARPA."

Brain-recording backpacks

It's a view echoed by Reid Harrison, an electrical engineer at the University of Utah, Salt Lake City, who has designed brain-recording backpacks for insects. "I'm sceptical about their ability to do surveillance for the following reason: no one has solved the power issue."

Batteries, solar cells and piezoelectrics that harvest energy from movement cannot provide enough power to run electrodes and radio transmitters for very long, Harrison says. "Maybe we'll have some advances in those technologies in the near future, but based on what you can get off the shelf now it's not even close."

Journal reference: Frontiers in Integrative Neuroscience, DOI: 10.3389/neuro.07.024.2009

Original article by Ewen Callaway for New Scientist

A Startup That Builds Biological Parts

Ginkgo BioWorks aims to push synthetic biology to the factory level.

In a warehouse building in Boston, wedged between a cruise-ship drydock and Au Bon Pain's corporate headquarters, sits Ginkgo BioWorks, a new synthetic-biology startup that aims to make biological engineering easier than baking bread. Founded by five MIT scientists, the company offers to assemble biological parts--such as strings of specific genes--for industry and academic scientists.

Biological parts: Ginkgo BioWorks, a synthetic-biology startup, is automating the process of building biological machines. Shown here is a liquid-handling robot that can prepare hundreds of reactions.
Credit: Ginkgo BioWorks

"Think of it as rapid prototyping in biology--we make the part, test it, and then expand on it," says Reshma Shetty, one of the company's cofounders. "You can spend more time thinking about the design, rather than doing the grunt work of making DNA." A very simple project, such as assembling two pieces of DNA, might cost $100, with prices increasing from there.

Synthetic biology is the quest to systematically design and build novel organisms that perform useful functions, such as producing chemicals, using genetic-engineering tools. The field is often considered the next step beyond metabolic engineering because it aims to completely overhaul existing systems to create new functionality rather than improve an existing process with a number of genetic tweaks.

Scientists have so far created microbes that can produce drugs and biofuels, and interest among industrial chemical makers is growing. While companies already exist to synthesize pieces of DNA, Ginkgo assembles synthesized pieces of DNA to create functional genetic pathways. (Assembling specific genes into long pieces of DNA is much cheaper than synthesizing that long piece from scratch.)

Ginkgo will build on technology developed by Tom Knight, a research scientist at MIT and one of the company's cofounders, who started out his scientific career as an engineer. "I'm interested in transitioning biology from being sort of a craft, where every time you do something it's done slightly differently, often in ad hoc ways, to an engineering discipline with standardized methods of arranging information and standardized sets of parts that you can assemble to do things," says Knight.

Scientists generally create biological parts by stitching together genes with specific functions, using specialized enzymes to cut and sew the DNA. The finished part is then inserted into bacteria, where it can perform its designated task. Currently, this process is mostly done by a lab technician or graduate student; consequently, the process is slow, and the resulting construct isn't optimized for use in other projects. Knight developed a standardized way of putting together pieces of DNA, called the BioBricks standard, in which each piece of DNA is tagged on both sides with DNA connectors that allow pieces to be easily interchanged.

"If your part obeys those rules, we can use identical reactions every time to assemble those fragments into larger constructs," says Knight. "That allows us to standardize and automate the process of assembly. If we want to put 100 different versions of a system together, we can do that straightforwardly, whereas it would be a tedious job to do with manual techniques." The most complicated part that Ginkgo has built to date is a piece of DNA with 15 genes and a total of 30,000 DNA letters. The part was made for a private partner, and its function has not been divulged.

Assembling parts is only part of the challenge in building biological machines. Different genes can have unanticipated effects on each other, interfering with the ultimate function. "One of the things we'll be able to do is to assemble hundreds or thousands of versions of a specific pathway with slight variations," says Knight. Scientists can then determine which version works best.

So far, Knight says, the greatest interest has come from manufacturing companies making chemicals for cosmetics, perfumes, and flavorings. "Many of them are trying to replace a dirty chemical process with an environmentally friendly, biologically based process," he says.

Ginkgo is one of just a handful of synthetic-biology companies. Codon Devices, a well-funded startup that synthesized DNA, ceased operations earlier this year. "The challenge now is not to synthesize genes; there are a few companies that do that," says Shetty. "It's to build pathways that can make specific chemicals, such as fuels." And unlike Codon, Ginkgo is starting small. The company is funded by seed money and a $150,000 loan from Lifetech Boston, a program to attract biotech to Boston. Its lab space is populated with banks of PCR machines, which amplify DNA, and liquid-handling robots, mostly bought on eBay or from other biotech firms that have gone out of business. And the company already has a commercial product--a kit sold through New England Biolabs that allows scientists to put together parts on their own.

"If successful, they will be providing a very important service for synthetic biology," says Chris Voigt, a synthetic biologist at the University of California, San Francisco. "There isn't anybody else who would be characterizing and providing parts to the community. I think that this type of research needs to occur outside of the academic community--at either a company or a nonprofit institute."

Original article by Emily Singer for MIT Technology Review

October 2, 2009

Locust flight simulator helps robot insects evolve


Right: Smoke signals help robots fly better (Image: Simon Walker, Animal Flight Group, Oxford University)

A LOCUST flight simulator could be the key to perfecting the ultimate surveillance machine: an artificial flying insect. The simulator can model the way wings of varying shapes and surface features beat, as well as how they change their shape during flight.

The device was created using extremely high-speed flash photography to track the way smoke particles flow over a locust's wings in a wind tunnel - a technique called particle flow velocimetry. This allowed researchers at the University of Oxford to build a computer model of the insect's wing motion. They then built software that mimicked not only this motion, but also how wing surface features, such as structural veins and corrugations, and the wings' deformation as they flap, change aerodynamic performance.

The work has shown that wings' surface structures are crucial to efficient lift generation, says lead researcher Adrian Thomas (Science, DOI: 10.1126/science.1175928).

The simulator could be a big step forward for the many teams around the world who are designing robotic insects, mainly for military purposes, though Thomas expects them to have a massive role as toys, too. "Imagine sitting in your living room doing aerial combat with radio-controlled dragonflies. Everybody would love that," he says.

Until now, modelling insect wings involved building physical replicas from rigid materials and estimating how they might move from observations of insect flight. Thomas hopes the simulator will take the guesswork out of the process, especially as every flying insect has uniquely shaped wings and wing beat patterns.

Building miniature aircraft is of great interest to the armed forces. In the UK, for example, the Ministry of Defence wants to create a device that can fly in front of a convoy and detect explosives on the road ahead. In the US, the Pentagon's research arm DARPA is funding development of a "nano air vehicle" (NAV) for surveillance that it states must weigh no more than 10 grams and have only a 7.5-centimetre wingspan.

Last month, DARPA contractor AeroVironment of Monrovia, California, demonstrated the first two-winged robot capable of hovering flight (see video at http://bit.ly/18LR8U). It achieved a stable take-off and hovered for 20 seconds. Other DARPA-funded projects by Micropropulsion and Daedalus Flight Systems are also thought to have achieved hovering robotic flight this year.

"Getting stable hover at the 10-gram size scale with beating wings is an engineering breakthrough, requiring much new understanding and invention," says Ronald Fearing, a micromechanics and flight researcher at the University of California, Berkeley. "The next step will be to get the flight efficiency up so hover can work for several minutes."

But how can such machines be made more efficient? Better batteries and lighter materials will help, but most important will be improving wing structure so the aircraft more accurately imitate - or even improve upon - the way insects fly.

So how do insects fly? For a long time no one really knew. In 1919, German aeronautical engineer Wilhelm Hoff calculated that a pollen-laden bumblebee should not have enough lift to get airborne according to the rules of aerodynamics as understood at the time.

It wasn't until 1981 that Tony Maxworthy of the University of Southern California hit on a possible reason: his working model of a fly's wings, immersed in oil, showed large vortices were spinning off the leading edge of the wing as it beat (Annual Review of Fluid Mechanics, vol 13, p 329). Within the vortices air is moving at high velocity, and is therefore at low pressure, hinting at a lift-creating mechanism unlike that of conventional aircraft, in which an angled wing travelling forward deflects air downwards, creating an opposing upward force.

In 1996 Thomas was a member of Charles Ellington's team at the University of Cambridge, which identified the mechanism by which bugs created high lift forces - using a model of a hawkmoth. "We found a leading-edge vortex that was stable over the whole of the downstroke," says Thomas.

The nature of the leading-edge vortex is dependent on the size of the wings, their number, the pattern described by the beating wing and the wing structure.

This work has laid the foundations for researchers such as Robert Wood and his team at Harvard University, who are investigating ways to make insect wings (Bioinspiration and Biomimetics, DOI: 10.1088/1748-3182/4/3/036002). They have developed a new way to build flexible wings from moulds using microchip manufacturing techniques. Using elastic polymers and elegant, vein-like supporting structures, the researchers can build wings with variable camber, and with different corrugations embossed in them, in an attempt to mimic the in-flight aerodynamics and deformation of real insect wings.

Thomas is also focusing on the way insect wings deform in flight. "If we use a wing model with all the complex curves, twists and corrugations of the real insect it is 50 per cent more efficient than a model with rigid flat-plate wings, for the same lift generation. That would be a huge saving in power for a micro air vehicle," he says.

Although the Oxford team's simulator is geared for locust wings at present, the researchers are adjusting the software to model the hoverfly - with other insect types to follow.

"What we've shown is that modern aerodynamics really can accurately model insect flight," Thomas says. "That old myth about aerodynamics not being able to model bumblebee flight really is dead now."

Original article written by Paul Marks for New Scientist

Nanotech researchers develop artificial pore


CINCINNATI—Using an RNA-powered nanomotor, University of Cincinnati (UC) biomedical engineering researchers have successfully developed an artificial pore able to transmit nanoscale material through a membrane.

In a study led by UC biomedical engineering professor Peixuan Guo, PhD, members of the UC team inserted the modified core of a nanomotor, a microscopic biological machine, into a lipid membrane. The resulting channel enabled them to move both single- and double-stranded DNA through the membrane.

Their paper, “Translocation of double-stranded DNA through membrane-adapted phi29 motor protein nanopores,” will appear in the journal Nature Nanotechnology on Sept. 27, 2009. "The engineered channel could have applications in nano-sensing, gene delivery, drug loading and DNA sequencing," says Guo.

Guo derived the nanomotor used in the study from the biological motor of bacteriophage phi29, a virus that infects bacteria. Previously, Guo discovered that the bacteriophage phi29 DNA-packaging motor uses six molecules of the genetic material RNA to power its DNA genome through its protein core, much like a screw through a bolt.

"The re-engineered motor core itself has shown to associate with lipid membranes, but we needed to show that it could punch a hole in the lipid membrane," says David Wendell, PhD, co-first author of the paper and a research assistant professor in UC’s biomedical engineering department. "That was one of the first challenges, moving it from its native enclosure into this engineered environment."

In this study, UC researchers embedded the re-engineered nanomotor core into a lipid sheet, creating a channel large enough to allow the passage of double-stranded DNA through the channel.

Guo says past work with biological channels has been focused on channels large enough to move only single-stranded genetic material.

"Since the genomic DNA of human, animals, plants, fungus and bacteria are double stranded, the development of single pore system that can sequence double-stranded DNA is very important," he says.

By being placed into a lipid sheet, the artificial membrane channel can be used to load double-stranded DNA, drugs or other therapeutic material into the liposome, other compartments, or potentially into a cell through the membrane.

Guo also says the process by which the DNA travels through the membrane can have larger applications.

"The idea that a DNA molecule travels through the nanopore, advancing nucleotide by nucleotide, could lead to the development of a single pore DNA sequencing apparatus, an area of strong national interest," he says.

Using stochastic sensing, a new analytical technique used in nanopore work, Wendell says researchers can characterize and identify material, like DNA, moving through the membrane.
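
Stochastic sensing boils down to watching the ionic current through the pore: each passing molecule briefly blocks the current, and the depth and duration of the dip help identify it. Here is a cartoon version in Python, with invented current levels and noise:

```python
# A cartoon of stochastic sensing: DNA passing through the pore briefly
# blocks the ionic current, and each dip's depth and duration help
# characterize what went through. Current levels and noise are invented.
import random

random.seed(3)
OPEN, BLOCKED, NOISE = 100.0, 60.0, 2.0   # illustrative current levels

trace = []
for _ in range(5):                        # five translocation events
    trace += [random.gauss(OPEN, NOISE) for _ in range(50)]
    trace += [random.gauss(BLOCKED, NOISE) for _ in range(random.randint(5, 20))]
trace += [random.gauss(OPEN, NOISE) for _ in range(50)]

threshold = (OPEN + BLOCKED) / 2          # halfway between open and blocked
events, dwell, in_event = 0, 0, False
for sample in trace:
    if sample < threshold and not in_event:
        in_event, events, dwell = True, events + 1, 1
    elif sample < threshold:
        dwell += 1
    elif in_event:
        print(f"event {events}: dwell {dwell} samples")
        in_event = False
print(f"{events} translocation events detected")   # -> 5
```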

Co-first author and UC postdoctoral fellow Peng Jing, PhD, says that, compared with traditional research methods, the successful embedding of the nanomotor into the membrane may also provide researchers with a new way to study the DNA packaging mechanisms of the viral nanomotor.

"Specifically, we are able to investigate the details concerning how double-stranded DNA translocates through the protein channel," he says.

The study is the next step in research on using nanomotors to package and deliver therapeutic agents directly to infected cells. Eventually, the team's work could enable use of nanoscale medical devices to diagnose and treat diseases.

"This motor is one of the strongest bio motors discovered to date," says Wendell, "If you can use that force to move a nanoscale rotor or a nanoscale machine … you're converting the force of the motor into a machine that might do something useful."

Funding for this study comes from the National Institutes of Health's Nanomedicine Development Center. Guo is the director of one of eight NIH Nanomedicine Development Centers and an endowed chair in biomedical engineering at UC.

Coauthors of the study include UC research assistant professor David Wendell, PhD, postdoctoral fellow Peng Jing, PhD, graduate students Jia Geng and Tae Jin Lee and former postdoctoral fellow Varuni Subramaniam from Guo’s previous lab at Purdue University. Carlo Montemagno, dean of the College of Engineering and College of Applied Science, also contributed to the study.

September 24, 2009

Stimulating Sight: Retinal Implant Could Help Restore Useful Level Of Vision To Certain Groups Of Blind People


The retinal implant receives visual data from a camera mounted on a pair of glasses. The coil sends the images to a chip attached to the side of the eyeball, which processes the data and sends it to electrodes implanted below the retina. (Credit: Courtesy of Shawn Kelly)
Inspired by the success of cochlear implants that can restore hearing to some deaf people, researchers at MIT are working on a retinal implant that could one day help blind people regain a useful level of vision.

The eye implant is designed for people who have lost their vision from retinitis pigmentosa or age-related macular degeneration, two of the leading causes of blindness. The retinal prosthesis would take over the function of lost retinal cells by electrically stimulating the nerve cells that normally carry visual input from the retina to the brain.

Such a chip would not restore normal vision but it could help blind people more easily navigate a room or walk down a sidewalk.

"Anything that could help them see a little better and let them identify objects and move around a room would be an enormous help," says Shawn Kelly, a researcher in MIT's Research Laboratory for Electronics and member of the Boston Retinal Implant Project.

The research team, which includes scientists, engineers and ophthalmologists from Massachusetts Eye and Ear Infirmary, the Boston VA Medical Center and Cornell as well as MIT, has been working on the retinal implant for 20 years. The research is funded by the VA Center for Innovative Visual Rehabilitation, the National Institutes of Health, the National Science Foundation, the Catalyst Foundation and the MOSIS microchip fabrication service.

Led by John Wyatt, MIT professor of electrical engineering, the team recently reported a new prototype that they hope to start testing in blind patients within the next three years.

Electrical stimulation

Patients who received the implant would wear a pair of glasses with a camera that sends images to a microchip attached to the eyeball. The glasses also contain a coil that wirelessly transmits power to receiving coils surrounding the eyeball.

When the microchip receives visual information, it activates electrodes that stimulate nerve cells in the areas of the retina corresponding to the features of the visual scene. The electrodes directly activate optical nerves that carry signals to the brain, bypassing the damaged layers of retina.

One question that remains is what kind of vision this direct electrical stimulation actually produces. About 10 years ago, the research team started to answer that by attaching electrodes to the retinas of six blind patients for several hours.

When the electrodes were activated, patients reported seeing a small number of "clouds" or "drops of blood" in their field of vision, and the number of clouds or blood drops they reported corresponded to the number of electrodes that were stimulated. When there was no stimulus, patients accurately reported seeing nothing. Those tests confirmed that retinal stimulation can produce some kind of organized vision in blind patients, though further testing is needed to determine how useful that vision can be.

After those initial tests, with grants from the Boston Veteran's Administration Medical Center and the National Institutes of Health, the researchers started to build an implantable chip, which would allow them to do more long-term tests. Their goal is to produce a chip that can be implanted for at least 10 years.

One of the biggest challenges the researchers face is designing a surgical procedure and implant that won't damage the eye. In their initial prototypes, the electrodes were attached directly atop the retina from inside the eye, which carries more risk of damaging the delicate retina. In the latest version, described in the October issue of IEEE Transactions on Biomedical Engineering, the implant is attached to the outside of the eye, and the electrodes are implanted behind the retina.

That subretinal location, which reduces the risk of tearing the retina and requires a less invasive surgical procedure, is one of the key differences between the MIT implant and retinal prostheses being developed by other research groups.

Another feature of the new MIT prototype is that the chip is now contained in a hermetically sealed titanium case. Previous versions were encased in silicone, which would eventually allow water to seep in and damage the circuitry.

While they have not yet begun any long-term tests on humans, the researchers have tested the device in Yucatan miniature pigs, which have roughly the same size eyeballs as humans. Those tests are only meant to determine whether the implants remain functional and safe and are not designed to observe whether the pigs respond to stimuli to their optic nerves.

So far, the prototypes have been successfully implanted in pigs for up to 10 months, but further safety refinements need to be made before clinical trials in humans can begin.

Wyatt and Kelly say they hope that once human trials begin and blind patients can offer feedback on what they're seeing, they will learn much more about how to configure the algorithm implemented by the chip to produce useful vision.

Patients have told them that what they would like most is the ability to recognize faces. "If they can recognize faces of people in a room, that brings them into the social environment as opposed to sitting there waiting for someone to talk to them," says Kelly.


Journal reference:

Shire, D. B.; Kelly, S. K.; Chen, J.; Doyle, P.; Gingerich, M. D.; Cogan, S. F.; Drohan, W. A.; Mendoza, O.; Theogarajan, L.; Wyatt, J. L.; Rizzo, J. F. Development and Implantation of a Minimally Invasive Wireless Subretinal Neurostimulator. IEEE Transactions on Biomedical Engineering, October 2009. DOI: 10.1109/TBME.2009.2021401
Adapted from materials provided by Massachusetts Institute of Technology. Original article written by Anne Trafton, MIT News Office.

September 23, 2009

Video surveillance system that reasons like a human brain

BRS Labs announced a video-surveillance technology called Behavioral Analytics, which leverages cognitive reasoning to process visual data on a level similar to the human brain.

It is impossible for humans to monitor the tens of millions of cameras deployed throughout the world, a fact long recognized by the international security community. Security video is either used for forensic analysis after an incident has occurred, or it employs a limited-capability technology known as Video Analytics – a video-motion and object-classification-based software technology that attempts to watch video streams and then sends an alarm on specific pre-programmed events. The problem is that this legacy solution generates a great number of false alarms that effectively renders it useless in the real world.

BRS Labs has created a technology it calls Behavioral Analytics. It uses cognitive reasoning, much like the human brain, to process visual data and to identify criminal and terroristic activities. Built on a framework of cognitive learning engines and computer vision, AISight provides an automated and scalable surveillance solution that analyzes behavioral patterns, activities and scene content without the need for human training, setup, or programming.

The system learns autonomously, and builds cognitive “memories” while continuously monitoring a scene through the “eyes” of a CCTV security camera. It sees and then registers the context of what constitutes normal behavior, and the software distinguishes and alerts on abnormal behavior without requiring any special programming, definition of rules or virtual trip lines.
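
A drastically simplified version of "learn normal, alert on abnormal" can be written in a few lines: track the running statistics of some scene feature and flag observations far outside what has been seen. The real system builds far richer memories; this only shows the principle, and every name and threshold here is invented:

```python
# A toy version of "learning normal and alerting on the abnormal":
# keep a running mean and variance of a scene feature (say, objects'
# speed through a zone) and flag values far outside what has been seen.

class AnomalyDetector:
    def __init__(self, threshold_sigmas=3.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.k = threshold_sigmas

    def observe(self, x):
        """Welford's online update; returns True if x looks abnormal."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        if self.n < 30:                  # still learning the scene
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return abs(x - self.mean) > self.k * std

det = AnomalyDetector()
speeds = [1.0 + 0.1 * (i % 5) for i in range(100)] + [9.0]   # one outlier
alerts = [s for s in speeds if det.observe(s)]
print(alerts)   # -> [9.0]
```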

AISight is currently fielded across a wide variety of global critical infrastructure assets, protecting major international hotels, banking institutions, seaports, nuclear facilities, airports and dense urban areas plagued by criminal activity.

Original article from Help Net Security

September 17, 2009

The Eyeborg Project (Eye Socket Camera)

(Not The Movie Eyeborgs)

Eyeborg Phase II from eyeborg on Vimeo.



The Eyeborg Project is filmmaker Rob Spence's and former SpaceX avionics systems engineer Kosta Grammatis's project to embed a video camera and transmitter in a prosthetic eye that will record the world from a perspective never seen before. The only thing I'd be concerned about is it getting hacked, since it has a wireless transmitter.

Check it out at eyeborgproject.com

Check out their blog -->here<--

If the video loads too slowly check it out at youtube -->here<--