August 31, 2009

Fishy Sixth Sense: Mathematical Keys To Fascinating Sense Organ

Fish and some amphibians possess a unique sensory capability in the so-called lateral-line system. It allows them, in effect, to "touch" objects in their surroundings without direct physical contact or to "see" in the dark. Professor Leo van Hemmen and his team in the physics department of the Technische Universitaet Muenchen are exploring the fundamental basis for this sensory system. What they discover might one day, through biomimetic engineering, better equip robots to orient themselves in their environments.

With our senses we take in only a small fraction of the information that surrounds us. Infrared light, electromagnetic waves, and ultrasound are just a few examples of the external influences that we humans can grasp only with the help of technological measuring devices – whereas some other animals use special sense organs, their own biological equipment, for the purpose. One such system found in fish and some amphibians is under investigation by the research team of Professor Leo van Hemmen, chair of theoretical biophysics at TUM, the Technische Universitaet Muenchen.
Even in murky waters hardly penetrated by light, pike and pickerel can feel out their prey before making contact. The blind Mexican cave fish can perceive structures in its surroundings and can effortlessly avoid obstacles. Catfish on the hunt follow invisible tracks that lead directly to their prey. The organ that makes this possible is the lateral-line system, which registers changes in currents and even smaller disturbances, providing backup support for the sense of sight particularly in dark or muddy waters.
This remote sensing system, at first glance mysterious, rests on measurement of the pressure distribution and velocity field in the surrounding water. The lateral-line organs responsible for this are aligned along the left and right sides of the fish's body and also surround the eyes and mouth. They consist of gelatinous, flexible, flag-like units about a tenth of a millimeter long. These so-called neuromasts – which sit either directly on the animal's skin or just underneath, in channels that water can permeate through pores – are sensitive to the slightest motion of the water. Coupled to them are hair cells similar to the acoustic pressure sensors in the human inner ear. Nerves deliver signals from the hair cells for processing in the brain, which localizes and identifies possible sources of the changes detected in the water's motion.
These changes can arise from various sources: A fish swimming by produces vibrations or waves that are directly conveyed to the lateral-line organ. Schooling fishes can recognize a nearby attacker and synchronize their swimming motion so that they resemble a single large animal. The Mexican cave fish pushes a bow wave ahead of itself, which is reflected from obstacles. The catfish takes advantage of the fact that a swimming fish that beats its tail fin leaves a trail of eddies behind. This so-called "vortex street" persists for more than a minute and can betray the prey.
For the past five years, Leo van Hemmen and his team have been investigating the capabilities of the lateral-line system and assessing the potential to translate it into technology. How broad is the operating range of such a sense organ, and what details can it reveal about moving objects? Which stimuli does the lateral-line system receive from the eddy trail of another fish, and how are these stimuli processed? To get to the bottom of these questions, the scientists develop mathematical models and compare these with experimentally observed electrical nerve signals called action potentials. The biophysicists acquire the experimental data – measurements of lateral-line organ activity in clawed frogs and cave fish – through collaboration with biologists. "Biological systems follow their own laws," van Hemmen says, "but laws that are universally valid within biology and can be described mathematically – once you find the right biophysical or biological concepts, and the right formula."
The models yield surprisingly intuitive-sounding conclusions: Fish can reliably fix the positions of other fish in terms of a distance corresponding to their own body length. Each fish broadcasts definite and distinguishing information about itself into the field of currents. So if, for example, a prey fish discloses its size and form to a possible predator within the radius of its body length, the latter can decide if a pursuit is worth the effort. This is a key finding of van Hemmen's research team.
The TUM researchers have discovered another interesting formula. With this one, the angle between a fish's axis and a vortex street can be computed from the signals that a lateral-line system acquires. The peak capability of this computation matches the best that a fish's nervous system can do. The computed values for nerve signals from an animal's sensory organ agree astonishingly well with the actual measured electrical impulses from the discharge of nerve cells. "The lateral-line sense fascinated me from the start because it's fundamentally different from other senses such as vision or hearing, not just at first glance but also the second," van Hemmen says. "It's not just that it describes a different quality of reality, but also that in place of just two eyes or ears this sense is fed by many discrete lateral-line organs – from 180 in the clawed frog to several thousand in a fish, each of which in turn is composed of several neuromasts. The integration behind it is a tour de force."
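To make the physics concrete, here is a minimal sketch of why localization works best within about one body length. A vibrating source in water acts roughly like a dipole, and its pressure signal along a linear array of sensors falls off steeply with distance. The sensor layout, source parameters, and the simplified dipole formula below are illustrative assumptions, not the TUM team's actual model.

```python
import numpy as np

def dipole_pressure(sensor_x, source_x, source_dist, strength=1.0):
    """Pressure amplitude at sensors on the x-axis from a dipole source
    at (source_x, source_dist), oscillating perpendicular to the array.
    Simplified dipole falloff: amplitude ~ strength * source_dist / r**3."""
    dx = sensor_x - source_x
    r = np.hypot(dx, source_dist)
    return strength * source_dist / r**3

# A 1-unit "body length" of 21 lateral-line sensors along the fish's flank.
sensors = np.linspace(-0.5, 0.5, 21)

near = dipole_pressure(sensors, 0.1, 0.5)  # source half a body length away
far = dipole_pressure(sensors, 0.1, 2.0)   # source two body lengths away

# The near signal is far stronger and sharply peaked across the array,
# which is what makes localization reliable within roughly one body length.
print(near.max() / far.max())
```

Running this, the peak pressure from the nearby source exceeds the distant one by more than an order of magnitude, mirroring the body-length operating range described above.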
The neuronal processing and integration of diverse sense impressions into a unified mapping of reality is a major focus for van Hemmen's group. They are pursuing this same fundamental investigation through the study of desert snakes' infrared perception, vibration sensors in scorpions' feet, and barn owls' hearing.
"Technology has overtaken nature in some domains," van Hemmen says, "but lags far behind in the cognitive processing of received sense impressions. My dream is to endow robots with multiple sensory modalities. Instead of always building in more cameras, we should also along the way give them additional sensors for sound and touch." With a sense modeled on the lateral-line system, but which would function as well in air as under water, robots might for example move safely among crowds of people. But such a system also offers many promising applications in the water. Underwater robots could use it to orient themselves during the exploration of inaccessible cave systems and deep-sea volcanoes. Autonomous submarines could also locate obstacles in turbid water. Such an underwater vehicle is currently being developed within the framework of the EU project CILIA, in collaboration with the TUM chair for guidance and control technology.
Further research includes collaborations with the excellence cluster CoTeSys (Cognition for Technical Systems) and the newly created Leonardo da Vinci Center for Bionics at TUM, as well as with the chair for humanoid robots and the Bernstein Center for Computational Neuroscience.
ScienceDaily (Aug. 30, 2009)

August 29, 2009

Brain develops motor memory for prosthetics, study finds



Signals from the brain's motor cortex were translated by a "decoder" into deliberate movements of a computer cursor, creating a kind of brain cybernetics. The task involved moving the cursor from a central starting point to a nearby target. UC Berkeley researchers have learned that the brain is capable of developing a motor memory of the task, much like it masters other physical skills such as riding a bike or playing tennis, but with the added distinction that the control is of a device separate from one's own body. Credit: Illustration by John Blanchard

"Practice makes perfect" is the maxim drummed into students struggling to learn a new motor skill - be it riding a bike or developing a killer backhand in tennis. Stunning new research now reveals that the brain can also achieve this motor memory with a prosthetic device (brain cybernetics), providing hope that physically disabled people can one day master control of artificial limbs with greater ease.

In this study, to be published July 21 in an open-access journal, macaque monkeys using brain signals learned how to move a computer cursor to various targets. What the researchers learned was that the brain could develop a mental map of a solution to achieve the task with high proficiency, and that it adhered to that neural pattern without deviation, much like a driver sticks to a given route commuting to work.

The study, conducted by scientists at the University of California, Berkeley, addresses a fundamental question about whether the brain can establish a stable, neural map of a motor task to make control of an artificial limb more intuitive.

"When your own body performs repeatedly, the movements become almost automatic," said study principal investigator Jose Carmena, a UC Berkeley assistant professor with joint appointments in the Department of Electrical Engineering and Computer Sciences, the Helen Wills Neuroscience Institute, and the Program in Cognitive Science. "The profound part of our study is that this is all happening with something that is not part of one's own body. We have demonstrated that the brain is able to form a motor memory to control a disembodied device in a way that mirrors how it controls its own body. That has never been shown before."

Researchers in the field of brain-machine interfaces, including Carmena, have made significant strides in recent years in the effort to improve the lives of people with physical disabilities. An April 2009 survey by the Christopher and Dana Reeve Foundation found that nearly 1.3 million people in the United States suffer from some form of paralysis caused by spinal cord injury. When other causes of restricted movement are considered, such as stroke, multiple sclerosis and cerebral palsy, the number of Americans affected jumps to 5.6 million, the survey found.

Already, researchers have demonstrated that rodents, non-human primates and humans are able to control robotic devices or computer cursors in real time using only brain signals. But what had not been clear before was whether such a skill had been consolidated as a motor memory. The new study suggests that the brain is capable of creating a stable, mental representation of a disembodied device so that it can be controlled with little effort.

To demonstrate this, Carmena and Karunesh Ganguly, a post-doctoral fellow in Carmena's laboratory, used a mathematical model, or "decoder," that remained static during the length of the study, and they paired it with a stable group of neurons in the brain. The decoder, analogous to a simplified spinal cord, translated the signals from the brain's motor cortex into movement of the cursor.
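A common way to build such a decoder in the brain-machine interface literature is a fixed linear mapping from neural firing rates to cursor velocity. The sketch below is illustrative only: the weights, unit count, and simulated firing rates are assumptions for demonstration, not the study's actual decoder, whose details are not given in the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 15                       # a stable ensemble of recorded neurons
W = rng.normal(size=(2, n_units))  # fixed decoding weights for (vx, vy)

def decode_velocity(firing_rates):
    """Map a vector of firing rates (spikes/s) to a 2-D cursor velocity.
    The weights W stay static for the whole experiment, so the brain,
    not the decoder, must adapt to make the cursor hit targets."""
    return W @ firing_rates

pos = np.zeros(2)
dt = 0.1                           # 100 ms decoding bins
for _ in range(50):                # one simulated trial
    rates = rng.poisson(10, n_units).astype(float)  # stand-in neural data
    pos += decode_velocity(rates) * dt              # integrate velocity

print(pos)                         # cursor position at the end of the trial
```

Holding W static is the key design choice mirrored from the study: because the mapping never changes, any improvement in cursor control must come from the brain settling on a stable pattern of activity.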

It took about four to five days of practice for the monkeys to master precise control of the cursor. Once they did, they completed the task easily and quickly for the next two weeks.

As the tasks were being completed, the researchers were able to monitor the changes in activity of individual neurons involved in controlling the cursor. They could tell which cells were firing when the cursor moved in specific directions. The researchers noticed that when the animals became proficient at the task, the neural patterns involved in the "solution" stabilized.

"The solution adopted is what the brain returned to repeatedly," said Carmena.

That stability is one of three major features scientists associate with motor memory, and it is all too familiar to music teachers and athletic coaches who try to help their students "unlearn" improper form or techniques, as once a motor memory has been consolidated, it can be difficult to change.

Other characteristics of motor memory include the ability for it to be rapidly recalled upon demand and its resistance to interference when new skills are learned. All three elements were demonstrated in the UC Berkeley study.

In the weeks after they achieved proficiency, the primates exhibited rapid recall by immediately completing their learned task on the first try. "They did it from the get-go; there was no need to retrain them," said Carmena.

Real-life examples of resistance to interference, the third feature of motor memory, include people who return to an automatic transmission car after learning how to drive stick-shift. In the study, the researchers presented a new decoder - marked by a different colored cursor - two weeks after the monkeys showed mastery of the first decoder.

As the monkeys were mastering the new decoder, the researchers would suddenly switch back to the original decoder and saw that the monkeys could immediately perform the task without missing a beat. The monkeys could easily switch back and forth between the two decoders, showing a level of neural plasticity never before associated with the control of a prosthetic device, the researchers said.

"This is a study that says that maybe one day, we can really think of the ultimate neuroprosthetic device that humans can use to perform many different tasks in a more natural way," said Carmena.

Yet, the researchers acknowledged that prosthetic devices will not match what millions of years of evolution have accomplished to enable animal brains to control body movement. The complexity of wiring one's brain to properly control the body is made clear whenever one watches an infant's haphazard attempts to find its own hands and feet.

"Nevertheless, beyond its clinical applications, which are very clear, this line of research sheds light on how the brain assembles and organizes neurons, and how it forms a motor memory to control the prosthetic device," Carmena said. "These are important, fundamental questions about how the brain learns in general."

Source: University of California - Berkeley

August 28, 2009

Singularity University graduates solutions for the future

The inaugural graduates of Singularity University, a Silicon Valley school backed by NASA, Google Inc., and tech industry luminaries like Ray Kurzweil, unveiled their grand visions on Thursday for leveraging emerging technologies to solve humanity's great challenges.

Before a filled conference room at NASA Ames Research Center in Moffett Field, the students faced the dual pressures of presenting what were both final class projects for the faculty on hand, as well as business pitches to the venture capitalists and business leaders in attendance. Most, if not all, of the four teams hope to secure the funding necessary to transform their ideas into viable ventures.

The stated mission of the unaccredited university, founded in 2008, is to foster leaders who will build on rapid advances in and convergence across areas like biotechnology, supercomputing, nanotechnology and robotics to address intractable problems.

"It is only the scale of these exponentially growing technologies that has the ability to address the major challenges of humanity, whether it's energy and the environment, or poverty and disease," said Kurzweil, a renowned futurist and author of "The Singularity Is Near."

During the nine-week interdisciplinary graduate studies program, the 40 students were asked to develop projects that could help 1 billion people within 10 years. The individuals divided themselves into four teams focused on different challenges.

Four teams

Team Xidar Global Disaster Response developed new systems to facilitate communications in the aftermath of a disaster, including smart phone applications that provide GPS-based evacuation guidance or relay vital signs from "eTriage" bracelets.

"We're calling for an entirely new communication architecture," said Christian Tom, 22, who recently graduated from Stanford.

Lest it all seem pie in the sky, he noted the team members are in the process of incorporating a company and applying for patents.

Team Domus 3D Printing presented a plan to scale up advances in 3-D printing technologies, already employed to create miniature prototypes of buildings and consumer products, to create the actual components of affordable housing from materials like cement or polymers.

Team One Global Voice devised a text message-based information sharing system that enables marketplaces, job boards and other means of accelerating economic development in developing countries.

Finally, Team Gettaround proposed an "intelligent transportation grid" that would make vehicle use safer and more efficient by using sensors and cell phones to provide real-time travel updates, enabling owners to rent their autos when they're not using them and, eventually, taking advantage of "autonomous" or self-driving vehicles.

40 top students

The 40 students - some recent college graduates, some the chief executives of existing companies - were accepted into the course from a field of more than a thousand applicants. Their bios are rife with advanced degrees from Harvard, MIT, Stanford and the like.

The university's board of trustees includes: Kurzweil; Peter Diamandis, CEO of the X PRIZE Foundation, which awards multimillion-dollar prizes to organizations that achieve breakthroughs in genomics, energy, medicine and other fields; and Robert Richards, the founder of Odyssey Moon Ltd., which is attempting to commercialize trips to the moon.

Subsequent graduate programs at Singularity University will include around 120 students. Tuition is $25,000. The school is also gearing up to offer three- and nine-day executive programs, limited to 25 and 50 individuals, respectively.

Margo Lipstin, 23, a Team Domus member who studied the ethics of science at Stanford, said she was drawn to Singularity University because of its emphasis on real life applications. Technology is developing so rapidly and changing the world so dramatically that it's no longer possible to separate ideas from practice, she said.

"The theories are very powerful and we need to understand what values they espouse, and what is the vision for the world we're trying to reach with them," she said.


This article appeared on page C - 1 of the San Francisco Chronicle

August 27, 2009

Changing A Cell's Biological Battery

Oregon Health & Science University researchers have replaced four baby monkeys' defective mitochondrial DNA (inherited from the mother) with that from a healthy donor, opening the door to controversial germline engineering. Spindler, a baby rhesus macaque, is one of only four monkeys born with DNA from three parents...

Technology Review Aug. 26, 2009

August 26, 2009

Robot with bones moves like you do

Video: Bony robot

YOU may have more in common with this robot than any other - it was designed using your anatomy as a blueprint.
Conventional humanoid robots may look human, but the workings under their synthetic skins are radically different from our anatomy. A team with members across five European countries says this makes it difficult to build robots able to move like we do.
Their project, the Eccerobot, has been designed to duplicate the way human bones, muscles and tendons work and are linked together. The plastic bones copy biological shapes and are moved by kite line, which is as tough as tendon, while elastic cords mimic the bounce of muscle.
Mimicking human anatomy is no shortcut to success, though, as even simple human actions like raising an arm involve a complex series of movements from many of the robot's bones, muscles and tendons. However, the team is convinced that solving these problems will enable the construction of a machine that interacts with its environment in a more human manner.
Simple human actions like raising an arm involve a complex series of movements for the robot
"We want to develop these ideas into a new kind of 'anthropomimetic robot' which can deal with and respond to the world in ways closer to the ways that humans do," says Owen Holland at the University of Sussex, UK, who is leading the project.
The team also intends to endow the robot with some human-like artificial intelligence.
Holland's Sussex group are joined on the project by researchers from the Technical University of Munich, Germany; University of Zurich, Switzerland; University of Belgrade, Serbia; and French firm The Robot Studio.
Originally posted in New Scientist

August 25, 2009

Robofish and microchips


Robotic fish – probably the best small robotic fish you’ve ever seen – have been made by clever engineers at the Massachusetts Institute of Technology. You can even see a video of them doing their thing.

The fish, about 30 cm long, are descendants of robotuna – a giant autonomous robotic fish also made at MIT in the 1990s.

The difference is that these fish are much simpler – they are small and powered by a single motor, unlike robotuna's six motors, and made from just 10 parts. All these parts are encapsulated in a flexible rubber casing that is moved by a motor sending a wave along the body. Their small size will apparently make them better able to swim into small crevices.
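The single-motor design can be sketched as a traveling wave sent down the flexible body, y(x, t) = A(x) sin(kx - wt). The amplitude envelope and wave parameters below are assumptions chosen to look fish-like, not MIT's measured values.

```python
import numpy as np

def body_midline(x, t, wavelength=0.3, freq=2.0, max_amp=0.03):
    """Lateral deflection of the body midline at positions x (metres)
    and time t (seconds): a traveling wave whose amplitude grows
    toward the tail, as in undulatory fish swimming."""
    k = 2 * np.pi / wavelength     # wavenumber along the body
    w = 2 * np.pi * freq           # tail-beat frequency in rad/s
    amp = max_amp * x / x.max()    # envelope: zero at nose, max at tail
    return amp * np.sin(k * x - w * t)

x = np.linspace(1e-3, 0.3, 100)    # points along a 0.3 m body
y0 = body_midline(x, 0.0)
y1 = body_midline(x, 0.1)          # a tenth of a second later

# The deflection pattern has shifted along the body: a traveling wave,
# which is what pushes water backward and drives the fish forward.
print(np.abs(y1 - y0).max())
```

One motor oscillating the casing at the head is enough to launch this wave, because the flexible rubber body passively propagates it toward the tail.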

And they can certainly swim well. I’m just a bit concerned about how useful they are. They're being developed apparently to go places where other autonomous robotic fish can’t go. Maybe I’m way out of touch, but I wasn’t aware that this was a major problem.

"The fish were a proof of concept application, but we are hoping to apply this idea to other forms of locomotion, so the methodology will be useful for mobile robotics research - land, air and underwater - as well," said Valdivia Y Alvarado, whose PhD thesis was devoted to the little robotic critters (press release).

But wait a minute, my scepticism may be short lived. I am behind the times after all. Only in March this year, a robotic carp was unveiled by researchers at Essex University, UK. Five of the monstrous 1.5 metre-long robotic carp are scheduled to be released into Spanish waters, equipped with chemical sensors to sniff out pollution.

The MIT group claims that fleets of their robofish could be deployed to inspect pipelines, lakes, rivers and boats. Whatever they’re used for, you can’t escape the fact that robo fish are actually quite cool. Maybe they’ll become the next rubber duckie.

August 24, 2009

Craig Venter: Programming algae to pump out oil

Genome pioneer Craig Venter has teamed up with Exxon Mobil to turn living algae into mini oil wells. How will they do it?
Algae that can turn carbon dioxide back into fossil fuel - it sounds too good to be true. How is this going to work?
Algae use carbon dioxide to generate a number of oil molecules, via photosynthesis, as a way of storing energy. People have been trying to make them overproduce the oil and store it. We're changing the algae's gene structure to get them to produce hydrocarbons similar to those that come out of the ground and to trick them into pumping these hydrocarbons out instead of accumulating them. As other groups get CO2 sequestration techniques going, we'd like to take that CO2 and get the algae to convert it back into oil. The aim is to prevent it from further increasing carbon in the atmosphere.
How do you get from algae oil to oil you can put in a car or jet engine?
The next stage is to take the algae's biocrude, put it into Exxon Mobil's existing refineries, and try to make the same products that you get from oil that comes out of the ground. So the goal is to make gasoline, diesel fuel and jet fuel out of the same hydrocarbons we use now - just from a different source. Instead of pulling the carbon out of the ground we're pulling it out of the atmosphere.
How soon do you think that can happen?
There have been a lot of announcements from small demonstration projects claiming they're going to have major new fuels in one or two years. Our aim is to have a real and significant impact on the billions of gallons that are consumed worldwide. Materials used to make a vast range of products - clothing, carpets, medicines, plastics - come from oil. The goal is to try and replace as many of these as possible. The expectation is that doing it on this scale will take five to 10 years.
So will Exxon be producing nothing but algal power in 10 years' time?
I think that's highly unlikely. The real test is going to be how simply this can be produced so it can compete with oil prices. The challenge is not just doing it but doing it in a cost-effective fashion.
What makes you think that you, unlike anyone else, can do this?
Well, we've had some breakthroughs in terms of getting the algae to secrete pure lipids [oils] but I think the real trick is the partnership that we have - the financial resources we now have available to us and the engineering and oil-processing skills of Exxon.
Exxon has a poor reputation on climate-change issues. Won't partnering with them damage the project's green credentials?
Quite the opposite. I think the fact that the largest company in the world has gone in this direction after several years of study is good for all of us. I've said many times this change can't happen without the oil industry. They have a reputation for studying things for quite a while and acting in a large fashion once they become convinced of an approach. I don't see how it can be bad news if somebody makes a major change in direction for the benefit of the planet.


Craig Venter made his name sequencing the human genome. He is founder/CEO of Synthetic Genomics, which has begun a $600 million project with Exxon to transform the oil industry
This article was originally written by Catherine Brahic on July 25, 2009 for New Scientist

Recommended Reading

Biodesign: The Process of Innovating Medical Technologies


Over the last couple of months I've received several emails from people searching for a copy of the documentary featuring Ray Kurzweil called "Transcendent Man". So what I will do is update the information as it comes. The film was screened at the Tribeca Film Festival and is still in theaters. You can find out where it's being screened at the official Transcendent Man site. There is also a preview and some pics at this site.

I have been scouring the net daily for a downloadable version of the film, for sale or for free, and will update my blog with the link to the site when it becomes available. I usually update this blog at least every week or so, so as soon as I find it, you will be able to find it here. The other option is to sign up to my email list to the left of this post to receive the download link instantly, 100% free, or catch me on Twitter, where I will also be updating my progress in finding the Transcendent Man torrent.

Senior Editor


The torrent is now available. Click on the Transcendent Man image to the left of this post to get it for free, as well as other Ray Kurzweil freebies.

Smart machines: What's the worst that could happen?

An invasion led by artificially intelligent machines. Conscious computers. A smartphone virus so smart that it can start mimicking you. You might think that such scenarios are laughably futuristic, but some of the world's leading artificial intelligence (AI) researchers are concerned enough about the potential impact of advances in AI that they have been discussing the risks over the past year. Now they have revealed their conclusions.

Until now, research in artificial intelligence has been mainly occupied by myriad basic challenges that have turned out to be very complex, such as teaching machines to distinguish between everyday objects. Human-level artificial intelligence or self-evolving machines were seen as long-term, abstract goals not yet ready for serious consideration.

Now, for the first time, a panel of 25 AI scientists, roboticists, and ethical and legal scholars has been convened to address these issues, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI) in Menlo Park, California. It looked at the feasibility and ramifications of seemingly far-fetched ideas, such as the possibility of the internet becoming self-aware.

The panel drew inspiration from the 1975 Asilomar Conference on Recombinant DNA in California, in which over 140 biologists, physicians, and lawyers considered the possibilities and dangers of the then emerging technology for creating DNA sequences that did not exist in nature. Delegates at that conference foresaw that genetic engineering would become widespread, even though practical applications – such as growing genetically modified crops – had not yet been developed.

Unlike recombinant DNA in 1975, however, AI is already out in the world. Robots like Roombas and Scoobas help with the mundane chores of vacuuming and mopping, while decision-making devices are assisting in complex, sometimes life-and-death situations. For example, Poseidon Technologies, sells AI systems that help lifeguards identify when a person is drowning in a swimming pool, and Microsoft's Clearflow system helps drivers pick the best route by analysing traffic behaviour.

At the moment such systems only advise or assist humans, but the AAAI panel warns that the day is not far off when machines could have far greater ability to make and execute decisions on their own, albeit within a narrow range of expertise. As such AI systems become more commonplace, what breakthroughs can we reasonably expect, and what effects will they have on society? What's more, what precautions should we be taking?

These are among the many questions that the panel tackled, under the chairmanship of Eric Horvitz, president of the AAAI and senior researcher with Microsoft Research. The group began meeting by phone and teleconference in mid-2008, then in February this year its members gathered at Asilomar, a quiet retreat on California's Monterey Peninsula, for a weekend to debate and seek consensus. They presented their initial findings at the International Joint Conference for Artificial Intelligence (IJCAI) in Pasadena, California, on 15 July.

Panel members told IJCAI that they unanimously agreed that creating human-level artificial intelligence – a system capable of expertise across a range of domains – is possible in principle, but disagreed as to when such a breakthrough might occur, with estimates varying wildly between 20 and 1000 years.

Panel member Tom Dietterich of Oregon State University in Corvallis pointed out that much of today's AI research is not aimed at building a general human-level AI system, but rather focuses on "idiot-savants" systems good at tasks in a very narrow range of application, such as mathematics.

The panel discussed at length the idea of an AI "singularity" – a runaway chain reaction of machines capable of building ever-better machines. While admitting that it was theoretically possible, most members were skeptical that such an exponential AI explosion would occur in the foreseeable future, given the lack of projects today that could lead to systems capable of improving upon themselves. "Perhaps the singularity is not the biggest of our worries," said Dietterich.

A more realistic short-term concern is the possibility of malware that can mimic the digital behavior of humans. According to the panel, identity thieves might feasibly plant a virus on a person's smartphone that would silently monitor their text messages, email, voice, diary and bank details. The virus could then use these to impersonate that individual with little or no external guidance from the thieves. Most researchers think that they can develop such a virus. "If we could do it, they could," said Tom Mitchell of Carnegie Mellon University in Pittsburgh, Pennsylvania, referring to organised crime syndicates.

Peter Szolovits, an AI researcher at the Massachusetts Institute of Technology, who was not on the panel, agrees that common everyday computer systems such as smartphones have layers of complexity that could lead to unintended consequences or allow malicious exploitation. "There are a few thousand lines of code running on my cell phone and I sure as hell haven't verified all of them," he says.

"These are potentially powerful technologies that could be used in good ways and not so good ways," says Horvitz, who cautions that besides the threat posed by malware, we are close to creating systems so complex and opaque that we don't understand them.

Given such possibilities, "what's the responsibility of an AI researcher?" says Bart Selman of Cornell, co-chair of the panel. "We're starting to think about it."

At least for now we can rest easy on one score. The panel concluded that the internet is not about to become self-aware.

August 23, 2009

A Modular Robot That Puts Itself Back Together Again

New York Times, July 27, 2009

University of Pennsylvania researchers have developed a walking robot constructed from modules that are designed to separate on impact, find each other, and reassemble into a working robot.

Read original article -->here<--

August 22, 2009

Cyborg-walkers stride toward Japan's robotics future

An employee of Japan's robotics venture Cyberdyne wears the robot suit "HAL" (Hybrid Assistive Limb) while walking on a street in Tokyo.

Three Japanese cyborg look-alikes turned heads on busy Tokyo streets and subway trains Monday as they made their way to a robotics conference on a hot summer's day -- without breaking a sweat.

Two men and a woman, wearing what looked like white plastic exoskeletons over black outfits, were testing -- at a pace of 1.8 kilometres (1.1 miles) an hour -- robotic suits designed to give mobility to the injured and disabled.

"What on earth is it?" asked Hisako Ueda, 43, digital camera in hand, as she and her 10-year-old daughter Ayaka gazed at the trio striding through Akihabara district, a high-tech geeks' paradise also called Electric Town.

Undeterred by the onlookers' stares, the three completed their mechanically assisted trek by train, taxi and on foot from the suburb of Tsukuba, 50 kilometres (30 miles) north of central Tokyo, to the robo-meeting.

One of the three robotics company employees, 32-year-old Takatoshi Hisano, said the futuristic 11-kilogramme (24-pound) outfit -- which can detect and preempt its user's movements -- made the two-hour commute that much easier.

"I'm not tired at all," he said with a smile when they arrived at the building where the robotics industry meeting was about to start in a fourth-floor room. "Let's take the stairs instead of the lift."

The high-tech suits were developed by Yoshiyuki Sankai, a professor at Tsukuba University, whose company is already leasing them to several hospitals and nursing homes and has just received an order from Denmark.

At the meeting, the three were greeted by assorted robots -- including Toyota's personal transport assistance "Winglet" and Fuji Heavy Industries' automatic floor cleaning machine -- alongside plenty of humans with high hopes of turning the sector into the new face of Japan Inc.

Japan has launched a five-year project to put so-called people-assisting robots into widespread practical use, the government-backed New Energy and Industrial Technology Development Organisation (NEDO) said in a statement.

"We believe that the robotics industry in this people-assisting field will expand one hundredfold," said NEDO senior official Katsuya Okano. "But for this goal, what's lacking is a safety standard, which we aim to set up."

A government survey in Japan, now a fast ageing society, estimates that the global market for such robots, including nurses and domestic maids, will expand to 6.2 trillion yen (65 billion dollars) by 2025.

Original article by Kyoko Hasegawa
August 3, 2009

DNA computation gets logical

Biomolecular computers, made of DNA and other biological molecules, exist today only in a few specialized labs, remote from the regular computer user. Nonetheless, Tom Ran and Shai Kaplan, research students in the lab of Prof. Ehud Shapiro of the Weizmann Institute's Biological Chemistry and Computer Science and Applied Mathematics departments, have found a way to make these microscopic computing devices 'user friendly,' even while performing complex computations and answering complicated queries.

Shapiro and his team at Weizmann introduced the first autonomous programmable DNA computing device in 2001. So small that a trillion fit in a drop of water, that device was able to perform such simple calculations as checking a list of 0s and 1s to determine if there was an even number of 1s. A newer version of the device, created in 2004, detected cancer in a test tube and released a molecule to destroy it. Besides the tantalizing possibility that such biology-based devices could one day be injected into the body - a sort of 'doctor in a cell' locating disease and preventing its spread - biomolecular computers could conceivably perform millions of calculations in parallel.
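In conventional software terms, the parity check the 2001 device performed corresponds to a two-state finite automaton. The sketch below (ordinary Python, not the DNA chemistry; the function name is illustrative) shows the same computation:

```python
def even_ones(bits):
    """Return True if the bit sequence contains an even number of 1s.

    This mirrors the two-state automaton the 2001 DNA device realized:
    each 1 encountered toggles the state between 'even' and 'odd'.
    """
    state = "even"  # start: zero 1s seen so far, which is even
    for b in bits:
        if b == 1:
            state = "odd" if state == "even" else "even"
    return state == "even"

print(even_ones([1, 0, 1]))     # two 1s  -> True
print(even_ones([1, 0, 1, 1]))  # three 1s -> False
```

The DNA version encodes the state in the molecule left after each cleavage step; the software version just carries it in a variable.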

Now, Shapiro and his team, in a paper published online today, have devised an advanced program for biomolecular computers that enables them to 'think' logically. The train of deduction used by this futuristic device is remarkably familiar. It was first proposed by Aristotle over 2,000 years ago as a simple if...then proposition: 'All men are mortal. Socrates is a man. Therefore, Socrates is mortal.' When fed a rule (all men are mortal) and a fact (Socrates is a man), the computer correctly answered the question 'Is Socrates mortal?'. The team went on to set up more complicated queries involving multiple rules and facts, and the DNA computing devices were able to deduce the correct answers every time.

At the same time, the team created a compiler - a program for bridging between a high-level computer programming language and DNA computing code. Upon compiling, the query could be typed in something like this: Mortal(Socrates)?. To compute the answer, various strands of DNA representing the rules, facts and queries were assembled by a robotic system and searched for a fit in a hierarchical process. The answer was encoded in a flash of green light: some of the strands were equipped with a biological version of a flashlight signal - a naturally glowing fluorescent molecule bound to a second protein that keeps the light covered. A specialized enzyme, attracted to the site of the correct answer, removed the 'cover' and let the light shine. The tiny water drops containing the biomolecular databases were able to answer very intricate queries, and they lit up in a combination of colors representing the complex answers.
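The rule-and-fact deduction described above corresponds, in conventional software, to forward chaining over a small knowledge base. The toy sketch below is an illustration in ordinary Python, not the Weizmann compiler or its DNA encoding; the predicate names and data layout are invented:

```python
# Facts are (predicate, subject) pairs; rules say "if X satisfies the
# premise predicate, X also satisfies the conclusion predicate".
facts = {("man", "Socrates")}
rules = [("man", "mortal")]  # all men are mortal

def query(predicate, subject):
    """Answer queries like Mortal(Socrates)? by forward chaining:
    repeatedly apply rules to known facts until nothing new follows."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for p, subj in list(derived):
                if p == premise and (conclusion, subj) not in derived:
                    derived.add((conclusion, subj))
                    changed = True
    return (predicate, subject) in derived

print(query("mortal", "Socrates"))  # True
print(query("mortal", "Zeus"))      # False: no such fact was given
```

In the DNA device the same search happens chemically, with strands standing in for the facts and rules, and a fluorescent signal in place of the return value.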

Article originally written by the Weizmann Institute of Science

August 21, 2009

Cell on a Chip

The first artificial cell organelle may help researchers find a way to make bioengineered heparin and other synthetic drugs.
The drug heparin is widely used to prevent blood from clotting in medical procedures ranging from dialysis to open-heart surgery. With a $6 billion market, it is one of the most common drugs used in hospitals today. But its widespread use belies its crude origins: more than 90 years after it was discovered, heparin is still made from pig intestines. But a new microfluidics chip, which mimics the actions of one of the cell's most mysterious organs, may help change that. Researchers at Rensselaer Polytechnic Institute in Troy, NY, have created the first artificial cellular organelle and are using it to better understand how the human body makes heparin.
Fake cell: This microfluidics chip can replicate the activity of one of the eukaryotic cell's most important, yet least understood, organelles--the Golgi apparatus. Researchers hope that it can help them understand how to create synthetic versions of important drugs such as heparin.
Credit: Courtesy JACS
Scientists have been working to create a synthetic version of the medication, because the current production method leaves it susceptible to contamination--in 2008, such an incident was responsible for killing scores of people. But the drug has proven incredibly difficult to create in a lab.
Much of the mystery of heparin production stems from the site of its natural synthesis: a cellular organelle called the Golgi apparatus, which processes and packages proteins for transport out of the cell, decorating the proteins with sugars to make glycoproteins. Precisely how it does this has eluded generations of scientists. "The Golgi was discovered over 100 years ago, but what happens inside it is still a black box," says Robert Linhardt, a biotechnologist at Rensselaer who's been working on heparin for nearly 30 years and is lead author of the new study. "Proteins go in, glycoproteins come out. We know the enzymes that are involved now, but we don't really know how they're controlled."
To better understand what was going on inside the Golgi, Linhardt and his colleagues decided to create their own version. The result: the first known artificial cell organelle, a small microfluidics chip that mimics some of the Golgi's actions. The digital device allows the researchers to control the movement of a single microscopic droplet while they add enzymes and sugars, split droplets apart, and slowly build a molecule chain like heparin. "We can essentially control the process, like the Golgi controls the process," Linhardt says. "I think we have a truly artificial version of the Golgi. We could actually design something that functions like an organelle and control it. The next step is to make more complicated reaction combinations."
"People have had bits and pieces of the toolbox for making these important carbohydrates, but one thing you should potentially do is try to emulate nature, or at least figure out how it works," says Paul DeAngelis, a biochemist and molecular biologist at the University of Oklahoma who was not involved in the research. "The miniaturization that they're doing--having little bubbles of liquid fuse and go to different compartments with different catalysts under different conditions--that's how your body and the Golgi apparatus works. It's a nice model."
Currently, researchers know what heparin looks like and what enzymes are required to make it, but they don't quite know how it's made. "It's like having all the materials and tools required to build a house and knowing what the final house looks like, and then having someone say, 'Okay, go build the house,'" Linhardt says. "What we need is a blueprint. We need to know how these tools function together, how the house is assembled." He likens the microfluidics chip to a house-building DIY reel, one that "tells us how to hammer nails, how to saw, how to assemble struts, how to put walls in." By testing reagents in different amounts, with different reaction times, the artificial Golgi may be able to teach them how to synthesize heparin and other molecules in a laboratory setting.
"It's a fusion of engineering and biology," says Jeffrey Esko, a glycobiologist at the University of California, San Diego. "One can do this in test tubes, but the chip provides a way to automate the process on a microscale." The chip also allows for precise control over each individual interaction, and at a small scale.
Linhardt believes that, with the help of the microchip and substantial funding from the National Institutes of Health, his team should be able to bring bioengineered heparin into clinical trials within the next five years.
By Lauren Gravitz

August 20, 2009

Robots 'Evolve' the Ability to Deceive

Tuesday, August 18, 2009

An experiment shows how "deceptive" behavior can emerge from simple rules.

Researchers at the Ecole Polytechnique Fédérale de Lausanne in Switzerland have found that robots equipped with artificial neural networks and programmed to find "food" eventually learned to conceal their visual signals from other robots to keep the food for themselves. The results are detailed in an upcoming PNAS study.

The team programmed small, wheeled robots with the goal of finding food: each robot received more points the longer it stayed close to "food" (signified by a light-colored ring on the floor) and lost points when it was close to "poison" (a dark-colored ring). Each robot could also flash a blue light that other robots could detect with their cameras.

"Over the first few generations, robots quickly evolved to successfully locate the food, while emitting light randomly. This resulted in a high intensity of light near food, which provided social information allowing other robots to more rapidly find the food," write the authors.

The team "evolved" new generations of robots by copying and combining the artificial neural networks of the most successful robots. The scientists also added a few random changes to their code to mimic biological mutations.

Because space is limited around the food, the bots bumped and jostled each other after spotting the blue light. By the 50th generation, some robots had learned not to flash their blue light as much when they were near the food, so as not to draw the attention of other robots, according to the researchers. After a few hundred generations, the majority of the robots never flashed light when they were near the food. The robots also evolved to become either highly attracted to, slightly attracted to, or repelled by the light.
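The evolve-select-mutate loop the researchers describe can be sketched in ordinary software. The toy model below is illustrative only: the one-number "genome" (how often a robot signals near food), the fitness function, and all rates and sizes are invented for the sketch, not taken from the PNAS study.

```python
import random

random.seed(0)  # fixed seed so this toy run is reproducible

POP, GENS, MUT = 50, 100, 0.05  # population size, generations, mutation sigma

def fitness(signal_rate):
    # In this toy model, signalling near food attracts competitors,
    # so a lower signal rate earns more food points (plus small noise).
    return 1.0 - signal_rate + random.gauss(0, 0.01)

# Start with random signalling behavior across the population.
population = [random.random() for _ in range(POP)]

for gen in range(GENS):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:POP // 5]  # select the top 20% as parents
    # Offspring copy a parent's genome with a small random mutation,
    # clipped to the valid [0, 1] range.
    population = [
        min(1.0, max(0.0, random.choice(parents) + random.gauss(0, MUT)))
        for _ in range(POP)
    ]

print(f"mean signal rate after {GENS} generations: {sum(population)/POP:.2f}")
```

Under this selection pressure the mean signal rate collapses toward zero, the software analogue of robots evolving to stop flashing near food.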

"Because robots were competing for food, they were quickly selected to conceal this information," the authors add.

The researchers suggest that the study may help scientists better understand the evolution of biological communication systems.

Article by Kristina Grifantini for Technology Review

August 18, 2009

IBM Scientists Build Computer Chips From DNA

PC World Aug. 16, 2009
Scientists at IBM are experimenting with using DNA molecules as a way to create tiny circuits that could form the basis of smaller, more powerful computer chips.
The company is researching ways in which DNA can arrange itself into patterns on the surface of a chip, and then act as a kind of scaffolding on to which millions of tiny carbon nanotubes and nanoparticles are deposited. That network of nanotubes and nanoparticles could act as the wires and transistors on future computer chips, the IBM scientists said.
For decades chip makers have been etching smaller and smaller patterns onto the surface of chips to speed performance and reduce power consumption. The fastest PC chips today are manufactured using a 45 nanometer process, but as the process dips below 22 nanometers in a few years, the assembly and fabrication of chips becomes far more difficult and expensive, said Bob Allen, senior manager of chemistry and materials at IBM Research.

Read rest of article -->here<--

August 17, 2009

FastForward Radio -- The Technological Singularity

The Speculist, Aug. 17, 2009

Phil Bowermaster and Stephen Gordon will interview Ray Kurzweil Tuesday evening on FastForward Radio at 10:30 Eastern/7:30 Pacific, with audience participation by text chat.

August 12, 2009

Nanowires That Behave Like Cells

Transistors with lipid membranes could make better interfaces for neural prosthetics.

Technology Review August 11, 2009

Researchers at the Lawrence Livermore National Laboratory have sealed silicon-nanowire transistors in a membrane similar to those that surround biological cells. These hybrid devices, which operate similarly to nerve cells, might be used to make better interfaces for prosthetic limbs and cochlear implants. They might also work well as biosensors for medical diagnostics.

Hybrid nanowire: The silicon nanowire shown in the microscope image (top) is covered in a fatty membrane similar to those that surround biological cells. The bottom image is an illustration depicting the two layers of lipid molecules that surround the nanowire, sealing it from the surrounding environment. Ions can pass through the membrane via an ion channel, depicted here in lavender.
Credit: Aleksandr Noy

Biological communication is sophisticated and remains unmatched in today's electronics, which rely on electrical fields and currents. Cells in the human body use many additional means of communication including hormones, neurotransmitters, and ions such as calcium. The nexus of biological communication is the cell membrane, a double layer of fatty molecules studded with proteins that act as gatekeepers and perform the first steps in biological signal processing.

Read original article here

August 11, 2009

IBM gets $16 million to bolster its brain-on-a-chip technology

Network World Aug. 8, 2009

The DARPA SyNAPSE project seeks to build systems that rapidly understand tons of data

The quest to mimic the best parts of human brain function on a highly intelligent computer to decipher tons of data quickly is heating up.

IBM this week got $16.1 million to kick up its part of a Defense Advanced Research Projects Agency research program aimed at rapidly and efficiently putting brain-like senses into actual hardware and software so that computers can process and understand data more rapidly.

Read original article -->here<--


August 10, 2009

Get smarter

Originally published by The Atlantic

Seventy-four thousand years ago, humanity nearly went extinct. A super-volcano at what’s now Lake Toba, in Sumatra, erupted with a strength more than a thousand times that of Mount St. Helens in 1980. Some 800 cubic kilometers of ash filled the skies of the Northern Hemisphere, lowering global temperatures and pushing a climate already on the verge of an ice age over the edge. Some scientists speculate that as the Earth went into a deep freeze, the population of Homo sapiens may have dropped to as low as a few thousand families.

The Mount Toba incident, although unprecedented in magnitude, was part of a broad pattern. For a period of 2 million years, ending with the last ice age around 10,000 B.C., the Earth experienced a series of convulsive glacial events. This rapid-fire climate change meant that humans couldn’t rely on consistent patterns to know which animals to hunt, which plants to gather, or even which predators might be waiting around the corner.

How did we cope? By getting smarter. The neuro­physi­ol­ogist William Calvin argues persuasively that modern human cognition—including sophisticated language and the capacity to plan ahead—evolved in response to the demands of this long age of turbulence. According to Calvin, the reason we survived is that our brains changed to meet the challenge: we transformed the ability to target a moving animal with a thrown rock into a capability for foresight and long-term planning. In the process, we may have developed syntax and formal structure from our simple language.

Read rest of article -->here<--

August 6, 2009

Robot Can Crawl Through Human Body

Technion-Israel Institute of Technology researchers have created a prototype micro robot that can crawl through the human body. It is only a millimeter in diameter and 14 millimeters long, so it can get into the body's smallest areas. It is powered either by actuation through magnetic force located outside the body, or through an on-board...

See original article at American Technion Society

August 4, 2009

Blind students test-drive experimental vehicle

Seattle Times August 1, 2009

Virginia Tech engineers have developed the first vehicle that can be independently operated by a blind driver. Using data from a laser scan of obstacles, a computer voice signals the driver through headphones how to steer to avoid a collision -- one click to the left, for example; three clicks to the right -- and the vehicle's computer...
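As a hypothetical illustration of the click-feedback interface described above, the sketch below maps a steering correction (as might be derived from the laser scan of obstacles) to a number of audio clicks left or right. The thresholds, the one-click-per-ten-degrees rule, and the function name are invented for the sketch, not Virginia Tech's actual design.

```python
def steering_clicks(correction_deg):
    """Translate a steering correction in degrees into (direction, clicks).

    Positive corrections steer right, negative steer left; larger
    corrections produce more clicks, capped at three as in the
    article's example. Small corrections produce no clicks at all.
    """
    direction = "right" if correction_deg > 0 else "left"
    magnitude = abs(correction_deg)
    if magnitude < 1:
        return ("straight", 0)
    clicks = min(3, 1 + int(magnitude // 10))  # roughly 1 click per 10 degrees
    return (direction, clicks)

print(steering_clicks(-5))   # ('left', 1)
print(steering_clicks(25))   # ('right', 3)
```

The appeal of such a scheme is that a click count is fast to parse by ear, which matters when the driver must react before reaching the obstacle.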

Click -->here<-- for rest of article