April 3, 2012
Build a better brain?
In Seattle, Microsoft co-founder Paul Allen pledged US$300-million last week for a series of “brain observatories” that will model the visual parts of mouse brains, and allow researchers to “capture fundamental aspects of higher brain function: from perception to conscious awareness, decision-making and action.”
In Toronto, a team at Baycrest’s Rotman Research Institute is about to release a working model of its Virtual Brain, which aims to recreate the structure and function of grey matter, including its “plasticity,” or the capacity to reorganize after damage.
Read the rest of the article at the National Post
May 10, 2011
Talk with a dolphin via underwater translation machine

May 8, 2010
Army of smartphone chips could emulate the human brain

To coordinate its 'neurons', the chip mimics the way real neurons communicate, using 'spikes' in voltage
- 04 May 2010 by Paul Marks for New Scientist Magazine
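The 'spikes in voltage' mentioned above are usually captured in software with spiking-neuron equations. A minimal leaky integrate-and-fire neuron, sketched below, is a textbook abstraction of that signalling scheme, not the actual circuitry of the chip the article describes:

```python
def lif_spikes(inputs, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron.

    The membrane potential decays by `leak` each step, accumulates the
    incoming current, and emits a spike (1) when it crosses `threshold`,
    after which it resets to zero.
    """
    v, spikes = 0.0, []
    for current in inputs:
        v = v * leak + current
        if v >= threshold:
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold input still fires periodically once charge accumulates.
print(lif_spikes([0.4] * 10))
```

The appeal for hardware is that neurons only need to exchange these brief spike events rather than continuous values, which is what makes a network of many small chips practical.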
October 2, 2009
Nanotech researchers develop artificial pore

CINCINNATI—Using an RNA-powered nanomotor, University of Cincinnati (UC) biomedical engineering researchers have successfully developed an artificial pore able to transmit nanoscale material through a membrane.
In a study led by UC biomedical engineering professor Peixuan Guo, PhD, members of the UC team inserted the modified core of a nanomotor, a microscopic biological machine, into a lipid membrane. The resulting channel enabled them to move both single- and double-stranded DNA through the membrane.
Their paper, “Translocation of double-stranded DNA through membrane-adapted phi29 motor protein nanopores,” will appear in the journal Nature Nanotechnology on Sept. 27, 2009. “The engineered channel could have applications in nano-sensing, gene delivery, drug loading and DNA sequencing,” says Guo.
Guo derived the nanomotor used in the study from the biological motor of bacteriophage phi29, a virus that infects bacteria. Previously, Guo discovered that the bacteriophage phi29 DNA-packaging motor uses six molecules of the genetic material RNA to power its DNA genome through its protein core, much like a screw through a bolt.
"The re-engineered motor core itself has shown to associate with lipid membranes, but we needed to show that it could punch a hole in the lipid membrane," says David Wendell, PhD, co-first author of the paper and a research assistant professor in UC’s biomedical engineering department. "That was one of the first challenges, moving it from its native enclosure into this engineered environment."
In this study, UC researchers embedded the re-engineered nanomotor core into a lipid sheet, creating a channel large enough to allow the passage of double-stranded DNA through the channel.
Guo says past work with biological channels has been focused on channels large enough to move only single-stranded genetic material.
"Since the genomic DNA of human, animals, plants, fungus and bacteria are double stranded, the development of single pore system that can sequence double-stranded DNA is very important," he says.
By being placed into a lipid sheet, the artificial membrane channel can be used to load double-stranded DNA, drugs or other therapeutic material into the liposome, other compartments, or potentially into a cell through the membrane.
Guo also says the process by which the DNA travels through the membrane can have larger applications.
"The idea that a DNA molecule travels through the nanopore, advancing nucleotide by nucleotide, could lead to the development of a single pore DNA sequencing apparatus, an area of strong national interest," he says.
Using stochastic sensing, a new analytical technique used in nanopore work, Wendell says researchers can characterize and identify material, like DNA, moving through the membrane.
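Stochastic sensing boils down to watching the ionic current through the pore and treating each transient dip as one molecule in transit; the depth and duration of the dip are the features that fingerprint the analyte. The sketch below illustrates that event-detection step only; the current values and thresholds are invented for the example, not measured phi29 data:

```python
def classify_blockades(current_trace, open_current=100.0, tolerance=5.0):
    """Segment a nanopore current trace (e.g. in pA) into blockade events.

    Any contiguous run of samples well below the open-pore current is
    treated as one translocation event; its duration and depth are the
    features stochastic sensing uses to identify the molecule.
    """
    events, start = [], None
    for i, c in enumerate(current_trace):
        blocked = c < open_current - tolerance
        if blocked and start is None:
            start = i                      # event begins
        elif not blocked and start is not None:
            depth = open_current - min(current_trace[start:i])
            events.append({"duration": i - start, "depth": depth})
            start = None                   # event ends
    if start is not None:                  # trace ends mid-event
        depth = open_current - min(current_trace[start:])
        events.append({"duration": len(current_trace) - start, "depth": depth})
    return events

# Two dips of different depth would suggest two different analytes.
trace = [100, 99, 60, 58, 61, 100, 100, 30, 32, 100]
print(classify_blockades(trace))
```

In a real instrument the same idea runs on megahertz-sampled current traces, and the (duration, depth) pairs are compared against calibrated signatures.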
Co-first author and UC postdoctoral fellow Peng Jing, PhD, says that, compared with traditional research methods, the successful embedding of the nanomotor into the membrane may also provide researchers with a new way to study the DNA packaging mechanisms of the viral nanomotor.
"Specifically, we are able to investigate the details concerning how double-stranded DNA translocates through the protein channel," he says.
The study is the next step in research on using nanomotors to package and deliver therapeutic agents directly to infected cells. Eventually, the team's work could enable use of nanoscale medical devices to diagnose and treat diseases.
"This motor is one of the strongest bio motors discovered to date," says Wendell, "If you can use that force to move a nanoscale rotor or a nanoscale machine … you're converting the force of the motor into a machine that might do something useful."
Funding for this study comes from the National Institutes of Health's Nanomedicine Development Center. Guo is the director of one of eight NIH Nanomedicine Development Centers and an endowed chair in biomedical engineering at UC.
Coauthors of the study include UC research assistant professor David Wendell, PhD, postdoctoral fellow Peng Jing, PhD, graduate students Jia Geng and Tae Jin Lee and former postdoctoral fellow Varuni Subramaniam from Guo’s previous lab at Purdue University. Carlo Montemagno, dean of the College of Engineering and College of Applied Science, also contributed to the study.
September 23, 2009
Video surveillance system that reasons like a human brain

It is impossible for humans to monitor the tens of millions of cameras deployed throughout the world, a fact long recognized by the international security community. Security video is either used for forensic analysis after an incident has occurred, or it employs a limited-capability technology known as Video Analytics – a video-motion and object-classification-based software technology that attempts to watch video streams and then sends an alarm on specific pre-programmed events. The problem is that this legacy solution generates a great number of false alarms that effectively render it useless in the real world.
BRS Labs has created a technology it calls Behavioral Analytics. It uses cognitive reasoning, much like the human brain, to process visual data and to identify criminal and terrorist activities. Built on a framework of cognitive learning engines and computer vision, AISight provides an automated and scalable surveillance solution that analyzes behavioral patterns, activities and scene content without the need for human training, setup, or programming.
The system learns autonomously, and builds cognitive “memories” while continuously monitoring a scene through the “eyes” of a CCTV security camera. It sees and then registers the context of what constitutes normal behavior, and the software distinguishes and alerts on abnormal behavior without requiring any special programming, definition of rules or virtual trip lines.
AISight is currently fielded across a wide variety of global critical infrastructure assets, protecting major international hotels, banking institutions, seaports, nuclear facilities, airports and dense urban areas plagued by criminal activity.
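The "learn normal, alert on abnormal" idea can be sketched as a simple online outlier detector over a per-frame scene feature (say, total motion energy). This is a generic illustration of learned-baseline anomaly detection, not BRS Labs' actual engine; the feature, the z-score threshold and the warm-up period are all invented for the example:

```python
class NormalityModel:
    """Minimal online anomaly detector: learn the running mean and
    variance of a scene feature and alert when a new observation is
    a statistical outlier relative to everything seen so far."""

    def __init__(self, z_alert=3.0, warmup=10):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.z_alert, self.warmup = z_alert, warmup

    def observe(self, x):
        # Welford's online update of mean and sum of squared deviations.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        if self.n < self.warmup:       # still learning what "normal" is
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return std > 0 and abs(x - self.mean) / std > self.z_alert

model = NormalityModel()
# 50 frames of ordinary activity, then one wildly abnormal frame.
alerts = [model.observe(v) for v in [10, 11, 9, 10, 12] * 10 + [95]]
print(alerts[-1])  # only the abnormal frame trips an alert
```

The point of the sketch is the absence of rules: nothing tells the model what an intrusion looks like, only that it differs from the scene's own history.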
Original article by Help Net Security
August 31, 2009
Fishy Sixth Sense: Mathematical Keys To Fascinating Sense Organ

Fish and some amphibians possess a unique sensory capability in the so-called lateral-line system. It allows them, in effect, to "touch" objects in their surroundings without direct physical contact or to "see" in the dark. Professor Leo van Hemmen and his team in the physics department of the Technische Universitaet Muenchen are exploring the fundamental basis for this sensory system. What they discover might one day, through biomimetic engineering, better equip robots to orient themselves in their environments.
With our senses we take in only a small fraction of the information that surrounds us. Infrared light, electromagnetic waves, and ultrasound are just a few examples of the external influences that we humans can grasp only with the help of technological measuring devices – whereas some other animals use special sense organs, their own biological equipment, for the purpose. One such system found in fish and some amphibians is under investigation by the research team of Professor Leo van Hemmen, chair of theoretical biophysics at TUM, the Technische Universitaet Muenchen.
This remote sensing system, at first glance mysterious, rests on measurement of the pressure distribution and velocity field in the surrounding water. The lateral-line organs responsible for this are aligned along the left and right sides of the fish's body and also surround the eyes and mouth. They consist of gelatinous, flexible, flag-like units about a tenth of a millimeter long. These so-called neuromasts – which sit either directly on the animal's skin or just underneath, in channels that water can permeate through pores – are sensitive to the slightest motion of the water. Coupled to them are hair cells similar to the acoustic pressure sensors in the human inner ear. Nerves deliver signals from the hair cells for processing in the brain, which localizes and identifies possible sources of the changes detected in the water's motion.
These changes can arise from various sources: A fish swimming by produces vibrations or waves that are directly conveyed to the lateral-line organ. Schooling fish can recognize a nearby attacker and synchronize their swimming motion so that they resemble a single large animal. The Mexican cave fish pushes a bow wave ahead of itself, which is reflected from obstacles. The catfish takes advantage of the fact that a swimming fish that beats its tail fin leaves a trail of eddies behind. This so-called "vortex street" persists for more than a minute and can betray the prey.
For the past five years, Leo van Hemmen and his team have been investigating the capabilities of the lateral-line system and assessing the potential to translate it into technology. How broad is the operating range of such a sense organ, and what details can it reveal about moving objects? Which stimuli does the lateral-line system receive from the eddy trail of another fish, and how are these stimuli processed? To get to the bottom of these questions, the scientists develop mathematical models and compare these with experimentally observed electrical nerve signals called action potentials. The biophysicists acquire the experimental data – measurements of lateral-line organ activity in clawed frogs and cave fish – through collaboration with biologists. "Biological systems follow their own laws," van Hemmen says, "but laws that are universally valid within biology and can be described mathematically – once you find the right biophysical or biological concepts, and the right formula."
The models yield surprisingly intuitive-sounding conclusions: Fish can reliably fix the positions of other fish in terms of a distance corresponding to their own body length. Each fish broadcasts definite and distinguishing information about itself into the field of currents. So if, for example, a prey fish discloses its size and form to a possible predator within the radius of its body length, the latter can decide if a pursuit is worth the effort. This is a key finding of van Hemmen's research team.
The TUM researchers have discovered another interesting formula. With this one, the angle between a fish's axis and a vortex street can be computed from the signals that a lateral-line system acquires. The peak capability of this computation matches the best that a fish's nervous system can do. The computed values for nerve signals from an animal's sensory organ agree astonishingly well with the actual measured electrical impulses from the discharge of nerve cells. "The lateral-line sense fascinated me from the start because it's fundamentally different from other senses such as vision or hearing, not just at first glance but also the second," van Hemmen says. "It's not just that it describes a different quality of reality, but also that in place of just two eyes or ears this sense is fed by many discrete lateral-line organs – from 180 in the clawed frog to several thousand in a fish, each of which in turn is composed of several neuromasts. The integration behind it is a tour de force."
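The distance-fixing result has an intuitive geometric core that can be sketched with an idealized model: a small vibrating source produces a pressure amplitude falling off as 1/r^3 along the sensor line, and the width of the resulting profile across the body scales with the source's distance. The code below is a toy illustration of that principle under those assumptions, not the group's actual model:

```python
import math

def pressure_profile(sensor_xs, src_x, src_dist):
    """Pressure amplitude along a line of lateral-line sensors from a
    small vibrating source at lateral position src_x and perpendicular
    distance src_dist (idealized 1/r^3 dipole-like fall-off)."""
    return [1.0 / ((x - src_x) ** 2 + src_dist ** 2) ** 1.5
            for x in sensor_xs]

def estimate_source(sensor_xs, readings):
    """Locate the source from the profile shape alone.

    The peak sits opposite the source, and the full width at half
    maximum is proportional to the source distance: for the 1/r^3
    profile above, FWHM = 2*sqrt(2**(2/3) - 1) * d."""
    peak = max(readings)
    est_x = sensor_xs[readings.index(peak)]
    above = [x for x, r in zip(sensor_xs, readings) if r >= peak / 2]
    fwhm = max(above) - min(above)
    est_d = fwhm / (2 * math.sqrt(2 ** (2 / 3) - 1))
    return est_x, est_d

xs = [i * 0.05 for i in range(-100, 101)]   # sensor positions along the body
readings = pressure_profile(xs, src_x=1.0, src_dist=2.0)
print(estimate_source(xs, readings))
```

Because the profile width grows with distance, the estimate degrades once the source is much farther away than the sensor array is long, which is consistent with the body-length operating range described above.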
The neuronal processing and integration of diverse sense impressions into a unified mapping of reality is a major focus for van Hemmen's group. They are pursuing this same fundamental investigation through the study of desert snakes' infrared perception, vibration sensors in scorpions' feet, and barn owls' hearing.
"Technology has overtaken nature in some domains," van Hemmen says, "but lags far behind in the cognitive processing of received sense impressions. My dream is to endow robots with multiple sensory modalities. Instead of always building in more cameras, we should also along the way give them additional sensors for sound and touch." With a sense modeled on the lateral-line system, but which would function as well in air as under water, robots might for example move safely among crowds of people. But such a system also offers many promising applications in the water. Underwater robots could use it to orient themselves during the exploration of inaccessible cave systems and deep-sea volcanoes. Autonomous submarines could also locate obstacles in turbid water. Such an underwater vehicle is currently being developed within the framework of the EU project CILIA, in collaboration with the TUM chair for guidance and control technology.
Further research includes collaborations with the excellence cluster CoTeSys (Cognition for Technical Systems) and the newly created Leonardo da Vinci Center for Bionics at TUM, as well as with the chair for humanoid robots and the Bernstein Center for Computational Neuroscience.
ScienceDaily (Aug. 30, 2009)
August 24, 2009
Smart machines: What's the worst that could happen?

An invasion led by artificially intelligent machines. Conscious computers. A smartphone virus so smart that it can start mimicking you. You might think that such scenarios are laughably futuristic, but some of the world's leading artificial intelligence (AI) researchers are concerned enough about the potential impact of advances in AI that they have been discussing the risks over the past year. Now they have revealed their conclusions.
Until now, research in artificial intelligence has been mainly occupied by myriad basic challenges that have turned out to be very complex, such as teaching machines to distinguish between everyday objects. Human-level artificial intelligence or self-evolving machines were seen as long-term, abstract goals not yet ready for serious consideration.
Now, for the first time, a panel of 25 AI scientists, roboticists, and ethical and legal scholars has been convened to address these issues, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI) in Menlo Park, California. It looked at the feasibility and ramifications of seemingly far-fetched ideas, such as the possibility of the internet becoming self-aware.
The panel drew inspiration from the 1975 Asilomar Conference on Recombinant DNA in California, in which over 140 biologists, physicians, and lawyers considered the possibilities and dangers of the then emerging technology for creating DNA sequences that did not exist in nature. Delegates at that conference foresaw that genetic engineering would become widespread, even though practical applications – such as growing genetically modified crops – had not yet been developed.
Unlike recombinant DNA in 1975, however, AI is already out in the world. Robots like Roombas and Scoobas help with the mundane chores of vacuuming and mopping, while decision-making devices are assisting in complex, sometimes life-and-death situations. For example, Poseidon Technologies sells AI systems that help lifeguards identify when a person is drowning in a swimming pool, and Microsoft's Clearflow system helps drivers pick the best route by analysing traffic behaviour.
At the moment such systems only advise or assist humans, but the AAAI panel warns that the day is not far off when machines could have far greater ability to make and execute decisions on their own, albeit within a narrow range of expertise. As such AI systems become more commonplace, what breakthroughs can we reasonably expect, and what effects will they have on society? What's more, what precautions should we be taking?
These are among the many questions that the panel tackled, under the chairmanship of Eric Horvitz, president of the AAAI and senior researcher with Microsoft Research. The group began meeting by phone and teleconference in mid-2008, then in February this year its members gathered at Asilomar, a quiet town on the north California coast, for a weekend to debate and seek consensus. They presented their initial findings at the International Joint Conference for Artificial Intelligence (IJCAI) in Pasadena, California, on 15 July.
Panel members told IJCAI that they unanimously agreed that creating human-level artificial intelligence – a system capable of expertise across a range of domains – is possible in principle, but disagreed as to when such a breakthrough might occur, with estimates varying wildly between 20 and 1000 years.
Panel member Tom Dietterich of Oregon State University in Corvallis pointed out that much of today's AI research is not aimed at building a general human-level AI system, but rather focuses on "idiot savant" systems, good at tasks in a very narrow range of application, such as mathematics.
The panel discussed at length the idea of an AI "singularity" – a runaway chain reaction of machines capable of building ever-better machines. While admitting that it was theoretically possible, most members were skeptical that such an exponential AI explosion would occur in the foreseeable future, given the lack of projects today that could lead to systems capable of improving upon themselves. "Perhaps the singularity is not the biggest of our worries," said Dietterich.
A more realistic short-term concern is the possibility of malware that can mimic the digital behaviour of humans. According to the panel, identity thieves might feasibly plant a virus on a person's smartphone that would silently monitor their text messages, email, voice, diary and bank details. The virus could then use these to impersonate that individual with little or no external guidance from the thieves. Most of the panel's researchers believe such a virus could be developed. "If we could do it, they could," said Tom Mitchell of Carnegie Mellon University in Pittsburgh, Pennsylvania, referring to organised crime syndicates.
Peter Szolovits, an AI researcher at the Massachusetts Institute of Technology, who was not on the panel, agrees that common everyday computer systems such as smartphones have layers of complexity that could lead to unintended consequences or allow malicious exploitation. "There are a few thousand lines of code running on my cell phone and I sure as hell haven't verified all of them," he says.
"These are potentially powerful technologies that could be used in good ways and not so good ways," says Horvitz, and cautions that besides the threat posed by malware, we are close to creating systems so complex and opaque that we don't understand them.
Given such possibilities, "what's the responsibility of an AI researcher?" says Bart Selman of Cornell, co-chair of the panel. "We're starting to think about it."
At least for now we can rest easy on one score. The panel concluded that the internet is not about to become self-aware.
- Posted July 2009 by MacGregor Campbell for New Scientist
May 6, 2009
Emotional Robot Tries to Create Relationships

The world's first robot with its own Facebook page (and that can use its information in conversations with "friends") has been developed by the Interactive Robots and Media Lab at the United Arab Emirates University.
We all love robots, right? And yet that special relationship never seems to materialise. However intensely they begin, our relationships with robots gradually wane as the realisation dawns that it wasn’t love that brought us together, but mere infatuation.
The relationship quickly and inevitably breaks down, like the morning after a Las Vegas wedding. (Japanese researchers have even measured the decline in interaction levels as humans lose interest in robot toys.)
But building a meaningful relationship with a robot may soon get easier if Nikolaos Mavridis and pals from the Interactive Robots and Media Lab at the United Arab Emirates University have anything to do with it. They say the key to building a longer, meaningful relationship with a robot is to become embedded in the same network of shared friends and together build a pool of shared memories that you can both refer to. Just like a real friend.
So the team has created the world’s first robot that does both these things–it has its own Facebook page and it can use the information it gathers from this social network in conversations with “friends”.
That seems a useful approach provided the robot can make sense of the interactions it has with humans. One key challenge will be determining the emotional state of the friends it meets and then empathising with them in an appropriate way. That’s not always easy for humans.
Nevertheless, humans are fairly predictable animals. There are numerous shortcuts and tricks that a clever programmer can exploit to give the impression of empathy and understanding, making a robot appear emotional.
One of the most extraordinary things about ELIZA, the world’s first chatbot, created in 1966, was the number of people reported to have fallen in love with it. The program used a simple trick to give the impression of empathy and compassion: it asked general leading questions and then turned the responses into questions which it repeated back to its conversational partner. Here’s a real example from the ELIZA archives:
young woman: Men are all alike.
eliza: In what way?
young woman: They’re always bugging us about something or other.
eliza: Can you think of a specific example?
young woman: Well, my boyfriend made me come here.
eliza: Your boyfriend made you come here?
young woman: He says I’m depressed much of the time.
And so on. Perhaps the new UAE robot might continue with: “Remember when we both…”.
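The reflection trick in that transcript fits in a few lines. The sketch below is a minimal homage with a small, invented word list, not Weizenbaum's original script:

```python
# Swap first- and second-person words so a statement reflects back
# at the speaker. The word list here is a tiny illustrative subset.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "i'm": "you're",
}

def eliza_reply(statement):
    """Reflect a user's statement back as a question, ELIZA-style."""
    words = statement.rstrip(".!").split()
    reflected = [REFLECTIONS.get(w.lower(), w) for w in words]
    return " ".join(reflected).capitalize() + "?"

print(eliza_reply("My boyfriend made me come here."))
```

Run on the transcript line above, it reproduces ELIZA's reply almost verbatim: "Your boyfriend made you come here?" The full program added keyword-ranked pattern rules on top of this, but the reflection is what gave it the illusion of listening.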
Sadly, the UAE team are about to make their work much harder. They’re planning to implement their programme in a humanoid robot called IbnSina, which they have developed at their lab. That will introduce an entirely new problem into any prospective relationship: the uncanny valley that various Japanese roboticists talk about. This is the feeling of revulsion that almost-but-not-quite humanoids seem to generate in humans.
And revulsion is about as big a barrier to a meaningful relationship as you can get.
Original article by the physics arXiv blog, May 5, 2009