Showing posts with label AI research. Show all posts

April 3, 2012

Build a better brain?

Artificial brains are on the rise, powered by huge investment, exponential growth in computer power, and new insights in everything from statistics to biochemistry.

In Seattle, Microsoft co-founder Paul Allen pledged US$300-million last week for a series of “brain observatories” that will model the visual parts of mouse brains, and allow researchers to “capture fundamental aspects of higher brain function: from perception to conscious awareness, decision-making and action.”

In Toronto, a team at Baycrest’s Rotman Research Institute is about to release a working model of its Virtual Brain, which aims to recreate the structure and function of grey matter, including its “plasticity,” or the capacity to reorganize after damage.

Read the rest of the article at the National Post

May 10, 2011

Talk with a dolphin via underwater translation machine

A DIVER carrying a computer that tries to recognise dolphin sounds and generate responses in real time will soon attempt to communicate with wild dolphins off the coast of Florida. If the bid is successful, it will be a big step towards two-way communication between humans and dolphins.

Since the 1960s, captive dolphins have been communicating via pictures and sounds. In the 1990s, Louis Herman of the Kewalo Basin Marine Mammal Laboratory in Honolulu, Hawaii, found that bottlenose dolphins can keep track of over 100 different words. They can also respond appropriately to commands in which the same words appear in a different order, understanding the difference between "bring the surfboard to the man" and "bring the man to the surfboard", for example.

But communication in most of these early experiments was one-way, says Denise Herzing, founder of the Wild Dolphin Project in Jupiter, Florida. "They create a system and expect the dolphins to learn it, and they do, but the dolphins are not empowered to use the system to request things from the humans," she says.
Since 1998, Herzing and colleagues have been attempting two-way communication with dolphins, first using rudimentary artificial sounds, then by getting them to associate the sounds with four large icons on an underwater "keyboard".

By pointing their bodies at the different symbols, the dolphins could make requests - to play with a piece of seaweed or ride the bow wave of the divers' boat, for example. The system managed to get the dolphins' attention, Herzing says, but wasn't "dolphin-friendly" enough to be successful.

Herzing is now collaborating with Thad Starner, an artificial intelligence researcher at the Georgia Institute of Technology in Atlanta, on a project named Cetacean Hearing and Telemetry (CHAT). They want to work with dolphins to "co-create" a language that uses features of sounds that wild dolphins communicate with naturally.

Read the rest of the original article by MacGregor Campbell at New Scientist magazine

May 8, 2010

Army of smartphone chips could emulate the human brain

IF YOU have a smartphone, you probably have a slice of Steve Furber's brain in your pocket. By the time you read this, his 1-billion-neuron silicon brain will be in production at a microchip plant in Taiwan.
Computer engineers have long wanted to copy the compact power of biological brains. But the best mimics so far have been impractical, being simulations running on supercomputers.
Furber, a computer scientist at the University of Manchester, UK, says that if we want to use computers with even a fraction of a brain's flexibility, we need to start with affordable, practical, low-power components.
"We're using bog-standard, off-the-shelf processors of fairly modest performance," he says.
Furber won't come close to copying every property of real neurons, says Henry Markram, head of Blue Brain, an attempt to simulate a brain with unsurpassed accuracy on an IBM Blue Gene supercomputer at the Swiss Federal Institute of Technology in Lausanne (EPFL). "It's a worthy aim, but brain-inspired chips can only produce brain-like functions," he says.
That's good enough for Furber, who wants to start teaching his brain-like computer about the world as soon as possible. His first goal is to teach it how to control a robotic arm, before working towards a design to control a humanoid. A robot controller with even a dash of brain-like properties should be much better at tasks like image recognition, navigation and decision-making, says Furber.
"Robots offer a natural, sensory environment for testing brain-like computers," says Furber. "You can instantly tell if it is being useful."
Called SpiNNaker - for Spiking Neural Network Architecture - the brain is based on a processor created in 1987 by Furber and colleagues at Acorn Computers in Cambridge, UK, makers of the seminal BBC Microcomputer.
Although the chip was made for a follow-up computer that flopped, the ARM design at its heart lived on, becoming the most common "embedded" processor in devices like e-book readers and smartphones.
But coaxing any computer into behaving like a brain is tough. Both real neurons and computer circuits communicate using electrical signals, but in biology the "wires" carrying them do not have fixed roles as in electronics. The importance of a particular neural connection, or synapse, varies as the network learns by balancing the influence of the different signals being received. This synaptic "weighting" must be dynamic in a silicon brain, too.
The chips under construction in Taiwan contain 20 ARM processor cores, each modelling 1000 neurons. With 20,000 neurons per chip, 50,000 chips will be needed to reach the target of 1 billion neurons.
A memory chip next to each processor stores the changing synaptic weights as simple numbers that represent the importance of a given connection at any moment. Initially, those will be loaded from a PC, but as the system gets bigger and smarter, says Furber, "the only computer able to compute them will be the machine itself".
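The scheme described above - each connection's importance stored as a plain number that is nudged up or down as the network learns - can be sketched in a few lines. This is an illustrative toy, not SpiNNaker's actual code; the class, names and the Hebbian-style update rule are assumptions for illustration.

```python
# Toy sketch of dynamic synaptic weights stored as simple numbers,
# in the spirit of the memory chip beside each processor core.

class Synapses:
    def __init__(self):
        self.weight = {}          # (pre, post) -> importance of connection

    def connect(self, pre, post, w=0.5):
        self.weight[(pre, post)] = w

    def hebbian_update(self, pre, post, rate=0.1):
        """Strengthen a connection when both neurons fire together."""
        w = self.weight[(pre, post)]
        # Move the weight toward 1.0, so weights stay bounded as learning proceeds.
        self.weight[(pre, post)] = min(1.0, w + rate * (1.0 - w))

syn = Synapses()
syn.connect("n1", "n2", w=0.5)
syn.hebbian_update("n1", "n2")
print(syn.weight[("n1", "n2")])  # 0.55
```

The point is only that a "synapse" here is nothing more than a mutable number, which is why ordinary memory chips can hold them.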
Another brain-like behaviour his chips need to master is to communicate coordinated "spikes" of voltage. A computer has no trouble matching the speed at which individual neurons spike - about 10 times per second - but neurons work in very much larger, parallel groups than silicon logic gates.
In a brain there is no top-down control to coordinate their actions because the basic nature of individual neurons means that they work together in an emergent, bottom-up way.
SpiNNaker cannot mimic that property, so it relies on a miniature controller to direct spike traffic, similar to one of the routers in the internet's backbone. "We can route to more than 4 billion neurons," says Furber, "many more than we need."
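Router-style spike delivery of this kind is usually done by treating each spike as a tiny packet carrying only the ID of the neuron that fired, with a table mapping source IDs to destinations; a 32-bit ID space is what allows "more than 4 billion" addressable neurons. The table layout and names below are illustrative assumptions, not the actual router design.

```python
# Illustrative address-event routing: a spike is just the 32-bit ID of the
# neuron that fired; a routing table fans it out to the cores that need it.

routing_table = {
    1: ["core_A", "core_B"],   # neuron 1 fans out to two cores
    2: ["core_C"],
}

def route_spike(neuron_id):
    """Return the cores that should receive this spike event."""
    assert neuron_id < 2**32, "IDs fit in 32 bits"
    return routing_table.get(neuron_id, [])   # unknown IDs go nowhere

print(route_spike(1))  # ['core_A', 'core_B']
```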
While the Manchester team await the arrival of their chips, they have built a cut-down version with just 50 neurons and have put the prototype through its paces in the lab. They have created a virtual environment in which the silicon brain controls a Pac-Man-like program that learns to hunt for a virtual doughnut.
"It shows that our four years designing the system haven't been wasted," says Furber. He hopes to have a 10,000-processor version working later this year.
As they attempt to coax brain-like behaviour from phone chips, others are working with hardware which may have greater potential.
The Defense Advanced Research Projects Agency, the Pentagon's research arm, is funding a project called SyNAPSE. Wei Lu of the University of Michigan, Ann Arbor, is working on a way of providing synaptic weights with memristors, first made in 2008 (New Scientist, 3 May 2008, p 26).
Handily, their most basic nature is brain-like: at any one moment a memristor's resistance depends on the last voltage placed across it. This rudimentary "memory" means that simple networks of memristors form weighted connections like those of neurons. This memory remains without drawing power, unlike the memory chips needed in SpiNNaker. "Memristors are pretty neat," says Lu.
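The behaviour just described - resistance that depends on the voltage history and persists when no power is applied - can be captured in a minimal toy model, in which resistance drifts between two physical bounds under an applied voltage. All parameters and names here are illustrative assumptions, not a real device model.

```python
# Toy memristor: positive voltage drives resistance down toward R_ON,
# negative voltage drives it up toward R_OFF, and zero voltage leaves
# the state unchanged (the non-volatile "memory").

R_ON, R_OFF = 100.0, 16000.0      # ohms: bounds on the device's resistance

def step(resistance, voltage, dt=1e-3, k=5e6):
    """Advance the device state by one time step of applied voltage."""
    r = resistance - k * voltage * dt     # positive V lowers resistance
    return min(R_OFF, max(R_ON, r))       # stay within physical bounds

r = R_OFF
for _ in range(3):
    r = step(r, voltage=1.0)              # "write": lower the resistance
print(r)   # 1000.0  (16000 - 3 * 5000)
r = step(r, voltage=0.0)                  # no voltage: state is retained
print(r)   # 1000.0
```

The retained value plays the role of a synaptic weight: read it with a small voltage, change it with a large one.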
Their downside, though, is that they are untested. "SyNAPSE is an extremely ambitious project," says Furber. "But ambition is what drives this field. No one knows the right way to go."


October 2, 2009

Nanotech researchers develop artificial pore


CINCINNATI—Using an RNA-powered nanomotor, University of Cincinnati (UC) biomedical engineering researchers have successfully developed an artificial pore able to transmit nanoscale material through a membrane.

In a study led by UC biomedical engineering professor Peixuan Guo, PhD, members of the UC team inserted the modified core of a nanomotor, a microscopic biological machine, into a lipid membrane. The resulting channel enabled them to move both single- and double-stranded DNA through the membrane.

Their paper, “Translocation of double-stranded DNA through membrane-adapted phi29 motor protein nanopores,” will appear in the journal Nature Nanotechnology, Sept. 27, 2009. "The engineered channel could have applications in nano-sensing, gene delivery, drug loading and DNA sequencing," says Guo.

Guo derived the nanomotor used in the study from the biological motor of bacteriophage phi29, a virus that infects bacteria. Previously, Guo discovered that the bacteriophage phi29 DNA-packaging motor uses six molecules of the genetic material RNA to power its DNA genome through its protein core, much like a screw through a bolt.

"The re-engineered motor core itself has shown to associate with lipid membranes, but we needed to show that it could punch a hole in the lipid membrane," says David Wendell, PhD, co-first author of the paper and a research assistant professor in UC’s biomedical engineering department. "That was one of the first challenges, moving it from its native enclosure into this engineered environment."

In this study, UC researchers embedded the re-engineered nanomotor core into a lipid sheet, creating a channel large enough to allow the passage of double-stranded DNA through the channel.

Guo says past work with biological channels has been focused on channels large enough to move only single-stranded genetic material.

"Since the genomic DNA of human, animals, plants, fungus and bacteria are double stranded, the development of single pore system that can sequence double-stranded DNA is very important," he says.

By being placed into a lipid sheet, the artificial membrane channel can be used to load double-stranded DNA, drugs or other therapeutic material into the liposome, other compartments, or potentially into a cell through the membrane.

Guo also says the process by which the DNA travels through the membrane can have larger applications.

"The idea that a DNA molecule travels through the nanopore, advancing nucleotide by nucleotide, could lead to the development of a single pore DNA sequencing apparatus, an area of strong national interest," he says.

Wendell says that using stochastic sensing, a new analytical technique used in nanopore work, researchers can characterize and identify material, like DNA, moving through the membrane.

Co-first author and UC postdoctoral fellow Peng Jing, PhD, says that, compared with traditional research methods, the successful embedding of the nanomotor into the membrane may also provide researchers with a new way to study the DNA packaging mechanisms of the viral nanomotor.

"Specifically, we are able to investigate the details concerning how double-stranded DNA translocates through the protein channel," he says.

The study is the next step in research on using nanomotors to package and deliver therapeutic agents directly to infected cells. Eventually, the team's work could enable use of nanoscale medical devices to diagnose and treat diseases.

"This motor is one of the strongest bio motors discovered to date," says Wendell. "If you can use that force to move a nanoscale rotor or a nanoscale machine … you're converting the force of the motor into a machine that might do something useful."

Funding for this study comes from the National Institutes of Health's Nanomedicine Development Center. Guo is the director of one of eight NIH Nanomedicine Development Centers and an endowed chair in biomedical engineering at UC.

Coauthors of the study include UC research assistant professor David Wendell, PhD, postdoctoral fellow Peng Jing, PhD, graduate students Jia Geng and Tae Jin Lee and former postdoctoral fellow Varuni Subramaniam from Guo’s previous lab at Purdue University. Carlo Montemagno, dean of the College of Engineering and College of Applied Science, also contributed to the study.

September 23, 2009

Video surveillance system that reasons like a human brain

BRS Labs announced a video-surveillance technology called Behavioral Analytics, which leverages cognitive reasoning to process visual data much as the human brain does.

It is impossible for humans to monitor the tens of millions of cameras deployed throughout the world, a fact long recognized by the international security community. Security video is either used for forensic analysis after an incident has occurred, or it employs a limited-capability technology known as Video Analytics: video-motion and object-classification software that attempts to watch video streams and then sends an alarm on specific pre-programmed events. The problem is that this legacy solution generates so many false alarms that it is effectively useless in the real world.

BRS Labs has created a technology it calls Behavioral Analytics. It uses cognitive reasoning, much like the human brain, to process visual data and to identify criminal and terroristic activities. Built on a framework of cognitive learning engines and computer vision, AISight provides an automated and scalable surveillance solution that analyzes behavioral patterns, activities and scene content without the need for human training, setup, or programming.

The system learns autonomously, and builds cognitive “memories” while continuously monitoring a scene through the “eyes” of a CCTV security camera. It sees and then registers the context of what constitutes normal behavior, and the software distinguishes and alerts on abnormal behavior without requiring any special programming, definition of rules or virtual trip lines.
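At its core, "learn what is normal, alert on what is not" is statistical anomaly detection. The sketch below models one scene feature (say, objects per frame) with a running mean and variance and flags observations far outside the learned range. It is a generic illustration under stated assumptions, not BRS Labs' actual algorithm.

```python
# Minimal learn-normal / flag-abnormal monitor using Welford's online
# mean-and-variance update, so no history needs to be stored.

import math

class NormalcyModel:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def observe(self, x):
        """Fold one observation into the running mean and variance."""
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)

    def is_abnormal(self, x, threshold=3.0):
        if self.n < 2:
            return False                   # not enough history to judge
        std = math.sqrt(self.m2 / (self.n - 1))
        return abs(x - self.mean) > threshold * max(std, 1e-9)

model = NormalcyModel()
for count in [5, 6, 5, 7, 6, 5, 6]:       # typical scene activity
    model.observe(count)
print(model.is_abnormal(6))   # False
print(model.is_abnormal(40))  # True
```

No rules or trip lines are programmed in; "abnormal" is simply whatever the scene has not shown before.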

AISight is currently fielded across a wide variety of global critical infrastructure assets, protecting major international hotels, banking institutions, seaports, nuclear facilities, airports and dense urban areas plagued by criminal activity.

Original article by Help Net Security

August 31, 2009

Fishy Sixth Sense: Mathematical Keys To Fascinating Sense Organ


Fish and some amphibians possess a unique sensory capability in the so-called lateral-line system. It allows them, in effect, to "touch" objects in their surroundings without direct physical contact or to "see" in the dark. Professor Leo van Hemmen and his team in the physics department of the Technische Universitaet Muenchen are exploring the fundamental basis for this sensory system. What they discover might one day, through biomimetic engineering, better equip robots to orient themselves in their environments.

With our senses we take in only a small fraction of the information that surrounds us. Infrared light, electromagnetic waves, and ultrasound are just a few examples of the external influences that we humans can grasp only with the help of technological measuring devices – whereas some other animals use special sense organs, their own biological equipment, for the purpose. One such system found in fish and some amphibians is under investigation by the research team of Professor Leo van Hemmen, chair of theoretical biophysics at TUM, the Technische Universitaet Muenchen.
Even in murky waters hardly penetrated by light, pike and pickerel can feel out their prey before making contact. The blind Mexican cave fish can perceive structures in its surroundings and can effortlessly avoid obstacles. Catfish on the hunt follow invisible tracks that lead directly to their prey. The organ that makes this possible is the lateral-line system, which registers changes in currents and even smaller disturbances, providing backup support for the sense of sight particularly in dark or muddy waters.
This remote sensing system, at first glance mysterious, rests on measurement of the pressure distribution and velocity field in the surrounding water. The lateral-line organs responsible for this are aligned along the left and right sides of the fish's body and also surround the eyes and mouth. They consist of gelatinous, flexible, flag-like units about a tenth of a millimeter long. These so-called neuromasts – which sit either directly on the animal's skin or just underneath, in channels that water can permeate through pores – are sensitive to the slightest motion of the water. Coupled to them are hair cells similar to the acoustic pressure sensors in the human inner ear. Nerves deliver signals from the hair cells for processing in the brain, which localizes and identifies possible sources of the changes detected in the water's motion.
These changes can arise from various sources: A fish swimming by produces vibrations or waves that are directly conveyed to the lateral-line organ. Schooling fishes can recognize a nearby attacker and synchronize their swimming motion so that they resemble a single large animal. The Mexican cave fish pushes a bow wave ahead of itself, which is reflected from obstacles. The catfish takes advantage of the fact that a swimming fish that beats its tail fin leaves a trail of eddies behind. This so-called "vortex street" persists for more than a minute and can betray the prey.
For the past five years, Leo van Hemmen and his team have been investigating the capabilities of the lateral-line system and assessing the potential to translate it into technology. How broad is the operating range of such a sense organ, and what details can it reveal about moving objects? Which stimuli does the lateral-line system receive from the eddy trail of another fish, and how are these stimuli processed? To get to the bottom of these questions, the scientists develop mathematical models and compare these with experimentally observed electrical nerve signals called action potentials. The biophysicists acquire the experimental data – measurements of lateral-line organ activity in clawed frogs and cave fish – through collaboration with biologists. "Biological systems follow their own laws," van Hemmen says, "but laws that are universally valid within biology and can be described mathematically – once you find the right biophysical or biological concepts, and the right formula."
The models yield surprisingly intuitive-sounding conclusions: Fish can reliably fix the positions of other fish in terms of a distance corresponding to their own body length. Each fish broadcasts definite and distinguishing information about itself into the field of currents. So if, for example, a prey fish discloses its size and form to a possible predator within the radius of its body length, the latter can decide if a pursuit is worth the effort. This is a key finding of van Hemmen's research team.
The TUM researchers have discovered another interesting formula. With this one, the angle between a fish's axis and a vortex street can be computed from the signals that a lateral-line system acquires. The peak capability of this computation matches the best that a fish's nervous system can do. The computed values for nerve signals from an animal's sensory organ agree astonishingly well with the actual measured electrical impulses from the discharge of nerve cells. "The lateral-line sense fascinated me from the start because it's fundamentally different from other senses such as vision or hearing, not just at first glance but also the second," van Hemmen says. "It's not just that it describes a different quality of reality, but also that in place of just two eyes or ears this sense is fed by many discrete lateral-line organs – from 180 in the clawed frog to several thousand in a fish, each of which in turn is composed of several neuromasts. The integration behind it is a tour de force."
The neuronal processing and integration of diverse sense impressions into a unified mapping of reality is a major focus for van Hemmen's group. They are pursuing this same fundamental investigation through the study of desert snakes' infrared perception, vibration sensors in scorpions' feet, and barn owls' hearing.
"Technology has overtaken nature in some domains," van Hemmen says, "but lags far behind in the cognitive processing of received sense impressions. My dream is to endow robots with multiple sensory modalities. Instead of always building in more cameras, we should also along the way give them additional sensors for sound and touch." With a sense modeled on the lateral-line system, but which would function as well in air as under water, robots might for example move safely among crowds of people. But such a system also offers many promising applications in the water. Underwater robots could use it to orient themselves during the exploration of inaccessible cave systems and deep-sea volcanoes. Autonomous submarines could also locate obstacles in turbid water. Such an underwater vehicle is currently being developed within the framework of the EU project CILIA, in collaboration with the TUM chair for guidance and control technology.
Further research includes collaborations with the excellence cluster CoTeSys (Cognition for Technical Systems) and the newly created Leonardo da Vinci Center for Bionics at TUM, as well as with the chair for humanoid robots and the Bernstein Center for Computational Neuroscience.
ScienceDaily (Aug. 30, 2009)


August 24, 2009

Smart machines: What's the worst that could happen?


An invasion led by artificially intelligent machines. Conscious computers. A smartphone virus so smart that it can start mimicking you. You might think that such scenarios are laughably futuristic, but some of the world's leading artificial intelligence (AI) researchers are concerned enough about the potential impact of advances in AI that they have been discussing the risks over the past year. Now they have revealed their conclusions.

Until now, research in artificial intelligence has been mainly occupied by myriad basic challenges that have turned out to be very complex, such as teaching machines to distinguish between everyday objects. Human-level artificial intelligence or self-evolving machines were seen as long-term, abstract goals not yet ready for serious consideration.

Now, for the first time, a panel of 25 AI scientists, roboticists, and ethical and legal scholars has been convened to address these issues, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI) in Menlo Park, California. It looked at the feasibility and ramifications of seemingly far-fetched ideas, such as the possibility of the internet becoming self-aware.

The panel drew inspiration from the 1975 Asilomar Conference on Recombinant DNA in California, in which over 140 biologists, physicians, and lawyers considered the possibilities and dangers of the then emerging technology for creating DNA sequences that did not exist in nature. Delegates at that conference foresaw that genetic engineering would become widespread, even though practical applications – such as growing genetically modified crops – had not yet been developed.

Unlike recombinant DNA in 1975, however, AI is already out in the world. Robots like Roombas and Scoobas help with the mundane chores of vacuuming and mopping, while decision-making devices are assisting in complex, sometimes life-and-death situations. For example, Poseidon Technologies sells AI systems that help lifeguards identify when a person is drowning in a swimming pool, and Microsoft's Clearflow system helps drivers pick the best route by analysing traffic behaviour.

At the moment such systems only advise or assist humans, but the AAAI panel warns that the day is not far off when machines could have far greater ability to make and execute decisions on their own, albeit within a narrow range of expertise. As such AI systems become more commonplace, what breakthroughs can we reasonably expect, and what effects will they have on society? What's more, what precautions should we be taking?

These are among the many questions that the panel tackled, under the chairmanship of Eric Horvitz, president of the AAAI and senior researcher with Microsoft Research. The group began meeting by phone and teleconference in mid-2008, then in February this year its members gathered at Asilomar, a quiet town on the north California coast, for a weekend to debate and seek consensus. They presented their initial findings at the International Joint Conference for Artificial Intelligence (IJCAI) in Pasadena, California, on 15 July.

Panel members told IJCAI that they unanimously agreed that creating human-level artificial intelligence – a system capable of expertise across a range of domains – is possible in principle, but disagreed as to when such a breakthrough might occur, with estimates varying wildly between 20 and 1000 years.

Panel member Tom Dietterich of Oregon State University in Corvallis pointed out that much of today's AI research is not aimed at building a general human-level AI system, but rather focuses on "idiot savant" systems that are good at tasks in a very narrow range of applications, such as mathematics.

The panel discussed at length the idea of an AI "singularity" – a runaway chain reaction of machines capable of building ever-better machines. While admitting that it was theoretically possible, most members were skeptical that such an exponential AI explosion would occur in the foreseeable future, given the lack of projects today that could lead to systems capable of improving upon themselves. "Perhaps the singularity is not the biggest of our worries," said Dietterich.

A more realistic short-term concern is the possibility of malware that can mimic the digital behavior of humans. According to the panel, identity thieves might feasibly plant a virus on a person's smartphone that would silently monitor their text messages, email, voice, diary and bank details. The virus could then use these to impersonate that individual with little or no external guidance from the thieves. Most researchers on the panel think that such a virus could be developed. "If we could do it, they could," said Tom Mitchell of Carnegie Mellon University in Pittsburgh, Pennsylvania, referring to organised crime syndicates.

Peter Szolovits, an AI researcher at the Massachusetts Institute of Technology, who was not on the panel, agrees that common everyday computer systems such as smartphones have layers of complexity that could lead to unintended consequences or allow malicious exploitation. "There are a few thousand lines of code running on my cell phone and I sure as hell haven't verified all of them," he says.

"These are potentially powerful technologies that could be used in good ways and not so good ways," says Horvitz, who cautions that besides the threat posed by malware, we are close to creating systems so complex and opaque that we don't understand them.

Given such possibilities, "what's the responsibility of an AI researcher?" says Bart Selman of Cornell, co-chair of the panel. "We're starting to think about it."

At least for now we can rest easy on one score. The panel concluded that the internet is not about to become self-aware.

May 6, 2009

Emotional Robot Tries to Create Relationships



The world’s first robot with its own Facebook page is part of an ambitious experiment to build long-term meaningful relationships with humans.


We all love robots, right? And yet that special relationship never seems to materialise. However intensely they begin, our relationships with robots gradually wane as the realisation dawns that it wasn’t love that brought us together, but mere infatuation.

The relationship quickly and inevitably breaks down, like the morning after a Las Vegas wedding. (Japanese researchers have even measured the decline in interaction levels as humans lose interest in robot toys.)

But building a meaningful relationship with a robot may soon get easier if Nikolaos Mavridis and pals from the Interactive Robots and Media Lab at the United Arab Emirates University have anything to do with it. They say the key to building a longer, meaningful relationship with a robot is to become embedded in the same network of shared friends and together build a pool of shared memories that you can both refer to. Just like a real friend.

So the team has created the world’s first robot that does both these things–it has its own Facebook page and it can use the information it gathers from this social network in conversations with “friends”.

That seems a useful approach provided the robot can make sense of the interactions it has with humans. One key challenge will be determining the emotional state of the friends it meets and then empathising with them in an appropriate way. That’s not always easy for humans.

Nevertheless, humans are fairly predictable animals. There are numerous shortcuts and tricks that a clever programmer can exploit to give the impression of empathy and understanding, making the machine appear to be an emotional robot.

One of the most extraordinary things about ELIZA, the world’s first chatbot, created in 1966, was the number of people reported to have fallen in love with it. The program used a simple trick to give the impression of empathy and compassion: it asked general leading questions and then turned the responses into questions which it repeated back to its conversational partner. Here’s a real example from the ELIZA archives:

young woman: Men are all alike.
eliza: In what way?
young woman: They’re always bugging us about something or other.
eliza: Can you think of a specific example?
young woman: Well, my boyfriend made me come here.
eliza: Your boyfriend made you come here?
young woman: He says I’m depressed much of the time.

And so on. Perhaps the new UAE robot might continue with: “Remember when we both…”.
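The core of ELIZA's trick - swap first- and second-person words, then echo the statement back as a question - fits in a few lines. The real 1966 program used ranked keyword scripts in MAD-SLIP; this sketch covers only the reflection step, and the word table is an illustrative assumption.

```python
# Sketch of ELIZA-style reflection: mirror pronouns and echo as a question.

import re

REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are",
           "you": "I", "your": "my"}

def reflect(statement):
    """Turn 'My boyfriend made me come here.' into a mirrored question."""
    words = re.findall(r"[\w']+", statement.lower())
    swapped = [REFLECT.get(w, w) for w in words]
    return " ".join(swapped).capitalize() + "?"

print(reflect("My boyfriend made me come here."))
# "Your boyfriend made you come here?"
```

That one transformation is enough to produce the "Your boyfriend made you come here?" exchange quoted above, which is exactly why ELIZA felt so uncannily attentive.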

Sadly, the UAE team are about to make their work much harder. They’re planning to implement their programme in a humanoid robot called IbnSina that they have developed at their lab. That will introduce an entirely new problem into any prospective relationship: the uncanny valley that various Japanese roboticists talk about. This is the feeling of revulsion that almost-but-not-quite humanoids seem to generate in humans.

And revulsion is about as big a barrier to a meaningful relationship as you can get.


Original article by the physics arXiv blog, May 5, 2009