
Sunday, December 11, 2011

Various approaches to constructing AI

Self-Improving Artificial Intelligence

Whole Brain Emulation: The Logical Endpoint of Neuroinformatics?

Saturday, December 10, 2011

Swarms of bees could unlock secrets to human brains

Scientists at the University of Sheffield believe decision making mechanisms in the human brain could mirror how swarms of bees choose new nest sites.

Striking similarities between human and insect decision-making systems have been found before, but now researchers believe that bees could teach us how our brains work.
Experts say the insects even appear to have solved indecision, a thought process that often paralyses humans: scout bees seek out any honeybees advertising rival nest sites, butt against them with their heads and produce shrill beeping sounds.

Dr James Marshall, of the University of Sheffield's Department of Computer Science, who led the UK involvement in the project and has also previously worked on similarities between how brains and insect colonies make decisions, said: "Up to now we've been asking if honeybee colonies might work in the same way as brains; now the new mathematical modelling we've done makes me think we should be asking whether our brains might work like honeybee colonies.

"Many people know about the waggle dance that honeybees use to direct hive mates to rich flower patches and new nest sites. Our research published in the journal Science (on December 9), shows that this isn't the only way that honeybees communicate with each other when they are choosing a new nest site; they also disrupt the waggle dances of bees that are advertising alternative sites."

Biologists from Cornell University, New York, University of California Riverside and the University of Bristol set up two nest boxes for a homeless honeybee swarm to choose between and recorded how bees that visited each box interacted with bees from the rival box. They found that bees that visited one site, which were marked with pink paint, tended to inhibit the dances of bees advertising the other site, which were marked with yellow paint, and vice versa.

Tom Seeley of Cornell University, author of the best-selling book Honeybee Democracy said "We were amazed to discover that the bees from one nest box would seek out bees performing waggle dances for the other nest box and butt against them with their heads while simultaneously producing shrill beeping sounds. We call this rough treatment the 'stop signal' because most bees that receive this signal will cease dancing a few seconds later."

Dr Patrick Hogan of the University of Sheffield, who constructed the mathematical model of the bees, added: "The bees target their stop signal only at rivals within the colony, preventing the colony as a whole from becoming deadlocked with indecision when choosing a new home. This remarkable behaviour emerges naturally from the very simple interactions observed between the individual bees in the colony."
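
The cross-inhibition Dr Hogan describes is easy to caricature in a few lines of code. The sketch below is a toy simulation under assumed equations and parameter values, not the researchers' published model: it tracks the fractions of scouts committed to two equal-quality sites, with and without "stop signals" directed at rival dancers.

```python
# Toy simulation of nest-site choice with cross-inhibitory "stop signals".
# The equations and parameter values are illustrative assumptions, not the
# model published by the researchers.

def simulate(stop_signal_rate, steps=20000, dt=0.01):
    """Return final fractions of scouts committed to rival sites A and B."""
    gamma = 0.01   # spontaneous discovery of a site (assumed)
    rho = 0.5      # recruitment by waggle dancing (assumed)
    alpha = 0.2    # spontaneous abandonment (assumed)
    A, B = 0.010, 0.009            # a nearly tied start with a tiny asymmetry
    for _ in range(steps):
        U = max(0.0, 1.0 - A - B)  # uncommitted scouts
        dA = gamma * U + rho * U * A - alpha * A - stop_signal_rate * A * B
        dB = gamma * U + rho * U * B - alpha * B - stop_signal_rate * A * B
        A += dt * dA
        B += dt * dB
    return round(A, 3), round(B, 3)

print("no stop signals  :", simulate(0.0))  # stays near-deadlocked, A and B similar
print("with stop signals:", simulate(3.0))  # the tiny edge tips the colony to one site
```

Without stop signals the two populations settle close to a tie; with them, the deadlocked state becomes unstable and the colony commits to a single site, which is the qualitative behaviour the researchers describe.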

Monday, June 27, 2011

The first advertising campaign for non-human primates

Keith Olwell and Elizabeth Kiehner had an epiphany last year. At a TED talk, the two New York advertising executives learned that captive monkeys understand money, and that when faced with economic games they will behave in similar ways to humans. So if they can cope with money, how would they respond to advertising?

Laurie Santos, the Yale University primatologist who gave the TED talk, studies monkeys as a way of exploring the evolution of the human mind. A partnership was soon born between Santos, and Olwell and Kiehner's company Proton. The resulting monkey ad campaign was unveiled on Saturday at the Cannes Lions Festival, the creative festival for the advertising industry.

Monkey brands

The objective, says Olwell, is to see if advertising can make brown capuchins change their behaviour. The team will create two brands of food – possibly two colours of jello – targeted specifically at brown capuchins, one supported by an ad campaign and the other not.
How do you advertise to monkeys? Easy: create a billboard campaign that hangs outside the monkeys' enclosure.

"The foods will be novel to them and are equally delicious," Olwell says. Brand A will be advertised and brand B will not. After a period of exposure to the campaign, the monkeys will be offered a choice of both brands.
Santos plans to kick off the experimental campaign in the coming weeks. "If they tend toward one and not the other we'll be witnessing preference shifting due to our advertising," Olwell says.

Sex sells

Olwell says that developing a campaign for non-humans threw up some special challenges. "They do not have language or culture and they have very short attention spans," he says. "We really had to strip out any hip and current thinking and get to the absolute core of what is advertising.
"We're used to doing fairly complex and nuanced work. For this exploration we had to constantly ask ourselves, 'Could we be less finessed?'. We wanted the most visceral approaches."

New Scientist has seen the resulting two billboards. We are unable to show them until Santos and her team have completed their study, but we can reveal that their message is most certainly visceral.
One billboard shows a graphic shot of a female monkey with her genitals exposed, alongside the brand A logo. The other shows the alpha male of the capuchin troop associated with brand A.
Olwell expects brand A to be the capuchins' favoured product. "Monkeys have been shown in previous studies to really love photographs of alpha males and shots of genitals, and we think this will drive their purchasing habits."

The team wanted shots for the campaign that were as natural as possible. "After we settled on what they were being sold and that we were going to be doing 'sex sells', we really wanted to make a very direct ad. We wanted to shoot our subjects involved in normal day-to-day life."

Source New Scientist

Monday, June 20, 2011

Einstein's and Fourier's ideas as keys to new humanlike computer vision

WEST LAFAYETTE, Ind. - Two new techniques for computer-vision technology mimic how humans perceive three-dimensional shapes by instantly recognizing objects no matter how they are twisted or bent, an advance that could help machines see more like people.
The techniques, called heat mapping and heat distribution, apply mathematical methods to enable machines to perceive three-dimensional objects, said Karthik Ramani, Purdue University's Donald W. Feddersen Professor of Mechanical Engineering.

This graphic illustrates a new computer-vision technology that builds on the basic physics and mathematical equations related to how heat diffuses over surfaces. The technique mimics how humans perceive three-dimensional shapes by instantly recognizing objects no matter how they are twisted or bent, an advance that could help machines see more like people. Here, a "heat mean signature" of a human hand model is used to perceive the six segments of the overall shape and define the fingertips.

"Humans can easily perceive 3-D shapes, but it's not so easy for a computer," he said. "We can easily separate an object like a hand into its segments - the palm and five fingers - a difficult operation for computers."
Both of the techniques build on the basic physics and mathematical equations related to how heat diffuses over surfaces. 

"Albert Einstein made contributions to diffusion, and 18th century physicist Jean Baptiste Joseph Fourier developed Fourier's law, used to derive the heat equation," Ramani said. "We are standing on the shoulders of giants in creating the algorithms for these new approaches using the heat equation."
As heat diffuses over a surface it follows and captures the precise contours of a shape. The system takes advantage of this "intelligence of heat," simulating heat flowing from one point to another and in the process characterizing the shape of an object, he said.

Findings will be detailed in two papers being presented during the IEEE Computer Vision and Pattern Recognition conference on June 21-23 in Colorado Springs. The papers were written by Ramani, Purdue doctoral students Yi Fang and Mengtian Sun, and Minhyong Kim, a professor of pure mathematics at University College London.
A major limitation of existing methods is that they require "prior information" about a shape in order for it to be analyzed.

Researchers developing a new machine-vision technique tested their method on certain complex shapes, including the human form or a centaur – a mythical half-human, half-horse creature. The heat mapping allows a computer to recognize the objects no matter how the figures are bent or twisted and is able to ignore "noise" introduced by imperfect laser scanning or other erroneous data. 

"For example, in order to do segmentation you have to tell the computer ahead of time how many segments the object has," Ramani said. "You have to tell it that you are expecting, say, 10 segments or 12 segments."
The new methods mimic the human ability to properly perceive objects because they don't require a preconceived idea of how many segments exist.
"We are trying to come as close as possible to human segmentation," Ramani said. "A hot area right now is unsupervised machine learning. This means a machine, such as a robot, can perceive and learn without having any previous training. We are able to estimate the segmentation instead of giving a predefined number of segments." 

The work is funded partially by the National Science Foundation. A patent on the technology is pending.
The methods have many potential applications, including a 3-D search engine to find mechanical parts such as automotive components in a database; robot vision and navigation; 3-D medical imaging; military drones; multimedia gaming; creating and manipulating animated characters in film production; helping 3-D cameras to understand human gestures for interactive games; and contributing to progress in areas of science and engineering related to pattern recognition, machine learning and computer vision.

The heat-mapping method works by first breaking an object into a mesh of triangles, the simplest shape that can characterize surfaces, and then calculating the flow of heat over the meshed object. The method does not involve actually tracking heat; it simulates the flow of heat using well-established mathematical principles, Ramani said.
Heat mapping allows a computer to recognize an object, such as a hand or a nose, no matter how the fingers are bent or the nose is deformed and is able to ignore "noise" introduced by imperfect laser scanning or other erroneous data.
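
As a rough illustration of the underlying computation (not Purdue's actual code), the sketch below builds a graph Laplacian for a toy triangle mesh and lets simulated heat diffuse across it. The tetrahedron mesh, the unweighted Laplacian and the time step are all assumptions made for brevity; a real system would use something like a cotangent Laplacian on a scanned, noisy mesh.

```python
# Minimal sketch of simulated heat diffusion over a meshed shape.
import numpy as np

faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]   # toy mesh: a tetrahedron
n = 4

# Graph Laplacian L = D - A built from the mesh edges.
A = np.zeros((n, n))
for i, j, k in faces:
    for a, b in ((i, j), (j, k), (i, k)):
        A[a, b] = A[b, a] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Place a unit of heat on vertex 0 and diffuse it: du/dt = -L u (explicit Euler).
u = np.zeros(n)
u[0] = 1.0
dt = 0.05
for _ in range(200):
    u -= dt * (L @ u)

print(u)   # heat spreads over the surface toward a uniform distribution
```

How quickly heat placed on each vertex spreads to the rest of the shape is the kind of information a per-vertex descriptor such as the "heat mean signature" can summarise.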

"No matter how you move the fingers or deform the palm, a person can still see that it's a hand," Ramani said. "But for a computer to say it's still a hand is going to be hard. You need a framework - a consistent, robust algorithm that will work no matter if you perturb the nose and put noise in it or if it's your nose or mine."
The method accurately simulates how heat flows on the object while revealing its structure and distinguishing unique points needed for segmentation by computing the "heat mean signature." Knowing the heat mean signature allows a computer to determine the center of each segment, assign a "weight" to specific segments and then define the overall shape of the object.

"Being able to assign a weight to segments is critical because certain points are more important than others in terms of understanding a shape," Ramani said. "The tip of the nose is more important than other points on the nose, for example, to properly perceive the shape of the nose or face, and the tips of the fingers are more important than many other points for perceiving a hand."
In temperature distribution, heat flow is used to determine a signature, or histogram, of the entire object.
"A histogram is a two-dimensional mapping of a three-dimensional shape," Ramani said. "So, no matter how a dog bends or twists, it gives you the same signature."

The temperature distribution technique also uses a triangle mesh to perceive 3-D shapes. Both techniques, which could be combined in the same system, require modest computer power and recognize shapes quickly, he said.
"It's very efficient and very compact because you're just using a two-dimensional histogram," Ramani said. "Heat propagation in a mesh happens very fast because the mathematics of matrix computations can be done very quickly and well."
The researchers tested their method on certain complex shapes, including hands, the human form or a centaur, a mythical half-human, half-horse creature. 
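
The histogram idea can be sketched in the same spirit: diffuse heat from every vertex for a fixed time, record how much heat each vertex retains, and bin those values into a signature that depends on the shape's intrinsic geometry rather than its pose. Everything below is an illustrative assumption (a toy path graph standing in for a mesh, a retention-based descriptor), not the authors' published algorithm.

```python
# Toy "heat histogram" shape signature: bin per-vertex heat retention after
# diffusing for a fixed time t.
import numpy as np

def heat_histogram(L, t=0.5, bins=8):
    """Normalised histogram of the heat kernel's diagonal, exp(-tL)_ii."""
    evals, evecs = np.linalg.eigh(L)                    # L is symmetric
    K = evecs @ np.diag(np.exp(-t * evals)) @ evecs.T   # heat kernel exp(-tL)
    retention = np.diag(K)                              # heat left at each source vertex
    hist, _ = np.histogram(retention, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

# A 6-vertex path graph stands in for a meshed shape; its end vertices retain a
# different amount of heat than interior ones, so the histogram is not flat.
n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L_path = np.diag(A.sum(axis=1)) - A
print(heat_histogram(L_path))
```

Because the descriptor depends only on how heat flows over the surface, bending or twisting the shape (which barely changes distances along the surface) leaves the histogram largely unchanged.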

Saturday, June 18, 2011

Memory Implant Gives Rats Sharper Recollection

Scientists have designed a brain implant that restored lost memory function and strengthened recall of new information in laboratory rats — a crucial first step in the development of so-called neuroprosthetic devices to repair deficits from dementia, stroke and other brain injuries in humans.

Though still a long way from being tested in humans, the implant demonstrates for the first time that a cognitive function can be improved with a device that mimics the firing patterns of neurons. In recent years neuroscientists have developed implants that allow paralyzed people to move prosthetic limbs or a computer cursor, using their thoughts to activate the machines. In the new work, being published Friday, researchers at Wake Forest University and the University of Southern California used some of the same techniques to read neural activity. But they translated those signals internally, to improve brain function rather than to activate outside appendages.

“It’s technically very impressive to pull something like this off, given our current level of technology,” said Daryl Kipke, a professor of bioengineering at the University of Michigan who was not involved in the experiment. “We are just scratching the surface when it comes to interacting with the brain, but this experiment shows what’s possible and the great potential of interacting with the brain in this way.”
In a series of experiments, scientists at Wake Forest led by Sam A. Deadwyler trained rats to remember which of two identical levers to press to receive water; the animals first saw one of the two levers appear and then (after being distracted) had to remember to press the other lever to be rewarded. Repeated training on this task teaches rats the general rule, but in each trial the animal has to remember which lever appeared first, to inform the later choice.

The rats were implanted with a tiny array of electrodes, which threaded from the top of the head down into two neighboring pieces of the hippocampus, a structure that is crucial for forming these new memories, in rats as in humans. The two slivers of tissue, called CA1 and CA3, communicate with each other as the brain learns and stores new information. The device transmits these exchanges to a computer.
To test the effect of the implant, the researchers used a drug to shut down the activity of CA1. Without CA1 online, the rats could not remember which lever to push to get water. They remembered the rule — push the opposite lever of the one that first appeared — but not which they had seen first.

The researchers, having recorded the appropriate signal from CA1, simply replayed it, like a melody on a player piano — and the animals remembered. The implant acted as if it were CA1, at least for this one task.
“Turn the switch on, the animal has the memory; turn it off and they don’t: that’s exactly how it worked,” said Theodore W. Berger, a professor of engineering at U.S.C. and the lead author of the study, being published in The Journal of Neural Engineering. His co-authors were Robert E. Hampson and Anushka Goonawardena, along with Dr. Deadwyler, of Wake Forest, and Dong Song and Vasilis Z. Marmarelis of U.S.C.
In rats that did not receive the drug, new memories faded by about 40 percent after a long distraction period. But if the researchers amplified the corresponding CA1 signals using the implant, the memories eroded only about 10 percent in that time.

The authors said that with wireless technology and computer chips, the system could be easily fitted for human use. But there are a number of technical and theoretical obstacles. For one, the implant must first record a memory trace before playing it back or amplifying it; in patients with significant memory problems, those signals may be too weak. In addition, human memory is a rich, diverse neural process that involves many other brain areas, not just CA3 and CA1; implants in this area will be limited.
Still, some restored memories — Where is the bathroom? Where are the pots and pans stored? — could make a big difference in the lives of someone with dementia. “If you’re caring for someone in the house, for example,” Dr. Berger said, “it might be enough to keep the person out of the nursing home.”

Source The New York Times

Thursday, June 16, 2011

Noninvasive brain implant could someday translate thoughts into movement

ANN ARBOR, Mich.---A brain implant developed at the University of Michigan uses the body's skin like a conductor to wirelessly transmit the brain's neural signals to control a computer, and may eventually be used to reactivate paralyzed limbs.

The implant is called the BioBolt, and unlike other neural interface technologies that establish a connection from the brain to an external device such as a computer, it's minimally invasive and low power, said principal investigator Euisik Yoon, a professor in the U-M College of Engineering, Department of Electrical Engineering and Computer Science.
Currently, the skull must remain open while neural implants are in the head, which makes using them in a patient's daily life unrealistic, said Kensall Wise, the William Gould Dow Distinguished University professor emeritus in engineering.

BioBolt does not penetrate the cortex and is completely covered by the skin, greatly reducing the risk of infection. Researchers believe it's a critical step toward the Holy Grail of brain-computer interfacing: allowing a paralyzed person to "think" a movement.
"The ultimate goal is to be able to reactivate paralyzed limbs," by picking the neural signals from the brain cortex and transmitting those signals directly to muscles, said Wise, who is also founding director of the NSF Engineering Research Center for Wireless Integrated MicroSystems (WIMS ERC). That technology is years away, the researchers say.

Other promising applications for the BioBolt include controlling epilepsy and diagnosing diseases such as Parkinson's.
The BioBolt concept has been filed for patent and will be presented on June 16 at the 2011 Symposium on VLSI Circuits in Kyoto, Japan. Sun-Il Chang, a PhD student in Yoon's research group, is lead author on the presentation.
The BioBolt looks like a bolt and is about the circumference of a dime, with a thumbnail-sized film of microcircuits attached to the bottom. The BioBolt is implanted in the skull beneath the skin and the film of microcircuits sits on the brain. The microcircuits act as microphones to 'listen' to the overall pattern of firing neurons and associate them with a specific command from the brain. Those signals are amplified and filtered, then converted to digital signals and transmitted through the skin to a computer, Yoon said.
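
As a rough sketch of that amplify, filter and digitise chain, the processing on a synthetic neural trace might look like the following. It is not the BioBolt firmware; the sample rate, gain, filter band and bit depth are all assumptions, since the article does not give them.

```python
# Sketch of an amplify -> band-pass filter -> digitise chain on a synthetic
# neural trace. All parameters are illustrative assumptions.
import numpy as np
from scipy import signal

fs = 20_000                                     # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
spikes = (np.random.rand(t.size) < 0.001).astype(float)
raw = 50e-6 * spikes + 10e-6 * np.random.randn(t.size)      # microvolt-scale signal

amplified = raw * 1e4                           # gain stage (assumed gain of 10,000)
b, a = signal.butter(4, [300, 3000], btype="band", fs=fs)   # spike band (assumed)
filtered = signal.filtfilt(b, a, amplified)

# 8-bit digitisation over an assumed +/-1 V input range, ready for transmission.
codes = np.clip(np.round((filtered + 1.0) / 2.0 * 255), 0, 255).astype(np.uint8)
print(codes[:10])
```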

Another hurdle to brain interfaces is the high power requirement for transmitting data wirelessly from the brain to an outside source. BioBolt keeps the power consumption low by using the skin as a conductor or a signal pathway, which is analogous to downloading a video into your computer simply by touching the video.
Eventually, the hope is that the signals can be transmitted through the skin to something on the body, such as a watch or a pair of earrings, to collect the signals, said Yoon, eliminating the need for an off-site computer to process the signals.

Source EurekaAlert!

Wednesday, June 15, 2011

Noninvasive brain stimulation helps curb impulsivity

NeuroImage study demonstrates significant improvement in patients' inhibitory control

London, 15 June 2011 - Inhibitory control can be boosted with a mild form of brain stimulation, according to a study published in the June 2011 issue of Neuroimage, Elsevier's Journal of Brain Function. The study's findings indicate that non-invasive intervention can greatly improve patients' inhibitory control. Conducted by a research team led by Dr Chi-Hung Juan of the Institute of Cognitive Neuroscience, National Central University in Taiwan, the research was sponsored by the National Science Council in Taiwan, the UK Medical Research Council, the Royal Society Wolfson Merit Award, and a Fulbright Award.
The study demonstrates that applying a weak electrical current over the front of participants' scalps for ten minutes greatly improved their ability to control their responses – effectively jumpstarting the brain's ability to control impulsivity. The treatment has the potential to serve as a non-invasive treatment for patients with conditions such as attention-deficit hyperactivity disorder (ADHD), Tourette's syndrome, drug addictions, or violent impulsivity.
Professor Chi-Hung Juan, who led the research team, noted: "The findings that electrical stimulation to the brain can improve control of their behavioral urges not only provide further understanding of the neural basis of inhibitory control but also suggest a possible therapeutic intervention method for clinical populations, such as those with drug addictions or ADHD, in the future".

Source EurekaAlert!

Thursday, June 9, 2011

New genetic technique converts skin cells into brain cells

A research breakthrough has proven that it is possible to reprogram mature cells from human skin directly into brain cells, without passing through the stem cell stage. The unexpectedly simple technique involves activating three genes in the skin cells; genes which are already known to be active in the formation of brain cells at the foetal stage.

The new technique avoids many of the ethical dilemmas that stem cell research has faced.
For the first time, a research group at Lund University in Sweden has succeeded in creating specific types of nerve cells from human skin. By reprogramming connective tissue cells, called fibroblasts, directly into nerve cells, a new field has been opened up with the potential to take research on cell transplants to the next level. The discovery represents a fundamental change in the view of the function and capacity of mature cells. By taking mature cells as their starting point instead of stem cells, the Lund researchers also avoid the ethical issues linked to research on embryonic stem cells.

Head of the research group Malin Parmar was surprised at how receptive the fibroblasts were to new instructions.
“We didn’t really believe this would work; to begin with it was mostly just an interesting experiment to try. However, we soon saw that the cells were surprisingly receptive to instructions.”
The study, which was published in the latest issue of the scientific journal PNAS, also shows that the skin cells can be directed to become certain types of nerve cells.

In experiments where a further two genes were activated, the researchers have been able to produce dopamine brain cells, the type of cell which dies in Parkinson’s disease. The research findings are therefore an important step towards the goal of producing nerve cells for transplant which originate from the patients themselves. The cells could also be used as disease models in research on various neurodegenerative diseases.

Unlike older reprogramming methods, where skin cells are turned into pluripotent stem cells, known as iPS cells, direct reprogramming means that the skin cells do not pass through the stem cell stage when they are converted into nerve cells. Skipping the stem cell stage probably eliminates the risk of tumours forming when the cells are transplanted. Stem cell research has long been hampered by the propensity of certain stem cells to continue to divide and form tumours after being transplanted.

Before the direct conversion technique can be used in clinical practice, more research is needed on how the new nerve cells survive and function in the brain. The vision for the future is that doctors will be able to produce the brain cells that a patient needs from a simple skin or hair sample. In addition, it is presumed that specifically designed cells originating from the patient would be accepted better by the body’s immune system than transplanted cells from donor tissue.

“This is the big idea in the long run. We hope to be able to do a biopsy on a patient, make dopamine cells, for example, and then transplant them as a treatment for Parkinson’s disease”, says Malin Parmar, who is now continuing the research to develop more types of brain cells using the new technique.

Source Lund University

Monday, June 6, 2011

Attention and Awareness Aren’t The Same

Paying attention to something and being aware of it seem like the same thing – they both involve somehow knowing the thing is there. However, a new study, which will be published in an upcoming issue of Psychological Science, a journal of the Association for Psychological Science, finds that these are actually separate; your brain can pay attention to something without you being aware that it’s there.

“We wanted to ask, can things attract your attention even when you don’t see them at all?” says Po-Jang Hsieh, of Duke-NUS Graduate Medical School in Singapore and MIT. He co-wrote the study with Jaron T. Colas and Nancy Kanwisher of MIT. Usually, when people pay attention to something, they also become aware of it; in fact, many psychologists assume these two concepts are inextricably linked. But more evidence has suggested that’s not the case.

To test this, Hsieh and his colleagues came up with an experiment that used the phenomenon called “visual pop-out.” They set each participant up with a display that showed a different video to each eye. One eye was shown colorful, shifting patterns, which dominate conscious perception, so all awareness went to that eye. The other eye was shown a pattern of stationary shapes. Most were green, but one was red. Then subjects were tested to see what part of the screen their attention had gone to. The researchers found that people’s attention went to that red shape – even though they had no idea they’d seen it at all.

In another experiment, the researchers found that if people were distracted with a demanding task, the red shape didn’t attract attention unconsciously anymore. So people need a little brain power to pay attention to something even if they aren’t aware of it, Hsieh and his colleagues concluded.
Hsieh suggests that this could have evolved as a survival mechanism. It might have been useful for an early human to be able to notice and process something unusual on the savanna without even being aware of it, for example. “We need to be able to direct attention to objects of potential interest even before we have become aware of those objects,” he says.

Source Association for Psychological Science

Deciding to stay or go is a deep-seated brain function

DURHAM, N.C. – Birds do it. Bees do it. Even little kids picking strawberries do it.
Every creature that forages for food decides at some point that the patch it's working on is no richer than the rest of the environment and that it's time to move on and find something better.
This kind of foraging decision is a fundamental problem that goes far back in evolutionary history and is dealt with by creatures that don't even have proper brains, said Michael Platt, a professor of neurobiology and director of the Center for Cognitive Neuroscience at Duke University.

Platt and his colleagues now say they've identified a function in the primate brain that appears to be handling this stay-or-go problem. They have found that the dorsal anterior cingulate cortex (ACC), an area of the brain known to operate while weighing conflicts, steadily increases its activity during foraging decisions until a threshold level of activity is reached, whereupon the individual decides it's time to move on.
In lab experiments with rhesus macaque monkeys, Platt and postdoctoral fellows Benjamin Hayden and John Pearson put the animals through a series of trials in which they repeatedly had to decide whether to stay with a source that was giving ever-smaller squirts of fruit juice, or move to another, possibly better, source. The animals were merely gazing at a preferred target on a display screen, not moving from one tree to the next, but the decision-making process should be the same, Platt said.

For the other variable in this basic equation, travel time, the researchers added delays when monkeys chose to leave one resource and move to another, simulating short and long travel times.
As the monkeys repeatedly chose to stay with their current source or move to another, the researchers watched a small set of neurons within the anterior cingulate cortex fire with increasing activity for each decision. The rate of firing in this group of neurons grew until a threshold was reached, at which time the monkey immediately decided to move on, Platt said. "It is as if there is a threshold for deciding it's time to leave set in the brain," he said.

When the researchers raised the "travel time" to the next foraging spot in the experiment, it raised the decision-making threshold, Platt said.
This all fits with a 1976 theory by evolutionary ecologist Eric Charnov, called the Marginal Value Theorem, Platt said. It says that all foragers make calculations of reward and cost that tell them to leave a patch when their intake diminishes to the average intake rate for the overall environment. That is, one doesn't keep picking a blueberry bush until it's bare, only until it looks about as abundant as the bushes on either side of it. Shorter travel time to the next patch means it costs less to move, so foragers should move on more readily. This theorem has been found to hold in organisms as diverse as worms, bees, wasps, spiders, fish, birds, seals and even plants, Platt said.
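
Charnov's rule is straightforward to reproduce numerically. The sketch below assumes a diminishing-returns gain curve and illustrative parameters, and simply picks the patch residence time that maximises long-run intake; as in the experiments described above, longer travel times push the optimal leaving point later.

```python
# Numerical sketch of the Marginal Value Theorem: stay in a patch for the time
# that maximises long-run intake, gain(t) / (t + travel_time). The gain curve
# and parameter values are illustrative assumptions.
import numpy as np

def optimal_residence_time(travel_time, A=10.0, k=2.0):
    t = np.linspace(0.01, 30.0, 3000)
    gain = A * (1.0 - np.exp(-t / k))       # diminishing returns within a patch
    rate = gain / (t + travel_time)         # average intake including travel
    return t[np.argmax(rate)]

for travel in (1.0, 5.0, 20.0):
    print(f"travel time {travel:4.1f} -> stay for {optimal_residence_time(travel):5.2f}")
```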

"This is a really fundamental solution to a fundamental problem," Platt said.
Platt said the work also relates to recent papers on the Web-browsing habits of humans. In the case of Internet users, the cost of travel time translates to download speed. The faster the downloads, the quicker browsers are willing to forage elsewhere, Platt said.
They aren't sure yet where the brain's signaling goes after the stay-or-go threshold in the ACC is reached. Platt believes this kind of "integrate-to-threshold" mechanism would be a good way to handle a lot of functions in the brain and may be found in other kinds of systems. This particular threshold in the ACC might also be a way to explain maladaptive behaviors like attention deficit, in which a person decides to move on constantly, or compulsive behavior, in which a person can't seem to move on at all, he said.

Source  EurekaAlert!

AI programs do battle in Ms Pac-Man


(Image: CEC Ms Pac-Man versus Ghost Team Competition)

Everyone loves a few rounds of a classic video game, but why should humans have all the fun? The Ms Pac-Man vs Ghost Team Competition serves to redress the balance by putting AI controllers in charge of video game characters in an effort to see which plays the game best.

Competitors could submit AI controllers for either the titular Ms Pac-Man or the team of four ghosts, and each entrant faced off against the rest to determine a winner. The Ms Pac-Man AI had to maximise its score, while the ghost AI had to prevent Ms Pac-Man from scoring. The competition was organised by Philipp Rohlfshagen and Simon Lucas, two computer scientists at the University of Essex, with the results announced today at the Congress on Evolutionary Computation in New Orleans.

How did the AI controllers do? Compared with a human, not great: the highest-scoring Ms Pac-Man controller managed 69,240 points, while the human world record stands at more than 900,000. "I would assume that 'professional' human Ms Pac-Man players will be better than any AI controller at this stage," says Rohlfshagen, though he added that the ghost teams were also much harder to play against than those found in the original game: "The original ghost team was developed to engage and entertain the human player whereas the ghost teams submitted in the competition were designed to eat Ms Pac-Man as efficiently as possible."

Developing an AI to play video games for us isn't really the aim though, and there is some serious research behind the competition. "Games are usually seen as a valuable test-bed for new technologies in computational intelligence as they are well defined yet very challenging," explains Rohlfshagen. He says the multi-agent algorithms behind the ghost controllers could be used for transport or military applications, or even modelling biological predator-prey dynamics.
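
For a flavour of what a controller involves, here is a toy sketch of the simplest possible Ms Pac-Man-style policy on a text-grid maze: breadth-first search to the nearest pill while refusing to step onto a ghost. It is purely illustrative and unrelated to the competition's actual Java framework.

```python
# Toy greedy maze controller: head for the nearest pill, never step onto a ghost.
from collections import deque

MAZE = [
    "#########",
    "#..G....#",
    "#.#####.#",
    "#....P..#",
    "#########",
]

def next_move(maze):
    """First step of the shortest ghost-free path from P to any pill '.'."""
    rows, cols = len(maze), len(maze[0])
    start = next((r, c) for r in range(rows) for c in range(cols) if maze[r][c] == "P")
    ghosts = {(r, c) for r in range(rows) for c in range(cols) if maze[r][c] == "G"}
    queue = deque([(start, None)])
    seen = {start}
    while queue:
        (r, c), first = queue.popleft()
        if maze[r][c] == ".":
            return first                     # direction of the very first step
        for dr, dc, name in ((-1, 0, "up"), (1, 0, "down"), (0, -1, "left"), (0, 1, "right")):
            nr, nc = r + dr, c + dc
            if maze[nr][nc] != "#" and (nr, nc) not in ghosts and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), first or name))
    return None                              # no safe pill is reachable

print(next_move(MAZE))   # -> "left" (first step toward the nearest pill)
```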

Wednesday, June 1, 2011

USC study locates the source of key brain function

Scientists at the University of Southern California have pinned down the region of the brain responsible for a key survival trait: our ability to comprehend a scene—even one never previously encountered—in a fraction of a second.

The key is to process the interacting objects that comprise a scene more quickly than unrelated objects, according to corresponding author Irving Biederman, professor of psychology and computer science in the USC Dornsife College and the Harold W. Dornsife Chair in Neuroscience.
The study appears in the June 1 issue of The Journal of Neuroscience.
The brain's ability to understand a whole scene on the fly "gives us an enormous edge on an organism that would have to look at objects one by one and slowly add them up," Biederman said. What's more, the interaction of objects in a scene actually allows the brain to identify those objects faster than if they were not interacting.

While previous research had already established the existence of this "scene-facilitation effect," the location of the part of the brain responsible for the effect remained a mystery. That's what Biederman and lead author Jiye G. Kim, a doctoral student in Biederman's lab, set out to uncover with Chi-Hung Juan of the Institute of Cognitive Neuroscience at the National Central University in Taiwan.
"The 'where' in the brain gives us clues as to the 'how,'" Biederman said. This study is the latest in an ongoing effort by Biederman and Kim to unlock the complex way in which the brain processes visual experience. The goal, as Biederman puts it, is to understand "how we get mind from brain."

To find out the "where" of the scene-facilitation effect, the researchers flashed drawings of pairs of objects for just 1/20 of a second. Some of these objects were depicted as interacting, such as a hand grasping for a pen, and some were not, with the hand reaching away from the pen. The test subjects were asked to press a button if a label on the screen matched either one of the two objects, which it did on half of the presentations.
A recent study by Kim and Biederman suggested that the source of the scene-facilitation effect was the lateral occipital cortex, or LO, which is a portion of the brain's visual processing center located between the ear and the back of the skull. However, the possibility existed that the LO was receiving help from the intraparietal sulcus, or IPS, which is a groove in the brain closer to the top of the head.
The IPS is engaged with implementing visual attention, and the fact that interacting objects may attract more attention left open the possibility that perhaps it was providing the LO with assistance.
While participants took the test, electromagnetic currents were used to alternately zap subjects' LO or IPS, temporarily numbing each region in turn and preventing it from providing assistance with the task.
All of the participants were pre-screened to ensure they could safely receive the treatment, known as transcranial magnetic stimulation (TMS), which produces minimal discomfort.

By measuring how accurate participants were in detecting objects shown as interacting or not interacting when either the LO or IPS were zapped, researchers could see how much help that part of the brain was providing. The results were clear: zapping the LO eliminated the scene-facilitation effect. Zapping the IPS, however, did nothing.

When it comes to providing a competitive edge in identifying objects that are part of an interaction, the lateral occipital cortex appears to be working alone. Or, at least, without help from the intraparietal sulcus.

Source  EurekaAlert!

Sunday, May 29, 2011

Virtual natural environments and benefits to health

A new position paper by researchers at the European Centre for the Environment and Human Health (ECEHH - part of the Peninsula College of Medicine and Dentistry) and the University of Birmingham has compared the benefits of interacting with actual and virtual natural environments. It concludes that accurate simulations are likely to benefit those who cannot interact with nature because of infirmity or other limitations, but that virtual worlds are not a substitute for the real thing.
The paper includes details of an exciting project underway between the collaborating institutions to create virtual environments to help identify the clues and cues that we pick up when we spend time in nature.
The study is published in Environmental Science & Technology on 1st June 2011.
The paper discusses the potential for natural and virtual environments in promoting improved human health and wellbeing.

We have all felt the benefit of spending time in natural environments, especially when we are feeling stressed or upset. The researchers describe creating virtual environments to try to identify just how this happens. It may be that the colours, sounds, and smells of nature are all important, but to different extents, in helping to provide mental restoration and motivation to be physically active.

It was recognised that, while some studies have tried to explore this notion, much of the work is anecdotal or involves small-scale studies which often lack appropriate controls or statistical robustness. However, the researchers do identify some studies, such as those relating to Attention Restoration Theory, that are valuable.
Key to the research is an exploration of the studies that showed a direct relationship between interaction with the natural environment and improvements in health, and the potential such activity has for becoming adopted by health services around the world to the benefit of both patients and budgets. For example, a study in Philadelphia suggested that maintaining city parks could achieve yearly savings of approximately $69.4 million in health care costs.

Programmes such as the Green Gym and the Blue Gym, which promote, facilitate and encourage activity in the natural environment, are already laying the groundwork for workable programmes that could be adopted throughout the world to the benefit of human health. Research teams from the ECEHH are currently undertaking a range of studies to analyse the effects of interaction with the natural environment on health. In time this could allow prescribing clinicians to treat patients with natural-environment activity alone or alongside reduced pharmaceutical doses; the savings on national health service drug bills around the world could be immense, and reduced prescribing would also cut the release of toxic pharmaceutical residues into ecosystems via sewage.
The paper also examines how step-change developments in the technology used in computer-generated forms of reality mean that the software and hardware required to access increasingly accurate simulated natural environments are more readily available to the general public than ever before.
In addition to recognising the value of better technology – which includes the ability to synthesise smells - the review also recognised that key to the success of virtual environments is the design of appropriate and effective content based on knowledge of human behaviour.

Teams from the ECEHH and colleagues from the University of Birmingham, which include joint authors of the paper, have constructed the first two virtual restorative environments to support their experimental studies. This pilot study is based on the South Devon Coastal Path and Burrator Reservoir located within Dartmoor National Park, both within a short distance of the urban conurbation of Plymouth (UK).
Both natural environments are being recreated using Unity, a powerful game and interactive media development tool.
The research team is attempting to achieve a close match between the virtual and the real by importing Digital Terrain Model (DTM) data and aerial photographs into the Unity toolkit and combining this with natural features and manmade artefacts including wild flowers, trees, hedgerows, fences, seating benches and buildings. High-quality digital oceanic, coastal and birdsong sounds are also incorporated.
The pilot study, part of a Virtual Restorative Environment Therapy (VRET) initiative, is also supporting efforts to establish how psychological and physiological measurement can be used as part of a real-time biofeedback system to link participants' arousal levels to features such as cloud cover, weather, wave strengths, ambient sounds and smells.
Professor Michael Depledge, Chair of Environment and Human Health at the ECEHH, commented: "Virtual environments could benefit the elderly or infirm within their homes or care units, and can be deployed within defence medical establishments to benefit those with physical and psychological trauma following operations in conflict zones. Looking ahead, the wellbeing of others removed from nature, such as submariners and astronauts confined for several months in their crafts, might also be enhanced. Once our research has been conducted and the appropriate software written, artificial environments are likely to become readily affordable and of widespread use to health services."

He added: "However, we would not wish for the availability of virtual environments to become a substitute for the real thing in instances where accessibility to the real world is achievable. Our ongoing research with both the Green Gym and the Blue Gym initiatives aims to make these options a valid and straightforward choice for the majority of the population."
Professor Bob Stone, Chair of Interactive Multimedia Systems at the University of Birmingham, and lead investigator, said: "This technology could be made available to anyone who, for whatever reason, is in hospital, bed-bound or cannot get outside. They will be able to get the benefits of the countryside and seaside by viewing the virtual scenario on screen.

"Patients will be free to choose areas that they want to spend time in; they can take a walk along coastal footpaths, sit on a beach, listen to the waves and birdsong, watch the sun go down and - in due course - even experience the smells of the land- and seascapes almost as if they were experiencing the outdoors for real."
Professor Stone continued: "We are keen to understand what effect our virtual environments have on patients and will be carrying out further studies into arousal levels and reaction. In the summer we will start to test this on a large number of people so that we can measure biofeedback and make any changes or improvements to the scenario we have chosen."

 Source EurekaAlert!

Friday, May 27, 2011

Disbelieving Free Will Makes Brain Less Free

If people are told that free will doesn’t exist, their brains might follow suit.
A test of people who read passages discrediting the notion of free will found an immediate decrease in brain activity related to voluntary action. The findings are just one data point in ongoing scientific investigation of a millennia-old philosophical conundrum, but they raise an intriguing possibility.
“Our results indicate that beliefs about free will can change brain processes related to a very basic motor level,” wrote researchers led by psychologist Davide Rigoni of Italy’s University of Padova in a study published in May’s Psychological Science.

Electrode readings of activity in brain regions linked to voluntary behavior in a control group (red) and people who read a passage discrediting free will (blue). Dots indicate the moment at which they decided to press a button. Psychological Science

Rigoni’s team asked 30 people to read passages from Francis Crick’s 1994 book The Astonishing Hypothesis: The Scientific Search for the Soul. Half read a passage that didn’t mention free will, while the others read a passage describing it as illusory. All were hooked to electroencephalograph machines that monitored electric activity known as “readiness potential,” which is linked to the neurological computations that occur in the milliseconds before voluntary movement.

The test subjects were then asked to press a mouse button when a cursor flashed on a computer screen for several seconds. Those who read the passage dismissing free will displayed significantly lower readiness potentials. Their actions seemed to involve less voluntary control than the control group's.
Tested on when they decided to press the button, the non-free-will group reported doing so a fraction of a second before their counterparts. To lose confidence in free will seemingly introduced a lag between conscious choice and action.

Earlier psychological studies have found that discrediting free will seems to trigger an increase in cheating and aggression, encourage people to be less helpful and generally sap motivation.
The latest findings extend the effects of disbelieving to a more basic physical level. Whether there’s a relationship between free will, motor activity and more complex behaviors is yet to be determined, but “abstract belief systems might have a much more fundamental effect than previously thought,” wrote the researchers.

Source WIRED

Thursday, May 26, 2011

Mind-reading scan identifies simple thoughts

A new brain imaging system that can identify a subject's simple thoughts may lead to clearer diagnoses for Alzheimer's disease or schizophrenia – as well as possibly paving the way for reading people's minds.
Michael Greicius at Stanford University in California and colleagues used functional magnetic resonance imaging (fMRI) to identify patterns of brain activity associated with different mental states.
He asked 14 volunteers to do one of four tasks: sing songs silently to themselves; recall the events of the day; count backwards in threes; or simply relax.

Participants were given a 10-minute period during which they had to perform their assigned task; for the rest of that time they were free to think about whatever they liked. The participants' brains were scanned for the entire 10 minutes, and the patterns of connectivity associated with each task were teased out by computer algorithms that compared scans from several volunteers doing the same task.
This differs from previous experiments, in which the subjects were required to perform mental activities at specific times and the scans were then compared with brain activity when they were at rest. Greicius reasons his method encourages "natural" brain activity more like that which occurs in normal thought.

Read my mind

Once the algorithms had established the brain activity necessary for each task, Greicius asked 10 new volunteers to think in turn about each of the four tasks. Without knowing beforehand what each volunteer was thinking, the system successfully identified 85 per cent of the tasks they were engaged in. "Out of 40 scans of the new people, we could identify 34 mental states correctly," he says.
It also correctly concluded that subjects were not engaged in any of the four original activities when it analysed scans of people thinking about moving around their homes.
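
The matching step can be illustrated with a toy sketch: summarise each scan as a connectivity pattern (correlations between region time courses), average the patterns for each task to form references, then label a new scan by the reference it most resembles. The synthetic data and nearest-pattern rule below are assumptions for illustration, not the study's actual algorithms.

```python
# Toy connectivity-pattern classifier in the spirit of the study described above.
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_timepoints = 10, 120
tasks = ["sing", "recall", "count", "relax"]

def connectivity(ts):
    """Vectorised upper triangle of the region-by-region correlation matrix."""
    c = np.corrcoef(ts)
    return c[np.triu_indices_from(c, k=1)]

def fake_scan(task_id):
    """Synthetic region-by-time data whose correlation structure depends on the task."""
    mixing = np.random.default_rng(task_id).normal(size=(n_regions, n_regions))
    return mixing @ rng.normal(size=(n_regions, n_timepoints))

# Reference pattern per task: the average connectivity over a few training scans.
references = {task: np.mean([connectivity(fake_scan(i)) for _ in range(5)], axis=0)
              for i, task in enumerate(tasks)}

def classify(scan):
    pattern = connectivity(scan)
    return max(tasks, key=lambda task: np.corrcoef(pattern, references[task])[0, 1])

print(classify(fake_scan(2)))   # expected to come out as "count"
```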

The findings suggest that patterns for thousands of mental states might serve as a reference bank against which people's thoughts could be compared, potentially revealing what someone is thinking or how they are feeling. "In some dystopian future, you might imagine reference patterns for 10,000 mental states, but that would be a woeful application of this technology," says Greicius.
The idea of the system being used by security services or the justice system to interrogate prisoners or suspects is far-fetched, Greicius says. Thousands of reference patterns would be needed, he points out, and even these might not be enough to tell if someone is lying, for example.

Diagnostic test

Instead, he hopes it could be used in Alzheimer's and schizophrenia to help identify faults in the connections needed to perform everyday tasks. He also says the system might be useful for gauging emotional reactions to film clips and adverts.
How much detail such brain scans would show remains to be seen. "There would be a pretty coarse limit on what you could distinguish," says John Duncan of the UK Medical Research Council's Cognitive and Brain Sciences Centre in Cambridge. "The distinctiveness of an activity predicts the distinctiveness of brain activity associated with it," he says.

Kay Brodersen of the Swiss Federal Institute of Technology in Zurich, Switzerland, agrees. "You might be able to tell if someone is singing to themselves," he says. "But try to distinguish a Lady Gaga song from another and you would probably fail."
"The most important potential for this is in the clinic where classifying and diagnosing and treating psychiatric disease could be really important," says Brodersen. "At the moment, psychiatry is often just trial and error."

Source New Scientist

Drug may help overwrite bad memories

MONTREAL, March 26, 2011 – Recalling painful memories while under the influence of the drug metyrapone reduces the brain's ability to re-record the negative emotions associated with them, according to University of Montreal researchers at the Centre for Studies on Human Stress of Louis-H. Lafontaine Hospital. The team's study challenges the theory that memories cannot be modified once they are stored in the brain. "Metyrapone is a drug that significantly decreases the levels of cortisol, a stress hormone that is involved in memory recall," explained lead author Marie-France Marin. Manipulating cortisol close to the time of forming new memories can decrease the negative emotions that may be associated with them.
"The results show that when we decrease stress hormone levels at the time of recall of a negative event, we can impair the memory for this negative event with a long-lasting effect," said Dr. Sonia Lupien, who directed the research.

Thirty-three men participated in the study, which involved learning a story composed of neutral and negative events. Three days later, they were divided into three groups – participants in the first group received a single dose of metyrapone, the second received a double dose, while the third received a placebo. They were then asked to remember the story. Their memory performance was evaluated again four days later, once the drug had cleared out. "We found that the men in the group who received two doses of metyrapone were impaired when retrieving the negative events of the story, while they showed no impairment recalling the neutral parts of the story," Marin explained. "We were surprised that the decreased memory of negative information was still present once cortisol levels had returned to normal."
 
The research offers hope to people suffering from syndromes such as post-traumatic stress disorder. "Our findings may help people deal with traumatic events by offering them the opportunity to 'write-over' the emotional part of their memories during therapy," Marin said. One major hurdle, however, is the fact that metyrapone is no longer commercially produced. Nevertheless, the findings are very promising in terms of future clinical treatments. "Other drugs also decrease cortisol levels, and further studies with these compounds will enable us to gain a better understanding of the brain mechanisms involved in the modulation of negative memories."

Source EurekaAlert!

Wednesday, May 25, 2011

Pitt Researchers Recreate Brain Cell Networks With Unprecedented View of Activity Behind Memory Formation

A team based in Pitt’s Swanson School of Engineering produced and observed the extended electrical charge associated with working memory using living-cell models of neural networks that reveal the complex, diminutive world of brain cells.

PITTSBURGH—University of Pittsburgh researchers have reproduced the brain’s complex electrical impulses onto models made of living brain cells that provide an unprecedented view of the neuron activity behind memory formation.
The team fashioned ring-shaped networks of brain cells that were not only capable of transmitting an electrical impulse, but also remained in a state of persistent activity associated with memory formation, said lead researcher Henry Zeringue [zuh-rang], a bioengineering professor in Pitt’s Swanson School of Engineering. Magnetic resonance images have suggested that working memories are formed when the cortex, or outer layer of the brain, launches into extended electrical activity after the initial stimulus, Zeringue explained. But the brain’s complex structure and the diminutive scale of neural networks mean that observing this activity in real time can be nearly impossible, he added.

A fluorescent image of the neural network model developed at Pitt reveals the interconnection (red) between individual brain cells (blue). Adhesive proteins (green) allow the network to be constructed on silicon discs for experimentation.

The Pitt team, however, was able to generate and prolong this excited state in groups of 40 to 60 brain cells harvested from the hippocampus of rats—the part of the brain associated with memory formation. In addition, the researchers produced the networks on glass slides that allowed them to observe the cells’ interplay. The work was conducted in Zeringue’s lab by Pitt bioengineering doctoral student Ashwin Vishwanathan, who most recently reported the work in the Royal Society of Chemistry (UK) journal, Lab on a Chip. Vishwanathan coauthored the paper with Zeringue and Guo-Qiang Bi, a neurobiology professor in Pitt’s School of Medicine. The work was conducted through the Center for the Neural Basis of Cognition, which is jointly operated by Pitt and Carnegie Mellon University.

To produce the models, the Pitt team stamped adhesive proteins onto silicon discs. Once the proteins were cultured and dried, cultured hippocampus cells from embryonic rats were fused to the proteins and then given time to grow and connect to form a natural network. The  researchers disabled the cells’ inhibitory response and then excited the neurons with an electrical pulse.
Zeringue and his colleagues were able to sustain the resulting burst of network activity for up to 12 seconds – a long time in neuronal terms. Compared with the natural duration of at most 0.25 seconds, the model's 12 seconds permitted an extensive observation of how the neurons transmitted and held the electrical charge, Zeringue said.

Unraveling the mechanics of this network communication is key to understanding the cellular and molecular basis of memory creation, Zeringue said. The format developed at Pitt makes neural networks more accessible for experimentation. For instance, the team found that when activity in one neuron is suppressed, the others respond with greater excitement.
“We can look at neurons as individuals, but that doesn’t reveal a lot,” Zeringue said. “Neurons are more connected and interdependent than any other cell in the body. Just because we know how one neuron reacts to something, a whole network can react not only differently, but sometimes in the complete opposite manner predicted.”

Zeringue will next work to understand the underlying factors that govern network communication and stimulation, such as the various electrical pathways between cells and the genetic makeup of individual cells.

Source University of Pittsburgh

Tuesday, May 24, 2011

Why people with schizophrenia may have trouble reading social cues

Understanding the actions of other people can be difficult for those with schizophrenia. Vanderbilt University researchers have discovered that impairments in a brain area involved in perception of social stimuli may be partly responsible for this difficulty.
“Misunderstanding social situations and interactions are core deficits in schizophrenia,” said Sohee Park, Gertrude Conaway Professor of Psychology and one of the co-authors on this study. “Our findings may help explain the origins of some of the delusions involving perception and thoughts experienced by those with schizophrenia.”

In findings published in the journal PLoS ONE, the researchers found that a particular brain area, the posterior superior temporal sulcus or STS, appears to be implicated in this deficit.
“Using brain imaging together with perceptual testing, we found that a brain area in a neural network involved in perception of social stimuli responds abnormally in individuals with schizophrenia,” said Randolph Blake, Centennial Professor of Psychology and co-author. “We found this brain area fails to distinguish genuine biological motion from highly distorted versions of that motion.”

The study’s lead author, Jejoong Kim, completed the experiments as part of his dissertation under the supervision of Park and Blake in Vanderbilt’s Department of Psychology. Kim is now conducting research in the Department of Brain and Cognitive Sciences at Seoul National University in Korea, where Blake is currently a visiting professor.

“We have found… that people with schizophrenia tend to ‘see’ living things in randomness and this subjective experience is correlated with an increased activity in the (posterior) STS,” the authors wrote. “In the case of biological motion perception, these self-generated, false impressions of meaning can have negative social consequences, in that schizophrenia patients may misconstrue the actions or intentions of other people.”
In their experiments, the researchers compared the performance of people with schizophrenia to that of healthy controls on two visual tasks. One task involved deciding whether or not an animated series of lights depicted the movements of an actor’s body. The second task entailed judging subtle differences in the actions depicted by two similar animations viewed side by side. On both tasks, people with schizophrenia performed less well than the healthy controls.

fMRI used to identify brain area

Next, the researchers measured brain activity using functional magnetic resonance imaging (fMRI) while the subjects—healthy controls and schizophrenia patients—performed a version of the side-by-side task. Once again, the individuals with schizophrenia performed worse on the task. The researchers were then able to correlate those performance deficits with the brain activity in each person.
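
As an illustration only, a brain-behavior correlation of this general kind can be computed per subject once one has a task score and an averaged response from a region of interest; the sketch below uses made-up numbers and is not the Vanderbilt analysis pipeline or its data.

# Hypothetical sketch of a brain-behavior correlation: relate each subject's
# task accuracy to an averaged response from a region of interest (ROI).
# All numbers below are invented for illustration; they are not study data.
import numpy as np
from scipy import stats

# Per-subject accuracy on the biological-motion task (fraction correct)
accuracy = np.array([0.91, 0.88, 0.79, 0.73, 0.85, 0.69, 0.77, 0.82])

# Per-subject mean STS response difference: biological minus scrambled motion
sts_selectivity = np.array([0.42, 0.37, 0.21, 0.12, 0.33, 0.08, 0.19, 0.28])

r, p = stats.pearsonr(accuracy, sts_selectivity)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
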
The fMRI results showed strong activation of the posterior portion of the STS in the healthy controls when they were shown biological motion. In the individuals with schizophrenia, STS activity remained relatively constant and high regardless of what was presented to them.

Analysis of the brain activity of the schizophrenia patients also showed high STS activity on trials where they reported seeing biological motion, regardless of whether the stimulus itself was biological or not.
For reasons yet to be discovered, area STS in schizophrenia patients fails to differentiate normal human activity from non-human motion, leading Kim and colleagues to surmise that this abnormal brain activation contributes to the patients’ difficulties reading social cues portrayed by the actions of others.



Source Vanderbilt University

What makes an image memorable?

Hint: We tend to remember pictures of people much better than wide open spaces.

Can you guess which of these images would be the most memorable? Give up? It's the top left and bottom right ones. Images courtesy of the Oliva and Torralba labs.

Next time you go on vacation, you may want to think twice before shooting hundreds of photos of that scenic mountain or lake.

A new study from MIT neuroscientists shows that the most memorable photos are those that contain people, followed by static indoor scenes and human-scale objects. Landscapes? They may be beautiful, but they are, in most cases, utterly forgettable.

“Pleasantness and memorability are not the same,” says MIT graduate student Phillip Isola, one of the lead authors of the paper, which will be presented at the IEEE Conference on Computer Vision and Pattern Recognition, taking place June 20-25 in Colorado Springs.

The new paper is the first to model what makes an image memorable — a trait long thought to be impenetrable to scientific study, because visual memory can be so subjective. “People did not think it was possible to find anything consistent,” says Aude Oliva, associate professor of cognitive science and a senior author of the paper.

However, the MIT team, which also included Antonio Torralba, the Esther and Harold E. Edgerton Associate Professor of Electrical Engineering and Computer Science, and one of his graduate students, Jianxiong Xiao, was surprised to see remarkable consistency among hundreds of people who participated in the memory experiments.

Using their findings from humans, the researchers developed a computer algorithm that can rank images based on memorability. Such an algorithm could be useful to graphic designers, photo editors, or anyone trying to decide which of their vacation photos to post on Facebook, Oliva says.

Why we remember

Oliva’s previous research has shown that the human brain can remember thousands of images, with a surprising level of detail. However, not all images are equally memorable.

For the new study, the researchers built a collection of about 10,000 images of all kinds — interior-design photos, nature scenes, streetscapes and others. Human subjects in the study (who participated through Amazon’s Mechanical Turk program, which farms tasks out to people sitting at their own computers) were shown a series of images, some of which were repeated. Their task was to indicate, by pressing a key on their keyboard, when an image appeared that they had already seen.

Each image’s memorability rating was determined by how many participants correctly remembered seeing it.

In general, different research subjects tended to produce similar memorability ratings. “There are always differences between observers, but on average, there is very high consistency,” says Oliva, who is also a principal investigator in the computer vision group at MIT’s Computer Science and Artificial Intelligence Laboratory.
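
A rough sketch of how such a score and its cross-observer consistency might be computed is shown below; the procedure and data are invented for illustration, and the MIT study's exact scoring may differ.

# Hypothetical sketch: memorability as the fraction of observers who correctly
# recognized an image on its repeat, plus a split-half consistency check.
# 'hits' is a made-up (observers x images) boolean matrix, not study data.
import numpy as np

rng = np.random.default_rng(0)
n_observers, n_images = 200, 500
base = rng.uniform(0.3, 0.95, size=n_images)        # each image's assumed "true" memorability
hits = rng.random((n_observers, n_images)) < base   # did each observer remember each image?

memorability = hits.mean(axis=0)                    # fraction of observers who remembered each image

# Split-half consistency: do two random halves of observers rank images similarly?
perm = rng.permutation(n_observers)
half_a = hits[perm[:n_observers // 2]].mean(axis=0)
half_b = hits[perm[n_observers // 2:]].mean(axis=0)
consistency = np.corrcoef(half_a, half_b)[0, 1]
print(f"top image score: {memorability.max():.2f}, split-half correlation: {consistency:.2f}")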

After gathering their data, the researchers made “memorability maps” of each image by asking people to label all the objects in the images. A computer model could then analyze those maps to determine which objects make an image memorable.

A series of memorable and forgettable photos from the study. The forgettable images are in the top row. The memorable ones are in the bottom row. 

In general, images with people in them are the most memorable, followed by images of human-scale space — such as the produce aisle of a grocery store — and close-ups of objects. Least memorable are natural landscapes, although those can be memorable if they feature an unexpected element, such as shrubbery trimmed into an unusual shape.

Alexei Efros, associate professor of computer science at Carnegie Mellon University, says the study offers a novel way to characterize images.

“There has been a lot of work in trying to understand what makes an image interesting, or appealing, or what makes people like a particular image. But all of those questions are really hard to answer,” says Efros, who was not involved in this research. “What [the MIT researchers] did was basically approach the problem from a very scientific point of view and say that one thing we can measure is memorability.”

Predicting memorability

The researchers then used machine-learning techniques (a type of statistical analysis that allows computers to identify patterns in data) to create a computational model that analyzed the images and their memorability as rated by humans. For each image, the computational model analyzed various statistics — such as color, or the distribution of edges — and correlated them with the image’s memorability.
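
A minimal sketch of this features-to-memorability pipeline, assuming scikit-learn is available, might look like the following; the features here (a coarse color histogram and an edge-density number) are far cruder than those used in the actual paper, and the images and scores are random stand-ins.

# Hypothetical sketch of predicting memorability from simple image statistics.
# The real work used richer features; this only illustrates the
# "features -> regression -> predicted score" structure.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def simple_features(image):
    """Crude global statistics for an RGB image array with values in [0, 1]."""
    color_hist, _ = np.histogramdd(image.reshape(-1, 3),
                                   bins=(4, 4, 4), range=[(0, 1)] * 3)
    color_hist = color_hist.ravel() / image.shape[0] / image.shape[1]
    gray = image.mean(axis=2)
    edges = np.abs(np.diff(gray, axis=0)).mean() + np.abs(np.diff(gray, axis=1)).mean()
    return np.concatenate([color_hist, [edges]])

# Made-up stand-ins for the dataset: random "images" and random scores,
# so the printed score is meaningless here; real memorability ratings
# from human observers would go in their place.
rng = np.random.default_rng(1)
images = rng.random((300, 64, 64, 3))
scores = rng.uniform(0.4, 0.9, size=300)

X = np.array([simple_features(im) for im in images])
X_train, X_test, y_train, y_test = train_test_split(X, scores, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))

On real data, the held-out score indicates how well the learned model predicts memorability for images it has never seen, which is the basis of the algorithm described next.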

That allowed the researchers to generate an algorithm that can predict memorability of images the computational model hasn’t “seen” before. Such an algorithm could be used by book publishers to evaluate cover art, or news editors looking for the most memorable photograph to feature on their website. 

Oliva believes the algorithm might also be of interest to camera manufacturers, and Isola is thinking about designing an iPhone app that could immediately tell users how memorable the photo they just took will be. For that application, the main challenge is getting the algorithm to work fast enough, Isola says.

Other possible applications are clinical memory tests that more precisely reveal what aspects of visual memory are deficient in specific psychological or brain disorders, and games to help train the memory.

The researchers are now doing a follow-up study to test longer-term memorability of images. They are also working on adding more detailed descriptions of image content, such as “two people shaking hands,” or “people looking at each other,” to each image’s memorability map, in an effort to find out more about what makes the image memorable.

Source MIT News

Wednesday, May 18, 2011

Nuclear Magnetic Resonance With No Magnets

Berkeley Lab nuclear physicists and materials scientists contribute to a remarkable advance in NMR.

Nuclear magnetic resonance (NMR), a scientific technique associated with outsized, very low-temperature, superconducting magnets, is one of the principal tools in the chemist’s arsenal, used to study everything from alcohols to proteins to such frontiers as quantum computing. In hospitals the machinery of NMR’s cousin, magnetic resonance imaging (MRI), is as loud as it is big, but nevertheless a mainstay of diagnosis for a wide range of medical conditions.

Spectroscopy with conventional nuclear magnetic resonance (NMR) requires large, expensive, superconducting magnets cooled by liquid helium, like the one in the background. The Pines and Budker groups have demonstrated NMR spectroscopy with a device only a few centimeters high, using no magnets at all (foreground). A chemical sample in the test tube (green) is polarized by introducing hydrogen gas in the parahydrogen form. The sample’s NMR is measured with an optical-atomic magnetometer, at center; laser beams crossing at right angles pump and probe the atoms in the microfabricated vapor cell.

It sounds like magic, but now two groups of scientists at Berkeley Lab and UC Berkeley, one expert in chemistry and the other in atomic physics, long working together as a multidisciplinary team, have shown that chemical analysis with NMR is practical without using any magnets at all.

Dmitry Budker of Berkeley Lab’s Nuclear Science Division, a professor of physics at UC Berkeley, is a protean experimenter who leads a group with interests ranging as far afield as tests of the fundamental theorems of quantum mechanics, biomagnetism in plants, and violations of basic symmetry relations in atomic nuclei. Alex Pines, of the Lab’s Materials Sciences Division and UCB’s Department of Chemistry, is a modern master of NMR and MRI. He guides the work of a talented, ever-changing cadre of postdocs and grad students known as the “Pinenuts” – not only in doing basic research in NMR but in increasing its practical applications. Together the groups have extended the reach of NMR by eliminating the use of magnetic fields at different stages of NMR measurements, and have finally done away with external magnetic fields entirely.

Spinning the information
NMR and MRI depend on the fact that many atomic nuclei possess spin (not classical rotation but a quantum number) and – like miniature planet Earths with north and south magnetic poles – have their own dipolar magnetic fields. In conventional NMR these nuclei are lined up by a strong external magnetic field, then knocked off axis by a burst of radio waves. The rate at which each kind of nucleus then “wobbles” (precesses) is unique and identifies the element; for example a hydrogen-1 nucleus, a lone proton, precesses four times faster than a carbon-13 nucleus having six protons and seven neutrons.
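
That factor of four follows directly from the nuclei's gyromagnetic ratios. As a back-of-the-envelope check using standard tabulated values (not numbers from the article), the precession (Larmor) frequency in a field B is

\nu = \frac{\gamma}{2\pi}\,B, \qquad \frac{\nu_{^{1}\mathrm{H}}}{\nu_{^{13}\mathrm{C}}} = \frac{\gamma_{^{1}\mathrm{H}}}{\gamma_{^{13}\mathrm{C}}} \approx \frac{42.58~\mathrm{MHz/T}}{10.71~\mathrm{MHz/T}} \approx 4.0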

Being able to detect these signals depends first of all on being able to detect net spin; if the sample were to have as many spin-up nuclei as spin-down nuclei it would have zero polarization, and signals would cancel. But since the spin-up orientation requires slightly less energy, a population of atomic nuclei usually has a slight excess of spin ups, if only by a few score in a million.
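
How slight that excess is can be estimated from the standard expression for thermal spin polarization; for protons at room temperature in a strong 14-tesla laboratory magnet (textbook values, chosen only for illustration):

P \approx \frac{\gamma\hbar B}{2 k_{B} T} = \frac{(2.68\times 10^{8}~\mathrm{rad\,s^{-1}T^{-1}})(1.05\times 10^{-34}~\mathrm{J\,s})(14~\mathrm{T})}{2\,(1.38\times 10^{-23}~\mathrm{J/K})(300~\mathrm{K})} \approx 5\times 10^{-5}

That is roughly fifty excess spin-up protons per million, and the figure shrinks in proportion to the applied field.
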
“Conventional wisdom holds that trying to do NMR in weak or zero magnetic fields is a bad idea,” says Budker, “because the polarization is tiny, and the ability to detect signals is proportional to the strength of the applied field.”

The lines in a typical NMR spectrum reveal more than just different elements. Electrons near precessing nuclei alter their precession frequencies and cause a “chemical shift” — moving the signal or splitting it into separate lines in the NMR spectrum. This is the principal goal of conventional NMR, because chemical shifts point to particular chemical species; for example, even when two hydrocarbons contain the same number of hydrogen, carbon, or other atoms, their signatures differ markedly according to how the atoms are arranged. But without a strong magnetic field, chemical shifts are insignificant.
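
For reference, the chemical shift is conventionally reported as a fractional offset of the precession frequency (the standard definition, not something specific to this work):

\delta = \frac{\nu_{\mathrm{sample}} - \nu_{\mathrm{reference}}}{\nu_{\mathrm{reference}}} \times 10^{6}~\mathrm{ppm}

Because both frequencies scale with the applied field, a shift of a few parts per million corresponds to a frequency difference that shrinks as the field is lowered and disappears entirely at zero field.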

“Low- or zero-field NMR starts with three strikes against it: small polarization, low detection efficiency, and no chemical-shift signature,” Budker says.
“So why do it?” asks Micah Ledbetter of Budker’s group. It’s a rhetorical question. “The main thing is getting rid of the big, expensive magnets needed for conventional NMR. If you can do that, you can make NMR portable and reduce the costs, including the operating costs. The hope is to be able to do chemical analyses in the field – underwater, down drill holes, up in balloons – and maybe even medical diagnoses, far from well-equipped medical centers.”

Hydrogen molecules consist of two hydrogen atoms that share their electrons in a covalent bond. In an orthohydrogen molecule, both nuclei are spin up. In parahydrogen, one is spin up and the other spin down. The orthohydrogen molecule as a whole has spin one, but the parahydrogen molecule has spin zero. 

“As it happens,” Budker says, “there are already methods for overcoming small polarization and low detection efficiency, the first two objections to low- or zero-field NMR. By bringing these separate methods together, we can tackle the third objection – no chemical shift – as well. Zero-field NMR may not be such a bad idea after all.”

Net spin orientation can be increased in various ways, collectively known as hyperpolarization. One way to hyperpolarize a sample of hydrogen gas is to change the proportions of parahydrogen and orthohydrogen in it. At normal temperature and pressure hydrogen, like many gases, is made of two-atom molecules. If the spins of the proton nuclei point in the same direction, it’s orthohydrogen. If the spins point in opposite directions, it’s parahydrogen.

By the mathematics of quantum mechanics, the nuclear spins of the two protons in a hydrogen molecule (the bonding electrons pair off with no net spin) can combine in three ways to give orthohydrogen a total spin of one, but in only one way to give parahydrogen spin zero. Thus orthohydrogen molecules normally account for three-quarters of hydrogen gas and parahydrogen for only one-quarter.
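
The three-to-one ratio is simply the count of ways two spin-1/2 protons can combine (standard spin algebra, shown here for illustration):

\text{ortho (spin 1):}\quad \lvert\uparrow\uparrow\rangle,\ \ \tfrac{1}{\sqrt{2}}\bigl(\lvert\uparrow\downarrow\rangle + \lvert\downarrow\uparrow\rangle\bigr),\ \ \lvert\downarrow\downarrow\rangle \qquad\qquad \text{para (spin 0):}\quad \tfrac{1}{\sqrt{2}}\bigl(\lvert\uparrow\downarrow\rangle - \lvert\downarrow\uparrow\rangle\bigr)

At room temperature the four combinations are populated almost equally, which is what gives ordinary hydrogen its 3:1 ortho-to-para composition.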

Parahydrogen can be enhanced to 50 percent or even 100 percent using very low temperatures, although the right catalyst must be added or the conversion could take days if not weeks. Then, by chemically reacting the spin-zero parahydrogen molecules with a starting compound, the product of the hydrogenation can end up highly polarized. This hyperpolarization can be extended not only to the parts of the molecule directly reacting with the hydrogen, but even to the far corners of large molecules. The Pinenuts, who devised many of the techniques, are masters of parahydrogen production and its hyperpolarization chemistry.
“With a high proportion of parahydrogen you get a terrific degree of polarization,” says Ledbetter. “The catch is, it’s spin zero. It doesn’t have a magnetic moment, so it doesn’t give you a signal! But all is not lost….”

And now for the magic
In low magnetic fields, increasing detection efficiency requires a very different approach, using detectors called magnetometers. Early low-field experiments used SQUIDs (superconducting quantum interference devices). Although exquisitely sensitive, SQUIDs, like the big magnets used in high-field NMR, must be cryogenically cooled to low temperatures.

Optical-atomic magnetometers are based on a different principle – one that, curiously, is something like NMR in reverse, except that optical-atomic magnetometers measure whole atoms, not just nuclei. Here an external magnetic field is detected through the spins of the atoms inside the magnetometer’s own vapor cell, typically a thin gas of an alkali metal such as potassium or rubidium. The atoms are polarized with laser light; if there is even a weak external field, their spins begin to precess. A second laser beam probes how much they precess and thus how strong the external field is.

Budker’s group has brought optical-atomic magnetometry to a high pitch by such techniques as extending the “relaxation time,” the time before the polarized vapor loses its polarization. In previous collaborations, the Pines and Budker groups have used magnetometers with NMR and MRI to image the flow of water using only the Earth’s magnetic field or no field at all, to detect hyperpolarized xenon gas (but without analyzing chemical states), and in other applications. The next frontier is chemical analysis.
“No matter how sensitive your detector or how polarized your samples, you can’t detect chemical shifts in a zero field,” Budker says. “But there has always been another signal in NMR that can be used for chemical analysis – it’s just that it is usually so weak compared to chemical shifts, it has been the poor relative in the NMR family. It’s called J-coupling.”

Discovered in 1950 by the NMR pioneer Erwin Hahn and his graduate student, Donald Maxwell, J-coupling provides an interaction pathway between two protons (or other nuclei with spin), which is mediated by their associated electrons. The signature frequencies of these interactions, appearing in the NMR spectrum, can be used to determine the angle between chemical bonds and distances between the nuclei.
“You can even tell how many bonds separate the two spins,” Ledbetter says. “J-coupling reveals all that information.”

The resulting signals are highly specific and indicate just what chemical species is being observed. Moreover, as Hahn saw right away, while the signal can be modified by external magnetic fields, it does not vanish in their absence.
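
The simplest case is a directly bonded carbon-13 and hydrogen pair, for which the J-coupling term alone fixes the zero-field spectrum (a textbook result, included here for orientation; one-bond C-H couplings are typically on the order of 100 to 150 Hz):

H_{J} = hJ\,\mathbf{I}\cdot\mathbf{S}, \qquad E_{F=1} = +\tfrac{1}{4}hJ, \quad E_{F=0} = -\tfrac{3}{4}hJ, \quad \nu_{F=0\rightarrow F=1} = J

So at zero field such a pair produces a single line at the frequency J itself, with no applied magnetic field needed to set the scale.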

A molecule of parahydrogen hydrogenates a styrene molecule to form ethylbenzene. J-coupling reveals the position and orientation of the hydrogen atoms and the carbon-13 atoms to which they bond. The upper panel shows a simulated spectrum, in blue, of coupling between a hydrogen and a carbon in the methyl position. The actual experimental data are in white. The lower panel shows simulation of coupling in the methylene position, in green, with actual data in white. Simulation and experiment are in close agreement, indicating the promise of the zero-field technique for chemical fingerprinting.

With Ledbetter in the lead, the Budker/Pines collaboration built a magnetometer specifically designed to detect J-coupling at zero magnetic field. Thomas Theis, a graduate student in the Pines group, supplied the parahydrogen and the chemical expertise to take advantage of parahydrogen-induced polarization. Beginning with styrene, a simple hydrocarbon, they measured J-coupling on a series of hydrocarbon derivatives including hexane and hexene, phenylpropene, and dimethyl maleate, important constituents of plastics, petroleum products, even perfumes.

“The first step is to introduce the parahydrogen,” Budker says. “The top of the set-up is a test tube containing the sample solution, with a tube down to the bottom through which the parahydrogen is bubbled.” In the case of styrene, the parahydrogen was taken up to produce ethylbenzene, a specific arrangement of eight carbon atoms and 10 hydrogen atoms.
 
Immediately below the test tube sits the magnetometer’s alkali vapor cell, a device smaller than a fingernail, microfabricated by Svenja Knappe and John Kitching of the National Institute of Standards and Technology. The vapor cell, which sits on top of a heater, contains rubidium and nitrogen gas through which pump and probe laser beams cross at right angles. The mechanism is surrounded by cylinders of “mu metal,” a nickel-iron alloy that acts as a shield against external magnetic fields, including Earth’s.

Ledbetter’s measurements produced signatures in the spectra which unmistakably identified chemical species and exactly where the polarized protons had been taken up. When styrene was hydrogenated to form ethylbenzene, for example, two atoms from a parahydrogen molecule bound to different atoms of carbon-13 (a scarce but naturally occurring isotope whose nucleus has spin, unlike more abundant carbon-12).
J-coupling signatures are completely different for otherwise identical molecules in which carbon-13 atoms reside in different locations. All of this is seen directly in the results. Says Budker, “When Micah goes into the laboratory, J-coupling is king.”

Of the present football-sized magnetometer and its lasers, Ledbetter says, “We’re already working on a much smaller version of the magnetometer that will be easy to carry into the field.”
Although experiments to date have been performed on molecules that are easily hydrogenated, hyperpolarization with parahydrogen can also be extended to other kinds of molecules. Budker says, “We’re just beginning to develop zero-field NMR, and it’s still too early to say how well we’re going to be able to compete with high-field NMR. But we’ve already shown that we can get clear, highly specific spectra, with a device that has ready potential for doing low-cost, portable chemical analysis.”

More information 
“Parahydrogen-enhanced zero-field nuclear magnetic resonance,” by Thomas Theis, Paul Ganssle, Gwendal Kervern, Svenja Knappe, John Kitching, Micah Ledbetter, Dmitry Budker, and Alexander Pines, appears in Nature Physics and is available online at http://www.nature.com/nphys/journal/vaop/ncurrent/abs/nphys1986.html. Theis, Ganssle, and Pines are with Berkeley Lab’s Materials Sciences Division and the UC Berkeley Department of Chemistry, as was Kervern, now at the University of Lyon. Knappe and Kitching are with the National Institute of Standards and Technology. Ledbetter is with UC Berkeley’s Department of Physics, as is Budker, who is also a member of Berkeley Lab’s Nuclear Science Division. This work was supported by the National Science Foundation and DOE’s Office of Science.
More about Alex Pines, the “Pinenuts,” and parahydrogen is at http://newscenter.lbl.gov/feature-stories/2007/08/06/pines-talking/.
More about the work of Dmitry Budker and his group on optical-atomic magnetometers is at http://newscenter.lbl.gov/feature-stories/2010/09/14/putting-a-spin-on-light-and-atoms/.

Source Berkeley Lab