Monday, June 27, 2011

The first advertising campaign for non-human primates

Keith Olwell and Elizabeth Kiehner had an epiphany last year. At a TED talk, the two New York advertising executives learned that captive monkeys understand money, and that when faced with economic games they will behave in similar ways to humans. So if they can cope with money, how would they respond to advertising?

Laurie Santos, the Yale University primatologist who gave the TED talk, studies monkeys as a way of exploring the evolution of the human mind. A partnership was soon born between Santos and Proton, Olwell and Kiehner's company. The resulting monkey ad campaign was unveiled on Saturday at the Cannes Lions Festival, the creative festival for the advertising industry.

Monkey brands

The objective, says Olwell, is to see if advertising can make brown capuchins change their behaviour. The team will create two brands of food – possibly two colours of jello – targeted specifically at brown capuchins, one supported by an ad campaign and the other not.
How do you advertise to monkeys? Easy: create a billboard campaign that hangs outside the monkeys' enclosure.

"The foods will be novel to them and are equally delicious," Olwell says. Brand A will be advertised and brand B will not. After a period of exposure to the campaign, the monkeys will be offered a choice of both brands.
Santos plans to kick off the experimental campaign in the coming weeks. "If they tend toward one and not the other we'll be witnessing preference shifting due to our advertising," Olwell says.
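Neither the article nor the team spells out how the choice data will be analysed. As a rough illustration only (the counts below are invented, and the study's actual analysis may differ), a simple binomial test of the monkeys' choices against a 50/50 null is one way to decide whether a preference has shifted:

```python
# Hypothetical analysis sketch: did the ad campaign shift capuchin preferences?
# The counts are invented for illustration; this is not the study's actual method.
from scipy.stats import binomtest

choices_brand_a = 74   # hypothetical: trials in which a monkey picked the advertised brand
total_choices = 120    # hypothetical: total forced-choice trials across the troop

result = binomtest(choices_brand_a, total_choices, p=0.5, alternative="two-sided")
print(f"Brand A chosen in {choices_brand_a / total_choices:.0%} of trials, p = {result.pvalue:.3f}")
# A small p-value would indicate a departure from 50/50 chance, i.e. a preference shift.
```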

Sex sells

Olwell says that developing a campaign for non-humans threw up some special challenges. "They do not have language or culture and they have very short attention spans," he says. "We really had to strip out any hip and current thinking and get to the absolute core of what is advertising.
"We're used to doing fairly complex and nuanced work. For this exploration we had to constantly ask ourselves, 'Could we be less finessed?'. We wanted the most visceral approaches."

New Scientist has seen the resulting two billboards. We are unable to show them until Santos and her team have completed their study, but we can reveal that their message is most certainly visceral.
One billboard shows a graphic shot of a female monkey with her genitals exposed, alongside the brand A logo. The other shows the alpha male of the capuchin troop associated with brand A.
Olwell expects brand A to be the capuchins' favoured product. "Monkeys have been shown in previous studies to really love photographs of alpha males and shots of genitals, and we think this will drive their purchasing habits."

The team wanted shots for the campaign that were as natural as possible. "After we settled on what they were being sold and that we were going to be doing 'sex sells', we really wanted to make a very direct ad. We wanted to shoot our subjects involved in normal day-to-day life."

Source New Scientist

Friday, June 24, 2011

Biologists discover how yeast cells reverse aging

The gene they found can double yeast lifespan when turned on late in life.

A whole yeast (Saccharomyces cerevisiae) cell viewed by X-ray microscopy. Inside, the nucleus and a large vacuole (red) are visible. 

Human cells have a finite lifespan: They can only divide a certain number of times before they die. However, that lifespan is reset when reproductive cells are formed, which is why the children of a 20-year-old man have the same life expectancy as those of an 80-year-old man.

How that resetting occurs in human cells is not known, but MIT biologists have now found a gene that appears to control this process in yeast. Furthermore, by turning on that gene in aged yeast cells, they were able to double their usual lifespan.

If the human cell lifespan is controlled in a similar way, it could offer a new approach to rejuvenating human cells or creating pluripotent stem cells, says Angelika Amon, professor of biology and senior author of a paper describing the work in the June 24 issue of the journal Science.

“If we can identify which genes reverse aging, we can start engineering ways to express them in normal cells,” says Amon, who is also a member of the David H. Koch Institute for Integrative Cancer Research. Lead author of the paper is Koch Institute postdoc Elçin Ünal.

Rejuvenation

Scientists already knew that aged yeast cells look different from younger cells. (Yeast have a normal lifespan of about 30 cell divisions.) Those age-related changes include accumulation of extra pieces of DNA, clumping of cellular proteins and abnormal structures of the nucleolus (a cluster of proteins and nucleic acids in the cell nucleus where the cell's ribosomes, its protein factories, are assembled).

However, they weren’t sure which of these physical markers were actually important to the aging process. “Nobody really knows what aging is,” Amon says. “We know all these things happen, but we don’t know what will eventually kill a cell or make it sick.”

When yeast cells reproduce, they undergo a special type of cell division called meiosis, which produces spores. The MIT team found that the signs of cellular aging disappear at the very end of meiosis. “There’s a true rejuvenation going on,” Amon says.

The researchers discovered that a gene called NDT80 is activated at the same time that the rejuvenation occurs. When they turned on this gene in aged cells that were not reproducing, the cells lived twice as long as normal.

“It took an old cell and made it young again,” Amon says.

In aged cells with activated NDT80, the nucleolar damage was the only age-related change that disappeared. That suggests that nucleolar changes are the primary force behind the aging process, Amon says.

The next challenge, says Daniel Gottschling, a member of the Fred Hutchinson Cancer Research Center in Seattle, will be to figure out the cellular mechanisms driving those changes. “Something is going on that we don’t know about,” says Gottschling, who was not involved in this research. “It opens up some new biology, in terms of how lifespan is being reset.”

The protein produced by the NDT80 gene is a transcription factor, meaning that it activates other genes. The MIT researchers are now looking for the genes targeted by NDT80, which likely carry out the rejuvenation process.

Amon and her colleagues are also planning to study NDT80’s effects in the worm C. elegans, and may also investigate the effects of the analogous gene in mice, p63. Humans also have the p63 gene, a close relative of the cancer-protective gene p53, and it is found in the cells that make sperm and eggs.

Source MIT

Humans Guided Evolution of Dog Barks

It’s a question that tends to arise when a neighborhood mutt sees a cat at 3 a.m., or if you live in an apartment above someone who leaves their small, yapping dog alone all day: Why do dogs bark so much?
Perhaps because humans designed them that way.

“The direct or indirect human artificial selection process made the dog bark as we know,” said Csaba Molnar, formerly an ethologist at Hungary’s Eotvos Lorand University.
Molnar’s work was inspired by a simple but intriguing fact: Barking is common in domesticated dogs, but infrequent if not downright absent in their wild counterparts. Wild dogs yip and squeal and whine, but rarely produce the repetitive acoustic percussion that is barking. Many people had made that observation, but Molnar and his colleagues were the first to rigorously investigate it.

Because anatomical differences between wild and domestic dogs don’t explain the barking gap, Molnar hypothesized a link to their one great difference: Domesticated dogs have spent the last 50,000 years in human company, being intensively bred to fit our requirements.
Evolution over such a relatively short time is difficult to pin down, but Molnar reasoned that if his hypothesis were correct, two facts would need to be true: Barks should contain information about dogs’ internal states or external environment, and humans should be able to interpret them.
To people who know dogs well, this might seem self-evident. But not every intuition is true. As Molnar’s research would show, sheepherders — people understandably certain in their ability to recognize their own dogs’ voices — actually couldn’t distinguish their dogs’ barks from those of other dogs.

Molnar tested his propositions in a series of experiments described in various journal papers between 2005 and 2010. The most high-profile, published in 2008 in the journal Animal Cognition, described using a computer program to classify dog barks.
At the time, many journalists — including this one — glibly interpreted the study as a halting step towards dog-to-human translation, but its significance was deeper. Molnar’s statistical algorithm showed that dog barks displayed common patterns of acoustic structure. In terms of pitch and repetition and harmonics, one dog’s alarm bark fundamentally resembled another dog’s alarm bark, and so on.
Intriguingly, the algorithm showed the most between-individual variation in barks made by dogs at play. According to Molnar, this is a hint of human pressure at work. People traditionally needed to identify alarm sounds quickly, but sounds of play were relatively unimportant.
By recording barks in various situations — confronting a stranger, at play, and so on — and playing them back to humans, Molnar’s group then showed that people could reliably identify the context in which barks were made. In short, we understand them.
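The article doesn't give the details of Molnar's feature set or classifier, but the general approach his result implies can be sketched as follows: summarise each bark recording with a handful of acoustic features and check whether a classifier recovers the recording context at above-chance accuracy. The libraries (librosa, scikit-learn), file names and labels below are assumptions for illustration, not the study's actual pipeline.

```python
# Minimal sketch of bark-context classification from acoustic features.
# Not Molnar's algorithm: paths and labels are placeholders for real labelled recordings.
import librosa
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def bark_features(path):
    # Summarise a recording as mean MFCCs, a crude stand-in for the pitch,
    # repetition and harmonic measures described in the study.
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

def context_accuracy(recordings):
    # recordings: list of (wav_path, context_label) pairs, e.g.
    # [("stranger_01.wav", "stranger"), ("play_01.wav", "play"), ...]
    features = np.array([bark_features(path) for path, _ in recordings])
    labels = np.array([label for _, label in recordings])
    # Cross-validated accuracy well above chance would mean the barks carry
    # recoverable information about the context in which they were made.
    return cross_val_score(SVC(kernel="rbf"), features, labels, cv=5).mean()
```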

The findings support Molnar’s original hypothesis, though more work is needed. Molnar started to cross-reference a phylogenetic tree of dog breeds with their barking habits, looking for an evolutionary trajectory, but never finished. He had been a student, and his thesis was complete. Unable to get more funding, he’s now a science journalist.
According to Eugene Morton, a zoologist and animal communication expert at the National Zoo, Molnar’s ideas are quite plausible. Morton noted that barking is a very useful type of sound, simple and capable of carrying over long distances. However, it could have been a side effect of humans favoring other, domestication-friendly traits in the wolves from which modern dogs descended.
“Barks are used by juvenile wolves, by pups. It’s neotenic — something derived from a juvenile stage, and kept in adults. That’s probably what we selected for,” said Morton. “We don’t want dogs who are dominant over us. The bark might go along with that breeding for juvenile behavior. Or it could have come with something else we selected, such as a lack of aggression.”

Molnar’s research is now a fascinating footnote waiting to be pushed forward by other researchers. In addition to that phylogenetic tree of barking, Molnar would like to see analyses of relationships between breeds’ bark characteristics and their traditional roles. If, as with the deep frightening rumble of mastiff guards, breeds’ barks tend to fit their jobs, it would further support the notion of human-guided bark evolution.
The ultimate evidence, said Molnar, would be if human knowledge of bark structure could be used to synthesize barks. “If these barks, played to dogs and humans, had the same effects, it would be awesome,” he said.

Source Wired

Salty Plumes Point to Underground Ocean inside Saturn's Moon Enceladus

A NASA spacecraft that in 2005 discovered watery plumes spewing from the surface of Saturn's icy moon Enceladus has now found compelling evidence that the plumes stem from an underground reservoir of saltwater.

WATERWORKS: Abundant salt grains in the plumes spewing from Enceladus point to a large underground reservoir in the icy moon.

The Cassini probe in 2008 and 2009 flew through a towering plume emanating from the moon's southern polar region and sampled its contents. In an analysis published online June 22 in Nature, a team of researchers reports that the composition of the plume is most easily explained by a sizable subterranean body of water. (Scientific American is part of Nature Publishing Group.) Cassini's instruments were not designed to make such measurements—and in fact the mission was supposed to have ended before the flyby took place—but with a postponed retirement and a few on-the-fly software tweaks the versatile spacecraft was able to get a whiff of the geyserlike ejecta.

The plumes have since their discovery been known to be rich in water vapor, but their origin has remained unsettled. Even in the absence of a liquid reservoir belowground, water vapor could stem from some of Enceladus's abundant ice sublimating directly to vapor in the vacuum of space or from the breakdown of hydrated solids called clathrates.

But whereas a liquid reservoir in contact with the moon's rocky core should contain dissolved salts that would be injected into an upwelling geyser, sublimating ice or decomposing clathrates would be much less efficient at producing a salty plume. Planetary scientist Frank Postberg of Heidelberg University and the Max Planck Institute for Nuclear Physics in Heidelberg, Germany, and his colleagues gathered some support for the saltwater hypothesis in 2009 when they showed that some particles in Saturn's diffuse E ring were salt-rich. The E ring is fed by Enceladus's plumes, so the implication was that the salty grains originated in the icy moon's hypothesized subterranean ocean and were ejected into the ring as a kind of frozen sea spray.

But with salty grains constituting only a small percentage of the E ring particles, the ocean hypothesis was hardly a lock. In the new analysis of Cassini's dives through the plumes Postberg and his co-authors found a much greater salt concentration—almost all the particles near the source of the plumes are salty ices. It now becomes much more difficult to explain Enceladus's eruption without invoking a large underground reservoir. "Over 99 percent of the emitted ice being salt-rich, that makes a much stronger case [for an ocean], and it's not in agreement with ice sublimation," Postberg says. "Now, with 99 percent, we know that it's just not plausible to be coming from a solid." The salt-rich grains are heavy and tend to fall back to the surface, explaining their relative paucity in Saturn's E ring compared with lighter, salt-free particles.

"They got a sniff of these salty ice grains when they flew through the E ring," says planetary scientist Francis Nimmo of the University of California, Santa Cruz, who did not contribute to the new study. "Now that sniff has become—practically everything is salty. It makes the case that these grains are coming from some liquid reservoir kind of inescapable."

The icy moon, just 500 kilometers in diameter, could be one of several moons in the solar system to harbor underground stores of liquid water. Some evidence has hinted at a subterranean ocean for Titan, a much larger Saturnian moon, as well as for Ganymede, Callisto and Europa, three of Jupiter's largest satellites, and for Neptune's moon Triton.

But just what Enceladus's reservoir would look like is somewhat uncertain. The salt content implies a body of water in contact with the moon's rocky core, which Enceladus's density indicates is dozens of kilometers below the surface, but the escaping vapor at the surface points to evaporation at much shallower subterranean depths. One possibility is a series of near-surface misty caverns fed by a saltwater ocean at Enceladus's core. "You have an ocean at depth at the interface of the ice and the rocky core," Postberg says. "But it must be connected with reservoirs that are only a few hundred meters below the surface."

The scale and complexity of that hypothesized plumbing raises some questions. "These 'deep misty caverns' must be truly immense, and connected in complicated ways with the ocean and with the surface," says Nicholas Schneider, a planetary scientist at the University of Colorado at Boulder. The detection of salt in the plumes is indeed consistent with a liquid source, Schneider says, but geophysicists now need to come up with a viable description of a watery internal structure for the satellite. "After all, we're really using the plumes to tell us what's going on inside, and nobody's taken up that challenge," he says. "We're watching what little Enceladus spits up, but that hardly tells us much about the baby's insides!"

Another question is how a tiny, icy satellite like Enceladus could maintain a large body of liquid water. The tidal energy generated by Enceladus's orbit around Saturn provides some heating, but not enough to keep a large amount of water from freezing over billions of years. "The big question that we still don't have answered is: How can an ocean survive for geological time?" Nimmo says. "Most likely the answer is it's not a global ocean at all but more of a regional sea." In other words, Enceladus's tidal heat could be concentrated on the south polar region, allowing for a localized reservoir of liquid there on an otherwise frozen moon.

Perhaps Cassini, which has been exploring Saturn since 2004, will deliver more answers about the mysterious ice world in the coming years. The spacecraft's mission, originally set to end in 2008, has been extended through 2017.

Source Scientific American

Raising the Temperature on Cold-Blooded Dinosaurs

By studying the chemical composition of dinosaur teeth, scientists have determined that some sauropods had body temperatures as warm as those of mammals.

Robert Eagle, an evolutionary biologist at the California Institute of Technology, and colleagues analyzed 11 dinosaur teeth from sauropods. The researchers report their findings in the current issue of the journal Science.
Camarasaurus, a sauropod found in the United States, could reach a length of 66 feet and weigh up to 15 tons. The researchers estimated its body temperature to be about 96.3 degrees Fahrenheit.
Brachiosaurus, a larger sauropod that could grow to 75 feet and 40 tons, was even warmer, about 100.8 degrees Fahrenheit.

A normal human temperature is about 98.6 degrees Fahrenheit.
“So the first conclusion we could draw from that was that these large dinosaurs didn’t have temperatures as cold as modern crocodiles and alligators,” Dr. Eagle said.
But that does not mean that the dinosaurs had internal thermostats to keep body temperature constant independent of the environment, the way mammals and birds do. For one thing, the dinosaurs must have had “the capacity to retain environmental heat just as a function of being so large,” Dr. Eagle said. And they must have had ways to prevent themselves from overheating, he added.

“They might have had physical adaptations, like an internal air sac system, or they may have been seeking out shade in the hottest part of the day,” he said. Or they may have used their long necks and tails to release heat.
In conducting their studies, the researchers looked at the bonding between two isotopes — carbon-13 and oxygen-18 — in bioapatite, a mineral found in dinosaur teeth.
The number of bonds in the mineral correlates with the animals’ temperatures, Dr. Eagle said.
Last year, his team published a preliminary study in which they similarly determined the temperatures of crocodiles, aquarium sharks and alligators by studying dental enamel.
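The article doesn't give the calibration the team used, but clumped-isotope thermometry in general converts the measured excess of bonds between carbon-13 and oxygen-18 (reported as a quantity called Δ47) into temperature through an empirically fitted relation that is linear in 1/T². The sketch below only shows the shape of that calculation; the calibration constants and the input value are placeholders, not the study's numbers.

```python
# Illustrative clumped-isotope temperature conversion (placeholder calibration,
# NOT the calibration used in the dinosaur-tooth study).
import math

A = 0.0592e6   # hypothetical calibration slope (permil * K^2)
B = -0.02      # hypothetical calibration intercept (permil)

def temperature_from_delta47(delta47_permil):
    # Calibration form: Delta47 = A / T**2 + B, solved for T (kelvin),
    # then converted to degrees Celsius.
    t_kelvin = math.sqrt(A / (delta47_permil - B))
    return t_kelvin - 273.15

# A hypothetical Delta47 value for a tooth-enamel sample; prints roughly 36 C,
# i.e. a body temperature in the range the researchers report.
print(f"{temperature_from_delta47(0.60):.1f} C")
```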

Source The New York Times

Smarter car algorithm shows radio interference risk

An experiment at the Massachusetts Institute of Technology has highlighted some of the hidden risks inherent in (supposedly) smart cars that will depend on radio-based Intelligent Transport Systems (ITS) for extra safety on the road.

In an ITS system, in-car computers communicate with each other over vehicle-to-vehicle (V2V) microwave radio links, while the cars also communicate with traffic lights and roadside speed sensors over a vehicle-to-infrastructure (V2I) radio signalling system (the infrastructure transmits information about cars that are too old to have ITS systems fitted). When two cars are approaching a junction and the V2V/V2I speed signals suggest they are going to crash, a warning can be sounded or a software algorithm can choose to make one of the cars brake, for instance.

I tried this out on the Millbrook test track in Bedfordshire, UK, in 2007: as I sped towards a junction in a Saab, my brakes were automatically applied to allow a speeding Opel to pass in front of me. It was by turns scary and impressive. But if it hadn't worked I'd have been toast.
But MIT engineer Domitilla Del Vecchio says such systems can be over-protective, taking braking action when there is no real threat. "It's tempting to treat every vehicle on the road as an agent that's playing against you," she says in an MIT research brief issued today.

So she and researcher Rajeev Verma set out to design an algorithm that doesn't over-react - and to test it with model vehicles in a lab. Their trick was simple: calculate not speed but acceleration and deceleration as cars approach a junction, allowing a much finer calculation of the risk. In 97 out of 100 circuits, the collision avoidance technology worked fine.
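The article doesn't reproduce the MIT algorithm itself, but the underlying idea can be illustrated with constant-acceleration kinematics: predict when each car will actually reach the junction from its position, speed and current acceleration, and intervene only if the predicted occupancy windows overlap. The sketch below is a simplified illustration under those assumptions, not Del Vecchio and Verma's algorithm.

```python
# Illustrative intersection-conflict check that uses acceleration, not just speed.
# Simplified sketch only; thresholds and vehicle parameters are invented.
import math

def arrival_time(distance_m, speed_mps, accel_mps2):
    # Time to cover distance_m under constant acceleration:
    # d = v*t + 0.5*a*t^2, taking the first positive root.
    if abs(accel_mps2) < 1e-9:
        return distance_m / speed_mps
    disc = speed_mps ** 2 + 2.0 * accel_mps2 * distance_m
    if disc < 0:
        return math.inf  # the car brakes to a stop before reaching the junction
    return (-speed_mps + math.sqrt(disc)) / accel_mps2

def conflict(car_a, car_b, occupancy_s=1.5):
    # Each car is (distance_to_junction_m, speed_mps, accel_mps2); flag a
    # conflict only if the two predicted arrival times are closer than the
    # time a car needs to clear the junction.
    return abs(arrival_time(*car_a) - arrival_time(*car_b)) < occupancy_s

# A hard-braking car never reaches the junction, so no intervention is needed
# even though its current speed alone would look threatening.
print(conflict((40.0, 15.0, 0.0), (45.0, 20.0, -4.5)))   # False
```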

But the other three runs produced two near misses and one collision. The reason? Nothing to do with the algorithm: it was due to delays in V2V and V2I radio communication. This highlights the risk of depending on a complex safety system like ITS - especially a radio-based one, which could easily be jammed or suffer electromagnetic interference from the wireless technologies that proliferate in our built environment.

There is only so much that researchers can do against a phenomenon as difficult to predict as radio interference.
The take-home message? ITS technology will doubtless do much to improve road safety - but sometimes it won't. It's never going to substitute for driver alertness.

Source New Scientist

Thursday, June 23, 2011

Lab yeast make evolutionary leap to multicellularity

IN JUST a few weeks single-celled yeast have evolved into a multicellular organism, complete with division of labour between cells. This suggests that the evolutionary leap to multicellularity may be a surprisingly small hurdle.

One giant leap for yeastkind

Multicellularity has evolved at least 20 times since life began, but the last time was about 200 million years ago, leaving few clues to the precise sequence of events. To understand the process better, William Ratcliff and colleagues at the University of Minnesota in St Paul set out to evolve multicellularity in a common unicellular lab organism, brewer's yeast.

Their approach was simple: they grew the yeast in a liquid and once each day gently centrifuged each culture, inoculating the next batch with the yeast that settled out on the bottom of each tube. Just as large sand particles settle faster than tiny silt, groups of cells settle faster than single ones, so the team effectively selected for yeast that clumped together.
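For intuition only (this is not the authors' protocol, just a toy model under a Stokes-like assumption that a clump's sinking speed grows roughly with the square of its radius), repeated rounds of "keep whatever reaches the bottom first" are enough to drive a population of single cells towards heritable clumping:

```python
# Toy model of settling selection for clumpiness (illustrative only; the real
# experiment selected real yeast by daily centrifugation, not by simulation).
import random

def one_transfer(stickiness, keep_fraction=0.1, mutation_sd=0.05):
    # stickiness: each lineage's heritable tendency to stay attached after division.
    # Assume clump size grows with stickiness, and settling speed grows with
    # clump radius squared, i.e. roughly with size**(2/3).
    def settling_speed(s):
        clump_size = 1 + 20 * s
        return clump_size ** (2 / 3)

    survivors = sorted(stickiness, key=settling_speed, reverse=True)
    survivors = survivors[: max(1, int(len(survivors) * keep_fraction))]
    # Regrow the culture from the fastest-settling lineages, with small
    # heritable variation so selection has something to act on.
    return [max(0.0, random.choice(survivors) + random.gauss(0, mutation_sd))
            for _ in range(len(stickiness))]

population = [0.0] * 1000            # start fully unicellular (no clumping)
for transfer in range(60):           # one transfer per day for 60 days
    population = one_transfer(population)
print("mean stickiness after 60 transfers:", round(sum(population) / len(population), 2))
```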

Sure enough, within 60 days - about 350 generations - every one of their 10 culture lines had evolved a clumped, "snowflake" form. Crucially, the snowflakes formed not from unrelated cells banding together but from cells that remained connected to one another after division, so that all the cells in a snowflake were genetically identical relatives. This relatedness provides the conditions necessary for individual cells to cooperate for the good of the whole snowflake.

"The key step in the evolution of multicellularity is a shift in the level of selection from unicells to groups. Once that occurs, you can consider the clumps to be primitive multicellular organisms," says Ratcliff.
In some ways, the snowflakes do behave as if they are multicellular. They grow bigger by cell division and when the snowflakes reach a certain size a portion breaks off to form a daughter snowflake. This "life cycle" is much like the juvenile and adult stages of many multicellular organisms.

After a few hundred further generations of selection, the snowflakes also began to show a rudimentary division of labour. As the snowflakes reach their "adult" size, some cells undergo programmed cell death, providing weak points where daughters can break off. This lets the snowflakes make more offspring while leaving the parent large enough to sink quickly to the base of the tube, ensuring its survival. Snowflake lineages exposed to different evolutionary pressures evolved different levels of cell death. Since it is rarely to the advantage of an individual cell to die, this is a clear case of cooperation for the good of the larger organism. This is a key sign that the snowflakes are evolving as a unit, Ratcliff reported last week at a meeting of the Society for the Study of Evolution in Norman, Oklahoma.

Other researchers familiar with the work were generally enthusiastic. "It really seemed to me to have the elements of the unfolding in real time of a major transition," says Ben Kerr, an evolutionary biologist at the University of Washington in Seattle. "The fact that it happened so quickly was really exciting."
Sceptics, however, point out that many yeast strains naturally form colonies, and that their ancestors were multicellular tens or hundreds of millions of years ago. As a result, they may have retained some evolved mechanisms for cell adhesion and programmed cell death, effectively stacking the deck in favour of Ratcliff's experiment.

"I bet that yeast, having once been multicellular, never lost it completely," says Neil Blackstone, an evolutionary biologist at Northern Illinois University in DeKalb. "I don't think if you took something that had never been multicellular you would get it so quickly."
Even so, much of evolution proceeds by co-opting existing traits for new uses - and that's exactly what Ratcliff's yeast do. "I wouldn't expect these things to all pop up de novo, but for the cell to have many of the elements already present for other reasons," says Kerr.

Ratcliff and his colleagues are planning to address that objection head-on, by doing similar experiments with Chlamydomonas, a single-celled alga that has no multicellular ancestors. They are also continuing their yeast experiments to see whether further division of labour will evolve within the snowflakes. Both approaches offer an unprecedented opportunity to bring experimental rigour to the study of one of the most important leaps in our distant evolutionary past.

Source New Scientist

Wednesday, June 22, 2011

Cause of hereditary blindness discovered

RUB Medicine: new protein identified.

The researchers first identified progressive retinal degeneration - known as progressive retinal atrophy in dogs and as retinitis pigmentosa in humans - in Schapendoes dogs. Retinitis pigmentosa is the most common hereditary cause of blindness in humans. The researchers report their findings in Human Molecular Genetics.

Genetic test developed

Based on the new findings, the researchers from Bochum have developed a genetic test for diagnosis in this breed of dog, which can also be used predictively in breeding. The Schapendoes is originally a Dutch herding breed, now kept mainly in the Netherlands, Germany, northern Europe and North America. However, the research results are also potentially significant for people: the scientists are currently investigating whether mutations of the CCDC66 gene could also be responsible for retinitis pigmentosa in some patients.

Mouse model: disease progression in months instead of years

"Since the importance of the CCDC66 protein in the organism was completely unknown at the beginning of the work, we developed a mouse model with a defect in the corresponding gene in collaboration with Dr. Thomas Rülicke (Vienna) and Prof. Dr. Saleh Ibrahim (Lübeck)," explained Prof. Epplen. The aim was initially to obtain basic information about the consequences of CCDC66 deficiency in order to draw conclusions about the physiological function of the protein. "Fortunately, the mice showed exactly the expected defect of slowly progressive impaired vision," said Epplen. "Together with Dr. Elisabeth Petrasch-Parwez (RUB) and Prof. Dr. Jan Kremers (Erlangen), we were able to study the entire development of the visual defect in the mouse, anatomically and functionally, in just a few months, whereas it progresses over years in humans and dogs." In this interdisciplinary project, the researchers precisely documented and characterised the progress of the retinal degeneration. "Interestingly," Epplen added, "the CCDC66 protein is, for example, only localised in certain structures of the rods."

Studies continue

The insights gained from the working group's studies can now be applied to better understand the processes that cause this inherited disorder. The mouse model will be studied further, the researchers said, "with regard to malfunctions of the brain, but naturally, above all as a prerequisite for future therapeutic trials in retinitis pigmentosa."


Source EurekaAlert!

University of Minnesota engineering researchers discover source for generating 'green' electricity

University of Minnesota engineering researchers in the College of Science and Engineering have recently discovered a new alloy material that converts heat directly into electricity. This revolutionary energy conversion method is in the early stages of development, but it could have a far-reaching impact on the creation of environmentally friendly electricity from waste heat sources.

Researchers say the material could potentially be used to capture waste heat from a car's exhaust that would heat the material and produce electricity for charging the battery in a hybrid car. Other possible future uses include capturing rejected heat from industrial and power plants or temperature differences in the ocean to create electricity. The research team is looking into possible commercialization of the technology.

"This research is very promising because it presents an entirely new method for energy conversion that's never been done before," said University of Minnesota aerospace engineering and mechanics professor Richard James, who led the research team."It's also the ultimate 'green' way to create electricity because it uses waste heat to create electricity with no carbon dioxide."

To create the material, the research team combined elements at the atomic level to create a new multiferroic alloy, Ni45Co5Mn40Sn10. Multiferroic materials combine unusual elastic, magnetic and electric properties. The alloy Ni45Co5Mn40Sn10 achieves multiferroism by undergoing a highly reversible phase transformation where one solid turns into another solid. During this phase transformation the alloy undergoes changes in its magnetic properties that are exploited in the energy conversion device.

During a small-scale demonstration in a University of Minnesota lab, the new material created by the researchers begins as a non-magnetic material, then suddenly becomes strongly magnetic when the temperature is raised a small amount. When this happens, the material absorbs heat and spontaneously produces electricity in a surrounding coil. Some of this heat energy is lost in a process called hysteresis. A critical discovery of the team is a systematic way to minimize hysteresis in phase transformations. The team's research was recently published in the first issue of the new scientific journal Advanced Energy Materials.
Watch a short research video of the new material suddenly becoming magnetic when heated: http://z.umn.edu/conversionvideo.
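The article doesn't give the device's operating numbers, but the electricity comes from ordinary Faraday induction: when the alloy's magnetization jumps across the phase transformation, the magnetic flux through the surrounding coil changes and drives a voltage. A back-of-the-envelope sketch with made-up values (not the Minnesota device's parameters):

```python
# Back-of-the-envelope Faraday-induction estimate. Every number here is
# invented to show the form of the calculation, not to describe the device.
import math

MU_0 = 4e-7 * math.pi            # vacuum permeability (T*m/A)

turns = 2000                     # hypothetical number of coil turns
area_m2 = 1e-4                   # hypothetical sample cross-section (1 cm^2)
delta_M = 4e5                    # hypothetical magnetization jump (A/m)
transition_time_s = 0.5          # hypothetical time to cross the transformation

# Flux change through the coil from the magnetization jump: dPhi = mu0 * dM * A.
delta_flux = MU_0 * delta_M * area_m2
# Average induced EMF over the transformation: |V| = N * dPhi / dt.
emf_volts = turns * delta_flux / transition_time_s
print(f"average induced EMF ~ {emf_volts * 1e3:.0f} mV")
```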

In addition to Professor James, other members of the research team include University of Minnesota aerospace engineering and mechanics post-doctoral researchers Vijay Srivastava and Kanwal Bhatti, and Ph.D. student Yintao Song. The team is also working with University of Minnesota chemical engineering and materials science professor Christopher Leighton to create a thin film of the material that could be used, for example, to convert some of the waste heat from computers into electricity.
"This research crosses all boundaries of science and engineering," James said. "It includes engineering, physics, materials, chemistry, mathematics and more. It has required all of us within the university's College of Science and Engineering to work together to think in new ways."

Source  EurekaAlert!

Researchers identify components of speech recognition pathway in humans

Finding suggests human speech perception evolved from mechanisms present in other animals.


Washington, D.C. — Neuroscientists at Georgetown University Medical Center (GUMC) have defined, for the first time, three different processing stages that a human brain needs to identify sounds such as speech — and discovered that they are the same as ones identified in non-human primates.

In the June 22 issue of the Journal of Neuroscience, the researchers say their discovery — made possible with the help of 13 human volunteers who spent time in a functional MRI machine — could potentially offer important insights into what can go wrong when someone has difficulty speaking (a process that involves hearing self-generated sounds) or understanding the speech of others.

But more than that, the findings help shed light on the complex, and extraordinarily elegant, workings of the "auditory" human brain, says Josef Rauschecker, PhD, a professor in the departments of physiology/ biophysics and neuroscience and a member of the Georgetown Institute for Cognitive and Computational Sciences at GUMC.

"This is the first time we have been able to identify three discrete brain areas that help people recognize and understand the sounds they are hearing," says Rauschecker. "These sounds, such as speech, are vitally important to humans, and it is critical that we understand how they are processed in the human brain."
Rauschecker and his colleagues at Georgetown have been instrumental in building a unified theory about how the human brain processes speech and language. They have shown that both human and non-human primates process speech along two parallel pathways, each of which runs from lower- to higher-functioning neural regions.

These pathways are dubbed the "what" and "where" streams and are roughly analogous to how the brain processes sight, but in different regions. The "where" stream localizes sound and the "what" pathway identifies the sound.
Both pathways begin with the processing of signals in the auditory cortex, located inside a deep fissure on the side of the brain underneath the temples - the so-called "temporal lobe." Information processed by the "what" pathway then flows forward along the outside of the temporal lobe, and the job of that pathway is to recognize complex auditory signals, which include communication sounds and their meaning (semantics). The "where" pathway is mostly in the parietal lobe, above the temporal lobe, and it processes spatial aspects of a sound — its location and its motion in space — but is also involved in providing feedback during the act of speaking.

Auditory perception - the processing and interpretation of sound information - is tied to anatomical structures; signals move from lower to higher brain regions, Rauschecker says. "Sound as a whole enters the ear canal and is first broken down into single tone frequencies, then higher-up neurons respond only to more complex sounds, including those used in the recognition of speech, as the neural representation of the sound moves through the various brain regions," he says.

In this study, Rauschecker and his colleagues — computational neuroscientist Maximilian Riesenhuber, Ph.D., and Mark Chevillet, a student in the Interdisciplinary Program in Neuroscience — identified the three distinct areas in the "what" pathway in humans that had been seen in non-human primates. Only two had been recognized before in previous human studies.

The first, and most primary, is the "core" which analyzes tones at the basic level of simple frequencies. The second area, the "belt", wraps around the core, and integrates several tones, "like buzz sounds," that lie close to each other, Rauschecker says. The third area, the "parabelt," responds to speech sounds such as vowels, which are essentially complex bursts of multiple frequencies.

Rauschecker is fascinated by the fact that although speech and language are considered to be uniquely human abilities, the emerging picture of brain processing of language suggests "in evolution, language must have emerged from neural mechanisms at least partially available in animals," he says. "There appears to be a conservation of certain processing pathways through evolution in humans and nonhuman primates."

Source EurekaAlert!

Pandora’s Cluster — Clash of the Titans


A team of scientists studying the galaxy cluster Abell 2744, nicknamed Pandora’s Cluster, have pieced together the cluster’s complex and violent history using telescopes in space and on the ground, including the Hubble Space Telescope, the European Southern Observatory’s Very Large Telescope, the Japanese Subaru telescope, and NASA’s Chandra X-ray Observatory.

The giant galaxy cluster appears to be the result of a simultaneous pile-up of at least four separate, smaller galaxy clusters. The crash took place over a span of 350 million years.

The galaxies in the cluster make up less than 5 percent of its mass. The gas (around 20 percent) is so hot that it shines only in X-rays (colored red in this image). The distribution of invisible dark matter (making up around 75 percent of the cluster’s mass) is colored here in blue.

Dark matter does not emit, absorb, or reflect light, but it makes itself apparent through its gravitational attraction. To pinpoint the location of this elusive substance the team exploited a phenomenon known as gravitational lensing. This is the bending of light rays from distant galaxies as they pass through the gravitational field created by the cluster. The result is a series of telltale distortions in the images of galaxies in the background of the Hubble and VLT observations. By carefully analyzing the way that these images are distorted, it is possible to accurately map where the dark matter lies.
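For readers who want the machinery behind that last sentence, the standard weak-lensing bookkeeping (a textbook summary, not the team's specific analysis pipeline) looks like this:

```latex
% Measured galaxy ellipticities estimate the reduced shear g, which depends on the
% convergence kappa: the projected mass density Sigma in units of a critical value.
\[
  g = \frac{\gamma}{1-\kappa}, \qquad
  \kappa = \frac{\Sigma}{\Sigma_{\mathrm{crit}}}, \qquad
  \Sigma_{\mathrm{crit}} = \frac{c^{2}}{4\pi G}\,\frac{D_{\mathrm{s}}}{D_{\mathrm{d}}\,D_{\mathrm{ds}}} ,
\]
% where D_d, D_s and D_ds are angular-diameter distances to the cluster, to the
% background source, and between the two. Inverting the shear field measured from
% many background galaxies yields the projected mass map, dark matter included.
```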

Chandra mapped the distribution of hot gas in the cluster.

The data suggest that the complex collision has separated out some of the hot gas (which interacts upon collision) and the dark matter (which does not) so that they now lie apart from each other, and from the visible galaxies. Near the core of the cluster there is a “bullet” shape where the gas of one cluster collided with that of another to create a shock wave. The dark matter passed through the collision unaffected.

In another part of the cluster, galaxies and dark matter can be found, but no hot gas. The gas may have been stripped away during the collision, leaving behind no more than a faint trail.

The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. NASA's Goddard Space Flight Center manages the telescope. The Space Telescope Science Institute (STScI) conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy, Inc., in Washington, D.C.

City living affects your brain, researchers find

The part of the brain that senses danger becomes overactive in city-dwellers when they are under stress.

The brains of people living in cities operate differently from those in rural areas, according to a brain-scanning study. Scientists found that two regions, involved in the regulation of emotion and anxiety, become overactive in city-dwellers when they are stressed and argue that the differences could account for the increased rates of mental health problems seen in urban areas.

Previous research has shown that people living in cities have a 21% increased risk of anxiety disorders and a 39% increased risk of mood disorders. In addition, the incidence of schizophrenia is twice as high in those born and brought up in cities.
In the new study, Professor Andreas Meyer-Lindenberg of the University of Heidelberg in Germany scanned the brains of more than 50 healthy volunteers, who lived in a range of locations from rural areas to large cities, while they were engaged in difficult mental arithmetic tasks. The experiments were designed to make the groups of volunteers feel anxious about their performance.

The results, published in Nature, showed that the amygdala of participants who currently live in cities was over-active during stressful situations. "We know what the amygdala does; it's the danger-sensor of the brain and is therefore linked to anxiety and depression," said Meyer-Lindenberg.
Another region called the cingulate cortex was overactive in participants who were born in cities. "We know [the cingulate cortex] is important for controlling emotion and dealing with environmental adversity."

This excess activity could be at the root of the observed mental health problems, said Meyer-Lindenberg. "We speculate that stress might cause these abnormalities in the first place – that speculation lies outside what we can show in our study, it is primarily based on the fact that this specific brain area is very sensitive to developmental stress. If you stress an animal, you will find even structural abnormalities in that area and those may be enduring and make an animal anxious. What we're proposing is that stress causes these things and stress is where they are expressed and then lead to an increased risk of mental illness."

By 2050, almost 70% of people are predicted to be living in urban areas. On average, city dwellers are "wealthier and receive improved sanitation, nutrition, contraception and healthcare", wrote the researchers in Nature. But urban living is also associated with "increased risk for chronic disorders, a more demanding and stressful social environment and greater social disparities. The biological components of this complex landscape of risk and protective factors remain largely uncharacterised."

In an accompanying commentary in Nature, Dr Daniel Kennedy and Prof Ralph Adolphs, both at the California Institute of Technology, said that there are wide variations in people's preferences for, and ability to cope with, city life.
"Some thrive in New York city; others would happily swap it for a desert island. Psychologists have found that a substantial factor accounting for this variability is the perceived degree of control that people have over their daily lives. Social threat, lack of control and subordination are all likely candidates for mediating the stressful effects of city life, and probably account for much of the individual differences."

Working out what factors in a city cause the stress in the first place is the next step in trying to understand the mental health effects of urban areas. Meyer-Lindenberg said that social fragmentation, noise or over-crowding might all be factors. "There's prior evidence that if someone invades your personal space, comes too close to you, it's exactly that amygdala-cingulate circuit that gets [switched on] so it could be something as simple as density."

He said the research could be used, in future, to inform city design.
"What we can do is try and make cities better places to live in from the view of mental health. Up to now, there really isn't a lot of evidence-base to tell a city planner what would be good, what would be bad."

Source The Guardian 

Icy Saturn Moon May Have Ocean Beneath Its Surface

Five years ago, scientists discovered that Enceladus, one of Saturn’s moons, had geyserlike plumes spewing water vapor and ice particles.

 At least four distinct plumes of water ice spew out from the polar region of Saturn's moon Enceladus. 

These plumes originate from a salt-water reservoir, according to a new study published online by the journal Nature.
“We discovered that the plume is stratified in a composition of ice,” said Frank Postberg , an astrophysicist at the University of Heidelberg in Germany. “And the lower you go, the more salt-rich ice grains you find.”
Dr. Postberg and his collaborators analyzed samples of ice particles from the plumes gathered by NASA’s Cassini spacecraft.
The analysis found that salt-rich particles make up more than 99 percent of the solids ejected in Enceladus’s plumes.

The researchers theorize that there are actually two reservoirs connected to the plumes. The first is a salt-water reservoir close to the moon’s surface that is directly feeding the plumes.
But feeding this reservoir, there is likely a larger, deeper salt-water reservoir, Dr. Postberg said.
“We imagine that between the ice and the rocky core there is an ocean at depth, and this is somehow connected to the surface reservoir,” he said.

Enceladus, Saturn’s sixth-largest moon, is icy and just over 300 miles wide. The presence of water makes it one of the few places in the solar system where life could exist.
But even if no life exists there, the discovery makes life beyond Earth seem more plausible, Dr. Postberg said.
“If there is water in such an unexpected place,” he said, “it leaves possibility for the rest of the universe.”

Source The New York Times

Breaking out of the internet filter bubble

Eli Pariser is the former executive director of the liberal activism site MoveOn.org and co-founder of the international political site Avaaz.org. His new book, The Filter Bubble, examines how web personalization is influencing the content we see online. New Scientist caught up with him to talk about the filters he says are shaping our view of the world, and to hear why he thinks it's so important to break out of the bubble.


What is the "filter bubble"?
Increasingly we don't all see the same internet. We see stories and ideas and facts that make it through a membrane of personalised algorithms that surround us on Google, Facebook, Yahoo and many other sites. The filter bubble is the personal unique universe of information that results and that we increasingly live in online.

You stumbled upon the filter bubble when you noticed Facebook friends with differing political views were being phased out of your feed, and people were getting very different results for the same search in Google. What made you think all of this was dangerous, or at least harmful?
I take these Facebook dynamics pretty seriously simply because it's a medium that one in 11 people now use. If at a mass level, people don't hear about ideas that are challenging or only hear about ideas that are likeable - as in, you can easily click the "like" button on them - that has fairly significant consequences. I also still have a foot in the social change campaigning world, and I've seen that a campaign about a woman being stoned to death in Iran doesn't get as many likes as a campaign about something more fuzzy and warm.

Do you think part of the problem is that Facebook is still largely used for entertainment?
It's definitely growing very rapidly as a news source. There was a Pew study that said 30 per cent of people under 30 use social media as a news source. I would be surprised if, in 15 years, browsing the news still looks like seeking out a bunch of particular news agencies and seeing what's on their front pages.

We have long relied on content filters - in the form of publications or TV channels we choose. How is the filter bubble different?
First, yes we've always used filters of some sort, but in this case we don't know we are. We think of the internet as this place where we directly connect with information, but in fact there are these intermediaries, Facebook and Google, that are in the middle in just the same way that editors were in 20th century society. This is invisible; we don't even see or acknowledge that a lot of the time there is filtering at work.
The second issue is that it's passive. We're not choosing a particular editorial viewpoint, and because we're not choosing it, we don't have a sense of on what basis information is being sorted. It's hard to know what's being edited out.
And the final point is that it's a unique universe. It's not like reading a magazine where readers are seeing the same set of articles. Your information environment could differ dramatically from your friends and neighbours and colleagues.

You have suggested that the filter bubble deepens the disconnect between our aspirational selves, who put Citizen Kane high on the movie rental queue, and our actual selves, who really just want to watch The Hangover for the fifth time. Is there a danger inherent in that?
The industry lingo for this is explicit versus revealed preferences. Revealed preferences are what your behaviour suggests you want, and explicit preferences are what you're saying you want. Revealed preferences are in vogue as a way of making decisions for people because now we have the data to do that - to say, you only watched five minutes of Citizen Kane and then turned it off for something else.
But when you take away the possibility of making explicit choices, you're really taking away an enormous amount of control. I choose to do things in my long-term interest even when my short-term behaviour would suggest that it's not what I want to do all the time. I think there's danger in pandering to the short-term self.

What you're promoting has been characterized as a form of "algorithmic paternalism" whereby the algorithm decides what's best for us.
What Facebook does when it selects "like" versus "important" or "recommend" as the name of its button is paternalistic, in the sense that it's making a choice about what kinds of information gets to people on Facebook. It's a very self-serving choice for Facebook, because a medium that only shows you things that people like is a pretty good idea for selling advertising. These systems make value judgments and I think we need to hold them to good values as opposed to merely commercial ones. But, that's not to say that you could take values out of the equation entirely.

Your background is in liberal activism. Do you think the reaction to your ideas as algorithmic paternalism has to do with a perception that you're trying to promote your own political views?
If people think that, they misread me. I'm not suggesting we should go back to a moment where editors impose their values on people whether they want it or not. I'm just saying we can do a better job of drawing information from society at large, if we want to. If Facebook did have an "important" button alongside the "like" button, I have real faith that we would start to promote things that had more social relevance. It's all about how you construct the medium. That's not saying that my ideas of what is important would always trump, it's just that someone's ideas of what is important would rather than nobody's.

You've repeatedly made the case for an "important" button on Facebook, or maybe, as you've put it, an "it was a hard slog at first but in the end it changed my life" button. Do you think really what you're asking Facebook to do is grow up?
Yeah. In its most grandiose rhetoric Facebook wants to be a utility, and if it's a utility, it starts to have more social responsibility. I think Facebook is making this transition, in that it's moved extraordinarily quickly from a feisty insurgent that was cute, fun and new, to being central to lots of people's lives. The generous view is that they're just catching up with the amount of responsibility they've all of a sudden taken on.

Your argument has been called "alarmist", and as I'm sure you're aware, a piece in Slate recently suggested that you're giving these algorithms too much credit. What's your response to such criticism?
There are two things. One is that I'm trying to describe a trend, and I'm trying to make the case that it will continue unless we avert it. I'm not suggesting that it's checkmate already.
Second, there was some great research published in a peer-reviewed internet journal just recently which points out that the effects of personalisation on Google are quite significant: 64 per cent of results are different either in rank or simply different between the users that they tested. That's not a small difference. In fact, in some ways all the results below the first three are mostly irrelevant because people mostly click on the first three results. As Marissa Mayer talked about in an interview, Google actually used to not personalise the first results for precisely this reason. Then, when I called them again, they said, actually we're doing that now. I think that it's moving faster than many people realise.

You offer tips for bursting the filter bubble - deleting cookies, clearing browser history, etc. - but, more broadly, what kind of awareness are you hoping to promote?
I just want people to know that the more you understand how these tools are actually working the more you can use them rather than having them use you.
The other objective here is to highlight the value of the personal data that we're all giving to these companies and to call for more transparency and control when it comes to that data. We're building a whole economy that is premised on the notion that these services are free, but they're really not free. They convert directly into money for these companies, and that should be much more transparent.

Source  New Scientist

Quantum magic trick shows reality is what you make it

Conjurers frequently appear to make balls jump between upturned cups. In quantum systems, where the properties of an object, including its location, can vary depending on how you observe them, such feats should be possible without sleight of hand. Now this startling characteristic has been demonstrated experimentally, using a single photon that exists in three locations at once.

Despite quantum theory's knack for explaining experimental results, some physicists have found its weirdness too much to swallow. Albert Einstein mocked entanglement, a notion at the heart of quantum theory in which the properties of one particle can immediately affect those of another regardless of the distance between them. He argued that some invisible classical physics, known as "hidden-variable theories", must be creating the illusion of what he called "spooky action at a distance".

A series of painstakingly designed experiments has since shown that Einstein was wrong: entanglement is real and no hidden-variable theories can explain its weird effects.
But entanglement is not the only phenomenon separating the quantum from the classical. "There is another shocking fact about quantum reality which is often overlooked," says Aephraim Steinberg of the University of Toronto in Canada.

No absolute reality

In 1967, Simon Kochen and Ernst Specker proved mathematically that even for a single quantum object, where entanglement is not possible, the values that you obtain when you measure its properties depend on the context. So the value of property A, say, depends on whether you chose to measure it with property B, or with property C. In other words, there is no reality independent of the choice of measurement.

It wasn't until 2008, however, that Alexander Klyachko of Bilkent University in Ankara, Turkey, and colleagues devised a feasible test for this prediction. They calculated that if you repeatedly measured five different pairs of properties of a quantum particle that was in a superposition of three states, the results would differ for the quantum system compared with a classical system with hidden variables.
That's because quantum properties are not fixed, but vary depending on the choice of measurements, which skews the statistics. "This was a very clever idea," says Anton Zeilinger of the Institute for Quantum Optics, Quantum Nanophysics and Quantum Information in Vienna, Austria. "The question was how to realise this in an experiment."
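The article doesn't state the inequality explicitly. The test Klyachko and colleagues proposed, now usually called the KCBS inequality, bounds the sum of correlations between five cyclically compatible, ±1-valued observables measured on a three-state system:

```latex
% Non-contextual hidden-variable bound (indices taken cyclically, A_6 = A_1):
\[
  \sum_{i=1}^{5} \langle A_i A_{i+1} \rangle \;\ge\; -3 ,
\]
% whereas quantum mechanics for a qutrit allows the sum to reach
% 5 - 4\sqrt{5} \approx -3.94; sufficiently precise measurement statistics
% can therefore distinguish the two cases, which is what the Vienna experiment did.
```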

Now he, Radek Lapkiewicz and colleagues have realised the idea experimentally. They used photons, each in a superposition in which it simultaneously took three paths. Then they repeated a sequence of five pairs of measurements on various properties of the photons, such as their polarisations, tens of thousands of times.

A beautiful experiment

They found that the resulting statistics could only be explained if the combination of properties that was tested was affecting the value of the property being measured. "There is no sense in assuming that what we do not measure about a system has [an independent] reality," Zeilinger concludes.
Steinberg is impressed: "This is a beautiful experiment." If previous experiments testing entanglement shut the door on hidden variables theories, the latest work seals it tight. "It appears that you can't even conceive of a theory where specific observables would have definite values that are independent of the other things you measure," adds Steinberg.

Kochen, now at Princeton University in New Jersey, is also happy. "Almost a half century after Specker and I proved our theorem, which was based on a [thought] experiment, real experiments now confirm our result," he says.
Niels Bohr, a giant of quantum physics, was a great proponent of the idea that the nature of quantum reality depends on what we choose to measure, a notion that came to be called the Copenhagen interpretation. "This experiment lends more support to the Copenhagen interpretation," says Zeilinger.

Source New Scientist

Red wine's heart health chemical unlocked at last

FANCY receiving the heart-protecting abilities of red wine without having to drink a glass every day? Soon you may be able to, thanks to the synthesis of chemicals derived from resveratrol, the molecule believed to give wine its protective powers. The chemicals have the potential to fight many diseases, including cancer.
Plants make a huge variety of chemicals, called polyphenols, from resveratrol to protect themselves against invaders, particularly fungi. But they only make tiny amounts of each chemical, making it extremely difficult for scientists to isolate and utilise them. The unstable nature of resveratrol has also hindered attempts at building new compounds from the chemical itself.

Scott Snyder at Columbia University in New York and his team have found a way around this: building polyphenols from compounds that resemble, but are subtly different to, resveratrol. These differences make the process much easier. Using these alternative starting materials, they have made dozens of natural polyphenols, including vaticanol C, which is known to kill cancer cells (Nature, DOI: 10.1038/nature10197).
"It's like a recipe book for the whole resveratrol family," says Snyder. "We've opened up a whole casket of nature's goodies."

Source New Scientist

Quantum leap: Magnetic properties of a single proton directly observed for the first time

A major milestone has been achieved in the direct measurement of the magnetic moment of the proton and its anti-particle, bringing the matter-antimatter symmetry into sharper focus.

Researchers at Johannes Gutenberg University Mainz (JGU) and the Helmholtz Institute Mainz (HIM), together with their colleagues from the Max Planck Institute for Nuclear Physics in Heidelberg and the GSI Helmholtz Center for Heavy Ion Research in Darmstadt, have observed spin quantum jumps with a single trapped proton for the first time. By capturing these elusive data, they have overtaken their competitors at Harvard University and now lead the field.

The result is a pioneering step in the endeavor to measure the magnetic properties of the proton directly and with high precision. The measuring principle is based on the observation of a single proton stored in an electromagnetic particle trap. Because an anti-proton could be observed with the same method, the work opens up the prospect of explaining the matter-antimatter imbalance in the universe. It is essential to be able to analyze antimatter in detail if we are to understand why matter and antimatter did not completely cancel each other out after the Big Bang - in other words, if we are to comprehend how the universe actually came into existence.

The proton has an intrinsic angular momentum, or spin, just like other particles. It is like a tiny bar magnet; in this analogy, a spin quantum jump corresponds to a flip of the magnetic poles. However, detecting the proton spin is a major challenge. While the magnetic moments of the electron and its anti-particle, the positron, were already being measured and compared in the 1980s, this has yet to be achieved in the case of the proton. "We have long been aware of the magnetic moment of the proton, but it has thus far not been observed directly for a single proton but only in the case of particle ensembles," explains Stefan Ulmer, a member of the work group headed by Professor Dr Jochen Walz at the Institute of Physics at the new Helmholtz Institute Mainz.

The real problem is that the magnetic moment of the proton is 660 times smaller than that of the electron, which means that it is considerably harder to detect. It has taken the collaborative research team five years to prepare an experiment that would be precise enough to pass the crucial test. "At last we have successfully demonstrated the detection of the spin direction of a single trapped proton," says an exultant Ulmer, a stipendiary of the International Max Planck Research School for Quantum Dynamics in Heidelberg.
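
That factor of roughly 660 can be checked directly from the published values of the two moments; the CODATA figures below are our addition for illustration, not data from the Mainz experiment.

```python
# Quick check of the size difference between the two magnetic moments,
# using CODATA values in units of 1e-26 joules per tesla.
MU_ELECTRON = 928.476   # magnitude of the electron magnetic moment
MU_PROTON = 1.41061     # magnitude of the proton magnetic moment

print(f"electron moment / proton moment ~ {MU_ELECTRON / MU_PROTON:.0f}")  # ~658
```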

This opens the way for direct high-precision measurements of the magnetic moments of both the proton and the anti-proton. The latter is likely to be undertaken at CERN, the European laboratory for particle physics in Geneva, or at FLAIR/GSI in Darmstadt. The magnetic moment of the anti-proton is currently only known to three decimal places. The method used at the laboratories in Mainz aims at a millionfold improvement of the measuring accuracy and should represent a new highly sensitive test of the matter-antimatter symmetry. This first observation of the spin quantum jumps of a single proton is a crucial milestone in the pursuit of this aim.
Matter-antimatter symmetry is one of the pillars of the Standard Model of elementary particle physics.

According to this model, particles and anti-particles should behave identically once inversions of charge, parity and time - referred to as CPT transformation – are applied simultaneously. High-precision comparisons of the fundamental properties of particles and anti-particles make it possible to accurately determine whether this symmetrical behavior actually occurs, and may provide the basis for theories that extend beyond the Standard Model. Assuming that a difference between the magnetic moments of protons and anti-protons could be detected, this would open up a window on this "new physics".

The results obtained by the Mainz research team were published online in Physical Review Letters on Monday, where the article was highlighted as an "Editor's Suggestion" and accompanied by a "Viewpoint" commentary from the American Physical Society (APS).
The research work carried out by the team of Professor Dr Jochen Walz on anti-hydrogen and the magnetic moment of protons forms part of the "Precision Physics, Fundamental Interactions and Structure of Matter" (PRISMA) Cluster of Excellence, which is currently applying for future sponsorship under the German Federal Excellence Initiative.

Source Johannes Gutenberg Universität

New test for elusive fundamental particle - anyon - proposed

In quantum physics there are two classes of fundamental particles. Photons, the quanta of light, are bosons, while the protons and neutrons that make up atomic nuclei are fermions. Bosons and fermions differ in their behavior at a very basic level; this difference is expressed in their quantum statistics. In the 1980s a third species of fundamental particle was postulated, which was dubbed the anyon. In their quantum statistics, anyons interpolate between bosons and fermions.

"They would be a kind of missing link between the two known sorts of fundamental particles," says LMU physicist Dr. Tassilo Keilmann. "According to the laws of quantum physics, anyons must exist – but so far it hasn't been possible to detect them experimentally."

An international team of theorists under Keilmann's leadership has now taken an in-depth look at the question of whether it is possible to create anyons in the context of a realistic experiment. Happily for experimentalists, the answer is yes. The theoreticians have come up with an experimental design in which conventional atoms are trapped in a so-called optical lattice. Based on their calculations, it ought to be possible to manipulate the interactions between atoms in the lattice in such a way as to create and detect anyons. In contrast to the behavior of bosons and fermions, the exotic statistics of anyons should be continuously variable between the endpoints defined by the other two particle types.
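
The "interpolation" can be stated compactly: swapping two identical particles multiplies their joint quantum state by a phase factor exp(iθ), with θ = 0 for bosons, θ = π for fermions, and anything in between for anyons. The snippet below is a generic textbook illustration of that idea, not the specific optical-lattice model proposed by Keilmann's team.

```python
# Generic illustration of exchange statistics: bosons, fermions and anyons
# differ in the phase their wavefunction acquires when two particles are swapped.
import cmath
import math

def exchange_phase(theta):
    """Phase factor picked up when two identical particles are exchanged."""
    return cmath.exp(1j * theta)

for name, theta in [("boson", 0.0),
                    ("anyon (one example)", math.pi / 3),
                    ("fermion", math.pi)]:
    phase = exchange_phase(theta)
    print(f"{name:20s} theta = {theta:5.3f}   phase = {phase.real:+.2f}{phase.imag:+.2f}i")
```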

"These novel quantum particles should be able to hop between sites in the optical lattice," says Keilmann. "More importantly, they and their quantum statistics should be continuously adjustable during the experiment." In that case, it might even be feasible to transmute bosons into anyons and then to turn these into fermions. Such a transition would be equivalent to a novel "statistically induced quantum phase transition", and would allow the anyons to be used for the construction of quantum computers that would be far more efficient than conventional electronic processors. "We have pointed to the first practical route to the detection of anyons," says Keilmann. "Experimentalists should be able to implement the set-up in the course of experiments that are already underway."

Source  EurekaAlert!

Smaller companies hit hardest during emerging market crises

CORVALLIS, Ore. – A study of the reaction by the United States stock market to international financial crises shows that small companies are often hit hardest, and the impact is above and beyond what would be expected given their exposure to global market factors.

This unexpected result suggests the significant impact that investors’ actions can have during emerging market crises. During these crises, investors flee to the perceived safety of big companies and shed stocks of smaller companies, despite comparable levels of international exposure during normal periods.
“The take-away is, just because you invest locally doesn’t mean you are protected from the global market,” said David Berger, an assistant professor of finance at Oregon State University.

Looking at almost 20 years of data covering about eight large emerging market crashes, Berger and H.J. Turtle of Washington State University uncovered this flight-from-risk trend, in which investors flee from small stocks. The results are published in the current issue of the Global Finance Journal.
“We would expect that stock markets in two different, but related economies would crash at the same time,” Berger said. “But we found that during big market crashes, investors adjust their holdings towards bigger corporate stocks that they perceive as being safer, even after controlling for economic exposures.”

Berger said the results of his study are unexpected because past research has focused on the U.S. market as a whole and found little impact during emerging market crises.
“Investors see these big blue chip stocks as the safer ones, and small, R&D intensive stocks for example, as riskier,” Berger said. “So the stock of a smaller domestic company could take a hit because of an international shock.”

Berger studies U.S. equity markets and international stocks, and said the findings from this study have important implications for investors, even those who tend to invest mainly in the domestic market.
“Interestingly, larger stocks often benefited from emerging market crises and exhibited positive returns,” Berger added. “Because investors started dumping smaller stocks in favor of safer, larger ones, the irony is that larger multinational corporations potentially see positive benefits during international crises.”
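
As a rough illustration of the kind of comparison behind these findings (entirely hypothetical numbers, not Berger and Turtle's data), one can contrast the average daily returns of a small-cap and a large-cap portfolio inside and outside crisis windows:

```python
# Hypothetical sketch of the flight-to-size pattern: small caps fall on
# emerging-market crisis days while large caps hold up or even gain.
small_cap = [0.08, -0.35, 0.05, -0.60, 0.10, -0.45, 0.07]   # % per day (made up)
large_cap = [0.05,  0.02, 0.04,  0.10, 0.06,  0.12, 0.05]   # % per day (made up)
in_crisis = [False, True, False, True, False, True, False]  # crisis-day flags

def mean(values):
    return sum(values) / len(values)

for name, returns in [("small-cap", small_cap), ("large-cap", large_cap)]:
    crisis = [r for r, flag in zip(returns, in_crisis) if flag]
    normal = [r for r, flag in zip(returns, in_crisis) if not flag]
    print(f"{name}: crisis avg {mean(crisis):+.2f}% per day, "
          f"normal avg {mean(normal):+.2f}% per day")
```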

Source Oregon State University

Speaking maths

We often think of mathematics as a language, but does our brain process mathematical structures in the same way as it processes language? A new study published in the journal Psychological Science suggests that it does: the process of storing and reusing syntax works across different cognitive domains, of which maths and language are just particular examples. Neuroscientists have previously found evidence suggesting a link between maths and language, but according to the authors of the study "this is the first time we've shown it in a behavioral set-up."

The study — conducted by psychologists Christoph Scheepers, Catherine J. Martin, Andriy Myachykov, Kay Teevan, and Izabela Viskupova of the University of Glasgow, and Patrick Sturt of the University of Edinburgh — made use of a cognitive process called structural priming. Simply put, if you use a certain kind of structure in one sentence, you're likely to use it again in a subsequent sentence. To find out how abstract — and cognitively general — this process is, the experimenters gave native English-speaking students a pencil-and-paper test containing a series of maths problems paired with incomplete sentences.

Each maths problem was structured in one of three ways. With high-attachment syntax, the final operation of the problem applies to a large chunk of the earlier part: for instance, in the problem 80-(5+15)/5, the final division (by 5) applies to the entire preceding addition term (5+15). With low-attachment syntax — say, 80-5+15/5 — the final operation applies only to a smaller chunk (here, just the 15). A third category — baseline problems like 80-5 — implies neither high nor low attachment.
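
A quick check (not part of the study's materials) makes the structural difference concrete: the two expressions give different answers precisely because the final division attaches to a different chunk.

```python
# High attachment: the final division acts on the whole bracketed sum.
high_attachment = 80 - (5 + 15) / 5   # 80 - 20/5 = 76.0
# Low attachment: the final division acts only on the last number.
low_attachment = 80 - 5 + 15 / 5      # 80 - 5 + 3 = 78.0

print(high_attachment, low_attachment)
```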

After each equation, the participant was given a fragment of a sentence that they had to complete. Their completed sentence was analysed to see whether the participants had used high or low attachment syntax. For instance the sentence might start with, "The tourist guide mentioned the bells of the church that ...". A high-attachment ending would be "... that chime hourly" as it refers to the entire phrase "the bells of the church". Low attachment would link only the church to the completed final clause — say, "...that stands on a hill."

The subjects were variously successful in solving the problems. Their choice of high or low attachment sentence completions also revealed complexities — some perhaps related to the preference in English for low-attachment syntax.
Still, in significant numbers, high-attachment maths problems primed high-attachment sentence completions, and low-attachment problems made low-attachment completions likely.

What does all this mean? Our cognitive processes operate "at a very high level of abstraction," the authors write. In other words, our ability to deal with syntax isn't specifically linked to language or maths, but sits at a higher level. It may apply in a similar fashion to all kinds of thinking — in numbers, words, or perhaps even music.

Source +Plus Magazine

Tuesday, June 21, 2011

How dense is a cell?

Combining an ancient principle with new technology, MIT researchers have devised a way to answer that question.

MIT researchers designed this tiny microfluidic chip that can measure the mass and density of single cells.

More than 2,000 years after Archimedes found a way to determine the density of a king’s crown by measuring its mass in two different fluids, MIT scientists have used the same principle to solve an equally vexing puzzle — how to measure the density of a single cell.

“Density is such a fundamental, basic property of everything,” says William Grover, a research associate in MIT’s Department of Biological Engineering. “Every cell in your body has a density, and if you can measure it accurately enough, it opens a whole new window on the biology of that cell.”

The new method, described in the Proceedings of the National Academy of Sciences the week of June 20, involves measuring the buoyant mass of each cell in two fluids of different densities. Just as measuring the crown’s density helped Archimedes determine whether it was made of pure gold, measuring cell density could allow researchers to gain biophysical insight into fundamental cellular processes such as adaptations for survival, and might also be useful for identifying diseased cells, according to the authors.

Grover and recent MIT PhD recipient Andrea Bryan are lead authors of the paper. Both work in the lab of Scott Manalis, a professor of biological engineering, member of the David H. Koch Institute for Integrative Cancer Research and senior author of the paper.

Going with the flow

Measuring the density of living cells is tricky because it requires a tool that can weigh cells in their native fluid environment, to keep them alive, and a method to measure each cell in two different fluids.

How the lab can determine the weight and density of individual cells.

In 2007, Manalis and his students developed the first technique to measure the buoyant mass of single living cells. Their device, known as a suspended microchannel resonator, pumps cells, in fluid, through a microchannel that runs across a tiny silicon cantilever, or diving-board structure. That cantilever vibrates within a vacuum; when a cell flows through the channel, the frequency of the cantilever’s vibration changes. The cell’s buoyant mass can be calculated from the change in frequency.
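
A simplified sketch of that read-out (our approximation with invented numbers, not the device's actual calibration): for a lightly loaded resonator the frequency falls as mass is added, and for small shifts the added buoyant mass is roughly proportional to the fractional frequency change.

```python
# Simplified resonator read-out: f = (1/2pi) * sqrt(k / m_eff), so a small
# added mass delta_m shifts the frequency by delta_f/f ~ -delta_m / (2 m_eff).
def buoyant_mass_from_shift(f0_hz, delta_f_hz, m_eff_pg):
    """Estimate the added buoyant mass (picograms) from a small frequency shift."""
    return -2.0 * m_eff_pg * (delta_f_hz / f0_hz)

# Invented numbers: a 200 kHz cantilever with a 100-nanogram effective mass
# (1e5 pg) dips by 20 Hz as a cell transits the channel.
print(buoyant_mass_from_shift(200_000.0, -20.0, 1e5))  # ~20 pg buoyant mass
```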

To adapt the system to measure density, the researchers needed to flow each cell through the channel twice, each time in a different fluid. A cell’s buoyant mass (its mass as it floats in fluid) depends on its absolute mass and volume, so by measuring two different buoyant masses for a cell, its mass, volume and density can be calculated.
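
The arithmetic behind the two-fluid trick is short enough to write out. The sketch below uses invented numbers and is only meant to show how volume, density and absolute mass fall out of the two buoyant-mass readings.

```python
# Archimedes in two fluids: the buoyant mass is m_b = V * (rho_cell - rho_fluid),
# so two readings in fluids of known, different densities fix V, rho_cell and m.
def cell_properties(mb1, mb2, rho1, rho2):
    """Return (mass, volume, density) from buoyant masses measured in two fluids."""
    volume = (mb1 - mb2) / (rho2 - rho1)
    density = rho1 + mb1 / volume
    mass = density * volume
    return mass, volume, density

# Invented example: buoyant masses in picograms, fluid densities in pg/um^3
# (numerically equal to g/mL).  The cell floats in the denser second fluid,
# so its buoyant mass there is negative.
mass, volume, density = cell_properties(mb1=20.0, mb2=-5.0, rho1=1.00, rho2=1.10)
print(f"volume  ~ {volume:.0f} um^3")
print(f"density ~ {density:.2f} g/mL")
print(f"mass    ~ {mass:.0f} pg")
```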

The new device rapidly exchanges the fluids in the channel without harming the cell, and the entire measurement process for one cell takes as little as five seconds.

David Weitz, professor of physics at Harvard University, says the new technique is a clever way of measuring cell density, and opens up many new avenues of research. “The very interesting thing they show is that density seems to have a more sensitive change than some of the more standard measurements. Why is that? I don’t know. But the fact that I don’t know means it’s interesting,” he says.

Changes in density

The researchers tested their system with several types of cells, including red blood cells and leukemia cells. In the leukemia study, the researchers treated the cells with an antibiotic called staurosporine, then measured their density less than an hour later. Even in that short time, a change in density was already apparent. (The cells grew denser as they started to die.) The treated leukemia cells increased their density by only about 1 percent, a change that would be difficult to detect without a highly sensitive device such as this one. Because of that rapid response and sensitivity, this method could become a good way to screen potential cancer drugs.

“It was really easy, by the density measurement, to identify cells that had responded to the drug. If we had looked at mass alone, or volume alone, we never would have seen that effect,” Bryan says.

The researchers also demonstrated that malaria-infected red blood cells lose density as their infection progresses. This density loss was already known, but this is the first time it has been observed in single cells.

Being able to detect changes in red-blood-cell density could also offer a new way to test athletes who try to cheat by “doping” their blood — that is, by removing their own blood and storing it until just before their competition, when it is transfused back into the bloodstream. This boosts the number of red blood cells, potentially enhancing athletic performance.

Storing blood can alter the blood’s physical characteristics, and if those include changes in density, this technique may be able to detect blood doping, Grover says.

Researchers in Manalis’ lab are now investigating the densities of other types of cells, and are starting to work on measuring single cells as they grow over time — specifically cancer cells, which are characterized by uncontrolled growth.

“Understanding how density of individual cancer cells relates to malignant progression could provide fundamental insights into the underlying cellular processes, as well as lead to clinical strategies for treating patients in situations where molecular markers don’t yet exist or are difficult to measure due to limited sample volumes,” Manalis says.

Other authors on the paper are MIT research scientist Monica Diez-Silva; Subra Suresh, former dean of the MIT School of Engineering; and John Higgins of Massachusetts General Hospital and Harvard Medical School.

Source MIT

What Do We Pay Attention To?

Once we learn the relationship between a cue and its consequences—say, the sound of a bell and the appearance of the white ice cream truck bearing our favorite chocolate cone—do we turn our attention to that bell whenever we hear it? Or do we tuck the information away and marshal our resources to learning other, novel cues—a recorded jingle, or a blue truck?

Psychologists observing “attentional allocation” now agree that the answer is both, and they have arrived at two principles to describe the phenomena. The “predictive” principle says we search for meaningful—important—cues amid the “noise” of our environments. The “uncertainty” principle says we pay most attention to unfamiliar or unpredictable cues, which may yield useful information or surprise us with pleasant or perilous consequences.

Animal studies have supplied evidence for both, and research on humans has shown how predictiveness operates, but not uncertainty. “There was a clear gap in the research,” says Oren Griffiths, a research fellow at the University of New South Wales, in Australia. So he, along with Ameika M. Johnson and Chris J. Mitchell, set out to demonstrate the uncertainty principle in humans.
“We showed that people will pay more attention to a stimulus or a cue if its status as a predictor is unreliable,” he says. The study will be published in an upcoming issue of Psychological Science, a journal of the Association for Psychological Science.

The researchers investigated what is called “negative transfer”—a cognitive process by which a learned association between cue and outcome inhibits any further learning about that cue. We think we know what to expect, so we aren’t paying attention when a different outcome shows up—and we learn that new association more slowly than if the cue or outcome were unpredictable. Negative transfer is a good example of the uncertainty principle at work.

Participants were divided into three groups and administered the “allergist test.” They observed “Mrs. X” receiving a small piece of fruit—say, apple. Using a scroll bar, they predicted her allergic reaction, from none to critical. They then learned that her reaction to the apple was “mild.” Later, when Mrs. X ate the apple, she had a severe reaction, which participants also had to learn to predict.

The critical question was how quickly people learned about the severe reaction. Unsurprisingly, if apple was only ever paired with a severe reaction, learning was fast. But what if apple had previously been reliably paired with a mild allergic reaction? In this case, learning about the new severe reaction was slow. This is termed the “negative transfer” effect. The effect did not occur, however, when the initial relationship between apple and allergy was uncertain — if, say, apple was sometimes safe to eat. Under these circumstances, the later association between apple and a severe allergic reaction was learned rapidly.
Why? “They didn’t know what to expect from the cue, so they had to pay more attention to it,” says Griffiths. “That’s because of the uncertainty principle.”

Source Association for Psychological Science

Putting a new spin on computing

Physicists at the University of Arizona have achieved a breakthrough toward the development of a new breed of computing devices that can process data using less power.

In a recent publication in Physical Review Letters, physicists at the University of Arizona propose a way to translate the elusive magnetic spin of electrons into easily measurable electric signals. The finding is a key step in the development of computing based on spintronics, which doesn't rely on electron charge to digitize information.
Unlike conventional computing devices, which require electric charges to flow along a circuit, spintronics harnesses the magnetic properties of electrons rather than their electric charge to process and store information.


Just like a magnet with a north and a south pole (left), electrons are surrounded by a magnetic field (right). This magnetic momentum, or spin, could be used to store information in more efficient ways.


"Spintronics has the potential to overcome several shortcomings of conventional, charge-based computing. Microprocessors store information only as long as they are powered up, which is the reason computers take time to boot up and lose any data in their working memory if there is a loss of power," said Philippe Jacquod, an associate professor with joint appointments in the College of Optical Sciences and the department of physics at the College of Science, who published the research together with his postdoctoral assistant, Peter Stano.

"In addition, charge-based microprocessors are leaky, meaning they have to run an electric current all the time just to keep the data in their working memory at their right value," Jacquod added. "That's one reason why laptops get hot while they're working."
"Spintronics avoids this because it treats the electrons as tiny magnets that retain the information they store even when the device is powered down. That might save a lot of energy."
To understand the concept of spintronics, it helps to picture each electron as a tiny magnet, Jacquod explained.

"Every electron has a certain mass, a certain charge and a certain magnetic moment, or as we physicists call it, a spin," he said. "The electron is not physically spinning around, but it has a magnetic north pole and a magnetic south pole. Its spin depends on which pole is pointing up."
Current microprocessors digitize information into bits, or "zeroes" and "ones," determined by the absence or presence of electric charges. "Zero" means very few electronic charges are present; "one" means there are many of them. In spintronics, only the orientation of an electron's magnetic spin determines whether it counts as a zero or a one.

"You want as many magnetic units as possible, but you also want to be able to manipulate them to generate, transfer and exchange information, while making them as small as possible" Jacquod said.
Taking advantage of the magnetic moment of electrons for information processing requires converting their magnetic spin into an electric signal. This is commonly achieved using contacts consisting of common iron magnets or with large magnetic fields. However, iron magnets are too crude to work at the nanoscale of tomorrow's microprocessors, while large magnetic fields disturb the very currents they are supposed to measure.

"Controlling the spin of the electrons is very difficult because it responds very weakly to external magnetic fields," Jacquod explained. "In addition, it is very hard to localize magnetic fields. Both make it hard to miniaturize this technology."
"It would be much better if you could read out the spin by making an electric measurement instead of a magnetic measurement, because miniaturized electric circuits are already widely available," he added.
In their research paper, based on theoretical calculations controlled by numerical simulations, Jacquod and Stano propose a protocol using existing technology and requiring only small magnetic fields to measure the spin of electrons.

"We take advantage of a nanoscale structure known as a quantum point contact, which one can think of as the ultimate bottleneck for electrons," Jacquod explained. "As the electrons are flowing through the circuit, their motion through that bottleneck is constrained by quantum mechanics. Placing a small magnetic field around that constriction allows us to measure the spin of the electrons."

"We can read out the spin of the electrons based on how the current through the bottleneck changes as we vary the magnetic field around it. Looking at how the current changes tells us about the spin of the electrons."
"Our experience tells us that our protocol has a very good chance to work in practice because we have done similar calculations of other phenomena," Jacquod said. "That gives us the confidence in the reliability of these results."

In addition to being able to detect and manipulate the magnetic spin of the electrons, the work is a step forward in terms of quantifying it.
"We can measure the average spin of a flow of electrons passing through the bottleneck," Jacquod explained. "The electrons have different spins, but if there is an excess in one direction, for example ten percent more electrons with an upward spin, we can measure that rather precisely."

He said that up until now, researchers could only determine that there was an excess, but were not able to quantify it.
"Once you know how to produce the excess spin and know how to measure it, you could start thinking about doing basic computing tasks," he said, adding that in order to transform this work into applications, some distance has yet to be covered.
"We are hopeful that a fundamental stumbling block will very soon be removed from the spintronics roadmap," Stano added.

Spintronics could be a stepping stone for quantum computing, in which an electron not only encodes zero or one, but many intermediate states simultaneously. To achieve this, however, this research should be extended to deal with electrons one-by-one, a feat that has yet to be accomplished.

Source EurekaAlert!