
Thursday, June 16, 2011

Poor 'gut sense' of numbers contributes to persistent math difficulties

Study reveals math learning disabilities are caused by multiple factors, including poor intuition in gauging numerical quantities

A new study published today in the journal Child Development (e-publication ahead of print) finds that having a poor "gut sense" of numbers can lead to a mathematical learning disability and difficulty in achieving basic math proficiency. This inaccurate number sense is just one cause of math learning disabilities, according to the research led by Dr. Michele Mazzocco of the Kennedy Krieger Institute.

Approximately 6 to 14 percent of school-age children have persistent difficulty with mathematics, despite adequate learning opportunities and age-appropriate achievement in other school subjects. These learning difficulties can have lifelong consequences when it comes to job success and financial decision-making. Heightened interest in the nature and origins of these learning difficulties has led to studies to define mathematical learning disability (MLD), identify its underlying core deficits, and differentiate children with MLD from their mathematically successful counterparts.

The new Kennedy Krieger study showed that children with a confirmed math learning disability have a markedly inaccurate number sense compared to their peers. But Dr. Mazzocco said students without an MLD who were below average in achievement performed on the number sense tasks as well as those considered average. For them, number sense doesn't seem to be the trouble.

"Some children have a remarkably imprecise intuitive sense of numbers, and we believe these children have math learning disability, at least in part, due to deficits in this intuitive type of number sense," said Dr. Mazzocco, Director of the Math Skills Development Project at Kennedy Krieger. "But other students who underperform in math do so despite having an intact number sense. This demonstrates the complexity of determining precisely what influences or interferes with a child's mathematical learning. Difficulty learning math may result from a weak number sense but it may also result from a wide range of other factors such as spatial reasoning or working memory. While we should not assume that all children who struggle with mathematics have a poor number sense, we should consider the possibility."

To gauge their sense of numbers, Dr. Mazzocco and colleagues tested 71 children who were previously enrolled in a 10-year longitudinal study of math achievement. The students, all in the ninth grade, completed two basic number sense tasks. In the number naming task, they were shown arrays of dots and asked to judge how many dots were present, without allowing enough time to actually count them. In the number discrimination task, the children were shown arrays of blue dots and yellow dots and asked to determine whether the blue or yellow array had more dots, again, without time to count them.
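The psychophysics behind tasks like these is commonly modeled with a "Weber fraction": an internal estimate of numerosity whose noise scales with the quantity being judged. As a rough illustration only – the Weber values below are invented and this is not the study's analysis – a simulation of the blue/yellow comparison task might look like:

```python
import random

def noisy_estimate(n, weber):
    """A noisy 'gut sense' of n dots: imprecision scales with magnitude."""
    return random.gauss(n, weber * n)

def discrimination_accuracy(n_blue, n_yellow, weber, trials=10_000):
    """Fraction of trials on which the larger array is correctly chosen."""
    correct = sum(
        (noisy_estimate(n_blue, weber) > noisy_estimate(n_yellow, weber))
        == (n_blue > n_yellow)
        for _ in range(trials)
    )
    return correct / trials

# A sharper number sense (smaller Weber fraction) separates the arrays better.
print(discrimination_accuracy(20, 16, weber=0.15))  # roughly 0.85
print(discrimination_accuracy(20, 16, weber=0.35))  # roughly 0.67
```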

The researchers then compared the performance of four groups of students who, over the 10-year study, consistently showed either an MLD or below-average, average, or above-average math achievement.
Students with MLD performed significantly worse than their peers on both of the number tasks. The study findings suggest that an innate ability to approximate numbers, present in human infants and many other species, contributes to more sophisticated math abilities later in life, while a less accurate ability underlies MLD. Additionally, the findings reveal that a poor number sense is not the only potential source of math difficulties, reinforcing that a 'one size fits all' educational approach may not be the best for helping children who struggle with math.

"A key message for parents and teachers is that children vary in the precision of their intuitive sense of numbers. We might take for granted that every child perceives numbers with roughly comparable precision, but this assumption would be false. Some students may need more practice, or different kinds of practice, to develop this number sense," Dr. Mazzocco said. "At the same time, if a child is struggling with mathematics at school, we should not assume that the child's difficulty is tied to a poor number sense; this is just one possibility."

Source EurekAlert!

Imagination Can Influence Perception

Imagining something with our mind’s eye is a task we engage in frequently, whether we’re daydreaming, conjuring up the face of a childhood friend, or trying to figure out exactly where we might have parked the car. But how can we tell whether our own mental images are accurate or vivid when we have no direct comparison? That is, how do we come to know and judge the contents of our own minds?

Mental imagery is typically thought to be a private phenomenon, which makes it difficult to test people’s metacognition of – or knowledge about – their own mental imagery. But a novel study, to be published in a forthcoming issue of Psychological Science, a journal of the Association for Psychological Science, capitalizes on the visual phenomenon of binocular rivalry as a way to test this kind of metacognition.
The study’s authors, Joel Pearson of the University of New South Wales, Rosanne Rademaker of Maastricht University, and Frank Tong of Vanderbilt University, wanted to find out if people have accurate knowledge about their own imagery performance. Participants were asked to imagine a particular pattern – a green circle with vertical lines or a red circle with horizontal lines – and rate how vivid the circle was for them and the amount of effort it took to imagine the circle.

To test the accuracy of the vividness and effort ratings, participants were presented with a binocular rivalry display so that participants’ left and right eyes were exposed to different patterns. As a result of binocular rivalry, one pattern becomes more dominant, and participants report seeing only this dominant pattern. Pearson and his co-authors theorized that if participants have accurate knowledge about their own mental imagery, then the imagined patterns that participants reported as being most vivid should emerge as the dominant patterns during the binocular rivalry display.
Results of the study confirmed the authors’ suspicions, suggesting that imagined experiences are not merely epiphenomenal – that is, our evaluations of mental imagery bear a direct relationship to our performance on perceptual and cognitive tasks in the real world. The authors used control conditions in order to rule out the influence of other factors, like whether participants might have paid attention to one pattern more than the other or simply chose one pattern more than another. Results from these control conditions indicated that neither attention nor decisional bias could account for the findings from the binocular rivalry condition.
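The logic of the test can be phrased as a simple statistical check: if people's reports about their imagery are accurate, trial-by-trial vividness ratings should predict which pattern goes on to dominate the rivalry display. A toy sketch of that check, with invented numbers purely for illustration:

```python
import numpy as np
from scipy.stats import pointbiserialr

# Hypothetical per-trial data: the vividness rating (1-5) given to the
# imagined pattern, and whether that same pattern later dominated rivalry.
vividness = np.array([4, 2, 5, 1, 3, 5, 2, 4])
dominated = np.array([1, 0, 1, 0, 1, 1, 0, 1])

r, p = pointbiserialr(dominated, vividness)
print(f"point-biserial r = {r:.2f}, p = {p:.3f}")
# A reliably positive correlation is what accurate metacognition predicts.
```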

According to Pearson, “our ability to consciously experience the world around us has been dubbed one of the most amazing yet enigmatic processes under scientific investigation today.” But, he argues, “if we stop for a moment and think about it, our ability to imagine the world around us in the absence of stimulation from that world is perhaps even more amazing.” With mental imagery, we can ‘see’ how things might have been or could be in the future. It is perhaps not surprising, then, that strong mental imagery is associated with creativity.
Mental imagery is also critical when organizing our lives on a day-to-day basis. Being able to imagine objects and scenarios is “one of the fundamental abilities that allows us to successfully think about and plan future events,” says Pearson. Mental imagery “allows us to, in a sense, run through a dress rehearsal in our mind’s eye.”

It’s clear that mental imagery contributes to our everyday functioning. There are some instances, however, when incredibly vivid mental imagery may not be a good thing, such as in the case of visual hallucinations. According to Pearson, future research on our experiences of mental imagery will not only help to reveal the inner workings of this fundamental ability, but it may also help in research and treatment in cases of hallucination, when mental imagery becomes disruptive.

Source Association for Psychological Science

Friday, June 10, 2011

Canine telepathy?

Can dogs read our minds? How do they learn to beg for food or behave badly primarily when we're not looking? According to Monique Udell and her team, from the University of Florida in the US, the way that dogs come to respond to the level of people’s attentiveness tells us something about the ways dogs think and learn about human behavior. Their research, published online in Springer's journal Learning & Behavior, suggests it is down to a combination of specific cues, context and previous experience.

Recent work has identified a remarkable range of human-like social behaviors in the domestic dog, including the ability to respond to human body language, verbal commands, and attentional states. The question is, how do they do it? Do dogs infer humans' mental states by observing their appearance and behavior under various circumstances and then respond accordingly? Or do they learn from experience by responding to environmental cues, the presence or absence of certain stimuli, or even human behavioral cues?

Udell and colleagues' work sheds some light on these questions. Udell and team carried out two experiments comparing the performance of pet domestic dogs, shelter dogs and wolves given the opportunity to beg for food from either an attentive person or a person unable to see the animal. They wanted to know whether the rearing and living environment of the animal (shelter or human home), or the species itself (dog or wolf), had the greater impact on the animal's performance. They showed, for the first time, that wolves, like domestic dogs, are capable of begging successfully for food by approaching the attentive human. This demonstrates that both species – domesticated and non-domesticated – have the capacity to behave in accordance with a human's attentional state. In addition, both wolves and pet dogs were able to rapidly improve their performance with practice.

The authors also found that dogs were not sensitive to all visual cues of a human's attention in the same way. In particular, dogs from a home environment rather than a shelter were more sensitive to stimuli predicting attentive humans. Those dogs with less regular exposure to humans performed badly on the begging task. According to the researchers, "These results suggest that dogs' ability to follow human actions stems from a willingness to accept humans as social companions, combined with conditioning to follow the limbs and actions of humans to acquire reinforcement. The type of attentional cues, the context in which the command is presented, and previous experience are all important."

Source Springer

Monday, June 6, 2011

Attention and Awareness Aren’t The Same

Paying attention to something and being aware of it seem like the same thing – they both involve somehow knowing the thing is there. However, a new study, which will be published in an upcoming issue of Psychological Science, a journal of the Association for Psychological Science, finds that these are actually separate; your brain can pay attention to something without you being aware that it’s there.

“We wanted to ask, can things attract your attention even when you don’t see them at all?” says Po-Jang Hsieh, of Duke-NUS Graduate Medical School in Singapore and MIT. He co-wrote the study with Jaron T. Colas and Nancy Kanwisher of MIT. Usually, when people pay attention to something, they also become aware of it; in fact, many psychologists assume these two concepts are inextricably linked. But a growing body of evidence suggests that’s not the case.

To test this, Hsieh and his colleagues came up with an experiment that used the phenomenon called “visual pop-out.” They set each participant up with a display that showed a different video to each eye. One eye was shown colorful, shifting patterns; awareness went entirely to that eye, because dynamic, high-contrast stimuli dominate conscious perception. The other eye was shown a pattern of shapes that didn’t move. Most were green, but one was red. Then subjects were tested to see what part of the screen their attention had gone to. The researchers found that people’s attention went to that red shape – even though they had no idea they’d seen it at all.

In another experiment, the researchers found that if people were distracted with a demanding task, the red shape didn’t attract attention unconsciously anymore. So people need a little brain power to pay attention to something even if they aren’t aware of it, Hsieh and his colleagues concluded.
Hsieh suggests that this could have evolved as a survival mechanism. It might have been useful for an early human to be able to notice and process something unusual on the savanna without even being aware of it, for example. “We need to be able to direct attention to objects of potential interest even before we have become aware of those objects,” he says.

Source Association for Psychological Science

Friday, June 3, 2011

Moral Responses Change as People Age

Research shows morally laden scenarios get different responses from people of different ages.


Moral responses change as people age, says a new study from the University of Chicago.
Both preschool children and adults distinguish between damage done either intentionally or accidentally when assessing whether a perpetrator has done something wrong, said study author Jean Decety. But adults are much less likely than children to think someone should be punished for damaging an object, for example, especially if the action was accidental.

The study, which combined brain scanning, eye-tracking and behavioral measures to understand brain responses, was published in the journal Cerebral Cortex in an article titled "The Contribution of Emotion and Cognition to Moral Sensitivity: A Neurodevelopmental Study."
"This is the first study to examine brain and behavior relationships in response to moral and non-moral situations from a neurodevelopmental perspective," wrote Decety in the article.
Decety is the Irving B. Harris Professor in Psychology and Psychiatry at the University of Chicago and a leading scholar on affective and social neuroscience. The National Science Foundation's (NSF) Division of Behavioral and Cognitive Sciences funds the research.

"Studying moral judgment across the lifespan in terms of brain and behavior is important," said Lynn Bernstein, a program director for Cognitive Neuroscience at NSF. "It will, for example, contribute to the understanding of disorders such as autism spectrum disorder and psychopathology and to understanding how people at various times in the lifespan respond to others' suffering from physical and psychological pain."
The different responses correlate with the various stages of development, Decety said. As the brain becomes better equipped to make reasoned judgments and integrate an understanding of the mental states of others, moral judgments become more tempered.

Negative emotions alert people to the moral nature of a situation by bringing on discomfort that can precede moral judgment, said Decety. Such an emotional response is stronger in young children, he explained.
Decety and colleagues studied 127 participants, aged 4 to 36, who were shown short video clips while undergoing an fMRI scan. The team also measured changes in the dilation of the people's pupils as they watched the clips.
The participants watched a total of 96 clips that portrayed intentional harm, such as someone being shoved, and accidental harm, such as someone being inadvertently struck by a golf player swinging a club. The clips also showed intentional damage to objects, such as a person kicking a bicycle tire, and accidental damage, such as a person knocking a teapot off the shelf.
Eye tracking revealed that all of the participants, irrespective of their age, paid more attention to people being harmed and to objects being damaged than they did to the perpetrators. Additionally, an analysis of pupil size showed that "pupil dilation was significantly greater for intentional actions than accidental actions, and this difference was constant across age, and correlated with activity in the amygdala and anterior cingulate cortex," Decety said.

The study revealed that the extent of activation in different areas of the brain as participants were exposed to the morally laden videos changed with age. For young children, the amygdala, which is associated with the generation of emotional responses to a social situation, was much more activated than it was in adults.
In contrast, adults' responses were highest in the dorsolateral and ventromedial prefrontal cortex areas of the brain that allow people to reflect on the values linked to outcomes and actions.
"Whereas young children had a tendency to consider all perpetrators malicious, irrespective of intention and targets (people and objects), as participants aged, they perceived the perpetrator as clearly less mean when carrying out an accidental action, and even more so when the target was an object," Decety said.
Joining Decety in writing the paper were Kalina Michalska, a postdoctoral scholar, and Katherine Kinzler, an assistant professor, both in the Department of Psychology.

Source National Science Foundation

Saturday, May 28, 2011

Inside the infant mind

New study shows that babies can perform sophisticated analyses of how the physical world should behave.


Over the past two decades, scientists have shown that babies only a few months old have a solid grasp on basic rules of the physical world. They understand that objects can’t wink in and out of existence, and that objects can’t “teleport” from one spot to another.

Now, an international team of researchers co-led by MIT’s Josh Tenenbaum has found that infants can use that knowledge to form surprisingly sophisticated expectations of how novel situations will unfold.

Furthermore, the scientists developed a computational model of infant cognition that accurately predicts infants’ surprise at events that violate their conception of the physical world.

The model, which simulates a type of intelligence known as pure reasoning, calculates the probability of a particular event, given what it knows about how objects behave. The close correlation between the model’s predictions and the infants’ actual responses to such events suggests that infants reason in a similar way, says Tenenbaum, associate professor of cognitive science and computation at MIT.

“Real intelligence is about finding yourself in situations that you’ve never been in before but that have some abstract principles in common with your experience, and using that abstract knowledge to reason productively in the new situation,” he says.

The study, which appears in the May 27 issue of Science, is the first step in a long-term effort to “reverse-engineer” infant cognition by studying babies at ages 3, 6 and 12 months (and other key stages through the first two years of life) to map out what they know about the physical and social world. That “3-6-12” project is part of a larger Intelligence Initiative at MIT, launched this year with the goal of understanding the nature of intelligence and replicating it in machines.

Tenenbaum and Luca Bonatti of the Universitat Pompeu Fabra in Barcelona are co-senior authors of the Science paper; the co-lead authors are Erno Teglas of Central European University in Hungary and Edward Vul, a former MIT student who worked with Tenenbaum and is now at the University of California at San Diego.

Measuring surprise


Elizabeth Spelke, a professor of psychology at Harvard University, did much of the pioneering work showing that babies understand abstract principles about the physical world. Spelke also demonstrated that infants’ level of surprise can be measured by how long they look at something: The more unexpected the event, the longer they watch.

Tenenbaum and Vul developed a computational model, known as an “ideal-observer model,” to predict how long infants would look at animated scenarios that were more or less consistent with their knowledge of objects’ behavior. The model starts with abstract principles of how objects can behave in general (the same principles that Spelke showed infants have), then runs multiple simulations of how objects could behave in a given situation.

In one example, 12-month-olds were shown four objects — three blue, one red — bouncing around a container. After some time, the scene would be covered, and during that time, one of the objects would exit the container through an opening.

If the scene was blocked very briefly (0.04 seconds), infants would be surprised if one of the objects farthest from the exit had left the container. If the scene was obscured longer (2 seconds), the distance from the exit became less important, and infants were surprised only if the rare (red) object exited first. At intermediate times, both the distance to the exit and the number of objects mattered.

The computational model accurately predicted how long babies would look at the same exit event under a dozen different scenarios, varying the number of objects, their spatial positions and the time delay. This marks the first time that infant cognition has been modeled with such quantitative precision, and suggests that infants reason by mentally simulating possible scenarios and figuring out which outcome is most likely, based on a few physical principles.
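The published model is far richer than anything that fits here, but its core move – run many simulated futures, then read off how probable the observed exit was – can be sketched in a few lines. Everything below (the random-walk noise, the nearest-object exit rule, the starting distances) is an assumption for illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def exit_probabilities(distances, occlusion_time, n_sim=100_000):
    """Estimate, by simulation, the probability that each object exits first.

    Each object's distance to the exit is jittered by noise that grows with
    occlusion time (a random-walk assumption); whichever object ends up
    closest to the exit is taken to be the one that left.
    """
    d = np.asarray(distances, dtype=float)
    noise = rng.normal(0.0, np.sqrt(occlusion_time), size=(n_sim, d.size))
    first_out = np.argmin(d + noise, axis=1)
    return np.bincount(first_out, minlength=d.size) / n_sim

# Three blue objects and one red one; the red object starts farthest away.
distances = [1.0, 2.0, 3.0, 4.0]   # index 3 = the red object

for t in (0.04, 2.0):
    print(f"occlusion {t}s:", np.round(exit_probabilities(distances, t), 3))
# Brief occlusion: distance dominates, so a far object exiting is improbable
# (and hence surprising). Long occlusion: positions decorrelate, and the
# 3-to-1 color ratio is what makes the lone red object's exit improbable.
# Looking time is modeled as tracking this improbability.
```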

“We don’t yet have a unified theory of how cognition works, but we’re starting to make progress on describing core aspects of cognition that previously were only described intuitively. Now we’re describing them mathematically,” Tenenbaum says.

Spelke says the new paper offers a possible explanation for how human cognitive development can be both extremely fast and highly flexible.

“Until now, no theory has appeared to have the right properties to account for both features, because core knowledge systems tend to be limited and inflexible, whereas systems designed to learn almost anything tend to learn slowly,” she says. “The research described in this article is the first, I believe, to suggest how human infants' learning could be both fast and flexible.”

New models of cognition

In addition to performing similar studies with younger infants, Tenenbaum plans to further refine his model by adding other physical principles that babies appear to understand, such as gravity or friction. “We think infants are much smarter, in a sense, than this model is,” he says. “We now need to do more experiments and model a broader range of the existing literature to test exactly what they know.”

He is also developing similar models for infants’ “intuitive psychology,” or understanding of how other people act. Such models of normal infant cognition could help researchers figure out what goes wrong in disorders such as autism. “We have to understand more precisely what the normal case is like in order to understand how it breaks,” Tenenbaum says.

Another avenue of research is the origin of infants’ ability to understand how the world works. In a paper published in Science in March, Tenenbaum and several colleagues outlined a possible mechanism, also based on probabilistic inference, for learning abstract principles from very early sensory input. “It’s very speculative, but we understand roughly the mathematical machinery that could explain how this sort of knowledge could be learned surprisingly early from fairly minimal experience,” he says.

Source MIT

Thursday, May 26, 2011

Mind-reading scan identifies simple thoughts

A new brain-imaging system that can identify a subject's simple thoughts may lead to clearer diagnoses for Alzheimer's disease or schizophrenia – as well as possibly paving the way for reading people's minds.
Michael Greicius at Stanford University in California and colleagues used functional magnetic resonance imaging (fMRI) to identify patterns of brain activity associated with different mental states.
He asked 14 volunteers to do one of four tasks: sing songs silently to themselves; recall the events of the day; count backwards in threes; or simply relax.

Participants were given a 10-minute period during which they had to do this. For the rest of that time they were free to think about whatever they liked. The participants' brains were scanned for the entire 10 minutes, and the patterns of connectivity associated with each task were teased out by computer algorithms that compared scans from several volunteers doing the same task.
This differs from previous experiments, in which the subjects were required to perform mental activities at specific times and the scans were then compared with brain activity when they were at rest. Greicius reasons his method encourages "natural" brain activity more like that which occurs in normal thought.

Read my mind

Once the algorithms had established the brain activity necessary for each task, Greicius asked 10 new volunteers to think in turn about each of the four tasks. Without knowing beforehand what each volunteer was thinking, the system successfully identified 85 per cent of the tasks they were engaged in. "Out of 40 scans of the new people, we could identify 34 mental states correctly," he says.
It also correctly concluded that subjects were not engaged in any of the four original activities when it analysed scans of people thinking about moving around their homes.
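New Scientist does not spell out the algorithms, but one standard approach consistent with the description is to average each task's connectivity pattern into a reference template, then match new scans by correlation, rejecting anything that matches no template well. A minimal sketch, in which the template scheme, threshold and data layout are all assumptions:

```python
import numpy as np

def connectivity_pattern(timeseries):
    """Flatten the upper triangle of a region-by-region correlation matrix.

    timeseries: array of shape (regions, timepoints) from one scan.
    """
    c = np.corrcoef(timeseries)
    upper = np.triu_indices_from(c, k=1)
    return c[upper]

def classify(pattern, templates, threshold=0.3):
    """Return the best-matching task, or None if nothing matches well.

    templates: dict mapping task name -> averaged training pattern.
    The None case mirrors the study's rejection of scans from people
    thinking about none of the four trained tasks.
    """
    scores = {task: np.corrcoef(pattern, ref)[0, 1]
              for task, ref in templates.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

In this scheme, "training" would simply consist of averaging the connectivity_pattern outputs across the scans of volunteers performing the same task.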

The findings suggest that patterns for thousands of mental states might serve as a reference bank against which people's thoughts could be compared, potentially revealing what someone is thinking or how they are feeling. "In some dystopian future, you might imagine reference patterns for 10,000 mental states, but that would be a woeful application of this technology," says Greicius.
The idea of the system being used by security services or the justice system to interrogate prisoners or suspects is far-fetched, Greicius says. Thousands of reference patterns would be needed, he points out, and even these might not be enough to tell if someone is lying, for example.

Diagnostic test

Instead, he hopes it could be used in Alzheimer's and schizophrenia to help identify faults in the connections needed to perform everyday tasks. He also says the system might be useful for gauging emotional reactions to film clips and adverts.
How much detail such brain scans would show remains to be seen. "There would be a pretty coarse limit on what you could distinguish," says John Duncan of the UK Medical Research Council's Cognitive and Brain Sciences Centre in Cambridge. "The distinctiveness of an activity predicts the distinctiveness of brain activity associated with it," he says.

Kay Brodersen of the Swiss Federal Institute of Technology in Zurich, Switzerland, agrees. "You might be able to tell if someone is singing to themselves," he says. "But try to distinguish a Lady Gaga song from another and you would probably fail."
"The most important potential for this is in the clinic where classifying and diagnosing and treating psychiatric disease could be really important," says Brodersen. "At the moment, psychiatry is often just trial and error."

Source New Scientist

Wednesday, May 25, 2011

Geometry skills are innate, Amazon tribe study suggests

Tests given to an Amazonian tribe called the Mundurucu suggest that our intuitions about geometry are innate.

Researchers examined how the Mundurucu think about lines, points and angles, comparing the results with equivalent tests on French and US schoolchildren.
The Mundurucu showed comparable understanding, and even outperformed the students on tasks that asked about forms on spherical surfaces.
The study is published in Proceedings of the National Academy of Sciences.
The basic tenets of geometry as most people know them were laid out first by the Greek mathematician Euclid about 2,300 years ago.


This "Euclidean geometry" includes familiar propositions such as the fact that a line can connect two points, that the angles of a triangle always add up to the same total, or that two parallel lines never cross.
The ideas are profoundly ingrained in formal education, but what remains a matter of debate is whether the capacity, or intuition, for geometry is present in all peoples regardless of their language or level of education.
To that end, Pierre Pica of the National Centre for Scientific Research in France and his colleagues studied an Amazon tribe known as the Mundurucu to investigate their intuitions about geometry.

"Mundurucu is a language with only approximative numbers," Dr Pica told BBC News.
"You don't have a lot of geometrical terms like square or triangle or anything like that, and no way of saying two lines are parallel... it looks like the language does not have this concept."
Dr Pica and his colleagues engaged 22 adults and eight children among the Mundurucu in a series of dialogues – echoing a classic Socratic dialogue on geometry – presenting situations that built up to questions on geometry. Rather than abstract points on a plane, the team suggested two villages on a notional map, for instance.
'Playing tricks'
 
Similar questions were posed to 30 adults and children in France and the US, some as young as five years old.
The Mundurucu people's responses to the questions were roughly as accurate as those of the French and US respondents; they seemed to have an intuition about lines and geometric shapes without formal education or even the relevant words.

"The question is to what extent knowledge - in this case, of geometry - is dependent on language," Dr Pica explained.
"There doesn't seem to be a causal relation: you have a knowledge of geometry and it's not because it's expressed in the language."

Most surprisingly, the Mundurucu actually outperformed their western counterparts when the tests were moved from a flat surface to that of a sphere (the Mundurucu were presented with a calabash to demonstrate).

For example, on a sphere, seemingly parallel lines can in fact cross – a proposition which the Mundurucu guessed far more reliably than the French or US respondents.
This "non-Euclidean" example, where the formal rules of geometry as most people learn them do not hold true, seems to suggest that our geometry education may actually mislead us, Dr Pica said.

"The education of Euclidean geometry is so strong that we take for granted it's going to apply everywhere, including spherical surfaces. Our education plays a trick with us, leading us to believe things which are not correct."

Source BBC

Saturday, May 21, 2011

'The Potential to Modify the Course of Parkinson's Disease'

'One Mind for Research' forum unites researchers and advocates to promote a national commitment to neuroscience research.


Washington, DC – Georgetown University Medical Center's Howard J. Federoff, MD, PhD, joins preeminent scientists from academia, government, and industry along with advocates, at the "One Mind for Research Forum," a three-day conference designed to dramatically advance the understanding and treatment of brain disorders. By uniting a broad coalition, conference organizers will endorse a bold new 10-year research agenda for the field of neuroscience.

During the forum, May 23rd through May 25th in Boston, leading scientists will share the latest research on debilitating neurodegenerative and psychiatric diseases such as post-traumatic stress disorder, Alzheimer's disease, autism, addiction and depression. Federoff, executive vice president of GUMC and a recognized neuroscientist, will present "The Potential to Modify the Course of Parkinson's Disease" on Tuesday, May 24th during a session beginning at 1:45pm.

"Parkinson's disease is currently treated symptomatically but we are compelled to modify natural history," says Federoff. Among issues he will discuss are when disease begins, the role of neuroinflammation, the impact of genetics and genomics and promising preclinical therapeutic strategies.

Former U.S. Congressman Patrick Kennedy and business executive and philanthropist Garen Staglin will co-chair the forum. "We will launch an ambitious plan for research, uniting our nation's best and brightest in a way not seen since President John F. Kennedy announced the goal of landing a man on the moon 50 years ago," says Kennedy, calling this effort a "moonshot to the mind."

"In 1961, President Kennedy charted an unprecedented scientific goal for this country: to send a man to the moon," says Federoff. "Like that successfully ambitious trek, our exceptional challenge to transform neuroscience research will, as President Kennedy said, 'serve to organize and measure the best of our energies and skills.' I congratulate Congressman Kennedy for leveraging his strengths in this call for a nationwide commitment to address critical need."

Source EurekAlert!

Tuesday, May 17, 2011

Guilt counts

Guilt, so some people have suggested, is what makes us nice. When we do someone a favour or choose not to exploit someone vulnerable, we do it because we fear the guilt we'd feel otherwise. If this is the case, then guilt is what holds together human society, as society is, to a surprising extent, based on cooperation and trust. It would be interesting to know what neural processes generate guilt, not just to answer fundamental psychological questions, but also to understand disorders that are associated with an excess or lack of guilt, such as anxiety and psychopathy.


A team of neuroscientists, psychologists and economists have this month produced some new results in this area, using a model from psychological game theory. "One idea is that most people cooperate because it feels good to do it. And there is some brain imaging data that shows activity in reward-related regions of the brain when people are cooperating," says psychologist Luke Chang, one of the scientists behind the research. "But there is a whole other world of motivation to do good because you don't want to feel bad. That is the idea behind guilt aversion."
To test this idea the researchers used a commonly studied mathematical game called the trust game. It involves two people, an investor and a trustee. The investor gives a certain amount of money to the trustee. The amount is then multiplied by some factor, usually 3 or 4, and the trustee then has the chance to return some, all, or none of the money to the investor.
How can we predict how people will behave in this game? The traditional approach to game theory assumes that players are rational and completely selfish. Under this assumption trustees keep all the money, abusing the investors' trust. Realising this, investors don't hand over any cash in the first place, so no interaction takes place at all.
In this new study the researchers replaced the selfishness assumption with one of guilt aversion. They assume that the trustee simultaneously tries to maximise their financial pay-off and minimise the guilt they expect to feel if they let their partner down. Guilt is defined as the failure to meet the partner's expectations. So if the investor expects to get an amount $E_1$ back from the trustee and the trustee returns $S$, then guilt can be quantified as $E_1-S$ if $S<E_1$ and $0$ otherwise.
However, in a realistic situation, the trustee does not know exactly how much money the investor expects; instead they will base their decision on the amount of money, $E_2$, that they believe the investor wants back. So a better quantification of guilt is $E_2-S$ if $S<E_2$ and $0$ otherwise.
Now suppose the investor hands over a certain amount of money, which gets multiplied to give a total $T$. After returning $S$, the trustee is left with a pay-off of $T-S$. The idea that the trustee tries to simultaneously maximise pay-off and minimise anticipated guilt is captured by a utility function $U$, which measures how happy the trustee feels with the interaction. It is defined as the trustee’s financial gain minus a term measuring the guilt they feel:
  \[ U=(T-S)-\theta (E_2-S). \]
Suppose the total $T=40$ and the trustee believes that the investor expects to receive an amount $E_2=20$. The plot shows $U$ versus $S$ for $\theta =1.5$ (blue) and $\theta =0.5$ (pink). For $\theta =1.5$ there is a maximum at $S=20$ and for $\theta =0.5$ there is a maximum at $S=0$.

The number $\theta $, which is positive, measures just how sensitive the trustee is to feeling guilt: the higher $\theta $, the heavier the guilt factor weighs in. It is different for every trustee-investor pair, reflecting the fact that some trustees are more conscientious than others and that the amount of guilt we feel when abusing someone’s trust depends on who we’re dealing with.
The trustee tries to maximise the utility function, that is, he or she looks for the value of $S$ which gives the largest value of $U$. This value gives the best possible trade-off between pay-off and guilt.
If you plot $U$ versus $S$ for fixed values of $\theta $, you get two different types of graphs. For $\theta < 1$ the graph has a maximum at $S=0$. Thus, the model we've just constructed predicts that trustees less sensitive to guilt maximise their utility function by returning no money at all. For $\theta >1$ the graph has a maximum at $S=E_2$, predicting that guilt averse people will return the amount they believe the investor wants back.
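Using the worked numbers from the plot above ($T=40$, $E_2=20$), the two regimes can be checked directly; this is just a brute-force evaluation of the utility function, not the researchers' estimation procedure:

```python
import numpy as np

def utility(S, T, E2, theta):
    """Trustee's utility: financial pay-off minus anticipated guilt."""
    guilt = np.maximum(E2 - S, 0.0)   # guilt only if returning less than expected
    return (T - S) - theta * guilt

T, E2 = 40, 20
S = np.arange(0, T + 1)               # candidate amounts to return

for theta in (1.5, 0.5):
    best = S[np.argmax(utility(S, T, E2, theta))]
    print(f"theta = {theta}: utility is maximised at S = {best}")
# theta = 1.5: maximised at S = 20 (guilt-averse: meet expectations)
# theta = 0.5: maximised at S = 0  (keep everything)
```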
The researchers tested their model on 30 volunteers who played repeated rounds of the trust game. On the whole, the volunteers behaved as the model predicts: they typically returned close to the amount they believed the investor expected back. After playing the games, they also reported that they would have felt more guilty had they returned less. This, so the researchers say, suggests that anticipated guilt really does play a role in decisions to cooperate.
The researchers also used fMRI scans to monitor the brain activity of trustees during games. They found that participants who chose to honour trust by returning close to the amount that was expected of them showed increased activity in one network of brain components, while those that abused trust showed increased activity in another network. These two networks compete, but on the whole the network connected to honouring trust wins out.

An fMRI image showing areas of the brain associated with the competing motivations of minimising guilt (yellow) and maximising financial reward (blue) when participants decide whether or not they want to honor an investment partner's trust. (Image courtesy Luke Chang/UA psychology department.) 

The results chime with existing evidence. Previous studies have linked the brain network associated to an abuse of trust to neural processes that compute value and reward. The network associated to honouring trust has been linked in previous studies to feelings of guilt, anger, social distress and empathy for others. "These studies support our conjecture that the prospect of not fulfilling the expectations of another can result in a negative affective state, which in turn ultimately motivates cooperative behaviour," say the researchers in their paper. "Perhaps the function of this frequently observed network is to track deviations from expectations and bias actions to maintain adherence to the expectation such as a moral rule or social norm."
So the study suggests that when you do someone a favour without expecting anything in return, it's because the relevant parts of your brain signal that falling short of the other person's expectations would lead to strong feelings of guilt. There's a caveat though. Rather than guilt — feeling bad for not meeting expectations — the driving emotion may be empathy — the ability to "feel" the other person's disappointment when their expectations aren't met. It's a subtle difference and more work is needed to prise apart these two emotions.
The game theoretical model used in this study may seem surprisingly simple, but it's got an edge over traditional models used in economics, which have often been criticised for failing to take account of human nature. They usually assume that "players" are rational, self-interested and undeterred by complex social emotions such as guilt. This is where the inter-disciplinary approach involving both psychologists and economists can lead to useful results. "In the end, it's a two-way exchange," says economist Martin Dufwenberg, who co-authored the study. "Economists take inspiration from the richer concept of [humanity] usually considered in psychology, but at the same time they have something to offer psychologists through their analytical tools."
The study, Triangulating the neural, psychological, and economic bases of guilt aversion by Luke Chang, Alec Smith, Martin Dufwenberg and Alan G. Sanfey, has appeared in the journal Neuron.

Source +Plus Magazine

Control Desk for the Neural Switchboard

Treating anxiety no longer requires years of pills or psychotherapy. At least, not for a certain set of bioengineered mice.

Optogenetics, tested in rodents, can control electrical activity in a few carefully selected neurons, and may hold new insights into our disorders. (Image: Stanford)

In a study recently published in the journal Nature, a team of neuroscientists turned these high-strung prey into bold explorers with the flip of a switch.
The group, led by Dr. Karl Deisseroth, a psychiatrist and researcher at Stanford, employed an emerging technology called optogenetics to control electrical activity in a few carefully selected neurons.
First they engineered these neurons to be sensitive to light. Then, using implanted optical fibers, they flashed blue light on a specific neural pathway in the amygdala, a brain region involved in processing emotions.
And the mice, which had been keeping to the sides of their enclosure, scampered freely across an open space.

While such tools are very far from being used or even tested in humans, scientists say optogenetics research is exciting because it gives them extraordinary control over specific brain circuits — and with it, new insights into an array of disorders, among them anxiety and Parkinson’s disease.

Mice are very different from humans, as Dr. Deisseroth (pronounced DICE-er-roth) acknowledged. But he added that because “the mammalian brain has striking commonalities across species,” the findings might lead to a better understanding of the neural mechanisms of human anxiety.

David Barlow, founder of the Center for Anxiety and Related Disorders at Boston University, cautions against pushing the analogy too far: “I am sure the investigators would agree that these complex syndromes can’t be reduced to the firing of a single small neural circuit without considering other important brain circuits, including those involved in thinking and appraisal.”

But a deeper insight is suggested by a follow-up experiment in which Dr. Deisseroth’s team directed their light beam just a little more broadly, activating more pathways in the amygdala. This erased the effect entirely, leaving the mouse as skittish as ever.

This implies that current drug treatments, which are far less specific and often cause side effects, could also in part be working against themselves.
David Anderson, a professor of biology at the California Institute of Technology who also does research using optogenetics, compares the drugs’ effects to a sloppy oil change. If you dump a gallon of oil over your car’s engine, some of it will dribble into the right place, but a lot of it will end up doing more harm than good.
“Psychiatric disorders are probably not due only to chemical imbalances in the brain,” Dr. Anderson said. “It’s more than just a giant bag of serotonin or dopamine whose concentrations sometimes are too low or too high. Rather, they likely involve disorders of specific circuits within specific brain regions.”

So optogenetics, which can focus on individual circuits with exceptional precision, may hold promise for psychiatric treatment. But Dr. Deisseroth and others caution that it will be years before these tools are used on humans, if ever.

For one, the procedure involves bioengineering that most people would think twice about. First, biologists identify an “opsin,” a protein found in photosensitive organisms like pond scum that allows them to detect light. Next, they fish out the opsin’s gene and insert it into a neuron within the brain, using viruses that have been engineered to be harmless —“disposable molecular syringes,” as Dr. Anderson calls them.
There, the opsin DNA becomes part of the cell’s genetic material, and the resulting opsin proteins conduct electric currents — the language of the brain — when they are exposed to light. (Some opsins, like channelrhodopsin, which responds to blue light, activate neurons; others, like halorhodopsin, activated by yellow light, silence them.)

Finally, researchers delicately thread thin optical fibers down through layers of nervous tissue and deliver light to just the right spot.
Thanks to optogenetics, neuroscientists can go beyond observing correlations between the activity of neurons and an animal’s behavior; by turning particular neurons on or off at will, they can prove that those neurons actually govern the behavior.

“Sometimes before I give talks, people will ask me about my ‘imaging’ tools,” said Dr. Deisseroth, 39, a practicing psychiatrist whose dissatisfaction with current treatments led him to form a research laboratory in 2004 to develop and apply optogenetic technology.

“I say: ‘Interestingly, it’s the complete opposite of imaging, which is observational. We’re not using light to observe events. We’re sending light in to cause events.’ ” 

In early experiments, scientists showed that they could make worms stop wiggling and drive mice around in manic circles as if by remote control.
Now that the technique has earned its stripes, laboratories around the world are using it to better understand how the nervous system works, and to study problems including chronic pain, Parkinson’s disease and retinal degeneration.

Some of the insights gained from these experiments in the lab are already inching their way to the clinic.
Dr. Amit Etkin, a Stanford psychiatrist and researcher who collaborates with Dr. Deisseroth, is trying to translate the findings about anxiety in rodents to improve human therapy with existing tools. Using transcranial magnetic stimulation, a technique that is far less specific than optogenetics but has the advantage of being noninvasive, Dr. Etkin seeks to activate the human analog of the amygdala circuitry that reduced anxiety in Dr. Deisseroth’s mice.

Dr. Jaimie Henderson, their colleague in the neurosurgery department, has treated more than 600 Parkinson’s patients using a standard procedure called deep brain stimulation. The treatment, which requires implanting metal electrodes in a brain region called the subthalamic nucleus, improves coordination and fine motor control. But it also causes side effects, like involuntary muscle contractions and dizziness, perhaps because turning on electrodes deep inside the brain also activates extraneous circuits.

“If we could find a way to just activate the circuits that provide therapeutic benefit without the ones that cause side effects, that would obviously be very helpful,” Dr. Henderson said.
Moreover, as with any invasive brain surgery, implanting electrodes carries the risk of infection and life-threatening hemorrhage. What if you could stimulate the brain’s surface instead? A new theory of how deep brain stimulation affects Parkinson’s symptoms, based on optogenetics work in rodents, suggests that this might succeed.

Dr. Henderson has recently begun clinical tests in human patients, and hopes that this approach may also treat other problems associated with Parkinson’s, like speech disorders.
In the building next door, Krishna V. Shenoy, a neuroscience researcher, is bringing optogenetics to work on primates. Extending the success of a similar effort by an M.I.T. group led by Robert Desimone and Edward S. Boyden, he recently inserted opsins into the brains of rhesus monkeys. They experienced no ill effects from the viruses or the optical fibers, and the team was able to control selected neurons using light.

Dr. Shenoy, who is part of an international effort financed by the Defense Advanced Research Projects Agency, says optogenetics has promise for new devices that could eventually help treat traumatic brain injury and equip wounded veterans with neural prostheses.

“Current systems can move a prosthetic arm to a cup, but without an artificial sense of touch it’s very difficult to pick it up without either dropping or crushing it,” he said. “By feeding information from sensors on the prosthetic fingertips directly back into the brain using optogenetics, one could in principle provide a high-fidelity artificial sense of touch.”

Some researchers are already imagining how optogenetics-based treatments could be used directly on people if the biomedical challenge of safely delivering novel genes to patients can be overcome.
Dr. Boyden, who participated in the early development of optogenetics, runs a laboratory dedicated to creating and disseminating ever more powerful tools. He pointed out that light, unlike drugs and electrodes, can switch neurons off — or as he put it, “shut an entire circuit down.” And shutting down overexcitable circuits is just what you’d want to do to an epileptic brain.

“If you want to turn off a brain circuit and the alternative is surgical removal of a brain region, optical fiber implants might seem preferable,” Dr. Boyden said. Several labs are working on the problem, even if actual applications still seem far off.

For Dr. Deisseroth, who treats patients with autism and depression, optogenetics offers a more immediate promise: easing the stigma faced by people with mental illness, whose appearance of physical health can cause incomprehension from family members, friends and doctors.
“Just understanding for us, as a society, that someone who has anxiety has a known or knowable circuitry difference is incredibly valuable,” he said.

Source The New York Times

Saturday, May 7, 2011

More than 20 percent of atheist scientists are spiritual

Rice University study: Scientists think spirituality is congruent with scientific discovery, religion is not.

More than 20 percent of atheist scientists are spiritual, according to new research from Rice University. Though the general public marries spirituality and religion, the study found that spirituality is a separate idea – one that more closely aligns with scientific discovery – for "spiritual atheist" scientists.
The research will be published in the June issue of Sociology of Religion.

Through in-depth interviews with 275 natural and social scientists at elite universities, the Rice researchers found that 72 of the scientists said they have a spirituality that is consistent with science, although they are not formally religious.
"Our results show that scientists hold religion and spirituality as being qualitatively different kinds of constructs," said Elaine Howard Ecklund, assistant professor of sociology at Rice and lead author of the study. "These spiritual atheist scientists are seeking a core sense of truth through spirituality -- one that is generated by and consistent with the work they do as scientists."

For example, these scientists see both science and spirituality as "meaning-making without faith" and as an individual quest for meaning that can never be final. According to the research, they find spirituality congruent with science and separate from religion, because of that quest; where spirituality is open to a scientific journey, religion requires buying into an absolute "absence of empirical evidence."

"There's spirituality among even the most secular scientists," Ecklund said. "Spirituality pervades both the religious and atheist thought. It's not an either/or. This challenges the idea that scientists, and other groups we typically deem as secular, are devoid of those big 'Why am I here?' questions. They too have these basic human questions and a desire to find meaning."

Ecklund co-authored the study with Elizabeth Long, professor and chair of the Department of Sociology at Rice. In their analysis of the 275 interviews, they discovered that the terms scientists most used to describe religion included "organized, communal, unified and collective." The terms used to describe spirituality included "individual, personal and personally constructed." All of the respondents who used collective or individual terms attributed the collective terms to religion and the individual terms to spirituality.


"While the data indicate that spirituality is mainly an individual pursuit for academic scientists, it is not individualistic in the classic sense of making them more focused on themselves," said Ecklund, director of the Religion and Public Life Program at Rice. "In their sense of things, being spiritual motivates them to provide help for others, and it redirects the ways in which they think about and do their work as scientists."

Ecklund and Long noted that the spiritual scientists saw boundaries between themselves and their nonspiritual colleagues because their spirituality facilitated engagement with the world around them. Such engagement, according to the spiritual scientists, generated a different approach to research and teaching: While nonspiritual colleagues might focus on their own research at the expense of student interaction, spiritual scientists' sense of spirituality provides nonnegotiable reasons for making sure that they help struggling students succeed.

Source EurekAlert!

Thursday, May 5, 2011

Caltech researchers pinpoint brain region that influences gambling decisions

PASADENA, Calif.—When a group of gamblers gather around a roulette table, individual players are likely to have different reasons for betting on certain numbers. Some may play a "lucky" number that has given them positive results in the past—a strategy called reinforcement learning. Others may check out the recent history of winning colors or numbers to try and decipher a pattern. Betting on the belief that a certain outcome is "due" based on past events is called the gambler's fallacy.

Recently, researchers at the California Institute of Technology (Caltech) and Ireland's Trinity College Dublin hedged their bets—and came out winners—when they proposed that a certain region of the brain drives these different types of decision-making behaviors.
"Through our study, we found a difference in activity in a region of the brain called the dorsal striatum depending on whether people were choosing according to reinforcement learning or the gambler's fallacy," says John O'Doherty, professor of psychology at Caltech and adjunct professor of psychology at Trinity College Dublin. "This finding suggests that the dorsal striatum is particularly involved in driving reinforcement-learning behaviors."

In addition, the work, described in the April 27 issue of The Journal of Neuroscience, suggests that people who choose based on the gambler's fallacy may be doing so because at the time of the choice they are not taking into account what they had previously learned or observed.
The focus of O'Doherty's research is to understand the brain mechanisms that underlie the decisions people make in the real world. To study this kind of decision making in the lab, his team gets study participants to play simple games in which they make choices that result in winning or losing small amounts of money. To make these games interesting, the researchers often present simple "gambling" scenarios, such as playing slot machines or roulette.

"For this particular study, we were interested in what part of the brain might play a role in controlling these strategies that drive behavior," says O'Doherty, who conducted the study along with postdoctoral scholar Ryan Jessup.
The team asked 31 participants to complete four roulette-wheel tasks while lying in an MRI scanner. For each round, the volunteers were asked to choose a color on a tricolored spinning wheel. If the wheel stopped on their color, they won two euros. (The study was done at Trinity College Dublin.) For each round, participants were charged a half euro, regardless of the outcome. All the while, the researchers studied the brain activity of participants, with a focus on how they appeared to choose colors.

"The dorsal striatum was more active in people who, at the time of choice, chose in accordance with reinforcement-learning principles compared to when they chose according to the gambler's fallacy," says Jessup. "This suggests that the same region involved in learning is also used at the time of choice."
The two strategies are contradictory: under reinforcement learning, one is more likely to choose an option that has won a lot recently and less likely to choose one that has lost a lot recently. The gambler's fallacy predicts the opposite.
"The task was novel because making decisions based on either reinforcement learning or the gambler's fallacy is not rational in this particular task, and yet most of the subjects acted irrationally," explains Jessup. "Only 8 out of 31 subjects were generally rational, meaning they simply chose the color that covered the largest area in that round."

"It is very important to try to understand how interactions between different brain areas result in different types of decision-making behavior," says O'Doherty. "Once we understand the basic mechanisms in healthy people, we can start to look at how these systems go wrong in patients who suffer from different diseases, such as psychiatric disorders or addiction, that impact their decision-making capabilities."
###
The study, "Human Dorsal Striatal Activity during Choice Discriminates Reinforcement Learning Behavior from the Gambler's Fallacy," was supported by a Science Foundation Ireland grant.
Written by Katie Neith

Source EurekaAlert!

Children conceived in winter have a greater risk of autism, study finds

An examination of the birth records of the more than 7 million children born in the state of California during the 1990s and early 2000s has found a clear link between the month in which a child is conceived and the risk of that child later receiving a diagnosis of autism.

Among the children included in the study, those conceived during winter had a significantly greater risk of autism. The risk of having a child with an autism spectrum disorder grew progressively from fall and winter into early spring, with children conceived in March having a 16 percent greater risk of a later autism diagnosis than those conceived in July.
The researchers said the finding suggests that environmental factors, such as exposure to seasonal viruses like influenza, might play a role in the increased autism risk among children conceived during the winter.

The study is published online today in the journal Epidemiology.
"The study finding was pronounced even after adjusting for factors such as maternal education, race /ethnicity, and the child's year of conception," said lead study author Ousseny Zerbo, a fifth-year doctoral student in the graduate group in epidemiology in the Department of Public Health Sciences in the UC Davis School of Medicine.
For the study, the researchers obtained the more than 7.2 million records for children born from January 1990 through December 2002 from the state of California Office of Vital Statistics. The researchers excluded some records because children did not survive to an age by which they typically would have been diagnosed with autism.

Other records were excluded because they were incomplete. For example, records that did not include adequate information from which to calculate the month of conception were excluded. The month of conception was calculated as the last date mothers reported having a menstrual period plus two weeks.
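As a rough sketch of that rule (the field layout of the birth records is not described in the release, so the date handling below is a hypothetical illustration), the conception month can be derived from the reported last menstrual period like this:

```python
from datetime import date, timedelta

def conception_month(last_menstrual_period: date) -> int:
    """Estimate the month of conception as the reported last
    menstrual period plus two weeks, per the rule described above."""
    return (last_menstrual_period + timedelta(days=14)).month

# Example: an LMP reported as 20 February 1995 yields a March conception.
print(conception_month(date(1995, 2, 20)))  # 3
```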
The total number of records finally included in the study was approximately 6.6 million, or 91 percent of all births recorded during the study period. The children were followed until their sixth birthdays to determine whether they would develop autism.
The researchers identified which children were diagnosed with autism by matching birth records with those of children receiving services from the state Department of Developmental Services (DDS). Approximately 19,000 cases of autism were identified, with autism defined as "full syndrome" autism in the DDS records.
The study found that the overall risk of having a child with autism increased month by month through the winter and up to March. For the study, winter comprised December, January and February. Each month was compared with July: incidence was 8 percent higher for December conceptions, rising to 16 percent higher for March.
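The month-versus-July comparison is a simple relative increase against the July baseline. A minimal sketch, with invented incidence rates (the release reports only the resulting percentages):

```python
def percent_increase(month_rate: float, july_rate: float) -> float:
    """Relative increase in autism incidence versus the July baseline."""
    return 100 * (month_rate - july_rate) / july_rate

JULY = 0.00268  # hypothetical baseline incidence, for illustration only
print(round(percent_increase(0.00290, JULY)))  # ~8, December-like
print(round(percent_increase(0.00311, JULY)))  # ~16, March-like
```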

Earlier studies of autism risk and month of conception or birth have produced mixed results. Some, such as studies conducted in Israel, Sweden and Denmark, found an increased risk of autism for children born in March. Studies conducted in Canada, Japan, the United States and the United Kingdom identified an increased risk of autism for children born in the spring. However, those studies were far smaller, most including a few hundred cases of autism, compared with the large number of cases in California.
"Studies of seasonal variations can provide clues about some of the underlying causes of autism. Based on this study, it may be fruitful to pursue exposures that show similar seasonal patterns, such infections and mild nutritional deficiencies," said Irva Hertz-Picciotto, chief of the division of environmental and occupational health in the Department of Public Health Sciences in the UC Davis School of Medicine.
"However, it might be that conception is not the time of susceptibility. Rather, it could for instance be an exposure in the third month of pregnancy, or the second trimester, that is harmful," said Hertz-Picciotto, who also is researcher affiliated with the UC Davis MIND Institute. "If so, we might need to look for exposures occurring a few months after conceptions that are at higher risk. For example, allergens that peak in the spring and early summer."

The researchers said the study is a starting point for further inquiry. They noted that other seasonal occurrences include potential exposures to pesticides, such as those used in the home to control insects in rainy or warm months, and those used in agricultural applications.
###
Other study authors include Ana-Maria Iosif, Lora Delwiche and Cheryl Walker, all of UC Davis.
The study is funded by grants from the National Institute of Environmental Health Sciences of the National Institutes of Health.
At the UC Davis MIND Institute, world-renowned scientists engage in research to find improved treatments as well as the causes and cures for autism, attention-deficit/hyperactivity disorder, fragile X syndrome, Tourette syndrome and other neurodevelopmental disorders. Advances in neuroscience, molecular biology, genetics, pharmacology and behavioral sciences are making inroads into a better understanding of brain function. The UC Davis MIND Institute draws from these and other disciplines to conduct collaborative, multidisciplinary research. For more information, visit mindinstitute.ucdavis.edu.

Source EurekaAlert!

Tuesday, May 3, 2011

Amygdala detects spontaneity in human behaviour

Study of jazz musicians reveals how the brain processes improvisations 
 
A pianist plays an unknown melody freely, without reading from a musical score. How does the listener's brain recognise whether the melody is improvised or memorised? Researchers at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig studied jazz musicians to discover which brain areas are especially sensitive to features of improvised behaviour. Among these are the amygdala and a network of areas known to be involved in the mental simulation of behaviour. Furthermore, the ability to correctly recognise improvisations was related not only to a listener's musical experience but also to the ability to take someone else's perspective.
 
The ability to discriminate spontaneous from planned (rehearsed) behaviour is important when inferring others' intentions in everyday situations, for example, when judging whether someone's behaviour is calculated and intended to deceive. To examine such basic mechanisms of social abilities in controlled settings, Peter Keller, head of the research group “Music Cognition and Action” at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, and his research associate Annerose Engel study musical settings ranging from solos and duos to large ensembles. In a recent study, they measured the brain activity of jazz musicians while the musicians listened to short excerpts of improvised melodies or rehearsed versions of the same melodies and judged whether each melody was improvised.
 
Figure: Amygdala activation during listening to improvised melodies, compared to listening to imitated melodies.
 
“Musical improvisations are more variable in their loudness and timing, most likely due to irregularities in force control associated with fluctuations in certainty about upcoming actions—i.e., when spontaneously deciding what to play—during improvised musical performance”, explains Peter Keller. The amygdala, part of the limbic system, was more active while listening to real improvisations and was sensitive to the fluctuations of loudness and timing in the melodies. Thus, the amygdala seems to be involved in the detection of spontaneous behaviour, which is consistent with studies showing an involvement of this structure when stimuli are difficult to predict, novel or ambiguous in their meaning.
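As a toy illustration of that variability cue (not the study's actual acoustic analysis, which the release does not detail; the note events below are invented), improvised playing should show larger spreads in inter-onset timing and loudness than a rehearsed rendition:

```python
import statistics

def variability(onsets, loudness):
    """Return (timing spread, loudness spread): the standard deviations
    of inter-onset intervals (seconds) and note loudness (arbitrary units)."""
    intervals = [b - a for a, b in zip(onsets, onsets[1:])]
    return statistics.stdev(intervals), statistics.stdev(loudness)

# Hypothetical note events for the same melody played two ways.
rehearsed = variability([0.0, 0.5, 1.0, 1.5, 2.0], [60, 61, 60, 62, 61])
improvised = variability([0.0, 0.4, 1.1, 1.5, 2.3], [55, 68, 60, 72, 58])

print(rehearsed)   # small deviations: steady timing and dynamics
print(improvised)  # larger deviations: the fluctuations listeners pick up on
```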

Figure: Increased activation in the frontal operculum (left), the pre-supplementary motor area (middle) and the anterior insula (right) when listening to melodies judged as being improvised.
 
If a melody was judged as being improvised, regardless of whether this was in fact the case, stronger activity was found in a network known to be involved in the covert simulation of actions. This network comprised the frontal operculum, the pre-supplementary motor area and the anterior insula.
“We know today that during perception of actions, similar brain areas are active as during the execution of the same action”, explains Annerose Engel. “This supports the evaluation of other people’s behaviour in order to form expectations and predict future behaviour.” If a melody is perceived as being more difficult to predict, for example, because of fluctuations in loudness and timing, stronger activity is most likely to be elicited in this specialised network.
A further observation may be related to this: not only musical experience but also the capacity to take someone else's perspective played an important role in judging spontaneity. Jazz musicians with more expertise in playing the piano and performing with other musicians, as well as those who more often described themselves as trying to put themselves in someone else's shoes, were best at recognising whether a melody was improvised.

Source Max-Planck-Gesellschaft

Kathryn Schulz: On being wrong

Most of us will do anything to avoid being wrong. But what if we're wrong about that? "Wrongologist" Kathryn Schulz makes a compelling case for not just admitting but embracing our fallibility.

Sunday, May 1, 2011

Bruce Schneier: The security mirage

The feeling of security and the reality of security don't always match, says computer-security expert Bruce Schneier. At TEDxPSU, he explains why we spend billions addressing news story risks, like the "security theater" now playing at your local airport, while neglecting more probable risks -- and how we can break this pattern.

Saturday, April 30, 2011

Deliberate inaction judged as immoral as wrong action

DOING nothing to stop a crime can be seen by others to be as bad as committing the crime directly.
So says Peter DeScioli at Brandeis University in Waltham, Massachusetts, who presented students with a number of scenarios that ended in a fatality. An actor whose hesitancy to act led to the death was seen as less immoral than one whose direct actions caused it. But the students judged deliberate inaction that led to the fatality as equally immoral as direct action that caused the death (Evolution and Human Behavior, DOI: 10.1016/j.evolhumbehav.2011.01.003).
DeScioli thinks the results show we see inaction as less immoral only because we typically lack proof that it was deliberate.

Courtesy New Scientist