Showing posts with label Philosophy. Show all posts

Wednesday, November 23, 2011

EVERYTHING FROM NOTHING - the empty set axiom and construction of numbers in mathematics

THE mathematicians' version of nothing is the empty set. This is a collection that doesn't actually contain anything, such as my own collection of vintage Rolls-Royces. The empty set may seem a bit feeble, but appearances deceive; it provides a vital building block for the whole of mathematics.


It all started in the late 1800s. While most mathematicians were busy adding a nice piece of furniture, a new room, even an entire storey to the growing mathematical edifice, a group of worrywarts started to fret about the cellar. Innovations like non-Euclidean geometry and Fourier analysis were all very well - but were the underpinnings sound? To prove they were, a basic idea needed sorting out that no one really understood. Numbers.

Sure, everyone knew how to do sums. Using numbers wasn't the problem. The big question was what they were. You can show someone two sheep, two coins, two albatrosses, two galaxies. But can you show them two?

The symbol "2"? That's a notation, not the number itself. Many cultures use a different symbol. The word "two"? No, for the same reason: in other languages it might be deux or zwei or futatsu. For thousands of years humans had been using numbers to great effect; suddenly a few deep thinkers realised no one had a clue what they were.

An answer emerged from two different lines of thought: mathematical logic, and Fourier analysis, in which a complex waveform describing a function is represented as a combination of simple sine waves. These two areas converged on one idea. Sets.

A set is a collection of mathematical objects - numbers, shapes, functions, networks, whatever. It is defined by listing or characterising its members. "The set with members 2, 4, 6, 8" and "the set of even integers between 1 and 9" both define the same set, which can be written as {2, 4, 6, 8}.

Around 1880 the mathematician Georg Cantor developed an extensive theory of sets. He had been trying to sort out some technical issues in Fourier analysis related to discontinuities - places where the waveform makes sudden jumps. His answer involved the structure of the set of discontinuities. It wasn't the individual discontinuities that mattered, it was the whole class of discontinuities.

How many dwarfs?
One thing led to another. Cantor devised a way to count how many members a set has, by matching it in a one-to-one fashion with a standard set. Suppose, for example, the set is {Doc, Grumpy, Happy, Sleepy, Bashful, Sneezy, Dopey}. To count them we chant "1, 2, 3..." while working along the list: Doc (1), Grumpy (2), Happy (3), Sleepy (4), Bashful (5), Sneezy (6), Dopey (7). Right: seven dwarfs. We can do the same with the days of the week: Monday (1), Tuesday (2), Wednesday (3), Thursday (4), Friday (5), Saturday (6), Sunday (7).

Another mathematician of the time, Gottlob Frege, picked up on Cantor's ideas and thought they could solve the big philosophical problem of numbers. The way to define them, he believed, was through the deceptively simple process of counting.

What do we count? A collection of things - a set. How do we count it? By matching the things in the set with a standard set of known size. The next step was simple but devastating: throw away the numbers. You could use the dwarfs to count the days of the week. Just set up the correspondence: Monday (Doc), Tuesday (Grumpy)... Sunday (Dopey). There are Dopey days in the week. It's a perfectly reasonable alternative number system. It doesn't (yet) tell us what a number is, but it gives a way to define "same number". The number of days equals the number of dwarfs, not because both are seven, but because you can match days to dwarfs.
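Frege's move is easy to sketch in code. Here is a minimal illustration (my own, in Python, not anything from the article): two collections have the "same number" exactly when their members can be paired off with nothing left over on either side, and the pairing itself never mentions a numeral.

```python
dwarfs = ["Doc", "Grumpy", "Happy", "Sleepy", "Bashful", "Sneezy", "Dopey"]
days = ["Monday", "Tuesday", "Wednesday", "Thursday",
        "Friday", "Saturday", "Sunday"]

def match(a, b):
    """Pair off members of a with members of b.
    Returns the pairing, or None if either side has leftovers."""
    a, b = list(a), list(b)
    pairing = []
    while a and b:
        pairing.append((a.pop(0), b.pop(0)))
    return pairing if not a and not b else None

# Days pair off exactly with dwarfs, so the two collections have the
# same number - established without ever saying what that number is.
assert match(days, dwarfs) is not None       # Monday (Doc), Tuesday (Grumpy)...
assert match(days, ["Doc", "Grumpy"]) is None  # five days left unmatched
```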

What, then, is a number? Mathematical logicians realised that to define the number 2, you need to construct a standard set which intuitively has two members. To define 3, use a standard set with three members, and so on. But which standard sets to use? They have to be unique, and their structure should correspond to the process of counting. This was where the empty set came in and solved the whole thing by itself.

Zero is a number, the basis of our entire number system (see "Zero's convoluted history"). So it ought to count the members of a set. Which set? Well, it has to be a set with no members. These aren't hard to think of: "the set of all honest bankers", perhaps, or "the set of all mice weighing 20 tonnes". There is also a mathematical set with no members: the empty set. It is unique, because all empty sets have exactly the same members: none. Its symbol, introduced in 1939 by a group of mathematicians that went by the pseudonym Nicolas Bourbaki, is ∅. Set theory needs ∅ for the same reason that arithmetic needs 0: things are a lot simpler if you include it. In fact, we can define the number 0 as the empty set.

What about the number 1? Intuitively, we need a set with exactly one member. Something unique. Well, the empty set is unique. So we define 1 to be the set whose only member is the empty set: in symbols, {∅}. This is not the same as the empty set, because it has one member, whereas the empty set has none. Agreed, that member happens to be the empty set, but there is one of it. Think of a set as a paper bag containing its members. The empty set is an empty paper bag. The set whose only member is the empty set is a paper bag containing an empty paper bag. Which is different: it's got a bag in it (see diagram).

The key step is to define the number 2. We need a uniquely defined set with two members. So why not use the only two sets we've mentioned so far: ∅ and {∅}? We therefore define 2 to be the set {∅, {∅}}. Which, thanks to our definitions, is the same as {0, 1}.

Now a pattern emerges. Define 3 as {0, 1, 2}, a set with three members, all of them already defined. Then 4 is {0, 1, 2, 3}, 5 is {0, 1, 2, 3, 4}, and so on. Everything traces back to the empty set: for instance, 3 is {∅, {∅}, {∅, {∅}}} and 4 is {∅, {∅}, {∅, {∅}}, {∅, {∅}, {∅, {∅}}}}. You don't want to see what the number of dwarfs looks like.
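This construction, in which each number is the set of all previously defined numbers, is easy to mechanise. A minimal sketch in Python (mine, not from the article), using frozenset because ordinary Python sets cannot contain other mutable sets:

```python
def numeral(n):
    """Build the set-theoretic numeral for n: the set {0, 1, ..., n-1}."""
    s = frozenset()          # 0 is the empty set
    for _ in range(n):
        s = s | {s}          # successor step: the next number is n ∪ {n}
    return s

zero, one, two = numeral(0), numeral(1), numeral(2)
assert zero == frozenset()             # 0 = ∅
assert one == frozenset({zero})        # 1 = {∅}
assert two == frozenset({zero, one})   # 2 = {∅, {∅}}
# Each numeral intuitively has the right number of members:
assert all(len(numeral(k)) == k for k in range(8))
```

Printing `numeral(7)`, the number of dwarfs, confirms the article's warning: it is a thicket of nested braces.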

The building materials here are abstractions: the empty set and the act of forming a set by listing its members. But the way these sets relate to each other leads to a well-defined construction for the number system, in which each number is a specific set that intuitively has that number of members. The story doesn't stop there. Once you've defined the positive whole numbers, similar set-theoretic trickery defines negative numbers, fractions, real numbers (infinite decimals), complex numbers... all the way to the latest fancy mathematical concept in quantum theory or whatever.

So now you know the dreadful secret of mathematics: it's all based on nothing.

Ian Stewart is emeritus professor of mathematics at the University of Warwick, UK

Courtesy of New Scientist

Wednesday, June 22, 2011

Quantum magic trick shows reality is what you make it

Conjurers frequently appear to make balls jump between upturned cups. In quantum systems, where the properties of an object, including its location, can vary depending on how you observe them, such feats should be possible without sleight of hand. Now this startling characteristic has been demonstrated experimentally, using a single photon that exists in three locations at once.

Despite quantum theory's knack for explaining experimental results, some physicists have found its weirdness too much to swallow. Albert Einstein mocked entanglement, a notion at the heart of quantum theory in which the properties of one particle can immediately affect those of another regardless of the distance between them. He argued that some invisible classical physics, known as "hidden-variable theories", must be creating the illusion of what he called "spooky action at a distance".

A series of painstakingly designed experiments has since shown that Einstein was wrong: entanglement is real and no hidden-variable theories can explain its weird effects.

But entanglement is not the only phenomenon separating the quantum from the classical. "There is another shocking fact about quantum reality which is often overlooked," says Aephraim Steinberg of the University of Toronto in Canada.

No absolute reality

In 1967, Simon Kochen and Ernst Specker proved mathematically that even for a single quantum object, where entanglement is not possible, the values that you obtain when you measure its properties depend on the context. So the value of property A, say, depends on whether you chose to measure it with property B, or with property C. In other words, there is no reality independent of the choice of measurement.

It wasn't until 2008, however, that Alexander Klyachko of Bilkent University in Ankara, Turkey, and colleagues devised a feasible test for this prediction. They calculated that if you repeatedly measured five different pairs of properties of a quantum particle that was in a superposition of three states, the results would differ for the quantum system compared with a classical system with hidden variables.
That's because quantum properties are not fixed, but vary depending on the choice of measurements, which skews the statistics. "This was a very clever idea," says Anton Zeilinger of the Institute for Quantum Optics, Quantum Nanophysics and Quantum Information in Vienna, Austria. "The question was how to realise this in an experiment."
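The classical side of that prediction can be checked by brute force. In a hidden-variable world, each of the five properties in Klyachko's scheme would secretly hold a definite value of +1 or -1 before measurement, with the properties tested in adjacent pairs around a cycle. A short script (an illustration of the general idea, not the actual experimental analysis) shows the sum of the five pair correlations can then never drop below -3, which is the bound the quantum statistics violate:

```python
from itertools import product

# Five two-valued properties A0..A4, measured in adjacent pairs
# (A0,A1), (A1,A2), ..., (A4,A0). If each Ai has a pre-existing
# hidden value of +1 or -1, enumerate all 2**5 assignments and
# find the smallest possible sum of the five pair products:
bound = min(
    sum(a[i] * a[(i + 1) % 5] for i in range(5))
    for a in product([-1, 1], repeat=5)
)
assert bound == -3   # no hidden-variable assignment gets below -3
```

Because the cycle has an odd number of links, at most four of the five products can be -1 at once, which is why the classical floor sits at -3.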

Now he, Radek Lapkiewicz and colleagues have realised the idea experimentally. They used photons, each prepared in a superposition of three paths taken simultaneously. They then repeated a sequence of five pairs of measurements of various properties of the photons, such as their polarisations, tens of thousands of times.

A beautiful experiment

They found that the resulting statistics could only be explained if the combination of properties that was tested was affecting the value of the property being measured. "There is no sense in assuming that what we do not measure about a system has [an independent] reality," Zeilinger concludes.

Steinberg is impressed: "This is a beautiful experiment." If previous experiments testing entanglement shut the door on hidden-variable theories, the latest work seals it tight. "It appears that you can't even conceive of a theory where specific observables would have definite values that are independent of the other things you measure," adds Steinberg.

Kochen, now at Princeton University in New Jersey, is also happy. "Almost a half century after Specker and I proved our theorem, which was based on a [thought] experiment, real experiments now confirm our result," he says.

Niels Bohr, a giant of quantum physics, was a great proponent of the idea that the nature of quantum reality depends on what we choose to measure, a notion that came to be called the Copenhagen interpretation. "This experiment lends more support to the Copenhagen interpretation," says Zeilinger.

Source New Scientist

Saturday, June 18, 2011

New Search Engine Looks for Uplifting News

Semantic search technology aimed at a positive slant advances with a system that can spot optimism in news articles.

Good news, if you haven't noticed, has always been a rare commodity. We all have our ways of coping, but the media's pessimistic proclivity presented a serious problem for Jurriaan Kamp, editor of the San Francisco-based Ode magazine—a must-read for "intelligent optimists"—who was in dire need of an editorial pick-me-up, last year in particular. His bright idea: an algorithm that can sense the tone of daily news and separate the uplifting stories from the Debbie Downers.

Talk about a ripe moment: A Pew survey last month found the number of Americans hearing "mostly bad" news about the economy and other issues is at its highest since the downturn in 2008. That is unlikely to change anytime soon: global obesity rates are climbing, the Middle East is unstable, and campaign 2012 vitriol is only just beginning to spew in the U.S. The problem is not trivial. A handful of studies, including one published in the Clinical Psychology Review in 2010, have linked positive thinking to better health. Another from the Journal of Economic Psychology the year prior found upbeat people can even make more money.

Kamp, realizing he could be a purveyor of optimism in an untapped market, partnered with Federated Media Publishing, a San Francisco–based company that leads the field in search semantics. The aim was to create an automated system for Ode to sort and aggregate news from the world's 60 largest news sources based on solutions, not problems. The system, released last week in public beta testing online and to be formally introduced in the next few months, runs thousands of directives to find a story's context. "It's kind of like playing 20 questions, building an ontology to find either optimism or pessimism," says Tim Musgrove, the chief scientist who designed the broader system, which has been dubbed a "slant engine". Think of the word "hydrogen" paired with "energy" rather than "bomb."

Web semantics developers in recent years have trained computers to classify news topics based on intuitive keywords and recognizable names. But the slant engine dives deeper into algorithmic programming. It starts by classifying a story's topic as either a world problem (disease and poverty, for example) or a social good (health care and education). Then it looks for revealing phrases. "Efforts against" in a story, referring to a world problem, would signal something good. "Setbacks to" a social good, likely bad. Thousands of questions later, every story is assigned a score between 0 and 1—above 0.95 fast-tracks the story to Ode’s Web interface, called OdeWire. Below that, a score higher than 0.6 is reviewed by a human. The system is trained to collect only themes that are "meaningfully optimistic," meaning it throws away flash-in-the-pan stories about things like sports or celebrities.
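The routing described above boils down to a simple threshold rule. Here is a hypothetical sketch (the function name and labels are mine; only the 0-to-1 score and the 0.95 and 0.6 cut-offs come from the article):

```python
def route_story(score):
    """Route a story by its 0-to-1 optimism score, as described in the text."""
    if score > 0.95:
        return "fast-track"      # straight to the OdeWire interface
    elif score > 0.6:
        return "human review"    # an editor takes a look
    return "discard"             # not optimistic enough to keep

assert route_story(0.97) == "fast-track"
assert route_story(0.80) == "human review"
assert route_story(0.30) == "discard"
```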

No computer is perfect, of course, and like IBM's Watson, which held its own on Jeopardy! earlier this year, Ode’s slant engine continues to improve with time—and with each mistake. During one test, the system labeled a story about the FBI being "asleep at the switch" as positive, perhaps thinking it addressed sleep deprivation. Nor is it ideologically neutral: the U.S. losing ground to China is not such bad news to, well, China.

The goal is not to be naive, either—drowning out the gloom to focus on rainbows and unicorns. "Ignoring reality is not what this is about," Kamp says. "It's looking at the same reality, just looking at a different angle." High unemployment is a problem that seems all bad, he says, but if you approach it from a side door—perhaps profiling people who have founded new business and learned new skills or industries that have benefited from the downturn—it turns into a story that can inspire others, and maybe even lower the jobless rate faster.

Slant identification may have a big future. Researchers say it could eventually specialize Web content for pockets of consumers and make ads more engaging. Its potential to track attitudes in writing could even help settle the age-old argument over how liberal or conservative the mainstream media actually is. Gone, too, could be the journalism axiom of "if it bleeds, it leads". If Ode has its way, solution-based news could become the hot new thing for the overwhelmed and dispirited. Imagine a new newsroom mantra: if it succeeds, it leads.

Thursday, June 16, 2011

A field guide to bullshit

How do people defend their beliefs in bizarre conspiracy theories or the power of crystals? Philosopher Stephen Law has tips for spotting their strategies.

You describe your new book, Believing Bullshit, as a guide to avoid getting sucked into "intellectual black holes". What are they?
Intellectual black holes are belief systems that draw people in and hold them captive so they become willing slaves of claptrap. Belief in homeopathy, psychic powers, alien abductions - these are examples of intellectual black holes. As you approach them, you need to be on your guard because if you get sucked in, it can be extremely difficult to think your way clear again.

But isn't one person's claptrap another's truth?
There's a belief system about water to which we all sign up: it freezes at 0 °C and boils at 100 °C. We are powerfully wedded to this but that doesn't make it an intellectual black hole. That's because these beliefs are genuinely reasonable. Beliefs at the core of intellectual black holes, however, aren't reasonable. They merely appear so to those trapped inside.

You identify some strategies people use to defend black hole beliefs. Tell me about one of them - "playing the mystery card"?
This involves appealing to mystery to get out of intellectual hot water when someone is, say, propounding paranormal beliefs. They might say something like: "Ah, but this is beyond the ability of science and reason to decide. You, Mr Clever Dick Scientist, are guilty of scientism, of assuming science can answer every question." This is often followed by that quote from Shakespeare's Hamlet: "There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy". When you hear that, alarm bells should go off.

But even scientists admit that they can't explain everything.
There probably are questions that science cannot answer. But what some people do to protect their beliefs is to draw a veil across reality and say, "you scientists can go up to the veil and apply your empirical methods this far, but no further". Behind the veil they will put angels, aliens, psychic powers, God, ghosts and so on. Then they insist that there are special people who can see - if only dimly - through this veil. But the fact is that many of the claims made about things behind this veil have empirically observable consequences and that makes them scientifically testable.

How can science test these mysteries?
Psychologist Christopher French at Goldsmiths, University of London, ran an experiment into the effects of crystals to explore claims that holding "real" crystals from a New Age shop while meditating has a powerful effect on the psyche, more so than just holding "fake" ones. But French found no difference in participants using real and fake crystals. This was good evidence that the effect people report is down to the power of suggestion, not the crystals.
Of course, this study provoked comments such as: "Not being able to prove the existence of something does not disprove its existence. Much is yet to be discovered." This is just a smokescreen. But because the mantra "it's-beyond-the-ability-of-science-to-establish..." gets repeated so often, it is effective at lulling people back to sleep - even if they have been stung into entertaining a doubt for a moment or two.

Do you think mystery has a place in science?
Some things may be beyond our understanding, and sometimes it's reasonable to appeal to mystery. If you have excellent evidence that water boils at 100 °C, but on one occasion it appeared it didn't, it's reasonable to attribute that to some mysterious, unknown factor. It's also reasonable, when we have a theory that works but we don't know how it works, to say that this is currently a mystery. But the more we rely on mystery to get us out of intellectual trouble, or the more we use it as a carpet under which to sweep inconvenient facts, the more vulnerable we are to deceit, by others and by ourselves.

In your book you also talk about the "going nuclear" tactic. What is this?
When someone is cornered in an argument, they may decide to get sceptical about reason. They might say: "Ah, but reason is just another faith position." I call this "going nuclear" because it lays waste to every position. It brings every belief - that milk can make you fly or that George Bush was Elvis Presley in disguise - down to the same level so they all appear equally "reasonable" or "unreasonable". Of course, you can be sure that the moment this person has left the room, they will continue to use reason to support their case if they can, and will even trust their life to reason: trusting that the brakes on their car will work or that a particular drug is going to cure them.

Isn't there a grain of truth in this approach?
There is a classic philosophical puzzle about how to justify reason: to do so, it seems you have to use reason. So the justification is circular - a bit like trusting a second-hand car salesman because he says he's trustworthy. But the person who "goes nuclear" isn't genuinely sceptical about reason. They are just raising a philosophical problem as a smokescreen, to give them time to leave with their head held high, saying: "So my belief is as reasonable as yours." That's intellectually dishonest.

You say we should also be aware of the "but it fits" strategy. Why?
Any theory, no matter how ludicrous, can be squared with the evidence, given enough ingenuity. Every last anomaly can be explained away. There is a popular myth about science that if you can make your theory consistent with the evidence, then that shows it is confirmed by that evidence - as confirmed as any other theory. Lots of dodgy belief systems exploit this myth. Young Earth creationism - the view that the whole universe is less than 10,000 years old - is a good example. Given enough shoehorning and reinterpretation, you can make whatever turns up "fit" what the Bible says.

What about when people claim that they "just know" something is right?
Suppose I look out the window and say: "Hey, there's Ted." You say: "It can't be Ted because he's on holiday." I reply: "Look, I just know it's Ted." Here it might be reasonable for you to take my word for it.
But "I just know" also gets used when I present someone with good evidence that there are, say, no auras, angels or flying saucers, and they respond: "Look, I just know there are." In such cases, claiming to "just know" is usually very unreasonable indeed.

What else should we watch out for?
You should be suspicious when people pile up anecdotes in favour of their pet theory, or when they practise the art of pseudo-profundity - uttering seemingly profound statements which are in fact trite or nonsensical. They often mix in references to scientific theory to sound authoritative.

Why does it matter if we believe absurd things?
Often it causes no great harm. But the dangers are obvious when people join extreme cults or use alternative medicines to treat serious diseases. I am particularly concerned by psychological manipulation. For charlatans, the difficulty with using reason to persuade is that it's a double-edged sword: your opponent may show you are the one who is mistaken. That's a risk many so-called "educators" aren't prepared to take. If you try using reason to persuade adults the Earth's core is made of cheese, you will struggle. But take a group of kids, apply isolation, control, repetition, emotional manipulation - the tools of brainwashing - and there's a good chance many will eventually accept what you say.

Profile

Stephen Law is senior lecturer in philosophy at Heythrop College, University of London, and editor of the Royal Institute of Philosophy journal, Think. His latest book is Believing Bullshit: How not to get sucked into an intellectual black hole.

Source New Scientist

How we come to know our bodies as our own

By taking advantage of a "body swap" illusion, researchers have captured the brain regions involved in one of the most fundamental aspects of self-awareness: how we recognize our bodies as our own, distinct from others and from the outside world. That self-perception is traced to specialized multisensory neurons in various parts of the brain that integrate different sensory inputs across all body parts into a unified view of the body.

The findings, reported online on June 16 in Current Biology, a Cell Press publication, may have important medical and industrial applications, the researchers say.
"When we look down at our body, we immediately experience that it belongs to us," said Valeria Petkova of Karolinska Institutet in Sweden. "We do not experience our body as a set of fragmented parts, but rather as a single entity. Our study is the first to tackle the important question of how we come to have the unitary experience of owning an entire body."

Earlier studies showed that the integration of visual, tactile, and proprioceptive information (the sense of the relative position of body parts) in multisensory areas constitutes a mechanism for the self-attribution of single limbs, the researchers explained. But how ownership of individual body parts translates into the experience of owning a whole body remained a mystery.
In the new study, the researchers used a "body-swap" illusion, in which people experienced a mannequin to be their own, in combination with functional magnetic resonance imaging. Participants observed touching of the mannequin's body from the point of view of the mannequin's head while feeling identical synchronous touches on their own body, which they could not see. Those studies revealed a tight coupling between the experience of full-body ownership and neural responses in brain regions known to represent multisensory processing nodes in the primate brain, specifically the bilateral ventral premotor and left intraparietal cortices and the left putamen.
Activation in those multisensory areas was stronger when the stimulated body part was attached to a body as compared with when it was detached, the researchers reported, evidence that the integrity between body segments facilitates ownership of the parts.

"Our results suggest that the integration of visual, tactile, and proprioceptive information in body-part-centered reference frames represents a basic neural mechanism underlying the feeling of ownership of entire bodies," the researchers wrote. The finding generalizes existing models of limb ownership to the case of the entire body.
The discovery may find practical application, according to the study's senior author, Henrik Ehrsson.
"Understanding the mechanisms underlying the self-attribution of a body in the healthy brain can help in developing better diagnostic and therapeutic strategies to address pathological disturbances of bodily self-perception," Ehrsson said. "In addition, understanding the mechanisms of perceiving an entire body or a body part as belonging to oneself can have important implications for the design and production of mechanical prostheses or robotic substitutes for paralyzed or amputated body parts."
It might also lead to improvements in the fields of telerobotics and virtual reality, he added.

Source EurekAlert!

Wednesday, June 15, 2011

Why the universe wasn't fine-tuned for life

IF THE force of gravity were a few per cent weaker, it would not squeeze and heat the centre of the sun enough to ignite the nuclear reactions that generate the sunlight necessary for life on Earth. But if it were a few per cent stronger, the temperature of the solar core would have been boosted so much the sun would have burned out in less than a billion years - not enough time for the evolution of complex life like us.

In recent years many such examples of how the laws of physics have been "fine-tuned" for us to be here have been reported. Some religious people claim these "cosmic coincidences" are evidence of a grand design by a Supreme Being. In The Fallacy of Fine-tuning, physicist Victor Stenger delivers a devastating demolition of such arguments.


A general mistake made in search of fine-tuning, he points out, is to vary just one physical parameter while keeping all the others constant. Yet a "theory of everything" - which alas we do not yet have - is bound to reveal intimate links between physical parameters. A change in one may be compensated by a change in another, says Stenger.

In addition to general mistakes, Stenger deals with specifics. For instance, British astronomer Fred Hoyle discovered that vital heavy elements can be built inside stars only because a carbon-12 nucleus can be made from the fusion of three helium nuclei. For the reaction to proceed, carbon-12 must have an energy level equal to the combined energy of the three helium nuclei, at the typical temperature inside a red giant. This has been touted as an example of fine-tuning. But, as Stenger points out, in 1989, astrophysicist Mario Livio showed that the carbon-12 energy level could actually have been significantly different and still resulted in a universe with the heavy elements needed for life.

The most striking example of fine-tuning appears to be the dark energy - or energy of the vacuum - that is speeding up the expansion of the universe. Quantum theory predicts a value some 10^120 times bigger than the one observed. But Stenger stresses that this prediction is made in the absence of a quantum theory of gravity, when gravity is known to orchestrate the universe.

Even if some parameters turn out to be fine-tuned, Stenger argues this could be explained if ours is just one universe in a "multiverse" - an infinite number of universes, each with different physical parameters. We would then have ended up in the one where the laws of physics are fine-tuned to life because, well, how could we not have?

Religious people say that, by invoking a multiverse, physicists are going to extraordinary lengths to avoid God. But physicists have to go where the data lead them. And, currently, there are strong hints from string theory, the standard picture of cosmology and fine-tuning itself to suggest that the universe we can see with our biggest telescopes is only a small part of all that is there.

Source New Scientist

Sunday, June 12, 2011

Teen brain data predicts pop song success

An Emory University study suggests that the brain activity of teens, recorded while they are listening to new songs, may help predict the popularity of the songs.

“We have scientifically demonstrated that you can, to some extent, use neuroimaging in a group of people to predict cultural popularity,” says Gregory Berns, a neuroeconomist and director of Emory’s Center for Neuropolicy. The Journal of Consumer Psychology is publishing the results of the study, conducted by Berns and Sara Moore, an economics research specialist in his lab.

In 2006, Berns’ lab selected 120 songs from MySpace pages, all of them by relatively unknown musicians without recording contracts. Twenty-seven research subjects, aged 12 to 17, listened to the songs while their neural reactions were recorded through functional magnetic resonance imaging (fMRI). The subjects were also asked to rate each song on a scale of one to five.

The data was originally collected to study how peer pressure affects teenagers’ opinions. The experiment used relatively unknown songs to try to ensure that the teens were hearing them for the first time.

Three years later, while watching “American Idol” with his two young daughters, Berns realized that one of those obscure songs had become a hit, when contestant Kris Allen started singing “Apologize” by OneRepublic.

“I said, ‘Hey, we used that song in our study,’” Berns recalls. “It occurred to me that we had this unique data set of the brain responses of kids who listened to songs before they got popular. I started to wonder if we could have predicted that hit.”

A comparative analysis revealed that the neural data had a statistically significant prediction rate for the popularity of the songs, as measured by their sales figures from 2007 to 2010.

“It’s not quite a hit predictor,” Berns cautions, “but we did find a significant correlation between the brain responses in this group of adolescents and the number of songs that were ultimately sold.”

Previous studies have shown that a response in the brain’s reward centers, especially the orbitofrontal cortex and ventral striatum, can predict people’s individual choices – but only in those people actually receiving brain scans.

The Emory study enters new territory. The results suggest it may be possible to use brain responses from a group of people to predict cultural phenomena across a population – even in people who are not actually scanned.

The “accidental discovery,” as Berns describes it, has limitations. The study included only 27 subjects, and they were all teenagers, who make up only about 20 percent of music buyers.

The majority of the songs used in the study were flops, with negligible sales. Only three of the songs went on to meet the industry criteria for a certified hit: more than 500,000 units sold, counting albums that included the song as a track as well as digital downloads.

“When we plotted the data on a graph, we found a ‘sweet spot’ for sales of 20,000 units,” Berns said. The brain responses could predict about one-third of the songs that would eventually go on to sell more than 20,000 units.

 Brain regions positively correlated with the average likability of the song: cuneus, orbitofrontal cortex and ventral striatum.

The data was even clearer for the flops: About 90 percent of the songs that drew a mostly weak response from the neural reward center of the teens went on to sell fewer than 20,000 units.

Another interesting twist: When the research subjects were asked to rate the songs on a scale of one to five, their answers did not correlate with future sales of the songs.

That result may be due to the complicated cognitive process involved in rating something, Berns theorizes. “You have to stop and think, and your thoughts may be colored by whatever biases you have, and how you feel about revealing your preferences to a researcher.”

On the other hand, “you really can’t fake the brain responses while you’re listening to the song,” he says. “That taps into a raw reaction.”
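The study's two statistical findings – neural reward-centre activity correlating with eventual sales while conscious ratings did not, plus the rough "flop filter" around the 20,000-unit sweet spot – can be sketched in a few lines. The numbers below are invented purely for illustration; they are not the study's data:

```python
import numpy as np

# Hypothetical per-song data (NOT the study's actual numbers):
# average reward-centre activation across subjects, the subjects'
# average 1-5 likability rating, and eventual unit sales.
neural = np.array([0.2, 0.5, 0.1, 0.9, 0.4, 0.7])
ratings = np.array([3.1, 2.8, 3.3, 3.0, 2.9, 3.2])
sales = np.array([4_000, 18_000, 2_500, 60_000, 11_000, 35_000])

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    return float(np.corrcoef(x, y)[0, 1])

# The headline finding: neural response tracks (log) sales,
# while conscious ratings do not.
r_neural = pearson_r(neural, np.log10(sales))
r_rating = pearson_r(ratings, np.log10(sales))
print(f"neural vs sales: r = {r_neural:+.2f}")
print(f"rating vs sales: r = {r_rating:+.2f}")

# A crude flop filter in the spirit of the 20,000-unit sweet spot:
# songs whose average activation falls below a threshold are
# predicted to sell fewer than 20,000 units.
predicted_flop = neural < 0.45
actual_flop = sales < 20_000
hits = int((predicted_flop == actual_flop).sum())
print(f"flop predictions correct: {hits} of {len(sales)}")
```

The real analysis would of course need far more songs, cross-validation, and a principled threshold; this only shows the shape of the comparison.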

The pop music experiment is merely “a baby step,” Berns says. As a leader in the nascent field of neuroeconomics, he is interested in larger questions of how our understanding of the brain can explain human decision-making. Among his current projects is a study of sacred values, and their potential for triggering violent conflict.

“My long-term goal is to understand cultural phenomena and trends,” Berns says. “I want to know where ideas come from, and why some of them become popular and others don’t. It’s ideas and the way that we think that determines the course of human history. Ultimately, I’m trying to predict history.”

Source Emory University

Saturday, June 11, 2011

When the multiverse and many-worlds collide

TWO of the strangest ideas in modern physics - that the cosmos constantly splits into parallel universes in which every conceivable outcome of every event happens, and the notion that our universe is part of a larger multiverse - have been unified into a single theory. This solves a bizarre but fundamental problem in cosmology and has set physics circles buzzing with excitement, as well as some bewilderment.

The problem is the observability of our universe. While most of us simply take it for granted that we should be able to observe our universe, it is a different story for cosmologists. When they apply quantum mechanics - which successfully describes the behaviour of very small objects like atoms - to the entire cosmos, the equations imply that it must exist in many different states simultaneously, a phenomenon called a superposition. Yet that is clearly not what we observe.

Cosmologists reconcile this seeming contradiction by assuming that the superposition eventually "collapses" to a single state. But they tend to ignore the problem of how or why such a collapse might occur, says cosmologist Raphael Bousso at the University of California, Berkeley. "We've no right to assume that it collapses. We've been lying to ourselves about this," he says.
In an attempt to find a more satisfying way to explain the universe's observability, Bousso, together with Leonard Susskind at Stanford University in California, turned to the work of physicists who have puzzled over the same problem but on a much smaller scale: why tiny objects such as electrons and photons exist in a superposition of states but larger objects like footballs and planets apparently do not.

This problem is captured in the famous thought experiment of Schrödinger's cat. This unhappy feline is inside a sealed box containing a vial of poison that will break open when a radioactive atom decays. Being a quantum object, the atom exists in a superposition of states - so it has both decayed and not decayed at the same time. This implies that the vial must be in a superposition of states too - both broken and unbroken. And if that's the case, then the cat must be both dead and alive as well.
To explain why we never seem to see cats that are both dead and alive, and yet can detect atoms in a superposition of states, physicists have in recent years replaced the idea of superpositions collapsing with the idea that quantum objects inevitably interact with their environment, allowing information about possible superpositions to leak away and become inaccessible to the observer. All that is left is the information about a single state.

Physicists call this process "decoherence". If you can prevent it - by tracking all the information about all possible states - you can preserve the superposition.
In the case of something as large as a cat, that may be possible in Schrödinger's theoretical sealed box. But in the real world, it is very difficult to achieve. So everyday cats decohere rapidly, leaving behind the single state that we observe. By contrast, small things like photons and electrons are more easily isolated from their environment, so they can be preserved in a superposition for longer: that's how we detect these strange states.
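Decoherence has a compact mathematical picture: a superposition is a density matrix with off-diagonal "interference" terms, and leaking which-state information to the environment suppresses exactly those terms, leaving an ordinary classical mixture. Here is a deliberately minimal toy model (a single qubit standing in for the cat, with the leakage fraction put in by hand rather than derived from any real environment):

```python
import numpy as np

# A qubit standing in for Schrödinger's cat: |alive> = [1,0], |dead> = [0,1].
alive = np.array([1.0, 0.0])
dead = np.array([0.0, 1.0])

# Equal superposition, written as a density matrix rho = |psi><psi|.
psi = (alive + dead) / np.sqrt(2)
rho = np.outer(psi, psi)

def decohere(rho, leaked):
    """Suppress the off-diagonal (interference) terms by the fraction
    of which-state information that has leaked to the environment."""
    out = rho.copy()
    out[0, 1] *= (1.0 - leaked)
    out[1, 0] *= (1.0 - leaked)
    return out

print(np.round(rho, 3))                 # off-diagonals 0.5: a true superposition
print(np.round(decohere(rho, 1.0), 3))  # fully decohered: a classical 50/50 mixture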
The puzzle is how decoherence might work on the scale of the entire universe: it too must exist in a superposition of states until some of the information it contains leaks out, leaving the single state that we see, but in conventional formulations of the universe, there is nothing else for it to leak into.

What Bousso and Susskind have done is to come up with an explanation for how the universe as a whole might decohere. Their trick is to think of the volume of space that encompasses all the information in our universe and everything it might possibly interact with in the future. In previous work, Susskind has dubbed this region a causal patch. The new idea is that our universe is just one causal patch among many others in a much bigger multiverse.
Many physicists have toyed with the idea that the cosmos is made up of regions which differ so profoundly that they can be thought of as different universes inside a bigger multiverse. Bousso and Susskind suggest that information can leak from our causal patch into others, allowing our part of the universe to decohere into one state or another, resulting in the universe that we observe.

But while decoherence explains why we don't see cats that are dead and alive at the same time, or our own universe in a huge superposition of states, it does not tell us which state the cat, or the universe, should eventually end up in. So Bousso and Susskind have also linked the idea of a multiverse of causal patches to something known as the "many worlds" interpretation of quantum mechanics, which was developed in the 1950s and 60s but has only become popular in the last 10 years or so.

According to this strange idea, when a superposition of states occurs, the cosmos splits into multiple parallel but otherwise identical universes. In one universe we might see the cat survive and in another we see it die. This results in an infinite number of parallel universes in which every conceivable outcome of every event actually happens.
Bousso and Susskind's contention is that the alternative realities of the many worlds interpretation are the additional causal patches that make up the multiverse. Most of these patches would have split from other universes, perhaps even ancestors of our own. "We argue that the global multiverse is a representation of the many-worlds in a single geometry," they say. They call this idea the multiverse interpretation of quantum mechanics and in a paper now available online they have proposed the mathematical framework behind it (arxiv.org/abs/1105.3796).

One feature of their framework is that it might explain puzzling aspects of our universe, such as the value of the cosmological constant and the apparent amount of dark energy.
The paper has caused a flurry of excitement on physics blogs and in the broader physics community. "It's a very interesting paper that puts forward a lot of new ideas," says Don Page, a theoretical physicist at the University of Alberta in Edmonton, Canada. Sean Carroll, a cosmologist at the California Institute of Technology in Pasadena and author of the Cosmic Variance blog, thinks the idea has some merit. "I've gone from a confused skeptic to a tentative believer," he wrote on his blog. "I realized that these ideas fit very well with other ideas I've been thinking about myself!"

However, most agree that there are still questions to iron out. "It's an important step in trying to understand the cosmological implications of quantum mechanics but I'm sceptical that it's a final answer," says Page.
For example, one remaining question is how information can leak from a causal patch, a supposedly self-contained volume of the multiverse.
Susskind says it will take time for people to properly consider their new approach. And even then, the ideas may have to be refined. "This is not the kind of paper where somebody does a calculation and confirms that we're correct," says Bousso. "It's the sort of thing that will take a while to digest."

Bipolar kids: Victims of the 'madness industry'?

THERE'S a children's picture book in the US called Brandon and the Bipolar Bear. Brandon and his bear sometimes fly into unprovoked rages. Sometimes they're silly and overexcited. A nice doctor tells them they are ill, and gives them medicine that makes them feel much better.

The thing is, if Brandon were a real child, he would have just been misdiagnosed with bipolar disorder.
Also known as manic depression, this serious condition, involving dramatic mood swings, is increasingly being recorded in American children. And a vast number of them are being medicated for it.

 Kids' stuff?

The problem is, this apparent epidemic isn't real. "Bipolar emerges from late adolescence," says Ian Goodyer, a professor in the department of psychiatry at the University of Cambridge who studies child and adolescent depression. "It is very, very unlikely indeed that you'll find it in children under 7 years."
How did this strange, sweeping misdiagnosis come to pass? How did it all start? These were some of the questions I explored when researching The Psychopath Test, my new book about the odder corners of the "madness industry".

Freudian slip

The answer to the second question turned out to be strikingly simple. It was really all because of one man: Robert Spitzer.
I met Spitzer in his large, airy house in Princeton, New Jersey. In his eighties now, he remembered his childhood camping trips to upstate New York. "I'd sit in the tent, looking out, writing notes about the lady campers," he said. "Their attributes." He smiled. "I've always liked to classify people."
The trips were respite from Spitzer's "very unhappy mother". In the 1940s, the only help on offer was psychoanalysis, the Freudian-based approach of exploring the patient's unconscious. "She went from one psychoanalyst to another," said Spitzer. He watched the psychoanalysts flailing uselessly. She never got better.
Spitzer grew up to be a psychiatrist at Columbia University, New York, his dislike of psychoanalysis remaining undimmed. And then, in 1973, an opportunity to change everything presented itself. There was a job going editing the next edition of a little-known spiral-bound booklet called DSM - the Diagnostic and Statistical Manual of Mental Disorders.

DSM is simply a list of all the officially recognised mental illnesses and their symptoms. Back then it was a tiny book that reflected the Freudian thinking predominant in the 1960s. It had very few pages, and very few readers.
What nobody knew when they offered Spitzer the job was that he had a plan: to try to remove human judgement from psychiatry. He would create a whole new DSM that would eradicate all that crass sleuthing around the unconscious; it hadn't helped his mother. Instead it would be all about checklists. Any psychiatrist could pick up the manual, and if the patient's symptoms tallied with the checklist for a particular disorder, that would be the diagnosis.
For six years Spitzer held editorial meetings at Columbia. They were chaos. The psychiatrists would yell out the names of potential new mental disorders and the checklists of their symptoms. There would be a cacophony of voices in assent or dissent - the loudest voices getting listened to the most. If Spitzer agreed with those proposing a new diagnosis, which he almost always did, he'd hammer it out instantly on an old typewriter. And there it would be, set in stone.

That's how practically every disorder you've ever heard of or been diagnosed with came to be defined. "Post-traumatic stress disorder," said Spitzer, "attention-deficit disorder, autism, anorexia nervosa, bulimia, panic disorder..." each with its own checklist of symptoms. Bipolar disorder was another of the newcomers. The previous edition of the DSM had been 134 pages, but when Spitzer's DSM-III appeared in 1980 it ran to 494 pages.
"Were there any proposals for mental disorders you rejected?" I asked Spitzer. "Yes," he said, "atypical child syndrome. The problem came when we tried to find out how to characterise it. I said, 'What are the symptoms?' The man proposing it replied: 'That's hard to say because the children are very atypical'."
He paused. "And we were going to include masochistic personality disorder." He meant battered wives who stayed with their husbands. "But there were some violently opposed feminists who thought it was labelling the victim. We changed the name to self-defeating personality disorder and put it into the appendix."

DSM-III was a sensation. It sold over a million copies - many more copies than there were psychiatrists. Millions of people began using the checklists to diagnose themselves. For many it was a godsend. Something was categorically wrong with them and finally their suffering had a name. It was truly a revolution in psychiatry.
It was also a gold rush for drug companies, which suddenly had 83 new disorders they could invent medications for. "The pharmaceuticals were delighted with DSM," Spitzer told me, and this in turn delighted him: "I love to hear parents who say, 'It was impossible to live with him until we gave him medication and then it was night and day'."

Spitzer's successor, a psychiatrist named Allen Frances, continued the tradition of welcoming new mental disorders, with their corresponding checklists, into the fold. His DSM-IV came in at a mammoth 886 pages, with an extra 32 mental disorders.
Now Frances told me over the phone he felt he had made some terrible mistakes. "Psychiatric diagnoses are getting closer and closer to the boundary of normal," he said.
"Why?" I asked. "There's a societal push for conformity in all ways," he said. "There's less tolerance of difference. Maybe for some people having a label confers a sense of hope - previously I was laughed at but now I can talk to fellow sufferers on the internet."
Part of the problem is the pharmaceutical industry. "It's very easy to set off a false epidemic in psychiatry," said Frances. "The drug companies have tremendous influence."
One condition that Frances considers a mistake is childhood bipolar disorder. "Kids with extreme temper tantrums are being called bipolar," he said. "Childhood bipolar takes the edge of guilt away from parents that maybe they created an oppositional child."

"So maybe the diagnosis is good?"
"No," Frances said. "And there are very good reasons why not." His main concern is that children whose behaviour only superficially matches the bipolar checklist get treated with antipsychotic drugs, which can succeed in calming them down, even if the diagnosis is wrong. These drugs can have unpleasant and sometimes dangerous side effects.

Knife edge

The drug companies aren't the only ones responsible for propagating this false epidemic. Patient advocacy groups can be very fiery too. The author of Brandon and the Bipolar Bear, Tracy Anglada, is head of a childhood bipolar advocacy group called BP Children. She emailed me that she wished me all the best with my project but she didn't want to be interviewed. If, however, I wanted to submit a completed manuscript to her, she added, she'd be happy to consider it for review.
Anglada's friend Bryna Hebert has also written a children's book: My Bipolar, Roller Coaster, Feelings Book. "Matt! Will you take your medicines please?" she called across the kitchen when I visited her at home in Barrington, Rhode Island. The medicines were lined up on the kitchen table. Her son Matt, 14 years old, took them straight away.

The family's nickname for baby Matt had been Mister Manic Depressive. "Because his mood would change so fast. He'd be sitting in his high chair, happy as a clam; 2 seconds later he'd be throwing things across the room. When he was 3 he'd hit and not be sorry that he hit. He was obsessed with vampires. He'd cut out bits of paper and put them into his teeth like vampire teeth and go up to strangers. Hiss hiss hiss. It was a little weird."
"Were you getting nervous?" I asked. "Yeah," said Hebert. "One day he wanted some pretzels before lunch, and I told him no. He grabbed a butcher knife and threatened me."

"How old was he?"

"Four. That was the only time he's ever done anything that extreme," she said. "Oh, he's hit his sister Jessica in the head and kicked her in the stomach."
"She's the one who punched me in the head," called Matt from across the room.
It was after the knife incident, Hebert said, they took him to be tested. As it happened, the paediatric unit at what was then their local hospital, Massachusetts General, was run by Joseph Biederman, the doyen of childhood bipolar disorder. According to a 2008 article in the San Francisco Chronicle, "Biederman's influence is so great that when he merely mentions a drug during a presentation, tens of thousands of children will end up taking it." Biederman has said bipolar disorder can start, "from the moment the child opens his eyes".

"When they were testing Matt he was under the table, he was on top of the table," said Hebert. "We went through all these checklists. One of Dr Biederman's colleagues said, 'We really think Matt meets the criteria in the DSM for bipolar disorder.'"
That was 10 years ago and Matt has been medicated ever since. So has his sister Jessica, who was also diagnosed by Biederman's people as bipolar. "We've been through a million medications," said Hebert. "There's weight gain. Tics. Irritability. Sedation. They work for a couple of years then they stop working."
Hebert was convinced her children were bipolar, and I wasn't going to swoop into a stranger's home for an afternoon and tell her they were normal. That would have been incredibly patronising and offensive. Plus, as the venerable child psychiatrist David Shaffer told me when I met him in New York later that evening, "These kids can be very oppositional, powerful kids who can take years off your happy life. But they aren't bipolar."

"Attention-deficit disorder?" he said. "Often with an ADD kid you think: 'My God, they're just like a manic adult.' But they don't grow up manic. And manic adults weren't ADD when they were children. But they're being labelled bipolar.
"That's an enormous label that's going to stay with you for the rest of your life. You're being told you have a condition which is going to make you unreliable, prone to terrible depressions and suicide."
The debate around childhood bipolar is not going away. In 2008, The New York Times published excerpts from an internal hospital document in which Biederman promised to "move forward the commercial goals of Johnson & Johnson", the firm that funds his hospital unit and sells the antipsychotic drug Risperdal. Biederman has denied the allegations of conflict of interest.

Frances has called for the diagnosis of childhood bipolar to be thrown out of the next edition of DSM, which is now being drawn up by the American Psychiatric Association.
This article shouldn't be read as a polemic against psychiatry. There are a lot of unhappy and damaged people out there whose symptoms manifest themselves in odd ways. I get irritated by critics who seem to think that because psychiatry has elements of irrationality, there is essentially no such thing as mental illness. There is. Childhood bipolar, however, seems to me an example of things having gone palpably wrong.
On the night of 13 December 2006, in Boston, Massachusetts, 4-year-old Rebecca Riley had a cold and couldn't sleep. Her mother, Carolyn Riley, gave her some cold medicine, and some of her bipolar medication, and told her she could sleep on the floor next to the bed. When she tried to wake Rebecca the next morning, she discovered her daughter was dead.

The autopsy revealed that Rebecca's parents had given her an overdose of the antipsychotic drugs she had been prescribed for her bipolar disorder. They had got into the habit of feeding her the medicines to shut her up when she was being annoying. They were both convicted of Rebecca's murder.
Rebecca had been diagnosed as bipolar at 2-and-a-half, and given medication by an upstanding psychiatrist who was a fan of Biederman's research into childhood bipolar. Rebecca had scored high on the DSM checklist, even though like most toddlers she could barely string a sentence together.

Shortly before her trial, Carolyn Riley was interviewed on CBS's 60 Minutes show by Katie Couric:
KC: Do you think Rebecca really had bipolar disorder?
CR: Probably not.
KC: What do you think was wrong with her now?
CR: I don't know. Maybe she was just hyper for her age.

Jon Ronson is a writer and documentary maker living in London. He is the author of five books, including The Men Who Stare at Goats. His latest book, The Psychopath Test, is about the psychiatry industry

Saturday, June 4, 2011

Are you a genuine skeptic or a climate (change) denier?

by John Cook

In the charged discussions about climate, the words skeptic and denier are often thrown around. But what do these words mean?

Consider the following definitions. Genuine skeptics consider all the evidence in their search for the truth. Deniers, on the other hand, refuse to accept any evidence that conflicts with their pre-determined views.

So here's one way to tell if you're a genuine skeptic or a climate denier.

When trying to understand what's happening to our climate, do you consider the full body of evidence? Or do you find the denial instinct kicking in when confronted with inconvenient evidence?

For example, let's look at the question of whether global warming is happening. Do you acknowledge the rate of sea level rise, a key indicator of a warming planet, tripling over the last century? Do you factor in the warming oceans, which since 1970 have been building up heat at a rate of two-and-a-half Hiroshima bombs every second? Glaciers are retreating all over the world, threatening the water supply of hundreds of millions of people. Ice sheets from Greenland in the north to Antarctica in the south are losing hundreds of billions of tonnes of ice every year. Seasons are shifting, flowers are opening earlier each year and animals are migrating towards the poles. The very structure of our atmosphere is changing.

We have tens of thousands of lines of evidence that global warming is happening. A genuine skeptic surveys the full body of evidence coming in from all over our planet and concludes that global warming is unequivocal. A climate denier, on the other hand, reacts to this array of evidence in several possible ways.

The most extreme form of climate denier won't even go near the evidence. They avoid the issue altogether by indulging in conspiracy theories. They'll pull a quote out of context from a stolen 'Climategate' email as proof that climate change is just a huge hoax. I have yet to hear how the ice sheets, glaciers and thousands of migrating animal species are in on the conspiracy, but I'm sure there's a creative explanation floating around on the Internet.

The hardcore denier, firmly entrenched in the "it's not happening" camp, denies each piece of evidence. When confronted by retreating glaciers, their thoughts flick to the handful of growing glaciers while blocking out the vast majority of glaciers that are retreating at an accelerating rate.

They ignore sea level rise by focusing on short periods where sea levels briefly drop before inevitably resuming the long-term upward trend. The key to this form of denial is cherry picking. If you stare long and hard enough at a tiny piece of the puzzle that gives you the answer you want, you find the rest of the picture conveniently fades from view.

Some climate deniers have found it impossible to ignore the overwhelming array of evidence that the planet is warming (cognitive bias does have its limits) and moved onto the next stage of denial: "it's happening but it's not us". After all, climate has changed throughout Earth's history. How can we tell it's us this time?

The answer, as always, is by surveying the full body of evidence. Warming from our carbon dioxide emissions should yield many telltale patterns. We don't need to rely on guesswork or theory to tell us humans are causing warming. We can measure it.

If carbon dioxide is causing warming, we should measure less heat escaping to space. Satellites have observed this, with heat being trapped at the very wavelengths at which carbon dioxide absorbs radiation. If less heat is escaping, we should see more heat returning to the Earth's surface. This has been measured. Greenhouse warming should cause the lower atmosphere to warm but, simultaneously, the upper atmosphere to cool. That is indeed what we observe.

As far back as the 1800s, scientists predicted greenhouse warming should cause nights to warm faster than days and winters to warm faster than summers. Both predictions have come true. Everything we expect to see from greenhouse warming, we do see.

We have, as science historian Naomi Oreskes aptly puts it, "multiple, independent lines of evidence converging on a single coherent account". This consensus of evidence is the reason why we have a consensus of scientists with 97 out of 100 climate experts convinced that humans are driving global warming.

So which camp do you fall in?

Do you look at the full body of evidence, considering the whole picture as you build your understanding of climate? Or do you gravitate towards those select pieces of data that, out of context, give a contrarian impression, while denying the rest of the evidence?

Even for those of us who accept the scientific consensus, there is a more insidious form of denial - accepting that humans are causing climate change, but choosing to ignore it. Governments deny the implications of global warming when they make lots of noise about climate change but fail to back their words up with action. When we let politicians get away with inaction, we let denial prosper.

There are many ways we can roll back climate denial and contribute to the solution, such as reducing our own carbon footprint. But the greatest contribution we can make is to let our leaders know we demand climate action. Politicians may or may not care about the planet's future. But one thing we know with certainty is they care about their own future, particularly at the next election.

If we send a strong message to our politicians that we demand climate action, they will be forced to act.

John Cook created the website Skeptical Science and co-authored the book Climate Change Denial: Heads in the Sand.

Source  ABC

University of Toronto scientist leads international team in quantum physics first

TORONTO, ON - Quantum mechanics is famous for saying that a tree falling in a forest when there's no one there doesn't make a sound. Quantum mechanics also says that if anyone is listening, it interferes with and changes the tree. And so the famous paradox: how can we know reality if we cannot measure it without distorting it?
An international team of researchers, led by University of Toronto physicist Aephraim Steinberg of the Centre for Quantum Information and Quantum Control, has found a way to do just that by applying a modern measurement technique to the historic two-slit interferometer experiment in which a beam of light shone through two slits results in an interference pattern on a screen behind.
That famous experiment, and the 1927 debates between Niels Bohr and Albert Einstein, seemed to establish that you could not watch a particle go through one of two slits without destroying the interference effect: you had to choose which phenomenon to look for.
 Patterns emerging from the famous double-slit experiment.

"Quantum measurement has been the philosophical elephant in the room of quantum mechanics for the past century," says Steinberg, who is lead author of Observing the Average Trajectories of Single Photons in a Two-Slit Interferometer, to be published in Science on June 2. "However, in the past 10 to 15 years, technology has reached the point where detailed experiments on individual quantum systems really can be done, with potential applications such as quantum cryptography and computation."

With this new experiment, the researchers have succeeded for the first time in experimentally reconstructing full trajectories which provide a description of how light particles move through the two slits and form an interference pattern. Their technique builds on a new theory of weak measurement that was developed by Yakir Aharonov's group at Tel Aviv University. Howard Wiseman of Griffith University proposed that it might be possible to measure the direction a photon (particle of light) was moving, conditioned upon where the photon is found. By combining information about the photon's direction at many different points, one could construct its entire flow pattern, i.e. the trajectories it takes to the screen.

"In our experiment, a new single-photon source developed at the National Institute for Standards and Technology in Colorado was used to send photons one by one into an interferometer constructed at Toronto. We then used a calcite crystal, which has an effect on light that depends on the direction the light is propagating, to measure the direction as a function of position. Our measured trajectories are consistent, as Wiseman had predicted, with the realistic but unconventional interpretation of quantum mechanics of such influential thinkers as David Bohm and Louis de Broglie," said Steinberg.
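The interference pattern itself is standard textbook physics, even if reconstructing trajectories through it is not. As a reference point, here is the far-field two-slit intensity pattern (this is the classic formula, not the weak-measurement reconstruction; wavelength and slit separation are illustrative, not the experiment's actual parameters):

```python
import numpy as np

# Far-field two-slit interference: intensity on a screen as a function
# of angle theta, for slit separation d and wavelength lam.
lam = 633e-9        # wavelength, m (illustrative: HeNe laser red)
d = 50e-6           # slit separation, m (illustrative)
theta = np.linspace(-0.05, 0.05, 2001)  # radians; index 1000 is theta = 0

# I(theta) is proportional to cos^2(pi * d * sin(theta) / lam),
# ignoring the single-slit diffraction envelope.
intensity = np.cos(np.pi * d * np.sin(theta) / lam) ** 2

# Bright fringes sit where d * sin(theta) = m * lam for integer m:
m = np.arange(-3, 4)
fringe_angles = np.arcsin(m * lam / d)
print("fringe angles (degrees):", np.round(np.degrees(fringe_angles), 3))
```

Blocking either slit removes the cos² modulation entirely, which is exactly the particle-versus-wave tension the Bohr-Einstein debates turned on.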

The original double-slit experiment played a central role in the early development of quantum mechanics, leading directly to Bohr's formulation of the principle of complementarity. Complementarity states that observing particle-like or wave-like behaviour in the double-slit experiment depends on the type of measurement made: the system cannot behave as both a particle and wave simultaneously. Steinberg's recent experiment suggests this doesn't have to be the case: the system can behave as both.

"By applying a modern measurement technique to the historic double-slit experiment, we were able to observe the average particle trajectories undergoing wave-like interference, which is the first observation of its kind. This result should contribute to the ongoing debate over the various interpretations of quantum theory," said Steinberg. "It shows that long-neglected questions about the different types of measurement possible in quantum mechanics can finally be addressed in the lab, and weak measurements such as the sort we use in this work may prove crucial in studying all sorts of new phenomena.

"But mostly, we are all just thrilled to be able to see, in some sense, what a photon does as it goes through an interferometer, something all of our textbooks and professors had always told us was impossible."
Research partners include the University of Toronto's Centre for Quantum Information and Quantum Control, Department of Physics and Institute for Optical Sciences, the National Institute of Standards and Technology in Boulder, Colorado, the Institute for Quantum Computing at the University of Waterloo, Griffith University, Australia, and the Laboratoire Charles Fabry in Orsay, France. Research was funded by the Natural Sciences and Engineering Research Council of Canada, the Canadian Institute for Advanced Research, and Quantum Works.

Source  EurekaAlert!

Monday, May 23, 2011

Physics and the Immortality of the Soul

The topic of "life after death" raises disreputable connotations of past-life regression and haunted houses, but there are a large number of people in the world who believe in some form of persistence of the individual soul after life ends. Clearly this is an important question, one of the most important ones we can possibly think of in terms of relevance to human life. If science has something to say about it, we should all be interested in hearing.
Adam Frank thinks that science has nothing to say about it. He advocates being "firmly agnostic" on the question. (His coblogger Alva Noë resolutely disagrees.) I have an enormous respect for Adam; he's a smart guy and a careful thinker. When we disagree it's with the kind of respectful dialogue that should be a model for disagreeing with non-crazy people. But here he couldn't be more wrong.

Adam claims that there "simply is no controlled, experimental[ly] verifiable information" regarding life after death. By these standards, there is no controlled, experimentally verifiable information regarding whether the Moon is made of green cheese. Sure, we can take spectra of light reflecting from the Moon, and even send astronauts up there and bring samples back for analysis. But that's only scratching the surface, as it were. What if the Moon is almost all green cheese, but is covered with a layer of dust a few meters thick? Can you really say that you know this isn't true? Until you have actually examined every single cubic centimeter of the Moon's interior, you don't really have experimentally verifiable information, do you? So maybe agnosticism on the green-cheese issue is warranted. (Come up with all the information we actually do have about the Moon; I promise you I can fit it into the green-cheese hypothesis.)

Obviously this is completely crazy. Our conviction that green cheese makes up a negligible fraction of the Moon's interior comes not from direct observation, but from the gross incompatibility of that idea with other things we think we know. Given what we do understand about rocks and planets and dairy products and the Solar System, it's absurd to imagine that the Moon is made of green cheese. We know better.

We also know better for life after death, although people are much more reluctant to admit it. Admittedly, "direct" evidence one way or the other is hard to come by -- all we have are a few legends and sketchy claims from unreliable witnesses with near-death experiences, plus a bucketload of wishful thinking. But surely it's okay to take account of indirect evidence -- namely, compatibility of the idea that some form of our individual soul survives death with other things we know about how the world works.

Claims that some form of consciousness persists after our bodies die and decay into their constituent atoms face one huge, insuperable obstacle: the laws of physics underlying everyday life are completely understood, and there's no way within those laws to allow for the information stored in our brains to persist after we die. If you claim that some form of soul persists beyond death, what particles is that soul made of? What forces are holding it together? How does it interact with ordinary matter?
Everything we know about quantum field theory (QFT) says that there aren't any sensible answers to these questions. Of course, everything we know about quantum field theory could be wrong. Also, the Moon could be made of green cheese.

Among advocates for life after death, nobody even tries to sit down and do the hard work of explaining how the basic physics of atoms and electrons would have to be altered in order for this to be true. If we tried, the fundamental absurdity of the task would quickly become evident.
Even if you don't believe that human beings are "simply" collections of atoms evolving and interacting according to rules laid down in the Standard Model of particle physics, most people would grudgingly admit that atoms are part of who we are. If it's really nothing but atoms and the known forces, there is clearly no way for the soul to survive death. Believing in life after death, to put it mildly, requires physics beyond the Standard Model. Most importantly, we need some way for that "new physics" to interact with the atoms that we do have.

Very roughly speaking, when most people think about an immaterial soul that persists after death, they have in mind some sort of blob of spirit energy that takes up residence near our brain, and drives around our body like a soccer mom driving an SUV. The questions are these: what form does that spirit energy take, and how does it interact with our ordinary atoms? Not only is new physics required, but dramatically new physics. Within QFT, there can't be a new collection of "spirit particles" and "spirit forces" that interact with our regular atoms, because we would have detected them in existing experiments. Ockham's razor is not on your side here, since you have to posit a completely new realm of reality obeying very different rules than the ones we know.

But let's say you do that. How is the spirit energy supposed to interact with us? Here is the equation that tells us how electrons behave in the everyday world:
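(The equation appeared as an image in the original post and is missing from this text version. Reconstructed from the description that follows — two terms on the left for velocity and inertia, two on the right coupling to electromagnetism and gravity — it is the Dirac equation in roughly this form, where $A_\mu$ is the electromagnetic potential and $\Gamma_\mu$ the gravitational connection:)

$$ i\gamma^\mu \partial_\mu \psi - m\psi = \left( e\gamma^\mu A_\mu + \gamma^\mu \Gamma_\mu \right)\psi $$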

Don't worry about the details; it's the fact that the equation exists that matters, not its particular form. It's the  Dirac equation -- the two terms on the left are roughly the velocity of the electron and its inertia -- coupled to electromagnetism and gravity, the two terms on the right.

As far as every experiment ever done is concerned, this equation is the correct description of how electrons behave at everyday energies. It's not a complete description; we haven't included the weak nuclear force, or couplings to hypothetical particles like the Higgs boson. But that's okay, since those are only important at high energies and/or short distances, very far from the regime of relevance to the human brain.

If you believe in an immaterial soul that interacts with our bodies, you need to believe that this equation is not right, even at everyday energies. There needs to be a new term (at minimum) on the right, representing how the soul interacts with electrons. (If that term doesn't exist, electrons will just go on their way as if there weren't any soul at all, and then what's the point?) So any respectable scientist who took this idea seriously would be asking -- what form does that interaction take? Is it local in spacetime? Does the soul respect gauge invariance and Lorentz invariance? Does the soul have a Hamiltonian? Do the interactions preserve unitarity and conservation of information?

Nobody ever asks these questions out loud, possibly because of how silly they sound. Once you start asking them, the choice you are faced with becomes clear: either overthrow everything we think we have learned about modern physics, or distrust the stew of religious accounts/unreliable testimony/wishful thinking that makes people believe in the possibility of life after death. It's not a difficult decision, as scientific theory-choice goes.

We don't choose theories in a vacuum. We are allowed -- indeed, required -- to ask how claims about how the world works fit in with other things we know about how the world works. I've been talking here like a particle physicist, but there's an analogous line of reasoning that would come from evolutionary biology. Presumably amino acids and proteins don't have souls that persist after death. What about viruses or bacteria? Where along the chain of evolution from our monocellular ancestors to today did organisms stop being described purely as atoms interacting through gravity and electromagnetism, and develop an immaterial immortal soul?

There's no reason to be agnostic about ideas that are dramatically incompatible with everything we know about modern science. Once we get over any reluctance to face reality on this issue, we can get down to the much more interesting questions of how human beings and consciousness really work.

Sean Carroll is a physicist and author. He received his Ph.D. from Harvard in 1993, and is now on the faculty at the California Institute of Technology, where his research focuses on fundamental physics and cosmology. Carroll is the author of From Eternity to Here: The Quest for the Ultimate Theory of Time, and Spacetime and Geometry: An Introduction to General Relativity. He has written for Discover, Scientific American, New Scientist, and other publications. His blog Cosmic Variance is hosted by Discover magazine, and he has been featured on television shows such as The Colbert Report, National Geographic's Known Universe, and Through the Wormhole with Morgan Freeman. His Twitter handle is  @seanmcarroll
Cross-posted on Cosmic Variance.
The views expressed are those of the author and are not necessarily those of Scientific American.

Source  Scientific American

Friday, May 20, 2011

Rapture: Why do people love doomsday predictions?

People always tell you to live life like there's no tomorrow, and for once I'm considering following that advice - literally.

With a massive deadline looming on Monday, I was planning on spending most of the weekend working. But according to the evangelical preacher Harold Camping, the world is going to end tomorrow, so if today is going to be my last day on Earth, I think I'd like to spend tonight doing something a little more fun - especially if there's no all-day hangover awaiting me on the other side.

According to Camping, who is basing his prediction on a mathematical calculation using dates in the Bible, tomorrow - 21 May 2011 - is the Rapture. The day when Christians will rise up to meet Jesus in the sky.
It might sound silly if you don't believe in God, but according to a Pew Research Center poll, 41 per cent of people in the US believe Jesus will return to Earth before 2050. According to a New York news website, some of Camping's thousands of supporters have sold their belongings and quit their jobs in anticipation.

So why are people so keen to predict the end times? Sceptics might see it as a way to make money, or an attention-seeking ploy. Camping has certainly made a name for himself and his radio station. But according to Lorenzo DiTommaso, associate professor of religion at Concordia University in Montreal, Canada, these people have a genuine belief: "It wouldn't work otherwise," he says.

What's more, these kinds of apocalyptic prophecies have been around for 23 centuries, ever since the Book of Daniel, he says, so they must be more than a media ploy.
One theory is that such precise predictions feed the human desire to know the unknown. It could simply be a way of trying to explain the world around us, or to give us hope, says DiTommaso: "Within its limitations, apocalypticism is very rational. It's a world view that explains time, space, and human existence. It's not science - it's not universal or repeatable - but it does explain things."

DiTommaso also says that sociological studies have shown that people who tend to enjoy an apocalyptic world view also seem to be the kinds of people who seek out explanations of the world: "They tend to be quite intelligent compared with the general population but they are looking for answers for how life is the way it is, and whether there is a purpose. Envisioning a better time past the evils of the world provides a very powerful way of understanding the world and all its problems." Surprising as it may sound, even Isaac Newton spent a great deal of his career trying to decipher the prophecies of the Book of Daniel and the Book of Revelation.
So, what's the likelihood Camping is right? If I'm going to base my weekend plans on his track record, I should probably keep my head down and work. He predicted the end of the world in 1994, but that one was postponed due to a scheduling error (turns out he got the mathematics wrong).

It may seem odd that people don't dismiss Camping, considering he got it wrong last time. But psychological studies show how the failure of such prophecies has the surprising effect of making their proponents' beliefs even stronger.
In their 1956 book When Prophecy Fails: A social and psychological study of a modern group that predicted the destruction of the world, Leon Festinger and others explained that this is a fundamental tenet of human psychology, which they called cognitive dissonance.

Essentially, the social psychologists said that people have a problem when they have two beliefs that sit uncomfortably side by side. For example, the belief that the world will end (for which you have sold your home and all your possessions bar a placard) and the realisation that the world is still here, as are you. In an article written for New Scientist, psychologist Richard Wiseman says: "According to this idea, people find it uncomfortable to hold two conflicting beliefs in their head at the same time, and will perform all sorts of mental gymnastics to reconcile the two."

Because people can't deal with having two contradictory beliefs, they will quickly find a seemingly rational explanation - that the calculations were wrong, for example, or that their preaching converted so many people that the world was saved.
There are other scientific explanations of why we prefer to stick to old beliefs, even in the face of new facts. For example, the principle of confirmation bias shows how we seek out information that supports our beliefs.
It seems then, that if the rapture doesn't happen Camping will be able to explain why, and his followers will likely believe even more strongly than they did before.

Explaining why the mathematics doesn't add up shouldn't be too hard, either, says DiTommaso: "The calculations depend on a lot of variables including a lot of data that can't be verified, like the fact that the world was created a little over 7000 years ago. You can really massage your figures any way you see fit."
If, like me, you're hedging your bets, why not just join in one of the Rapture parties: but make sure you buy some paracetamol for the morning, just in case.

Source New Scientist

Thursday, May 19, 2011

How you think about death may affect how you act

How you think about death affects how you behave in life.

That's the conclusion of a new study which will be published in an upcoming issue of Psychological Science, a journal of the Association for Psychological Science. Researchers had people either think about death in the abstract or in a specific, personal way and found that people who thought specifically about their own death were more likely to demonstrate concern for society by donating blood.

Laura E.R. Blackie, a Ph.D. student at the University of Essex, and her advisor, Philip J. Cozzolino, recruited 90 people in a British town center. Some were asked to respond to general questions about death – such as their thoughts and feelings about death and what they think will happen to them when they die. Others were asked to imagine dying in an apartment fire and then asked four questions about how they thought they would deal with the experience and how they thought their family would react. A control group thought about dental pain.

Next, the participants were given an article, supposedly from the BBC, about blood donations. Some people read an article saying that blood donations were "at record highs" and the need was low; others read another article reporting the opposite – that donations were "at record lows" and the need was high. They were then offered a pamphlet guaranteeing fast registration at a blood center that day and told they should only take a pamphlet if they intended to donate.

People who thought about death in the abstract were motivated by the story about the blood shortage. They were more likely to take a pamphlet if they read that article. But people who thought about their own death were likely to take a pamphlet regardless of which article they read; their willingness to donate blood didn't seem to depend on how badly it was needed.

"Death is a very powerful motivation," Blackie says. "People seem aware that their life is limited. That can be one of the best gifts that we have in life, motivating us to embrace life and embrace goals that are important to us." When people think about death abstractly, they may be more likely to fear it, while thinking specifically about your own death "enables people to integrate the idea of death into their lives more fully," she says. Thinking about their mortality in a more personal and authentic manner may make people more willing to act for the benefit of others, the researchers suggest.

Source EurekaAlert!