Tuesday, May 31, 2011

Carnegie Mellon researchers uncover how the brain processes faces

Groundbreaking study identifies neural system responsible for face recognition.

Stimuli were matched with respect to low-level properties, external features and high-level characteristics.
 
Each time you see a person that you know, your brain rapidly and seemingly effortlessly recognizes that person by his or her face.
Until now, scientists believed that only a couple of brain areas mediate facial recognition. However, Carnegie Mellon University's Marlene Behrmann, David Plaut and Adrian Nestor have discovered that an entire network of cortical areas work together to identify faces. Published in the current issue of the Proceedings of the National Academy of Sciences (PNAS), their findings will change the future of neural visual perception research and allow scientists to use this discovery to develop targeted remedies for disorders such as face blindness.

"This research will change the types of questions asked going forward because we are not just looking at one area of the brain," said Nestor, a postdoctoral research fellow within CMU's Department of Psychology and lead author of the study. "Now, scientists will have to account for the system as a whole or else our ability to understand face individuation will be limited."
Behrmann, professor of psychology and a renowned expert in using brain imaging to study prosopagnosia, or face blindness, agreed.

"Faces are among the most compelling visual stimulation that we encounter, and recognizing faces taxes our visual perception system to the hilt. Carnegie Mellon has a longstanding history for embracing a full-system account of the brain. We have the computational tools and technology to push further into looking past one single brain region. And, that is what we did here to discover that there are multiple cortical areas working together to recognize faces," she said.

For the study, participants were shown images of faces while in a magnetic resonance imaging (MRI) scanner. Their task was to recognize different facial identities with varying facial expressions. Using dynamic multivariate mapping, the research team examined the functional MRI (fMRI) data and found a network of fusiform and anterior temporal regions that respond with distinct patterns to different identities. Furthermore, they found that the information is evenly distributed among the anterior regions and that the right fusiform region plays a central role within the network.
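The press release does not spell out the analysis pipeline, but the general logic of this kind of multivariate analysis can be sketched: within a candidate region, a classifier is trained to tell facial identities apart from the pattern of voxel responses, and above-chance cross-validated accuracy is taken as evidence that the region carries identity information. The toy example below (synthetic data and scikit-learn; not the authors' code or data, and all numbers are made up) illustrates only that logic.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_identities = 4          # hypothetical number of face identities
trials_per_identity = 30  # hypothetical number of scans per identity
n_voxels = 50             # voxels in one candidate region

# Synthetic "fMRI" patterns: each identity gets its own weak mean pattern
# plus noise, mimicking a region whose multivoxel response differs by identity.
means = rng.normal(0.0, 0.5, size=(n_identities, n_voxels))
X = np.vstack([
    means[i] + rng.normal(0.0, 1.0, size=(trials_per_identity, n_voxels))
    for i in range(n_identities)
])
y = np.repeat(np.arange(n_identities), trials_per_identity)

# Cross-validated decoding accuracy; well above chance (1/4 here) is taken as
# evidence that the region's response pattern discriminates facial identities.
accuracy = cross_val_score(LinearSVC(dual=False), X, y, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f} (chance = {1 / n_identities:.2f})")
```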

"Not only do we have a more clearly defined architectural model of the brain, but we were able to determine the involvement of multiple brain areas in face recognition as well as in other types of processes, such as visual word recognition," Behrmann said.

Source EurekAlert!

Team solves decades-old molecular mystery linked to blood clotting

CHAMPAIGN, Ill. — Blood clotting is a complicated business, particularly for those trying to understand how the body responds to injury. In a new study, researchers report that they are the first to describe in atomic detail a chemical interaction that is vital to blood clotting. This interaction – between a clotting factor and a cell membrane – has baffled scientists for decades.

In the researchers' supercomputer simulation of the blood-clotting factor interacting with the membrane, the GLA domain of the clotting factor is depicted as a purple tube; individual GLA amino acids are yellow; tightly bound calcium ions are pink spheres; and the interacting phospholipids that make up the membrane sit beneath the protein.

The study appears online in the Journal of Biological Chemistry.
“For decades, people have known that blood-clotting proteins have to bind to a cell membrane in order for the clotting reaction to happen,” said University of Illinois biochemistry professor James Morrissey, who led the study with chemistry professor Chad Rienstra and biochemistry, biophysics and pharmacology professor Emad Tajkhorshid. “If you take clotting factors off the membrane, they’re thousands of times less active.”
The researchers combined laboratory detective work with supercomputer simulations and solid-state nuclear magnetic resonance (SSNMR) to get at the problem from every angle. They also made use of tiny rafts of lipid membranes called nanodiscs, using an approach developed at Illinois by biochemistry professor Stephen Sligar.

Previous studies had shown that each clotting factor contains a region, called the GLA domain, which interacts with specific lipids in cell membranes to start the cascade of chemical reactions that drive blood clotting.
One study, published in 2003 in the journal Nature Structural Biology, indicated that the GLA domain binds to a special phospholipid, phosphatidylserine (PS), which is embedded in the membrane. Other studies had shown that PS binds weakly to the clotting factor on its own, but in the presence of another phospholipid, phosphatidylethanolamine (PE), the interaction is much stronger.

Both PS and PE are abundant in the inner – but not the outer – leaflets of the double-layered membranes of cells. This keeps these lipids from coming into contact with clotting factors in the blood. But any injury that ruptures the cells brings PS and PE together with the clotting factors, initiating a chain of events that leads to blood clotting.
Researchers have developed many hypotheses to explain why clotting factors bind most readily to PS when PE is present. But none of these could fully explain the data.
In the new study, Morrissey’s lab engineered nanodiscs with high concentrations of PS and PE, and conducted functional tests to determine if they responded like normal membranes.
“We found that the nanodisc actually is very representative of what really happens in the cell in terms of the reaction of the lipids and the role that they play,” Morrissey said.

Then Tajkhorshid’s lab used advanced modeling and simulation methods to position every atom in the system and simulated the molecular interactions on a supercomputer. The simulations indicated that one PS molecule was linking directly to the GLA domain of the clotting factor via an amino acid (serine) on its head-group (the non-oily region of a phospholipid that orients toward the membrane surface).
More surprisingly, the simulations indicated that six other phospholipids also were drawing close to the GLA domain. These lipids, however, were bending their head-groups out of the way so that their phosphates, which are negatively charged, could interact with positively charged calcium ions associated with the GLA domain.
“The simulations were a breakthrough for us,” Morrissey said. “They provided a detailed view of how things might come together during membrane binding of coagulation factors. But these predictions had to be tested experimentally.”

Rienstra’s lab then analyzed the samples using SSNMR, a technique that allows researchers to precisely measure the distances and angles between individual atoms in large molecules or groups of interacting molecules. His group found that one of every six or seven PS molecules was binding directly to the clotting factor, providing strong experimental support for the model derived from the simulations.
“That turned out to be a key insight that we contributed to this study,” Rienstra said.
The team reasoned that if the PE head-groups were simply bending out of the way, then any phospholipid with a sufficiently small head-group should work as well as PE in the presence of PS. This also explained why only one PS molecule was actually binding to a GLA domain. The other phospholipids nearby were also interacting with the clotting factor, but more weakly.
The finding explained another mystery that had long daunted researchers. A different type of membrane lipid, phosphatidylcholine (PC), which has a very large head-group and is most abundant on the outer surface of cells, was known to block any association between the membrane and the clotting factor, even in the presence of PS.

Follow-up experiments showed that any phospholipid but PC enhanced the binding of PS to the GLA domain. This led to the “ABC” hypothesis: when PS is present, the GLA domain will interact with “Anything But Choline.”
“This is the first real insight at an atomic level of how most of the blood-clotting proteins interact with membranes, an interaction that’s known to be essential to blood clotting,” Morrissey said. The findings offer new targets for the development of drugs to regulate blood clotting, he said.
Morrissey and Tajkhorshid have their primary appointments in the U. of I. College of Medicine. Tajkhorshid also is an affiliate of the Beckman Institute at Illinois.
The National Heart, Lung and Blood Institute and the National Institute for General Medical Sciences provided funding for this study.

Source University of Illinois

Sleep loss lowers testosterone in healthy young men

Cutting back on sleep drastically reduces a healthy young man's testosterone levels, according to a study published in the June 1 issue of the Journal of the American Medical Association (JAMA).
Eve Van Cauter, PhD, professor in medicine and director of the study, found that men who slept less than five hours a night for one week in a laboratory had significantly lower levels of testosterone than when they had a full night's sleep. Low testosterone has a host of negative consequences for young men, and not just in sexual behavior and reproduction: the hormone is also critical for building strength, muscle mass and bone density.
"Low testosterone levels are associated with reduced well being and vigor, which may also occur as a consequence of sleep loss" said Van Cauter.

At least 15% of the adult working population in the US gets less than 5 hours of sleep a night, and suffers many adverse health effects because of it. This study found that skipping sleep reduces a young man's testosterone levels by the same amount as aging 10 to 15 years.
"As research progresses, low sleep duration and poor sleep quality are increasingly recognized as endocrine disruptors," Van Cauter said.

The ten young men in the study were recruited from around the University of Chicago campus. They passed a rigorous battery of tests to screen for endocrine or psychiatric disorders and sleep problems. They were an average of 24 years old, lean and in good health.
For the study, they spent three nights in the laboratory sleeping for up to ten hours, and then eight nights sleeping less than five hours. Their blood was sampled every 15 to 30 minutes for 24 hours during the last day of the ten-hour sleep phase and the last day of the five-hour sleep phase.

The effects of sleep loss on testosterone levels were apparent after just one week of short sleep. Five hours of sleep decreased their testosterone levels by 10% to 15%. The young men had the lowest testosterone levels in the afternoons on their sleep-restricted days, between 2 pm and 10 pm.
The young men also self-reported their mood and vigor levels throughout the study. They reported a decline in their sense of well-being as their blood testosterone levels declined. Their mood and vigor fell more every day as the sleep restriction part of the study progressed.
Testosterone levels in men decline by 1% to 2% a year as they age. Testosterone deficiency is associated with low energy, reduced libido, poor concentration, and fatigue.

Source EurekAlert!

Code green: Energy-efficient programming to curb computers’ power use

Soaring energy consumption by ever more powerful computers, data centers and mobile devices has many experts looking to reduce the energy use of these devices. Most projects so far focus on more efficient cooling systems or energy-saving power modes.
A University of Washington project sees a role for programmers to reduce the energy appetite of the ones and zeroes in the code itself. Researchers have created a system, called EnerJ, that reduces energy consumption in simulations by up to 50 percent, and has the potential to cut energy by as much as 90 percent. They will present  the research next week in San Jose at the Programming Language Design and Implementation annual meeting.

“We all know that energy consumption is a big problem,” said author Luis Ceze, a UW assistant professor of computer science and engineering. “With our system, mobile phone users would notice either a smaller phone, or a longer battery life, or both. Computing centers would notice a lower energy bill.”
The basic idea is to take advantage of processes that can survive tiny errors that happen when, say, voltage is decreased or correctness checks are relaxed. Some examples of possible applications are streaming audio and video, games and real-time image recognition for augmented-reality applications on mobile devices.
“Image recognition already needs to be tolerant of little problems, like a speck of dust on the screen,” said co-author Adrian Sampson, a UW doctoral student in computer science and engineering. “If we introduce a few more dots on the image because of errors, the algorithm should still work correctly, and we can save energy.”

The UW system is a general framework that creates two interlocking pieces of code. One is the precise part – for instance, the encryption on your bank account’s password. The other portion is for all the processes that could survive occasional slipups.
The software creates an impenetrable barrier between the two pieces.
“We make it impossible to leak data from the approximate part into the precise part,” Sampson said. “You’re completely guaranteed that can’t happen.”
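In the published EnerJ design this split is expressed with Java type qualifiers that mark data as approximate or precise, plus an explicit endorsement step for the rare cases where an approximate value must be promoted. The sketch below is not EnerJ code; it is a rough conceptual illustration in Python, with hypothetical names and a made-up error model, of the information-flow rule described above: approximate data can be computed cheaply and sloppily, but it cannot reach the precise part without a deliberate endorsement.

```python
import random

class Approx:
    """Wrapper for data allowed to live in the 'approximate' part of a program.

    Reads occasionally return a slightly wrong value, standing in for the
    errors a low-voltage memory or ALU might introduce (hypothetical error model).
    """
    ERROR_RATE = 1e-5  # roughly "one error every 100,000 operations"

    def __init__(self, value):
        self._value = value

    def read(self):
        v = self._value
        if isinstance(v, int) and random.random() < self.ERROR_RATE:
            v ^= 1 << random.randrange(8)   # flip a low-order bit
        return v

def endorse(x):
    """Explicitly move an approximate value into the precise world.

    The point is that nothing flows from approximate to precise without a
    visible, programmer-sanctioned operation like this one.
    """
    return x.read() if isinstance(x, Approx) else x

def store_password_hash(h):
    """Precise code path: refuses approximate inputs outright."""
    if isinstance(h, Approx):
        raise TypeError("approximate data may not flow into precise code")
    return h

# Approximate path: pixel math that tolerates the occasional stray error.
pixels = [Approx(p) for p in range(256)]
dimmed = [p.read() // 2 for p in pixels]          # fine if a few values are off

# Precise path: must be exact, so an Approx value is rejected...
try:
    store_password_hash(Approx(0xDEADBEEF))
except TypeError as e:
    print("blocked:", e)

# ...unless the programmer explicitly endorses it.
store_password_hash(endorse(Approx(0xDEADBEEF)))
```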
While computers’ energy use is frustrating and expensive, there is also a more fundamental issue at stake. Some experts believe we are approaching a limit on the number of transistors that can run on a single microchip. The so-called “dark silicon problem” says that as we boost computer speeds by cramming more transistors onto each chip, there may no longer be any way to supply enough power to the chip to run all the transistors.

The UW team’s approach would work like a dimmer switch, letting some transistors run at a lower voltage. Approximate tasks could run on the dimmer regions of the chip.
“When I started thinking about this, it became more and more obvious that this could be applied, at least a little bit, to almost everything,” Sampson said. “It seemed like I was always finding new places where it could be applied, at least in a limited way.”
Researchers would use the program with a new type of hardware where some transistors have a lower voltage, the force on electrons in the circuit. This slightly increases the risk of random errors; EnerJ shuttles only approximate tasks to these transistors.
“If you can afford one error every 100,000 operations or so, you can already save a lot of energy,” Ceze said.
Other hardware-based ways to save energy include lowering the refresh rate and reducing the voltage of memory chips.

Simulations of such hardware show that running EnerJ would cut energy by about 20 to 25 percent, on average, depending on the aggressiveness of the approach. For one program the energy saved was almost 50 percent. Researchers are now designing hardware to test their results in the lab.
Today’s computers could also use EnerJ with a purely software-based approach. For example, the computer could round off numbers or skip some extra accuracy checks on the approximate part of the code to save energy – researchers estimate between 30 and 50 percent savings based on software alone.
Combining the software and hardware methods, they believe they could cut power use by about 90 percent.
“Our long-term goal would be 10 times improvement in battery life,” Ceze said. “I don’t think it is totally out of the question to have an order of magnitude reduction if we continue squeezing unnecessary accuracy.”
The program is called EnerJ because it is an extension for the Java programming language. The team hopes to release the code as an open-source tool this summer.

Co-authors of the paper are UW computer science and engineering professor Dan Grossman, postdoctoral researcher Werner Dietl, graduate student Emily Fortuna and undergraduate Danushen Gnanapragasam. Also involved in the research is doctoral student Hadi Esmaeilzadeh.

By Hannah Hickey 

Why childhood obesity? It's so much more than what kids eat

URBANA – University of Illinois scientists from a variety of disciplines have teamed up to examine the factors that contribute to childhood obesity. Why? Because individual researchers have found that the problem is too complicated for any of them to tackle alone.

"Our Strong Kids team members are looking at such diverse factors as genetic predisposition, the effect of breastfeeding, how much TV a child watches, and the neighborhood he lives in, among many others," said Kristen Harrison of the U of I's Division of Nutritional Sciences. "It seems like the answer should be simple, just eat less and exercise more, but when you look at the reasons that kids overeat and burn fewer calories, it turns out there are a lot of them."

Harrison and other Strong Kids team members received funding for a three-year longitudinal study and are applying for support to keep the research going. The scientists have collected and analyzed two generations of data on approximately 400 families, and they are beginning a third wave of data collection. Individual studies, including communication professor Harrison's own examination of preschoolers' television viewing and eating habits, are ongoing.

But the first step was developing a model for studying the problem. The team's Six Cs model will examine the problem of childhood obesity from the following angles: cell, child, clan (or family), community, country and culture. A paper detailing their approach appeared in a recent issue of Child Development Perspectives.
"From 30 to 40 percent of the population has a variety of genetic markers that puts them at greater risk for obesity," said professor of nutrition Margarita Teran-Garcia, who is approaching the problem at the cellular level. As a starting point, she is taking saliva samples from preschoolers in the study group to map their genetic susceptibility to obesity.

Child development professor Kelly Bost is looking at the quality of parent-child attachment. "There's evidence that insecure attachment predicts more TV exposure, more consumption of unhealthful foods, and other factors leading to greater obesity," she said.
Kinesiology and community health professor Diana Grigsby-Toussaint is geomapping retail environments in the neighborhoods where the participating families live, looking in detail at what foods are available there. "She's also mapping how much green space is available and how that relates to outdoor play and activity," Harrison said.

Later work will add more puzzle pieces relating to the community and culture components. For example, what's the community BMI and do participants in the study believe that BMI is normal? What's the usual portion size in this culture? Are children urged to take second and third helpings at mealtime?
"Southern U.S. culture, Latin American culture, and the Sam's Club bulk-buying phenomenon are all elements of what we're trying to capture when we talk about culture," Harrison said.
And professor of applied family studies Angela Wiley is collecting data relating to childhood obesity prevention among Mexican immigrant families in the Abriendos-Caminos program so the researchers can compare parallel populations across countries.

"Childhood obesity is a puzzle, and at different stages, certain variables drop in or out of the picture. Breastfeeding versus formula feeding is a predictor, but it drops out of the model entirely when you get past babyhood. Vending machines in schools are important later in a child's life, but they weren't important before," she added.

There has been very little transdisciplinary effort to map out how all these factors work together, although research shows that no single factor is the most important, Harrison noted.
"We're each looking at different spheres in the model, but we're also looking at potential interactions. That's one of the exciting things we'll get to do as we move forward," she said.

Source EurekAlert!

Monday, May 30, 2011

Arrowing in on Alzheimer's disease

Recently the number of genes known to be associated with Alzheimer's disease has increased from four to eight, including the MS4A gene cluster on chromosome 11.

New research published in BioMed Central's open access journal Genome Medicine has expanded on this using a genome-wide association study (GWAS) to find a novel location within the MS4A gene cluster which is associated with Alzheimer's disease.

Alzheimer's disease is the most common cause of dementia in the developed world. It irrevocably destroys cells in the brain that are responsible for intellectual ability and memory. Despite continued investigation, the causes of Alzheimer's disease are not yet fully understood but they are thought to be a mixture of genetic and environmental factors. Several studies have used GWAS to search the entire human genome for genes which are mutated in Alzheimer's sufferers in the hope of finding a way to treat or slow down the disease.

A team of researchers across Spain and the USA, sponsored by the non-profit Fundación Alzheimur (Comunidad Autónoma de la Región de Murcia) and Fundació ACE Institut Català de Neurociències Aplicades, performed their own GWAS using patients with Alzheimer's disease, and non-affected controls, from Spain and then combined their results with four public GWAS data sets. Dr Agustín Ruiz said, "Combining these data sets allowed us to look more accurately at small genetic defects. Using this technique we were able to confirm the presence of mutations (SNPs) known to be associated with Alzheimer's disease, including those within the MS4A cluster, and we also found a novel site."

Dr Ruiz continued, "Several of the 16 genes within the MS4A cluster are implicated in the activities of the immune system and are probably involved in allergies and autoimmune disease. MS4A2 in particular has been linked to aspirin-intolerant asthma. Our research provides new evidence for a role of the immune system in the progression of Alzheimer's disease."

Source EurekAlert!

Climate played big role in Vikings’ disappearance from Greenland

Greenland's early Viking settlers were subjected to a rapidly changing climate. Temperatures plunged several degrees in a span of decades, according to research from Brown University. A reconstruction of 5,600 years of climate history from lakes near the Norse settlement in western Greenland also shows how climate affected the Dorset and Saqqaq cultures. Results appear in Proceedings of the National Academy of Sciences.

PROVIDENCE, R.I. [Brown University] — The end of the Norse settlements on Greenland likely will remain shrouded in mystery. While there is scant written evidence of the colony’s demise in the 14th and early 15th centuries, archaeological remains can fill some of the blanks, but not all.

What climate scientists have been able to ascertain is that an extended cold snap, called the Little Ice Age, gripped Greenland beginning in the 1400s. This has been cited as a major cause of the Norse’s disappearance. Now researchers led by Brown University show the climate turned colder in an earlier span of several decades, setting in motion the end of the Greenland Norse. Their findings appear in Proceedings of the National Academy of Sciences.

The Brown scientists’ finding comes from the first reconstruction of 5,600 years of climate history from two lakes in Kangerlussuaq, near the Norse “Western Settlement.” Unlike ice cores taken from the Greenland ice sheet hundreds of miles inland, the new lake core measurements reflect air temperatures where the Vikings lived, as well as those experienced by the Saqqaq and the Dorset, Stone Age cultures that preceded them.
“This is the first quantitative temperature record from the area they were living in,” said William D’Andrea, the paper’s first author, who earned his doctorate in geological sciences at Brown and is now a postdoctoral researcher at the University of Massachusetts–Amherst. “So we can say there is a definite cooling trend in the region right before the Norse disappear.”

“The record shows how quickly temperature changed in the region and by how much,” said co-author Yongsong Huang, professor of geological sciences at Brown, principal investigator of the NSF-funded project, and D’Andrea’s Ph.D. adviser. “It is interesting to consider how rapid climate change may have impacted past societies, particularly in light of the rapid changes taking place today.”

D’Andrea points out that climate is not the only factor in the demise of the Norse Western Settlement. The Vikings’ sedentary lifestyle, reliance on agriculture and livestock for food, dependence on trade with Scandinavia and combative relations with the neighboring Inuit are all believed to be contributing factors.
Still, it appears that climate played a significant role. The Vikings arrived in Greenland in the 980s, establishing a string of small communities along Greenland’s west coast. (Another grouping of communities, called the “Eastern Settlement” also was located on the west coast but farther south on the island.) The arrival coincided with a time of relatively mild weather, similar to that in Greenland today. However, beginning around 1100, the climate began an 80-year period in which temperatures dropped 4 degrees Celsius (7 degrees Fahrenheit), the Brown scientists concluded from the lake readings. While that may not be considered precipitous, especially in the summer, the change could have ushered in a number of hazards, including shorter crop-growing seasons, less available food for livestock and more sea ice that may have blocked trade.
“You have an interval when the summers are long and balmy and you build up the size of your farm, and then suddenly year after year, you go into this cooling trend, and the summers are getting shorter and colder and you can’t make as much hay. You can imagine how that particular lifestyle may not be able to make it,” D’Andrea said.

Archaeological and written records show the Western Settlement persisted until sometime around the mid-1300s. The Eastern Settlement is believed to have vanished in the first two decades of the 1400s.
The researchers also examined how climate affected the Saqqaq and Dorset peoples. The Saqqaq arrived in Greenland around 2500 B.C. While there were warm and cold swings in temperature for centuries after their arrival, the climate took a turn for the bitter beginning roughly 850 B.C., the scientists found. “There is a major climate shift at this time,” D’Andrea said. “It seems that it’s not as much the speed of the cooling as the amplitude of the cooling. It gets much colder.”

The Saqqaq exit coincides with the arrival of the Dorset people, who were more accustomed to hunting from the sea ice that would have accumulated with the colder climate at the time. Yet by around 50 B.C., the Dorset culture was waning in western Greenland, despite its affinity for cold weather. “It is possible that it got so cold they left, but there has to be more to it than that,” D’Andrea said.

Contributing authors include Sherilyn Fritz from the University of Nebraska–Lincoln and N. John Anderson from Loughborough University in the United Kingdom. The National Science Foundation funded the work.

Source Brown University

Researchers solve mammoth evolutionary puzzle: The woollies weren't picky, happy to interbreed

A DNA-based study sheds new light on the complex evolutionary history of the woolly mammoth, suggesting it mated with a completely different and much larger species.

The research, which appears in BioMed Central's open access journal Genome Biology, found the woolly mammoth, which lived in the cold climate of the Arctic tundra, interbred with the Columbian mammoth, which preferred the more temperate regions of North America and was some 25 per cent larger.
"There is a real fascination with the history of mammoths, and this analysis helps to contextualize its evolution, migration and ecology" says Hendrik Poinar, associate professor and Canada Research Chair in the departments of Anthropology and Biology at McMaster University.

Poinar and his team at the McMaster Ancient DNA Centre, along with colleagues from the United States and France, meticulously sequenced the complete mitochondrial genomes of two Columbian mammoths, one found in the Huntington Reservoir in Utah, the other found near Rawlins, Wyoming. They compared these to the first complete mitochondrial genome of an endemic North American woolly mammoth.

"We are talking about two very physically different 'species' here. When glacial times got nasty, it was likely that woollies moved to more pleasant conditions of the south, where they came into contact with the Columbians at some point in their evolutionary history," he says. "You have roughly 1-million years of separation between the two, with the Columbian mammoth likely derived from an early migration into North American approximately 1.5-million years ago, and their woolly counterparts emigrating to North America some 400,000 years ago."

"We think we may be looking at a genetic hybrid," says Jacob Enk, a graduate student in the McMaster Ancient DNA Centre. "Living African elephant species hybridize where their ranges overlap, with the bigger species out-competing the smaller for mates. This results in mitochondrial genomes from the smaller species showing up in populations of the larger. Since woollies and Columbians overlapped in time and space, it's not unlikely that they engaged in similar behaviour and left a similar signal."
The samples used for the analyses date back approximately 12,000 years. All mammoths became extinct approximately 10,000 years ago except for small isolated populations on islands off the coast of Siberia and Alaska.

Source EurekAlert!

To Rest Easy, Forget the Sheep

Scientists know that many people inadvertently undermine their ability to fall asleep and stay asleep for a full night. Here are some frequent suggestions:

1. Establish a regular sleep schedule and try to stick to it, even on weekends.
2. If you nap during the day, limit it to 20 or 30 minutes, preferably early in the afternoon.
3. Avoid alcohol in the evening, as it can disrupt sleep.
4. Don’t eat a big meal just before bedtime, but don’t go to bed hungry, either. Eat a light snack before bed, if needed, preferably one high in carbohydrates.
5. If you use medications that are stimulants, take them in the morning, or ask your doctor if you can switch to a nonstimulating alternative. If you use drugs that cause drowsiness, take them in the evening.
6. Get regular exercise during the day, but avoid vigorous exercise within three hours of bedtime.
7. If pressing thoughts interfere with falling asleep, write them down (keep a pad and pen next to the bed) and try to forget about them until morning.
8. If you are frequently awakened by a need to use the bathroom, cut down on how much you drink late in the day.
9. If you smoke, quit. Among other hazards, nicotine is a stimulant and can cause nightmares.
10. Avoid beverages and foods containing caffeine after 3 p.m. Even decaf versions have some caffeine, which can bother some people.

Source The New York Times

A Good Night’s Sleep Isn’t a Luxury; It’s a Necessity

In my younger years, I regarded sleep as a necessary evil, nature’s way of thwarting my desire to cram as many activities into a 24-hour day as possible. I frequently flew the red-eye from California, for instance, sailing (or so I thought) through the next day on less than four hours of uncomfortable sleep.

But my neglect was costing me in ways that I did not fully appreciate. My husband called our nights at the ballet and theater “Jane’s most expensive naps.” Eventually we relinquished our subscriptions. Driving, too, was dicey: twice I fell asleep at the wheel, narrowly avoiding disaster. I realize now that I was living in a state of chronic sleep deprivation.

I don’t want to nod off during cultural events, and I no longer have my husband to spell me at the wheel. I also don’t want to compromise my ability to think and react. As research cited recently in this newspaper’s magazine found, “The sleep-deprived among us are lousy judges of our own sleep needs. We are not nearly as sharp as we think we are.”

Studies have shown that people function best after seven to eight hours of sleep, so I now aim for a solid seven hours, the amount associated with the lowest mortality rate. Yet on most nights something seems to interfere, keeping me up later than my intended lights-out at 10 p.m. — an essential household task, an e-mail requiring an urgent and thoughtful response, a condolence letter I never found time to write during the day, a long article that I must read.
It’s always something.

What’s Keeping Us Up?
I know I’m hardly alone. Between 1960 and 2010, the average night’s sleep for adults in the United States dropped to six and a half hours from more than eight. Some experts predict a continuing decline, thanks to distractions like e-mail, instant and text messaging, and online shopping.
Age can have a detrimental effect on sleep. In a 2005 national telephone survey of 1,003 adults ages 50 and older, the Gallup Organization found that a mere third of older adults got a good night’s sleep every day, fewer than half slept more than seven hours, and one-fifth slept less than six hours a night.
With advancing age, natural changes in sleep quality occur. People may take longer to fall asleep, and they tend to get sleepy earlier in the evening and to awaken earlier in the morning. More time is spent in the lighter stages of sleep and less in restorative deep sleep. R.E.M. sleep, during which the mind processes emotions and memories and relieves stress, also declines with age.

Habits that ruin sleep often accompany aging: less physical activity, less time spent outdoors (sunlight is the body’s main regulator of sleepiness and wakefulness), poorer attention to diet, taking medications that can disrupt sleep, caring for a chronically ill spouse, having a partner who snores. Some use alcohol in hopes of inducing sleep; in fact, it disrupts sleep.
Add to this list a host of sleep-robbing health issues, like painful arthritis, diabetes, depression, anxiety, sleep apnea, hot flashes in women and prostate enlargement in men. In the last years of his life, my husband was plagued with restless leg syndrome, forcing him to get up and walk around in the middle of the night until the symptoms subsided. During a recent night, I was awake for hours with leg cramps that simply wouldn’t quit.

Beauty Rest and Beyond 
A good night’s sleep is much more than a luxury. Its benefits include improvements in concentration, short-term memory, productivity, mood, sensitivity to pain and immune function.
If you care about how you look, more sleep can even make you appear more attractive. In a study published online in December in the journal BMJ, researchers in Sweden and the Netherlands reported that 23 sleep-deprived adults seemed to untrained observers to be less healthy, more tired and less attractive than they appeared to be after a full night’s sleep.
Perhaps more important, losing sleep may make you fat — or at least, fatter than you would otherwise be. In a study by Harvard researchers involving 68,000 middle-aged women followed for 16 years, those who slept five hours or less each night were found to weigh 5.4 pounds more — and were 15 percent more likely to become obese — than the women who slept seven hours nightly.

Michael Breus, a clinical psychologist and sleep specialist in Scottsdale, Ariz., and author of “The Sleep Doctor’s Diet Plan,” points out that as the average length of sleep has declined in the United States, the average weight of Americans has increased.
There are plausible reasons to think this is a cause-and-effect relationship. At least two factors may be involved: more waking hours in homes brimming with food and snacks; and possible changes in the hormones leptin and ghrelin, which regulate appetite.
In a study published in 2009 in The American Journal of Clinical Nutrition, Dr. Plamen D. Penev, an endocrinologist at the University of Chicago, and co-authors explored calorie consumption and expenditure by 11 healthy volunteers who spent two 14-day stays in a sleep laboratory. Both sessions offered unlimited access to tasty foods. During one stay, the volunteers — five women and six men — were limited to 5.5 hours of sleep a night, and during the other they got 8.5 hours of sleep.
Although the subjects ate the same amount of food at meals, during the shortened nights they consumed an average of 221 more calories from snacks than they did when they were getting more sleep. The snacks they ate tended to be high in carbohydrates, and the subjects expended no more energy than they did on the longer nights. In just two weeks, the extra nighttime snacking could add nearly a pound to body weight, the scientists concluded.

These researchers found no significant changes in the participants’ blood levels of the hormones leptin and ghrelin, but others have found that short sleepers have lower levels of appetite-suppressing leptin and higher levels of ghrelin, which prompts an increase in calorie intake.
Sleep loss may also affect the function of a group of neurons in the hypothalamus of the brain, where another hormone, orexin, is involved in the regulation of feeding behavior.
The bottom line: Resist the temptation to squeeze one more thing into the end of your day. If health problems disrupt your sleep, seek treatment that can lessen their effect. If you have trouble falling asleep or often awaken during the night and can’t get back to sleep, you could try taking supplements of melatonin, the body’s natural sleep inducer. I keep it at my bedside.
If you have trouble sleeping, the tips accompanying this article may help. And if all else fails, try to take a nap during the day. Naps can enhance brain function, energy, mood and productivity.
This is the second of two columns on sleep needs.

By JANE E. BRODY

Source The New York Times

Sunday, May 29, 2011

Virtual natural environments and benefits to health

A new position paper by researchers at the European Centre for the Environment and Human Health (ECEHH - part of the Peninsula College of Medicine and Dentistry) and the University of Birmingham has compared the benefits of interaction with actual and virtual natural environments. It concludes that accurate simulations are likely to be beneficial to those who cannot interact with nature because of infirmity or other limitations, but that virtual worlds are not a substitute for the real thing.
The paper includes details of an exciting project underway between the collaborating institutions to create virtual environments to help identify the clues and cues that we pick up when we spend time in nature.
The study is published in Environmental Science & Technology on 1st June 2011.
The paper discusses the potential for natural and virtual environments in promoting improved human health and wellbeing.

We have all felt the benefit of spending time in natural environments, especially when we are feeling stressed or upset. The researchers describe creating virtual environments to try to identify just how this happens. It may be that the colours, sounds, and smells of nature are all important, but to different extents, in helping to provide mental restoration and motivation to be physically active.

It was recognised that, while some studies have tried to explore this notion, much of the work is anecdotal or involves small-scale studies which often lack appropriate controls or statistical robustness. However, the researchers do identify some studies, such as those relating to Attention Restoration Theory, that are valuable.
Key to the research is an exploration of the studies that showed a direct relationship between interaction with the natural environment and improvements in health, and the potential such activity has for becoming adopted by health services around the world to the benefit of both patients and budgets. For example, a study in Philadelphia suggested that maintaining city parks could achieve yearly savings of approximately $69.4 million in health care costs.

Programmes such as the Green Gym and the Blue Gym, which promote, facilitate and encourage activity in the natural environment, are already laying the groundwork for workable programmes that could be adopted throughout the world to the benefit of human health. Research teams from the ECEHH are currently undertaking a range of studies to analyse the effects of interaction with the natural environment on health, which in turn could allow prescribing clinicians to treat patients with natural environment activity alone or in conjunction with reduced pharmaceutical treatment. The beneficial effect on national health service drug bills around the world could be immense, and such an approach would also help reduce the release of toxic pharmaceutical residues contained in sewage into our ecosystems.
The paper also examines how step-change developments in the technology used in computer-generated forms of reality mean that the software and hardware required to access increasingly accurate simulated natural environments are more readily available to the general public than ever before.
In addition to recognising the value of better technology – which includes the ability to synthesise smells - the review also recognised that key to the success of virtual environments is the design of appropriate and effective content based on knowledge of human behaviour.

Teams from the ECEHH and colleagues from the University of Birmingham, which include joint authors of the paper, have constructed the first two virtual restorative environments to support their experimental studies. This pilot study is based on the South Devon Coastal Path and Burrator Reservoir located within Dartmoor National Park, both within a short distance of the urban conurbation of Plymouth (UK).
Both natural environments are being recreated using Unity, a powerful game and interactive media development tool.
The research team is attempting to achieve a close match between the virtual and the real by importing Digital Terrain Model (DTM) data and aerial photographs into the Unity toolkit and combining this with natural features and manmade artefacts including wild flowers, trees, hedgerows, fences, seating benches and buildings. High-quality digital oceanic, coastal and birdsong sounds are also incorporated.
The pilot study, part of a Virtual Restorative Environment Therapy (VRET) initiative, is also supporting efforts to establish how psychological and physiological measurement can be used as part of a real-time biofeedback system to link participants' arousal levels to features such as cloud cover, weather, wave strengths, ambient sounds and smells.
Professor Michael Depledge, Chair of Environment and Human Health at the ECEHH, commented: "Virtual environments could benefit the elderly or infirm within their homes or care units, and can be deployed within defence medical establishments to benefit those with physical and psychological trauma following operations in conflict zones. Looking ahead, the wellbeing of others removed from nature, such as submariners and astronauts confined for several months in their crafts, might also be enhanced. Once our research has been conducted and the appropriate software written, artificial environments are likely to become readily affordable and of widespread use to health services."

He added: "However, we would not wish for the availability of virtual environments to become a substitute for the real thing in instances where accessibility to the real world is achievable. Our ongoing research with both the Green Gym and the Blue Gym initiatives aims to make these options a valid and straightforward choice for the majority of the population."
Professor Bob Stone, Chair of Interactive Multimedia Systems at the University of Birmingham, and lead investigator, said: "This technology could be made available to anyone who, for whatever reason, is in hospital, bed-bound or cannot get outside. They will be able to get the benefits of the countryside and seaside by viewing the virtual scenario on screen.

"Patients will be free to choose areas that they want to spend time in; they can take a walk along coastal footpaths, sit on a beach, listen to the waves and birdsong, watch the sun go down and - in due course - even experience the smells of the land- and seascapes almost as if they were experiencing the outdoors for real."
Professor Stone continued: "We are keen to understand what effect our virtual environments have on patients and will be carrying out further studies into arousal levels and reaction. In the summer we will start to test this on a large number of people so that we can measure biofeedback and make any changes or improvements to the scenario we have chosen."

Source EurekAlert!

Why does flu trigger asthma?

Study suggests new therapeutic targets for virally-induced asthma attacks

Boston, Mass. - When children with asthma get the flu, they often land in the hospital gasping for air. Researchers at Children's Hospital Boston have found a previously unknown biological pathway explaining why influenza induces asthma attacks. Studies in a mouse model, published online May 29 by the journal Nature Immunology, reveal that influenza activates a newly recognized group of immune cells called natural helper cells – presenting a completely new set of drug targets for asthma.

If activation of these cells, or their asthma-inducing secretions, could be blocked, asthmatic children could be more effectively protected when they get the flu and possibly other viral infections, says senior investigator Dale Umetsu, MD, PhD, of Children's Division of Immunology.
Although most asthma is allergic in nature, attacks triggered by viral infection tend to be what put children in the hospital, reflecting the fact that this type of asthma isn't well controlled by existing drugs.
"Virtually 100 percent of asthmatics get worse with a viral infection," says Umetsu. "We really didn't know how that happened, but now we have an explanation, at least for influenza."

Natural helper cells were discovered only very recently, in the intestines, and are recognized as playing a role in fighting parasitic worm infections as part of the innate immune system (our first line of immune defense).
"Since the lung is related to the gut – both are exposed to the environment – we asked if natural helper cells might also be in the lung and be important in asthma," Umetsu says.
Subsequent experiments, led by first authors Ya-Jen Chang, PhD, and Hye Young Kim, PhD, in Umetsu's lab, showed that the cells are indeed in the lung in a mouse model of influenza-induced asthma, but not in allergic asthma. The model showed that influenza A infection stimulates production of a compound called IL-33 that activates natural helper cells, which then secrete asthma-inducing compounds.

"Without these cells being activated, infection did not cause airway hyperreactivity, the cardinal feature of asthma," Umetsu says. "Now we can start to think of this pathway as a target – IL-33, the natural helper cell itself or the factors it produces."

Personalized medicine in asthma?
The study adds to a growing understanding of asthma as a collection of different processes, all causing airways to become twitchy and constricted. "In mouse models we're finding very distinct pathways," Umetsu says.

Most asthma-control drugs, such as inhaled corticosteroids, act on the best-known pathway, which involves immune cells known as TH2 cells, and which is important in allergic asthma. However, Umetsu's team showed in 2006 that a second group of cells, known as natural killer T-cells (NKT cells), are also important in asthma, and demonstrated their presence in the lungs of asthma patients. NKT cells, they showed, can function independently of TH2 cells, for example, when asthma is induced with ozone, a major component of air pollution. Compounds targeting NKT cells are now in preclinical development.
The recognition now of a third pathway for asthma, involving natural helper cells, may reflect the diversity of triggers for asthma seen in patients.

"Clinically, we knew there were different asthma triggers, but we thought there was only one pathway for asthma," Umetsu says, adding that all of the identified pathways can coexist in one person. "We need to understand the specific asthma pathways present in each individual with asthma and when they are triggered, so we can give the right treatment at the right time."

Source EurekAlert!

What is a laboratory mouse? Jackson, UNC researchers reveal the details

Bar Harbor, Maine -- Mice and humans share about 95 percent of their genes, and mice are recognized around the world as the leading experimental model for studying human biology and disease. But, says Jackson Laboratory Professor Gary Churchill, Ph.D., researchers can learn even more "now that we really know what a laboratory mouse is, genetically speaking."

Churchill and Fernando Pardo-Manuel de Villena, Ph.D., of the University of North Carolina, Chapel Hill, leading an international research team, created a genome-wide, high-resolution map of most of the inbred mouse strains used today. Their conclusion, published in Nature Genetics: Most of the mice in use today represent only limited genetic diversity, which could be significantly expanded with the addition of more wild mouse populations.

The current array of laboratory mouse strains is the result of more than 100 years of selective breeding. In the early 20th century, America's first mammalian geneticists, including Jackson Laboratory founder Clarence Cook Little, sought to understand the genetic processes that lead to cancer and other diseases. Mice were the natural experimental choice as they breed quickly and prolifically and are small and easy to keep.
Lacking the tools of molecular genetics, those early scientists started by tracking the inheritance of physical traits such as coat color. A valuable source of diverse-looking mouse populations were breeders of "fancy mice," a popular hobby in Victorian and Edwardian England and America as well as for centuries in Asia.
In their paper, Churchill and Pardo-Manuel de Villena report that "classical laboratory strains are derived from a few fancy mice with limited haplotype diversity." In contrast, strains that were derived from wild-caught mice "represent a deep reservoir of genetic diversity," they write.

The team created an online tool, the Mouse Phylogeny Viewer, for the research community to access complete genomic data on 162 mouse strains. "The viewer provides scientists with a visual tool where they can actually go and look at the genome of the mouse strains they are using or considering, compare the differences and similarities between strains and select the ones most likely to provide the basis for experimental results that can be more effectively extrapolated to the diverse human population," said Pardo-Manuel de Villena.

"As scientists use this resource to find ways to prevent and treat the genetic changes that cause cancer, heart disease, and a host of other ailments, the diversity of our lab experiments should be much easier to translate to humans," he noted.
Churchill and Pardo-Manuel de Villena have been working for almost a decade with collaborators around the world to expand the genetic diversity of the laboratory mouse. In 2004 they launched the Collaborative Cross, a project to interbreed eight different strains--five of the classic inbred strains and three wild-derived strains. In 2009 Churchill's lab started the Diversity Outbred mouse population with breeding stock selected from the Collaborative Cross project.

The research team estimates that the standard laboratory mouse strains carry about 12 million single nucleotide polymorphisms (SNPs), single-letter variations in the A, C, G or T bases of DNA. The Collaborative Cross mice deliver a whopping 45 million SNPs, as much as four times the genetic variation in the human population. "All these variants give us a lot more handles into understanding the genome," Churchill says.

"This work creates a remarkable foundation for understanding the genetics of the laboratory mouse, a critical model for studying human health," said James Anderson, Ph.D., who oversees bioinformatics grants at the National Institutes of Health. "Knowledge of the ancestry of the many strains of this invaluable model vertebrate will not only inform future experimentation but will allow a retrospective analysis of the huge amounts of data already collected."

Source EurekAlert!

Cross your arms to relieve pain

HURT your hand? You might find that crossing one arm over the other eases the pain.
Giandomenico Iannetti at University College London and colleagues gave 20 volunteers a series of painful "jabs" to the back of one of their hands using a laser, with each pulse lasting 8 to 12 seconds. In half of the experiments the group received the jabs while they laid their palms face down on a desk. In the other half they crossed their arms over one another on the desk. Volunteers rated the pain they felt on a scale from zero to 100.

Crossed wires in the brain relieve pain.

Volunteers with crossed hands rated three increasing pain intensities as less painful compared with when they kept their hands uncrossed (Pain, DOI: 10.1016/j.pain.2011.02.029).
Iannetti suggests that placing your hands in unfamiliar spatial positions relative to the body muddles the brain and disrupts the processing of the pain message. "You get this mismatch between your body's frame of reference and your external space frame of reference," he says. Similar pain-relieving effects have been reported before using illusions involving mirrors and virtual limbs.
Iannetti says his team hopes next to test the crossover trick in a clinical setting to see if it helps people suffering from chronic pain.

Source New Scientist

MDC Researchers Discover Key Molecule for Stem Cell Pluripotency

Researchers of the Max Delbrück Center for Molecular Medicine (MDC) Berlin-Buch have discovered what enables embryonic stem cells to differentiate into diverse cell types and thus to be pluripotent. This pluripotency depends on a specific molecule – E-cadherin – hitherto primarily known for its role in mediating cell-cell adhesion as a kind of “intercellular glue”. If E-cadherin is absent, the stem cells lose their pluripotency. The molecule also plays a crucial role in the reprogramming of somatic cells (body cells) into pluripotent stem cells (EMBO Reports, advance online publication 27 May 2011; doi:10.1038/embor.2011.88).

Dr. Daniel Besser, Prof. Walter Birchmeier and Torben Redmer from the MDC, a member of the Helmholtz Association, used mouse embryonic fibroblasts (MEFs) in their stem cell experiments. In a first step they showed that the pluripotency of embryonic stem cells is directly associated with the cell-adhesion molecule E-cadherin. If E-cadherin is absent, the stem cells lose their pluripotency.

In a second step the researchers investigated what happens when somatic cells that normally neither have E-cadherin nor are pluripotent are reprogrammed into a pluripotent stem cell state. In this reprogramming technique, somatic cells are converted into induced pluripotent stem cells (iPSCs). This new technique may help researchers avoid the controversies that come with the use of human embryos to produce human embryonic stem cells for research purposes.

The MDC researchers found that in contrast to the original cells, the new pluripotent cells derived from mouse connective tissue contained E-cadherin. “Thus, we have double proof that E-cadherin is directly associated with stem-cell pluripotency. E-cadherin is necessary for maintaining pluripotent stem cells and also for inducing the pluripotent state in the reprogramming of somatic cells,” Dr. Besser said. “If E-cadherin is absent, somatic cells cannot be reprogrammed into viable pluripotent cells.” In addition, E-cadherin can replace Oct4, one of the factors until now considered indispensable for reprogramming.

Next, the MDC researchers want to find out to what extent E-cadherin also regulates human embryonic stem cells. “Understanding the molecular relationships is essential for using human somatic cells to develop stem cell therapy for diseases such as heart attack, Alzheimer’s or Parkinson’s disease or diabetes,” Dr. Besser said.

Source MDC

Saturday, May 28, 2011

Mathematically ranking ranking methods

In a world where everything from placement in a Google search result to World Cup eligibility depends on ranking and numerical ratings of some kind, it is becoming increasingly important to analyze the algorithms and techniques that underlie such ranking methods in order to ensure fairness, eliminate bias, and tailor them to specific applications.

In a paper published this month in the SIAM Journal on Scientific Computing, authors Timothy Chartier, Erich Kreutzer, Amy Langville, and Kathryn Pedings mathematically analyze three commonly-used ranking methods.  “We studied the sensitivity and stability of three popular ranking methods: PageRank, which is the method Google has used to rank web pages, and the Colley and Massey methods, which have been used by the Bowl Championship Series to rank U.S. college football teams,” explains Langville.

All three methods analyzed – the Colley and Massey ranking techniques and the Markov method for ranking web pages, a generalized version of PageRank – are based on linear algebra and have simple, elegant formulations. Here, the authors apply a modified version of PageRank to a sports season.

“Both web page authors and teams sometimes try to game, or spam, ranking systems to achieve a higher ranking. For instance, web page authors try to modify their incoming and outgoing links while teams try to run up the score against weak opponents,” says Langville, pointing out the significance of studying such methods. “Mathematically, such spamming can be viewed as changes to the input data required by the ranking method.”
Most methods, including these three, produce “ratings” – numerical scores for each team that represent its playing ability. When sorted, the ratings yield integer “ranks,” which are simply an ordered listing of the teams based on their ratings.

In the first step of their analysis, the authors assume a simple rating scheme with constant difference of 1 in scores and apply it to a perfect sports season.  In a perfect season, each team plays every other team only once and there are no upset victories or losses. In such an ideal scenario, a highly-ranked team would always beat a lower-ranked team. Thus, in a system with teams numbered 1 through 4 for their ranks, team 1 would beat all other teams; team 2 would beat teams 3 and 4, and lose to 1; team 3 would beat team 4, losing to teams 1 and 2; and team 4 would lose to all other teams.  They then compute the output rating for each of the three methods and compare them to the input rating.
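To make that setup concrete, here is a small Python sketch – an illustration, not the authors' code – that builds the perfect four-team round robin described above, with every game decided by a single point, and solves the standard textbook formulations of the Colley and Massey linear systems (those formulations and the one-point margins are the only assumptions).

    import numpy as np

    # A perfect 4-team round robin: every higher-ranked team beats every
    # lower-ranked team, and every game is decided by exactly one point.
    teams = [1, 2, 3, 4]
    games = [(w, l) for w in teams for l in teams if w < l]   # (winner, loser)

    n = len(teams)
    wins, losses, point_diff = np.zeros(n), np.zeros(n), np.zeros(n)

    # Colley: (2 + games_i) * r_i - sum_j n_ij * r_j = 1 + (wins_i - losses_i) / 2
    C = 2 * np.eye(n)
    # Massey: M r = p, with M_ii = games played and M_ij = -(games between i and j)
    M = np.zeros((n, n))

    for w, l in games:
        i, j = w - 1, l - 1
        wins[i] += 1; losses[j] += 1
        point_diff[i] += 1; point_diff[j] -= 1        # constant one-point margin
        for A in (C, M):
            A[i, i] += 1; A[j, j] += 1
            A[i, j] -= 1; A[j, i] -= 1

    colley = np.linalg.solve(C, 1 + (wins - losses) / 2)

    # Massey's matrix is singular, so the usual fix is applied: replace the
    # last row with ones and force the ratings to sum to zero.
    M[-1, :] = 1
    p = point_diff.copy(); p[-1] = 0
    massey = np.linalg.solve(M, p)

    print("Colley:", np.round(colley, 3))   # evenly spaced: 0.75, 0.583, 0.417, 0.25
    print("Massey:", np.round(massey, 3))   # evenly spaced: 0.75, 0.25, -0.25, -0.75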

The three methods are applied to this ideal data, and all three recover the input ranking. However, while the Colley and Massey methods produce ratings that are uniformly spaced, as would be desirable in a rating system, the Markov method produces non-uniformly spaced ratings.
The authors then analyze the sensitivity of the methods to small perturbations and determine how much the ratings and rankings are affected by these changes. If, for instance, small changes in the input data cause large changes in the output ratings, the method is considered sensitive. Similar discrepancies between the input and output rankings would show instability of the ranking method.

The authors conclude that while the Colley and Massey methods are insensitive to small changes, the Markov (or PageRank) method is highly sensitive to them, often producing anomalies in the rankings. For instance, a single upset in an otherwise perfect season can rearrange the Markov rankings of every team, whereas the Colley and Massey methods respond in an isolated way, changing the rankings of only the two teams involved.
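To get a feel for that difference, the following toy sketch – again an illustration, not the authors' experiment – ranks the same four teams with a simple “losers vote for winners” Markov chain of the kind used in sports adaptations of PageRank, then flips a single game and re-ranks. The damping factor of 0.85 and the uniform treatment of an undefeated team are assumptions.

    import numpy as np

    def markov_ratings(games, n, damping=0.85):
        """A 'losers vote for winners' Markov chain, PageRank-style (one common
        sports adaptation; the details here are illustrative assumptions)."""
        S = np.zeros((n, n))               # S[j, i]: share of team i's vote going to j
        for w, l in games:
            S[w - 1, l - 1] += 1.0
        for i in range(n):
            total = S[:, i].sum()
            if total == 0:                 # an undefeated team votes uniformly
                S[:, i] = 1.0 / n
            else:
                S[:, i] /= total
        G = damping * S + (1 - damping) / n
        r = np.ones(n) / n
        for _ in range(1000):              # power iteration
            r = G @ r
        return r / r.sum()

    perfect = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
    upset   = [(1, 2), (1, 3), (4, 1), (2, 3), (2, 4), (3, 4)]   # team 4 upsets team 1

    print("perfect season:", np.round(markov_ratings(perfect, 4), 3))
    print("with one upset:", np.round(markov_ratings(upset, 4), 3))

Because the chain redistributes every team's probability mass through the whole win-loss graph, the single flipped game moves all four Markov ratings at once; in this round robin, the same upset changes only the right-hand-side entries for the two teams involved in the Colley and Massey systems, so only their ratings shift.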

In addition, the sensitivity of the PageRank or Markov method gets more pronounced further down in the rankings. “The PageRank vector is quite sensitive to small changes in the input data. Further, this sensitivity increases as the rank position increases,” Langville explains. “In other words, values in the tail (low-ranked positions) of the PageRank vector are extremely sensitive, which calls into question PageRank’s use to produce a full ranking, as opposed to simply a top-k ranking. It also partially explains PageRank’s susceptibility to spam. On the other hand, the Colley and Massey methods are stable throughout the entire ranking.”

PageRank has recently evolved from being used exclusively for web pages to ranking all sorts of entities, from species to social networks, reinforcing the ubiquity of such ranking systems.
But the stability the Colley and Massey methods display in this study suggests that these two methods, though originally conceived for sports, could be effective in ranking other entities as well, such as web pages and movies.

“As future work, we are exploring the use of the Colley and Massey methods in other settings beyond sports. For example, we have found that these two methods are more appropriate than PageRank for ranking in social networks such as Twitter,” says Langville.
While ranking methods can be applied to a wide range of areas, modifications are often required in order to translate a particular method to suit a specific application, making analyses of sensitivity and stability that much more important.

Source article:
Sensitivity and Stability of Ranking Vectors
Timothy P. Chartier, Erich Kreutzer, Amy N. Langville, and Kathryn E. Pedings
SIAM Journal on Scientific Computing, 33 (2011), pp. 1077–1102

Source siam

Hotspot in the hot seat

New seismic imaging alters the picture beneath Hawaii.

The Hawaiian archipelago, and its chain of active and extinct volcanoes, has long been viewed as a geological curiosity. While most volcanoes arise at the boundaries of shifting tectonic plates, the Hawaiian chain lies smack in the middle of the Pacific plate, nowhere near its borders.

Now a study by researchers at MIT and Purdue University, published this week in Science, paints an unexpected picture of what’s beneath Hawaii. Using a new imaging technique adapted from oil and gas exploration, MIT’s Robert van der Hilst and colleagues produced high-resolution images that peek hundreds of kilometers below the Earth’s surface.

They found a hotspot — but not where many scientists had thought it would be. Instead, the MIT team found evidence of hot mantle activity some 600 kilometers deep and 2,000 kilometers wide, in an area far west of the “Big Island” of Hawaii.

Many geologists had thought the Hawaiian Islands resulted from a stationary plume of white-hot material rising from the Earth’s lower mantle, spewing out masses of magma in fits of volcanic eruption. This theory held that the massive Pacific plate, moving slowly northwestward, carries newly formed volcanoes away from the hotspot, forming the Hawaiian island chain seen today.

According to the theory, the Big Island, the newest formation in the chain, sits directly over the blistering plume. Scientists have attempted to characterize this hotspot for decades, believing that if a plume exists, it may be a window into the Earth’s deep processes that could help quantify how the Earth loses heat from its core.

“The implication [of this new work] is that there is no simple, deep plume directly beneath Hawaii,” says Van der Hilst, the Cecil and Ida Green Professor of Earth and Planetary Sciences at MIT, and director of the Earth Resources Laboratory. “So the textbooks on Hawaii will have to be rewritten.”

Heat wave

The team developed a new deep-Earth imaging technique using seismic- and mineral-physics data to determine the temperature of the Earth at various depths. Extreme temperature profiles, they reasoned, might suggest plumes or hotspots.

Seismic waves travel through the Earth’s interior at speeds that are primarily influenced by temperature: The higher the temperature, the slower the waves. For years, seismologists have used seismic wave speeds to create — much like CAT scans — 3-D views of the Earth’s internal structure. This tomographic technique works well near earthquake sites or below vast networks of seismographic sensors. But Hawaii, as Van der Hilst observes, is in a no-man’s land of seismic data, far from any tectonic upheaval and adequate seismograph arrays.

Van der Hilst  — along with co-authors Qin Cao, an MIT graduate student; mineral physicist Dan Shim, associate professor of earth, atmospheric and planetary sciences at MIT; and Maarten de Hoop of Purdue University — came up with a new technique, combining seismic data and mineral physics to map temperatures in the Earth’s mantle. The team first collected all available seismic data from the Incorporated Research Institutions for Seismology Data Management Center, based in Seattle, which collects and distributes seismic information to the research community. This amounted to more than 100,000 records of seismic waves from more than 5,000 earthquakes in the last 20 years.  Much of the data came from the so-called “Ring of Fire,” a massive horseshoe of seismic and volcanic activity surrounding the entirety of the Pacific Ocean.

The team then modified a technique used in the oil and gas industry. Typically, companies such as Shell and Exxon Mobil create seismic shocks, and then listen to the echoes that bounce back. The seismic reflection creates a map of the underlying rock compositions, and clues to where oil and gas might lie.

Instead of creating shocks, Van der Hilst’s team took advantage of Earth’s natural shocks — earthquakes — and analyzed seismic waves as they reflected off the rocks underneath Hawaii. By analyzing seismic reflections, the team determined mineral compositions at various depths, noting the boundaries at which minerals changed. Knowing at which pressures and temperatures such boundaries occur in laboratory simulations, they were able to map out the temperatures deep beneath Hawaii.
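As a back-of-the-envelope illustration of that last step (this is not the team's actual algorithm), the depth at which a known mineral phase boundary is detected can be converted into a temperature estimate using the boundary's Clapeyron slope. The density, reference depth and slope used below are generic, textbook-style values assumed purely for the example.

    # Toy illustration only: converting a phase-boundary depth anomaly into a
    # temperature anomaly. All numbers are illustrative assumptions, not values
    # taken from the Science paper.
    RHO = 4000.0            # assumed mantle density near 660 km, in kg/m^3
    G = 9.8                 # gravitational acceleration, in m/s^2
    CLAPEYRON_660 = -2.5e6  # assumed Clapeyron slope of the 660-km boundary, in Pa/K

    def temperature_anomaly(observed_depth_km, reference_depth_km=660.0,
                            clapeyron=CLAPEYRON_660, rho=RHO, g=G):
        """A boundary displaced by dz sits at a pressure change of roughly
        rho * g * dz; along the phase boundary dP = clapeyron * dT, so the
        implied temperature anomaly is dT = rho * g * dz / clapeyron."""
        dz = (observed_depth_km - reference_depth_km) * 1000.0   # in meters
        return rho * g * dz / clapeyron

    # A 660-km boundary imaged 10 km shallower than normal implies hotter
    # material at that depth, because this boundary's Clapeyron slope is negative.
    print(round(temperature_anomaly(650.0), 1), "K")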

Seismic shift

Cao, the lead author of the study, developed an algorithm that worked the massive amount of seismic data into deep-Earth temperature maps, revealing the newfound hotspot west of Hawaii. Van der Hilst says the discovery of this 2,000-kilometer-wide anomaly refutes the popular theory of a narrow, pipe-like plume rising straight up to Hawaii from the core-mantle boundary — a finding he anticipates will shake up the geodynamical and geochemical communities studying mantle convection.

Yang Shen, a professor of seismology and marine geophysics at the University of Rhode Island, says the new imaging technique provides much higher-resolution images of the Earth’s mantle than previous techniques, and may change the conventional wisdom on Hawaii’s hotspots.

“The observation is intriguing because it does not fit nicely within the current plume model,” Shen says. “So I think the paper will force us to rethink … mantle plumes and convection.”

Cao is now refining the mapping algorithm, and plans to make it accessible to other researchers in the next few months. As countries set up more earthquake monitors in the coming years, Van der Hilst says the new imaging technique will allow seismologists to draw up higher-resolution images of deep-Earth processes.

“I think this could be the technique of the future,” Van der Hilst says. “The receiver networks are exploding, and in the next five to 10 years we can probably do even more spectacular things.”

Source MIT

Nanoparticles and Their Nifty Uses

China, Russia, and the United States have invested billions of dollars in nanotech research. And it’s actually working! That pile of cash has helped scientists come up with all kinds of exotic uses for these tiny particles. Their structure and size help them fight cancer, manipulate light, and carry electrons in ways that neither individual atoms nor bulk macroscale materials can. Here is a guide to some of these handy specks.
  • Quantum dots
    Made of semiconductor nanocrystals, they glow fluorescently and are great at absorbing light.
    Used for: More efficient solar cells and microscopy dyes for cell biology research.
  • Silica
    These silicon dioxide nanoparticles enable so-called shear thickening fluid to become stronger on impact.
    Used for: Stab-resistant Kevlar for body armor.
  • Zinc oxide
    The tiny crystals stop UV radiation and are toxic to microscopic life.
    Used for: UV-resistant packaging; paint and textiles that inhibit bacteria and fungi.
  • Aluminosilicates
    Basically just clay: The particles’ negative charge triggers clotting.
    Used for: Battlefield wound dressings.
  • Nano barcodes
    Bits of various metals linked into tiny wires make good tags for microscopic things.
    Used for: Tracking DNA and cells.
  • Lithium iron phosphate
    Particles organize themselves into a cathode, which allows batteries to charge and deliver power extremely quickly.
    Used for: Electric cars, power tools.
  • Liposomes
    These little blobs of fat (and sometimes protein, too) can protect DNA and RNA as they move through the human body.
    Used for: Delivering gene therapy.
  • Iron oxide
    The mini magnets can stick to certain chemicals.
    Used for: Steering cancer drugs and genes to targets in the body while minimizing collateral damage.
Source WIRED

Wolfram Alpha Turns 2: ‘People Just Need What We Are Doing’

Stephen Wolfram, the man behind the computing application Mathematica and the search engine Wolfram Alpha, has a short attention span that’s married to a long-term outlook.



Wolfram Alpha is an online service that computes the answers to queries (e.g., the age pyramid for the Philippines or the glycogen degradation pathway) rather than searching for webpages where those terms show up.
When asked what his favorite query is, the particle physicist and MacArthur “genius” award recipient says he’s enamored that Wolfram Alpha can tell you about the plane you just saw flying over your town — in his case “flights visible from Concord, Massachusetts.”
But Wolfram’s no plane-spotter.
“My life consists of watching all the new domains being put into Wolfram Alpha,” Wolfram said. “Whatever thing we just finished is the thing I’m most excited about.”

And you might understand Wolfram’s excitement about being able to know the tail number of a plane overhead once you realize that answering that question isn’t easy.
For one, there are a lot of planes in the sky. And two, even if you know which planes are in the sky, radar data is delayed, so Wolfram Alpha has to project a plane’s course. And it’s got to take into account that people can’t actually see planes that are very high in the sky.
While that might sound like Wolfram has a short attention span, he’s also taking the long view, as Wolfram Alpha has just passed its second birthday.

“This is my third big life project,” Wolfram said. “Two is early in the life spectrum.”
Wolfram Alpha’s team is now 200 strong, a mix of programmers, linguistic curators and subject-matter experts.
And their to-do list? It’s decades long.
“If you were to look at our whole to-do list, which is a scary thing to do, to finish it would take 20 years,” Wolfram said. “That doesn’t scare me too much, since I’ve been working on Mathematica for 25 years.”
Wolfram Alpha may have a search box, but it’s doubtful that it’s the default search box for anyone, except perhaps Rainman.

But traffic to Wolfram Alpha is in the millions of visits per day, according to Wolfram, and the company is “slightly profitable.” That’s in no small part because high school and college students have figured out at least part of what Wolfram Alpha is useful for — whether they are working on trigonometry equations, music theory or economic models.
“That’s not the worst place to have a core base of users, given they grow up,” Wolfram says.
Wolfram says he takes encouragement from looking at the streams of queries that people put into the search box. Those show that people are trying to use Wolfram Alpha for complicated things like comparing the economies of two countries. And there aren’t many tourists who just show up to see a funny Easter egg in the software, or to enter junk queries.
But Wolfram is frustrated a bit that users don’t know the full power of Wolfram Alpha.
“The mental model for when to go to Wolfram Alpha is not fully fleshed out yet,” Wolfram says.
One of the company’s solutions for that is to create a wide range of very focused apps, such as its app for computer network administrators, and those for classes, including astronomy, calculus and algebra.
Wolfram Alpha has also partnered with general purpose search engines such as Bing and DuckDuckGo. The key there, according to Wolfram, is figuring out which of the queries into a general search engine would benefit from a calculated answer, not just a list of links. One of the challenges is that searchers are used to getting search results in single digit milliseconds — while Wolfram Alpha takes considerably longer — say 500 milliseconds — because it’s calculating answers.

One way to solve that is to cache some popular precomputed answers, and — for others — to indicate to searchers that they can get more details on Wolfram Alpha.
“We compute it and do the computation in the background, so by the time they show up, it looks like it was there but it wasn’t,” Wolfram said.
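In other words, the partner site fires off the computation as soon as it decides to show a Wolfram Alpha result, so the answer is usually sitting in a cache by the time the user clicks through. A rough sketch of that pattern might look like the snippet below; the function names, timings and the fake compute_answer() are invented for illustration and are not Wolfram Alpha's API.

    import concurrent.futures
    import time

    # Hypothetical sketch of the "compute in the background" idea.
    _cache = {}
    _pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

    def compute_answer(query):
        time.sleep(0.5)                    # stand-in for a ~500 ms computation
        return "computed result for " + repr(query)

    def prefetch(query):
        """Kicked off when a partner search engine decides to show a teaser."""
        if query not in _cache:
            _cache[query] = _pool.submit(compute_answer, query)

    def get_answer(query):
        """Called when the user clicks through; the result is usually ready."""
        prefetch(query)                    # fall back to computing on demand
        return _cache[query].result()

    prefetch("age pyramid for the Philippines")            # fired at teaser time
    time.sleep(0.6)                                        # user scans the results page
    print(get_answer("age pyramid for the Philippines"))   # served near-instantly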
The long-term challenge for Wolfram Alpha is getting more and more datasets into the system. While the process has gotten smoother, each dataset comes with its own unique complexities — meaning that there’s no cookie-cutter approach that will speed new datasets into the engine.
“Our main conclusion is that there is an irreducible amount of work that requires humans and algorithms,” Wolfram said.

The company is also branching out into datasets that one wouldn’t expect from a high-powered calculator, such as info on sports and pop culture, areas that Wolfram Alpha clearly shied away from at first.
“I thought, ‘Gosh, what can you compute about people?’” Wolfram said. “Well, it turns out there’s a lot you can compute, such as which people were born in a given city and who was alive at the same time as other people. In every area there is a lot more to compute than you think.”
He’s now thinking about how you can ingest people’s networks of friends (the so-called social graph), how images can be imported and calculated, and what happens when Wolfram Alpha allows people to upload their own data sets.

What’s also becoming apparent is that there are a lot more places where Wolfram Alpha is turning out to be useful than just the website. Makers of software such as spreadsheets and specialized financial applications are turning to the company’s API, so that they can include computational functions in portions of their software. That means more-diverse revenue for the company, which surprised Wolfram, because when the company launched, he suspected there were only two or three ways for it to make money.
Now he says it’s looking like there are 15 channels or even more.
“People just need what we are doing. It seems like it is a foundational component in so many places,” Wolfram said. “The big debate internally is which of these channels will be the most lucrative, but I think it is still not at all clear.”

And if you think the word channel makes Wolfram sound like an executive, you’d be right.
“I had thought when I started Wolfram Alpha that that stuff isn’t so interesting, and I was going to hire people to figure that out,” Wolfram said. “That didn’t work out so well.”
“So I decided I should learn it, and it’s actually kind of interesting,” Wolfram said. “Now is a fascinating time of platform turbulence, which we haven’t seen since probably about 20 years ago in the rise of PC workstations.”

Wolfram Alpha is also self-funded, as was Mathematica.
And in typical Wolfram style, that makes him both more conservative and more radical than others.
“For 23 years, Mathematica has been a simple private company,” Wolfram said. “For better or worse, that allows one to do much crazier projects than you can through the traditional VC route.”
But doing crazy things doesn’t extend to adding 300 new employees to try to build even faster, even if there’s not enough revenue to pay their salaries.
“I’ve been lucky enough to run a company that’s been profitable for 23 years, so I developed the habit of doing things that way,” Wolfram said.
That’s a way of doing business that, if you think about it, computes much better than getting tens of millions in funding for an iPhone app.

Source WIRED

Birthplace of 'hot Neptunes' revealed

HOT Neptunes are modestly giant planets that resemble their namesake but orbit close to their stars. The puzzle is why we see so many of them around other stars but none in our own solar system.

The conventional view is that these worlds formed in cold regions far from their stars and then migrated inwards. Now Brad Hansen at the University of California, Los Angeles, and Norm Murray of the Canadian Institute for Theoretical Astrophysics in Toronto say hot Neptunes may have arisen right where they are.

The team modelled a disc of gas and dust around a young star. In work submitted to The Astrophysical Journal, they suggest that if the disc is particularly massive, large cores can form in the inner regions with enough gravity to attract gas, producing hot Neptunes (arxiv.org/abs/1105.2050). However, our sun's disc of gas and dust lacked the mass to build giant worlds in its inner regions. So our Neptune formed far from the sun.

This scenario may explain the large number of hot Neptunes orbiting other stars, says Gregory Laughlin at the University of California, Santa Cruz.

Source New Scientist

Inside the infant mind

New study shows that babies can perform sophisticated analyses of how the physical world should behave.


Over the past two decades, scientists have shown that babies only a few months old have a solid grasp on basic rules of the physical world. They understand that objects can’t wink in and out of existence, and that objects can’t “teleport” from one spot to another.

Now, an international team of researchers co-led by MIT’s Josh Tenenbaum has found that infants can use that knowledge to form surprisingly sophisticated expectations of how novel situations will unfold.

Furthermore, the scientists developed a computational model of infant cognition that accurately predicts infants’ surprise at events that violate their conception of the physical world.

The model, which simulates a type of intelligence known as pure reasoning, calculates the probability of a particular event, given what it knows about how objects behave. The close correlation between the model’s predictions and the infants’ actual responses to such events suggests that infants reason in a similar way, says Tenenbaum, associate professor of cognitive science and computation at MIT.

“Real intelligence is about finding yourself in situations that you’ve never been in before but that have some abstract principles in common with your experience, and using that abstract knowledge to reason productively in the new situation,” he says.

The study, which appears in the May 27 issue of Science, is the first step in a long-term effort to “reverse-engineer” infant cognition by studying babies at 3, 6 and 12 months of age (and other key stages through the first two years of life) to map out what they know about the physical and social world. That “3-6-12” project is part of a larger Intelligence Initiative at MIT, launched this year with the goal of understanding the nature of intelligence and replicating it in machines.

Tenenbaum and Luca Bonatti of the Universitat Pompeu Fabra in Barcelona are co-senior authors of the Science paper; the co-lead authors are Erno Teglas of Central European University in Hungary and Edward Vul, a former MIT student who worked with Tenenbaum and is now at the University of California at San Diego.

Measuring surprise


Elizabeth Spelke, a professor of psychology at Harvard University, did much of the pioneering work showing that babies understand abstract principles about the physical world. Spelke also demonstrated that infants’ level of surprise can be measured by how long they look at something: The more unexpected the event, the longer they watch.

Tenenbaum and Vul developed a computational model, known as an “ideal-observer model,” to predict how long infants would look at animated scenarios that were more or less consistent with their knowledge of objects’ behavior. The model starts with abstract principles of how objects can behave in general (the same principles that Spelke showed infants have), then runs multiple simulations of how objects could behave in a given situation.

In one example, 12-month-olds were shown four objects — three blue, one red — bouncing around a container. After some time, the scene would be covered, and during that time, one of the objects would exit the container through an opening.

If the scene was blocked very briefly (0.04 seconds), infants would be surprised if one of the objects farthest from the exit had left the container. If the scene was obscured longer (2 seconds), the distance from exit became less important and they were surprised only if the rare (red) object exited first. At intermediate times, both distance to the exit and number of objects mattered.

The computational model accurately predicted how long babies would look at the same exit event under a dozen different scenarios, varying the number of objects, their spatial positions and the time delay. This marks the first time that infant cognition has been modeled with such quantitative precision, and it suggests that infants reason by mentally simulating possible scenarios and figuring out which outcome is most likely, based on a few physical principles.
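For intuition, here is a highly simplified Monte Carlo sketch in the spirit of an ideal-observer model. It is not the authors' implementation: the one-dimensional geometry, the random-walk step sizes and the use of negative log probability as “surprise” are all assumptions made for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def exit_probabilities(positions, occlusion_time, n_sims=5000,
                           dt=0.04, step=0.05):
        """Each object does a reflecting 1-D random walk in a container [0, 1]
        whose opening sits at x = 0; after the occlusion interval, the object
        closest to the opening is taken to be the one seen exiting."""
        n_steps = max(1, int(round(occlusion_time / dt)))
        counts = np.zeros(len(positions))
        for _ in range(n_sims):
            x = np.array(positions, dtype=float)
            for _ in range(n_steps):
                x = x + rng.normal(0.0, step, size=len(x))
                x = np.abs(x)                       # reflect off the opening wall
                x = np.where(x > 1.0, 2.0 - x, x)   # reflect off the far wall
            counts[np.argmin(x)] += 1
        return counts / n_sims

    # Three blue objects start near the opening; the lone red object starts far away.
    positions = [0.2, 0.3, 0.4, 0.9]        # index 3 is the red object

    for t_occ in (0.04, 1.0, 2.0):
        p_red = exit_probabilities(positions, t_occ)[3]
        surprise = -np.log(p_red + 1e-9)    # rarer outcomes are more surprising
        print("occlusion", t_occ, "s: P(red exits) =", round(p_red, 2),
              " surprise =", round(surprise, 2))

With a very short occlusion the simulated positions barely move, so the far-away red object almost never exits first and its exit would be highly surprising; with a long occlusion the positions spread out and the red object's chances rise toward the base rate set by the number of objects, mirroring the shift the infants showed.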

“We don’t yet have a unified theory of how cognition works, but we’re starting to make progress on describing core aspects of cognition that previously were only described intuitively. Now we’re describing them mathematically,” Tenenbaum says.

Spelke says the new paper offers a possible explanation for how human cognitive development can be both extremely fast and highly flexible.

“Until now, no theory has appeared to have the right properties to account for both features, because core knowledge systems tend to be limited and inflexible, whereas systems designed to learn almost anything tend to learn slowly,” she says. “The research described in this article is the first, I believe, to suggest how human infants' learning could be both fast and flexible.”

New models of cognition

In addition to performing similar studies with younger infants, Tenenbaum plans to further refine his model by adding other physical principles that babies appear to understand, such as gravity or friction. “We think infants are much smarter, in a sense, than this model is,” he says. “We now need to do more experiments and model a broader range of the existing literature to test exactly what they know.”

He is also developing similar models for infants’ “intuitive psychology,” or understanding of how other people act. Such models of normal infant cognition could help researchers figure out what goes wrong in disorders such as autism. “We have to understand more precisely what the normal case is like in order to understand how it breaks,” Tenenbaum says.

Another avenue of research is the origin of infants’ ability to understand how the world works. In a paper published in Science in March, Tenenbaum and several colleagues outlined a possible mechanism, also based on probabilistic inference, for learning abstract principles from very early sensory input. “It’s very speculative, but we understand roughly the mathematical machinery that could explain how this sort of knowledge could be learned surprisingly early from fairly minimal experience,” he says.

Source MIT