
Wednesday, November 23, 2011

Molecules to Medicine: Should pepper spray be put on (clinical) trial?

Pepper spray is all over the news following the Occupy Wall Street protests, particularly after the widely disseminated images and videos of protesters being sprayed in New York, Portland, and UC Davis.
Before that, I knew and occasionally used its main ingredient, capsaicin, as a treatment for my patients with shingles, an extremely painful Herpes zoster infection. And I knew about many of the serious side effects of pepper spray, well described by Deborah Blum.
Recently, though, other questions arose, like “How was this learned?” So off I went, looking for clinical trials to see what, if anything, had been studied beyond the individual patient, poison control, and toxicology reports. Here’s what I learned:
There are reports of the efficacy of capsaicin in crowd control, but little regarding trials of exposures. Perhaps this is because pepper spray is regulated by the Environmental Protection Agency as a pesticide, not by the FDA.

The concentration of capsaicin in bear spray is 1-2%; it is 10-30% in “personal defense sprays.”

While the police might feel reassured by the study “The effect of oleoresin capsicum ‘pepper’ spray inhalation on respiratory function,” I was not. This study met the “gold standard” of clinical trials, in that it was a “randomized, cross-over controlled trial to assess the effect of Oleoresin capsicum (OC) spray inhalation on respiratory function by itself and combined with restraint.” However, while the OC exposure showed no ill effect, only 34 volunteers were exposed, and each to just 1 second of Cap-Stun 5.5% OC spray by inhalation “from 5 ft away as they might in the field setting (as recommended by both manufacturer and local police policies).”

By contrast, an ACLU report, “Pepper Spray Update: More Fatalities, More Questions,” found, in just two years, 26 deaths after OC spraying, noting that death was more likely if the victim was also restrained. This translated to 1 death per 600 times police used spray. (The cause of death was not firmly linked to the OC.) According to the ACLU, “an internal memorandum produced by the largest supplier of pepper spray to the California police and civilian markets” concludes that there may be serious risks with more than a 1-second spray. A subsequent Department of Justice study examined another 63 deaths after pepper spray during arrests; the spray was felt to be a “contributing factor” in several.

A review in 1996 by the Division of Epidemiology of the NC DHHS and OSHA concluded that exposure to OC spray during police training constituted an unacceptable health risk.

Surveillance of crowd-control agents examined reports to the British National Poisons Information Service, finding more late (>6 hour) adverse events than had previously been noted, especially skin reactions (blistering, rashes).

Studies have, understandably, looked more at treatment than at systematically exploring the toxic effects of pepper spray. An uncontrolled California Poison Control study of 64 patients with exposure to capsaicin (as spray, or topically as a cream) showed benefit with topically applied antacids, especially if applied soon after exposure.

In a randomized clinical trial, 47 subjects were assigned to placebo, a topical nonsteroidal anti-inflammatory agent, or a topical anesthetic. The only group with significant symptomatic improvement in pain received proparacaine hydrochloride 0.5%, and even then only 55% had decreased pain with treatment.

Another randomized controlled trial assigned 49 volunteers to one of five treatments (aluminum hydroxide–magnesium hydroxide [Maalox], 2% lidocaine gel, baby shampoo, milk, or water). There was a significant difference in pain with more rapid treatment, but not between the groups.

I was most impressed with the efforts of the Black Cross Health Collective in Portland, Oregon. These activists have been thoughtfully studying treatments for pepper spray exposures with published clinical trial protocols, in which each volunteer serves as his or her own control. Capsaicin is applied to each arm; a “subject-blinded” treatment is applied to one arm, and differences in pain responses are recorded. I love that they are looking for evidence-based solutions.

So far, antacids have been the most effective.

Suggestions for further study

Pepper spray causes inflammation and swelling—particularly a danger for those with underlying asthma or emphysema. In fact, the Department of Justice report notes that in two of 63 clearly documented deaths, the subjects were asthmatic. If they don’t already, police need to have protocols in place to identify and treat “sprayees” with pre-existing conditions that predispose them to serious harm from the spray. This particularly holds true for people also at risk for respiratory compromise from being restrained, being on other drugs, or being obese. The study of restrained healthy volunteers exposed to small amounts of capsaicin is simply not applicable to the general population. Also, given that these compounds appear to have delayed effects, there should be legally required medical monitoring of “sprayees” at regular and frequent intervals for at least 24 hours—by someone competent. (Iraq war veteran Kayvan Sabehgi could easily have died from the lacerated spleen sustained in his beating by police. It was 18 hours before he was taken to the hospital, after the jail’s nurse reportedly offered him only a suppository for his abdominal pain. There is also an as-yet-unconfirmed report of a miscarriage after the Portland, Oregon OWS protest last week.)

Unfortunately, there is an urgent need for clinical trials in this area—both retrospective assessments of “sprayees’” health outcomes and prospective randomized trials [like the trial done on subjects' arms] to elucidate the effects of various capsaicin concentrations, carrier solvents, and propellants, and to identify the most effective treatments for each mixture. Until those can be done, there should be a thorough outcomes registry, with standardized data collected on everyone who is pepper-sprayed.

Sadly, I’m sure the Black Cross and others in the Occupy Wall Street movement will have too many opportunities to test therapies against painful crowd-control chemicals. Studies will be difficult because the settings are largely uncontrolled and because the sprays have different concentrations of capsaicin, carrier solvents, and propellants.

Until then, given the high number of associated deaths, there should be a moratorium on the use of pepper spray or other “non-lethal” chemicals by police, except in clearly life-threatening confrontations, until the risks are better understood.

Perhaps Kamran Loghman, who helped the FBI weaponize pepper spray, will be dismayed enough at the “inappropriate and improper use of chemical agents” to help the Black Cross develop effective antidotes…One can only hope.

Courtesy of Scientific American guest blogger Judy Stone

Monday, June 20, 2011

NIU scientists discover simple, green and cost-effective way to produce high yields of highly touted graphene

DeKalb, Ill. — Scientists at Northern Illinois University say they have discovered a simple method for producing high yields of graphene, a highly touted carbon nanostructure that some believe could replace silicon as the technological fabric of the future.

The focus of intense scientific research in recent years, graphene is a two-dimensional material composed of a single layer of carbon atoms arranged in a hexagonal lattice. It is the strongest material ever measured and has other remarkable qualities, including high electron mobility, a property that elevates its potential for use in high-speed nano-scale devices of the future.
In a June communication to the Journal of Materials Chemistry, the NIU researchers report on a new method that converts carbon dioxide directly into few-layer graphene (less than 10 atoms in thickness) by burning pure magnesium metal in dry ice.
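The article doesn’t give the equation, but the underlying reaction is the long-known combustion of magnesium in carbon dioxide, with the liberated carbon emerging here as few-layer graphene:

    2 Mg + CO2 → 2 MgO + C

The magnesium strips oxygen from the carbon dioxide, leaving magnesium oxide and elemental carbon behind.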

“It is scientifically proven that burning magnesium metal in carbon dioxide produces carbon, but the formation of this carbon with few-layer graphene as the major product has neither been identified nor proven as such until our current report,” said Narayan Hosmane, a professor of chemistry and biochemistry who leads the NIU research group.
“The synthetic process can be used to potentially produce few-layer graphene in large quantities,” he said. “Up until now, graphene has been synthesized by various methods utilizing hazardous chemicals and tedious techniques. This new method is simple, green and cost-effective.”
Hosmane said his research group initially set out to produce single-wall carbon nanotubes. “Instead, we isolated few-layer graphene,” he said. “It surprised us all.”

“It’s a very simple technique that’s been done by scientists before,” added Amartya Chakrabarti, first author of the communication to the Journal of Materials Chemistry and an NIU post-doctoral research associate in chemistry and biochemistry. “But nobody actually closely examined the structure of the carbon that had been produced.”

Other members of the research group publishing in the Journal of Materials Chemistry include former NIU physics postdoctoral research associate Jun Lu, NIU undergraduate student Jennifer Skrabutenas, NIU Chemistry and Biochemistry Professor Tao Xu, NIU Physics Professor Zhili Xiao and John A. Maguire, a chemistry professor at Southern Methodist University.
The work was supported by grants from the National Science Foundation, the Petroleum Research Fund administered by the American Chemical Society, the Department of Energy and the Robert A. Welch Foundation.

Source Northern Illinois University

Wednesday, June 15, 2011

Researchers record two-state dynamics in glassy silicon

CHAMPAIGN, Ill. — Using high-resolution imaging technology, University of Illinois researchers have answered a question that had confounded semiconductor researchers: Is amorphous silicon a glass? The answer? Yes – until hydrogen is added.

Led by chemistry professor Martin Gruebele, the group published its results in the journal Physical Review Letters.

Amorphous silicon (a-Si) is a semiconductor popular for many device applications because it is inexpensive and can be created in a flexible thin film, unlike the rigid, brittle crystalline form of silicon. But the material has its own unusual qualities: It seems to have some characteristics of glass, but cannot be made the way other glasses are.

Most glasses are made by rapidly cooling a melted material so that it hardens in a random structure. But cooling liquid silicon simply results in an orderly crystal structure. Several methods exist for producing a-Si from crystalline silicon, including bombarding a crystal surface so that atoms fly off and deposit on another surface in a random position.

To settle the debate on the nature of a-Si, Gruebele’s group, collaborating with electrical and computer engineering professor Joseph Lyding’s group at the Beckman Institute for Advanced Science and Technology, used a scanning tunneling microscope to take sub-nanometer-resolution images of a-Si surfaces, stringing them together to make a time-lapse video.

The video shows a lumpy, irregular surface; each lump is a cluster about five silicon atoms in diameter. Suddenly, between frames, one bump seems to jump to an adjoining space. Soon, another lump nearby shifts neatly to the right. Although few of the clusters move, the action is obvious.

Such cluster “hopping” between two positions is known as two-state dynamics, a signature property of glass. In a glass, the atoms or molecules are randomly positioned or oriented, much the way they are in a liquid or gas. But while atoms have great freedom to diffuse through a liquid or gas, in a glass the molecules or atom clusters are stuck in place most of the time. Instead, a cluster usually has only two adjoining places that it can hop between.

“This is the first time that this type of two-state hopping has been imaged in a-Si,” Gruebele said. “It’s been predicted by theory and people have inferred it indirectly from other measurements, but this is the first time we’ve been able to visualize it.”

The group’s observations of two-state dynamics show that pure a-Si is indeed a glass, in spite of its unorthodox manufacturing method. However, a-Si is rarely used in its pure form; hydrogen is added to make it more stable and improve performance.

Researchers have long assumed that hydrogenation has little to no effect on the random structure of a-Si, but the group’s observations show that this assumption is not quite correct. In fact, adding hydrogen robs a-Si of its two-state dynamics and its categorization as a glass. Furthermore, the surface is riddled with signs of crystallization: larger clusters, cracks and highly structured patches.

Such micro-crystalline structure has great implications for the properties of a-Si and how they are studied and applied. Since most research has been conducted on hydrogenated a-Si, Gruebele sees a great opportunity to delve into the largely unknown characteristics of the glassy state.

“In some ways, I think we actually know less about the properties of glassy silicon than we think we do, because a lot of what’s been investigated of what people call amorphous or glassy silicon isn’t really completely amorphous,” Gruebele said. “We really need to revisit what the properties of a-Si are. There could yet be surprises in the way it functions and the kind of things that we might be able to do with it.”

Next, the group hopes to conduct temperature-dependent studies to further establish the activation barriers, or the energy “humps,” that the clusters must overcome to move between positions.
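For context, such temperature-dependent measurements lean on the standard Arrhenius relation (a textbook formula, not a detail from the paper): the rate k at which clusters hop over a barrier of height Ea goes as

    k = ν0 · exp(−Ea / kBT)

so recording hop rates at several temperatures and plotting ln k against 1/T yields the barrier height from the slope.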
The National Science Foundation supported this work.
Source University of Illinois

Monday, June 13, 2011

Under pressure, sodium and hydrogen could undergo a metamorphosis, emerging as a superconductor

BUFFALO, N.Y. -- In the search for superconductors, finding ways to compress hydrogen into a metal has been a point of focus ever since scientists predicted many years ago that electricity would flow, uninhibited, through such a material.

Liquid metallic hydrogen is thought to exist in the high-gravity interiors of Jupiter and Saturn. But so far, on Earth, researchers have been unable to use static compression techniques to squeeze hydrogen under high enough pressures to convert it into a metal. Shock-wave methods have been successful, but as experiments with diamond anvil cells have shown, hydrogen remains an insulator even under pressures equivalent to those found in the Earth's core.

To circumvent the problem, a pair of University at Buffalo chemists has proposed an alternative solution for metallizing hydrogen: Add sodium to hydrogen, they say, and it just might be possible to convert the compound into a superconducting metal under significantly lower pressures.
The research, published June 10 in Physical Review Letters, details the findings of UB Assistant Professor Eva Zurek and UB postdoctoral associate Pio Baettig.

Using an open-source computer program that UB PhD student David Lonie designed, Zurek and Baettig looked for sodium polyhydrides that, under pressure, would be viable superconductor candidates. The program, XtalOpt <http://xtalopt.openmolecules.net>, is an evolutionary algorithm that incorporates quantum mechanical calculations to determine the most stable geometries or crystal structures of solids.
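To give a flavor of how such a search proceeds, here is a minimal sketch of a generic evolutionary crystal-structure search in Python. The function names and the toy "energy" are illustrative assumptions, not XtalOpt's actual interface; in a real run the energy evaluation is an expensive quantum mechanical calculation performed by an external code.

    import random

    def random_structure():
        # Toy stand-in for a random trial crystal structure
        # (in reality: lattice vectors plus atomic positions).
        return [random.random() for _ in range(6)]

    def energy(structure):
        # Toy stand-in for the expensive quantum mechanical (DFT)
        # relaxation and energy evaluation.
        return sum((x - 0.5) ** 2 for x in structure)

    def crossover(a, b):
        # Combine two parent structures.
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def mutate(s):
        # Randomly perturb one structural parameter.
        s = s[:]
        s[random.randrange(len(s))] += random.uniform(-0.1, 0.1)
        return s

    # Evolutionary loop: keep the most stable (lowest-energy)
    # candidates, breed and mutate them, repeat.
    population = [random_structure() for _ in range(20)]
    for generation in range(50):
        population.sort(key=energy)
        parents = population[:10]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(10)]
        population = parents + children

    best = min(population, key=energy)
    print("most stable candidate energy:", energy(best))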
In analyzing the results, Baettig and Zurek found that NaH9, which contains one sodium atom for every nine hydrogen atoms, is predicted to become metallic at an experimentally achievable pressure of about 250 gigapascals -- about 2.5 million times the Earth's standard atmospheric pressure, but less than the pressure at the Earth's core (about 3.5 million atmospheres).
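As a quick sanity check on that conversion, taking the standard value of 1 atmosphere as about 1.013 × 10^5 pascals: 250 GPa = 2.5 × 10^11 Pa, and 2.5 × 10^11 / 1.013 × 10^5 ≈ 2.5 × 10^6, i.e. roughly 2.5 million atmospheres, matching the figure above.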

"It is very basic research," says Zurek, a theoretical chemist. "But if one could potentially metallize hydrogen using the addition of sodium, it could ultimately help us better understand superconductors and lead to new approaches to designing a room-temperature superconductor."
By permitting electricity to travel freely, without resistance, such a superconductor could dramatically improve the efficiency of power transmission technologies.
Zurek, who joined UB in 2009, conducted research at Cornell University as a postdoctoral associate under Roald Hoffmann, a Nobel Prize-winning theoretical chemist whose research interests include the behavior of matter under high pressure.

In October 2009, Zurek co-authored a paper with Hoffmann and other colleagues in the Proceedings of the National Academy of Sciences predicting that LiH6 -- a compound containing one lithium atom for every six hydrogen atoms -- could form as a stable metal at a pressure of around 1 million atmospheres.
Neither LiH6 nor NaH9 exists naturally as a stable compound on Earth, but under high pressures their structures are predicted to be stable.

"One of the things that I always like to emphasize is that chemistry is very different under high pressures," Zurek says. "Our chemical intuition is based upon our experience at one atmosphere. Under pressure, elements that do not usually combine on the Earth's surface may mix, or mix in different proportions. The insulator iodine becomes a metal, and sodium becomes insulating. Our aim is to use the results of computational experiments in order to help develop a chemical intuition under pressure, and to predict new materials with unusual properties."

Source EurekAlert!

Saturday, June 11, 2011

Coal-to-liquid fuels poised for a comeback

With rising energy prices, could coal-to-liquid conversion become an economical fuel option?

Converting coal into liquid fuels is known to be more costly than current energy technologies, both in terms of production costs and the amount of greenhouse gases the process emits. Production of coal-to-liquid fuel, or CTL, has a large carbon footprint, releasing more than twice the lifecycle greenhouse gases of conventional petroleum fuels. However, with the rise in energy prices that began in 2008 and concerns over energy security, there is renewed interest in the conversion technology.

The top graph shows a no-policy scenario; the bottom graph shows a world climate-policy scenario. Image: Chen et al., 2011.


Researchers from the MIT Joint Program on the Science and Policy of Global Change (JPSPGC) recently released an assessment of the economic viability of CTL conversion. The researchers looked at how different climate policies and the availability of other fuel alternatives, such as biofuels, would influence the prospects of CTL in the future.

Coal-to-liquid technology has existed since the 1920s and was used extensively in Germany in 1944, when it produced around 90 percent of the nation’s fuel needs. Since then, the technology has been largely abandoned in favor of the relatively cheaper crude oil of the Middle East. A notable exception is South Africa, where CTL conversion still provides about 30 percent of national transportation fuel.

But will there be a resurgence of CTL technology? To determine the role that CTL conversion would play in the future global fuel mix, researchers examined several crucial factors affecting CTL prospects. Different scenarios were modeled, varying the stringency of future carbon policies, the availability of biofuels and the ability to trade carbon allowances on an international market. Researchers also examined whether CTL-conversion plants would use carbon capture and storage technology, which would lower greenhouse gas emissions but create an added cost.

The study found that, without climate policy, CTL might become economical as early as 2015 in coal-abundant countries like the United States and China. In other regions, CTL could become economical by 2020 or 2025. Carbon capture and storage technologies would not be used, as they would raise costs. In this scenario, CTL has the potential to account for about a third of the global liquid-fuel supply by 2050.

However, the viability of CTL would be highly limited in regions that adopt climate policies, especially if low-carbon biofuels are available. Under scenarios that include stringent future climate policies, the high costs associated with a large carbon footprint would diminish CTL prospects, even with carbon capture and storage technologies. CTL conversion may only be viable in countries with less stringent climate policies or where low-carbon fuel alternatives are not available.

“In short, various climate proposals have very different impacts on the allowances of regional CO2 emissions, which in turn have quite distinct implications on the prospects for CTL conversion,” says John Reilly, co-director of the JPSPGC and one of the study’s authors. “If climate policies are enforced, world demand for petroleum products would decrease, the price of crude oil would fall, and coal-to-liquid fuels would be much less competitive.”

Source MIT 

 

Friday, June 10, 2011

UGA researcher leads discovery of a new driving force for chemical reactions

Athens, Ga. – New research just published in the journal Science by a team of chemists at the University of Georgia and colleagues in Germany shows for the first time that a mechanism called tunneling control may drive chemical reactions in directions unexpected from traditional theories.
The finding has the potential to change how scientists understand and devise reactions in everything from materials science to biochemistry.

The discovery was a complete surprise and came following the research team's first successful isolation of a long-elusive molecule called methylhydroxycarbene. While the team was pleased that it had "trapped" the prized compound in solid argon through an extremely low-temperature experiment, it was surprised when the compound vanished within a few hours. That prompted UGA theoretical chemistry professor Wesley Allen to conduct large-scale, state-of-the-art computations to solve the mystery.
"What we found was that the change was being controlled by a process called quantum mechanical tunneling," said Allen, "and we found that tunneling can supersede the traditional chemical reactivity processes of kinetic and thermodynamic control. We weren't expecting this at all."
What had happened? Clearly, a chemical reaction had taken place, but only inert argon atoms surrounded the compound, and essentially no thermal energy was available to create new molecular arrangements. Moreover, said Allen, "the observed product of the reaction, acetaldehyde, is the least likely outcome among conceivable possibilities."

Other authors of the paper include Professor Peter Schreiner and his group members Hans Peter Reisenauer, David Ley and Dennis Gerbig of the Justus-Liebig University in Giessen, Germany. Graduate student Chia-Hua Wu at UGA undertook the theoretical work with Allen.
Quantum tunneling isn't new. It was first recognized as a physical process decades ago in early studies of radioactivity. In classical mechanics, molecular motions can be understood in terms of particles roaming on a potential energy surface. Energy barriers, visualized as mountain passes on the surface, separate one chemical compound from another.


For a chemical reaction to occur, a molecular system must have enough energy to "get over the top of the hill," or it will come back down and fail to react. In quantum mechanics, particles can get to the other side of the barrier by tunneling through it, a process that seemingly requires imaginary velocities. In chemistry, tunneling is generally understood to provide secondary correction factors for the rates of chemical reactions but not to provide the predominant driving force.

(The strange world of quantum mechanics has been subject to considerable interest and controversy over the last century, and Austrian physicist Erwin Schrödinger's thought-experiment called "Schrödinger's Cat" illustrates how perplexing it is to apply the rules and laws of quantum mechanics to everyday life.)
"We knew that the rate of a reaction can be significantly affected by quantum mechanical tunneling," said Allen. "It becomes especially important at low temperatures and for reactions involving light atoms. What we discovered here is that tunneling can dominate a reaction mechanism sufficiently to redirect the outcome away from traditional kinetic control. Tunneling can cause a reaction that does not have the lowest activation barriers to occur exclusively."

Allen suggests a vivid analogy between the behavior of methylhydroxycarbene and Schrödinger's iconic cat.
"The cat cannot jump out of its box of deadly confinement because the walls are too high, so it conjures a Houdini-like escape by bursting through the thinnest wall," he said.
The fact that new ideas about tunneling came from the isolation of methylhydroxycarbene was the kind of serendipity that runs through the history of science. Schreiner and his team had snagged the elusive compound, and that was reason enough to celebrate, Allen said. But the surprising observation that it vanished within a few hours raised new questions that led to even more interesting scientific discoveries.

"The initiative to doggedly follow up on a 'lucky observation' was the key to success," said Allen. "Thus, a combination of persistent experimentation and exacting theoretical analysis on methylhydroxycarbene and its reactivity led to the concept I dubbed tunneling control, which may be characterized as `a type of nonclassical kinetic control wherein the decisive factor is not the lowest activation barrier'."
While the process was unearthed for the specific case of methylhydroxycarbene at extremely low temperatures, Allen said that tunneling control "can be a general phenomenon, especially if hydrogen transfer is involved, and such processes need not be restricted to cryogenic temperatures."

Source EurekAlert!

Meteorite holds clues to organic chemistry of the early Earth

Washington, DC— Carbonaceous chondrites are a type of organic-rich meteorite that contain samples of the materials that took part in the creation of our planets nearly 4.6 billion years ago, including materials that were likely formed before our Solar System was created and may have been crucial to the formation of life on Earth. The complex suite of organic materials found in carbonaceous chondrites can vary substantially from meteorite to meteorite. New research from Carnegie's Department of Terrestrial Magnetism and Geophysical Laboratory, published June 10 in Science, shows that most of these variations are the result of hydrothermal activity that took place within a few million years of the formation of the Solar System, when the meteorites were still part of larger parent bodies, likely asteroids.

Organic material in carbonaceous chondrites shares many characteristics with organic matter found in other primitive samples, including interplanetary dust particles, comet 81P/Wild-2, and Antarctic micrometeorites. It has been argued by some that this similarity indicates that organic material throughout the Solar System largely originated from a common source, possibly the interstellar medium.
A test of this common-source hypothesis stems from its requirement that the organic diversity within and among meteorites be due primarily to chemical and thermal processing that took place while the meteorites were parts of their parent bodies. In other words, there should be a relationship between the extent of hydrothermal alteration that a meteorite experienced and the chemistry of the organic material it contains.
If--as many have speculated--the organic material in meteorites had a role to play in the origin of life on Earth, the attraction of the common-source hypothesis is that the same organic material would have been delivered to all bodies in the Solar System. If the common source was the interstellar medium, then similar material would also be delivered to any forming planetary system.

The research team—led by Christopher Herd of the University of Alberta, Canada, and including Carnegie's Conel Alexander, Larry Nittler, Frank Gyngard, George Cody, Marilyn Fogel, and Yoko Kebukawa—studied four meteorite specimens from the shower of stones, produced by the breakup of a meteoroid as it entered the atmosphere, that fell on Tagish Lake in northern Canada in January 2000. The samples are considered very pristine, because they fell on a frozen lake, were collected without hand contact within a few days of landing and have remained frozen ever since.

The samples were processed and analyzed on the microscopic level using a variety of sophisticated techniques. Examination of their inorganic components indicated that the specimens had experienced large differences in the extent of hydrothermal alteration, prompting an in-depth examination of their organic material. The team demonstrated that the insoluble organic matter found in the samples has properties that span nearly the entire range found in all carbonaceous chondrites and that those properties correlate with other measures of the extent of parent body alteration. Their finding confirms that the diversity of this material is due to processing of a common precursor material in the asteroidal parent bodies.

The team found large concentrations of monocarboxylic acids, or MCAs, which are essential to biochemistry, in their Tagish Lake samples. They attributed the high level of these acids to the pristine nature of the samples, which have been preserved below zero degrees Celsius since they were recovered. There was variety in the types of MCAs, which they determined could also be due to alterations that took place on the parent bodies.
The samples also contained amino acids—the essential-for-life organic building blocks used to create proteins. The types and abundances of amino acids contained in the samples are consistent with an extraterrestrial origin, and were clearly also influenced, albeit in a complex way, by the alteration histories of their host meteorites.

"Taken together these results indicate that the chemical and thermal processing common to the Tagish Lake meteorites likely occurred when the samples were part of a larger parent body that was created from the same raw materials that formed our Solar System," said Larry Nittler of Carnegie's DTM. "These samples can also provide important clues to the source of organic material, and life, on Earth."

Source ScienceAlert!

Thursday, June 9, 2011

A new way to make lighter, stronger steel – in a flash

COLUMBUS, Ohio – A Detroit entrepreneur surprised university engineers here recently when he invented a heat treatment that makes steel 7 percent stronger than any steel on record – in less than 10 seconds.
In fact, the steel, now trademarked as Flash Bainite, has tested stronger and more shock-absorbing than the most common titanium alloys used by industry.
Now the entrepreneur is working with researchers at Ohio State University to better understand the science behind the new treatment, called flash processing.

What they’ve discovered may hold the key to making cars and military vehicles lighter, stronger, and more fuel-efficient.
In the current issue of the journal Materials Science and Technology, the inventor and his Ohio State partners describe how rapidly heating and cooling steel sheets changes the microstructure inside the alloy to make it stronger and less brittle.
The basic process of heat-treating steel has changed little in the modern age, and engineer Suresh Babu is one of few researchers worldwide who still study how to tune the properties of steel in detail. He’s an associate professor of materials science and engineering at Ohio State, and Director of the National Science Foundation (NSF) Center for Integrative Materials Joining for Energy Applications, headquartered at the university.
“Steel is what we would call a ‘mature technology.’ We’d like to think we know most everything about it,” he said. “If someone invented a way to strengthen the strongest steels even a few percent, that would be a big deal. But 7 percent? That’s huge.”

Yet, when inventor Gary Cola initially approached him, Babu didn’t know what to think.
“The process that Gary described – it shouldn’t have worked,” he said. “I didn’t believe him. So he took my students and me to Detroit.”
Cola showed them his proprietary lab setup at SFP Works LLC, where rollers carried steel sheets through flames as hot as 1100 degrees Celsius and then into a cooling liquid bath.
Though the typical temperature and length of time for hardening varies by industry, most steels are heat-treated at around 900 degrees Celsius for a few hours. Others are heated at similar temperatures for days.
Cola’s entire process took less than 10 seconds.

He claimed that the resulting steel was 7 percent stronger than martensitic advanced high-strength steel. [Martensitic steel is so named because the internal microstructure is entirely composed of a crystal form called martensite.] Cola further claimed that his steel could be drawn – that is, thinned and lengthened – 30 percent more than martensitic steels without losing its enhanced strength.
If that were true, then Cola’s steel could enable carmakers to build frames that are up to 30 percent thinner and lighter without compromising safety. Or, it could reinforce an armored vehicle without weighing it down.

“We asked for a few samples to test, and it turned out that everything he said was true,” said Ohio State graduate student Tapasvi Lolla. “Then it was up to us to understand what was happening.”
Cola is a self-taught metallurgist, and he wanted help from Babu and his team to reveal the physics behind the process – to understand it in detail so that he could find ways to adapt it and even improve it.
He partnered with Ohio State to provide research support for Brian Hanhold, who was an undergraduate student at the time, and Lolla, who subsequently earned his master’s degree working out the answer.
Using an electron microscope, they discovered that Cola’s process did indeed form martensite microstructure inside the steel. But they also saw another form called bainite microstructure, scattered with carbon-rich compounds called carbides.

In traditional, slow heat treatments, steel’s initial microstructure always dissolves into a homogeneous phase called austenite at peak temperature, Babu explained. But as the steel cools rapidly from this high temperature, all of the austenite normally transforms into martensite. 
“We think that, because this new process is so fast with rapid heating and cooling, the carbides don’t get a chance to dissolve completely within austenite at high temperature, so they remain in the steel and make this unique microstructure containing bainite, martensite and carbides,” Babu said.
Lolla pointed out that this unique microstructure boosts ductility – meaning that the steel can crumple a great deal before breaking – making it a potential impact-absorber for automotive applications.
Babu, Lolla, Ohio State research scientist Boian Alexandrov, and Cola co-authored the paper with Badri Narayanan, a doctoral student in materials science and engineering.

Now Hanhold is working to carry over his lessons into welding engineering, where he hopes to solve the problem of heat-induced weakening during welding. High-strength steel often weakens just outside the weld joint, where the alloy has been heated and cooled. Hanhold suspects that bringing the speed of Cola’s method to welding might minimize the damage to adjacent areas and reduce the weakening.
If he succeeds, his discovery will benefit industrial partners of the NSF Center for Integrative Materials Joining Science for Energy Applications, which formed earlier this year. Ohio State’s academic partners on the center include Lehigh University, the University of Wisconsin-Madison, and the Colorado School of Mines.


 Source Ohio State University

Saturday, June 4, 2011

Heaviest elements yet join periodic table

Elements 114 and 116 have been officially added to the periodic table, becoming its heaviest members yet. They both exist for less than a second before decaying into lighter atoms, but they bring researchers a step closer to making even heavier elements that are predicted to be stable for decades or longer, forming a fabled "island of stability" in the periodic table.

Throw your old periodic table away.

Evidence for the two elements has been mounting for years. They were finally given official status as new elements on Wednesday, after a three-year review by the Joint Working Party on Discovery of Elements, a committee of scientists from the International Union of Pure and Applied Chemistry (IUPAC) and the International Union of Pure and Applied Physics (IUPAP).
Several teams have claimed to have produced element 114, starting in 1999. But the committee decided that a series of experiments reported by a collaboration of two teams in 2004 and 2006 provided the first convincing evidence. The same series of experiments is credited with producing evidence of element 116.

Slammed together

One of the collaborating groups was led by Yuri Oganessian at the Joint Institute for Nuclear Research (JINR) in Dubna, Russia, and the other by Ken Moody at the Lawrence Livermore National Laboratory in California.
The researchers forged the new heavy elements by slamming together the nuclei of lighter atoms at an accelerator at JINR. They made element 116 by bombarding targets made of the radioactive element curium, which has 96 protons in its nucleus, with calcium nuclei, which have 20 protons.
Nuclei of element 116 lasted only a few milliseconds before spitting out an alpha particle made of two protons and two neutrons and thereby decaying into nuclei of element 114. The team also made element 114 directly by firing calcium nuclei at plutonium targets, which have 94 protons in their nuclei.
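The proton bookkeeping behind these reactions is simple arithmetic: 96 protons (curium) plus 20 (calcium) gives element 116, and 94 (plutonium) plus 20 gives element 114, while each alpha emission subtracts two protons, stepping 116 down to 114.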

Gone too soon

Element-114 nuclei decayed after about half a second into copernicium, which contains 112 protons, and is itself a very recent addition to the periodic table, having officially joined only in 2009. It was the pattern of time intervals between these decays, along with the energy of the alpha particles produced, that clinched the case for the elements' creation.
So what are elements 114 and 116 like? Unfortunately, their properties are still murky because the quantities produced were too small and existed too fleetingly for scientists to measure their chemical behaviour, such as what other elements they tend to react with.
"The lifetimes of these things have to be reasonably long so you can study the chemistry – meaning, pushing a minute," says Paul Karol of Carnegie Mellon University in Pittsburgh, who chaired the committee that approved the new elements.

'Not too weird'

As yet, the elements have no names. Instead they go by the temporary placeholder terms ununquadium and ununhexium, which by IUPAC convention are derived from the digits 114 and 116 respectively. Their discoverers will get a chance to offer suggestions that another IUPAC committee will consider. "As long as it's not something really weird, they will probably say it's fine," Karol told New Scientist.
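The placeholder names follow a mechanical recipe: each digit of the atomic number maps to a root (0 = nil, 1 = un, 2 = bi, 3 = tri, 4 = quad, 5 = pent, 6 = hex, 7 = sept, 8 = oct, 9 = enn), the roots are concatenated, and the suffix -ium is appended, with doubled letters at the joins collapsed. A minimal sketch in Python (my own illustration of the convention, not an official IUPAC tool):

    ROOTS = ["nil", "un", "bi", "tri", "quad", "pent", "hex", "sept", "oct", "enn"]

    def systematic_name(z: int) -> str:
        """Build the IUPAC provisional name for element number z."""
        name = "".join(ROOTS[int(digit)] for digit in str(z)) + "ium"
        # Collapse doubled letters at the joins,
        # e.g. "tri" + "ium" -> "trium", "enn" + "nil" -> "ennil".
        return name.replace("ii", "i").replace("nnn", "nn")

    # Prints: ununquadium ununhexium ununoctium
    print(systematic_name(114), systematic_name(116), systematic_name(118))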
The committee also considered discovery claims for elements 113, 115 and 118, but said in its report that the evidence it reviewed was not yet strong enough to warrant their addition to the periodic table.
Elements found to date at this extreme end of the table are ephemeral, but nuclear theorists suspect a class of super-heavy atoms could live for decades or longer and might boast useful new chemical properties.
Karol says the discovery of elements 114 and 116 is exciting because it is a step towards this island of stability, which some predict may be centred on nuclei with 120 or 126 protons. "It's getting closer and closer," Karol says.

Source New Scientist

Tuesday, May 31, 2011

Team solves decades-old molecular mystery linked to blood clotting

CHAMPAIGN, lll. — Blood clotting is a complicated business, particularly for those trying to understand how the body responds to injury. In a new study, researchers report that they are the first to describe in atomic detail a chemical interaction that is vital to blood clotting. This interaction – between a clotting factor and a cell membrane – has baffled scientists for decades.

An accompanying movie of the supercomputer simulation shows the blood-clotting factor interacting with the membrane. The GLA domain of the clotting factor is depicted as a purple tube; individual GLA amino acids are yellow; tightly bound calcium ions are pink spheres; and the interacting phospholipids that make up the membrane are below.

The study appears online in the Journal of Biological Chemistry.
“For decades, people have known that blood-clotting proteins have to bind to a cell membrane in order for the clotting reaction to happen,” said University of Illinois biochemistry professor James Morrissey, who led the study with chemistry professor Chad Rienstra and biochemistry, biophysics and pharmacology professor Emad Tajkhorshid. “If you take clotting factors off the membrane, they’re thousands of times less active.”
The researchers combined laboratory detective work with supercomputer simulations and solid-state nuclear magnetic resonance (SSNMR) to get at the problem from every angle. They also made use of tiny rafts of lipid membranes called nanodiscs, using an approach developed at Illinois by biochemistry professor Stephen Sligar.

Previous studies had shown that each clotting factor contains a region, called the GLA domain, which interacts with specific lipids in cell membranes to start the cascade of chemical reactions that drive blood clotting.
One study, published in 2003 in the journal Nature Structural Biology, indicated that the GLA domain binds to a special phospholipid, phosphatidylserine (PS), which is embedded in the membrane. Other studies had shown that PS binds weakly to the clotting factor on its own, but in the presence of another phospholipid, phosphatidylethanolamine (PE), the interaction is much stronger.

Both PS and PE are abundant in the inner – but not the outer – leaflets of the double-layered membranes of cells. This keeps these lipids from coming into contact with clotting factors in the blood. But any injury that ruptures the cells brings PS and PE together with the clotting factors, initiating a chain of events that leads to blood clotting.
Researchers have developed many hypotheses to explain why clotting factors bind most readily to PS when PE is present. But none of these could fully explain the data.
In the new study, Morrissey’s lab engineered nanodiscs with high concentrations of PS and PE, and conducted functional tests to determine if they responded like normal membranes.
“We found that the nanodisc actually is very representative of what really happens in the cell in terms of the reaction of the lipids and the role that they play,” Morrissey said.

Then Tajkhorshid’s lab used advanced modeling and simulation methods to position every atom in the system and simulated the molecular interactions on a supercomputer. The simulations indicated that one PS molecule was linking directly to the GLA domain of the clotting factor via an amino acid (serine) on its head-group (the non-oily region of a phospholipid that orients toward the membrane surface).
More surprisingly, the simulations indicated that six other phospholipids also were drawing close to the GLA domain. These lipids, however, were bending their head-groups out of the way so that their phosphates, which are negatively charged, could interact with positively charged calcium ions associated with the GLA domain.
“The simulations were a breakthrough for us,” Morrissey said. “They provided a detailed view of how things might come together during membrane binding of coagulation factors. But these predictions had to be tested experimentally.”

Rienstra’s lab then analyzed the samples using SSNMR, a technique that allows researchers to precisely measure the distances and angles between individual atoms in large molecules or groups of interacting molecules. His group found that one of every six or seven PS molecules was binding directly to the clotting factor, providing strong experimental support for the model derived from the simulations.
“That turned out to be a key insight that we contributed to this study,” Rienstra said.
The team reasoned that if the PE head-groups were simply bending out of the way, then any phospholipid with a sufficiently small head-group should work as well as PE in the presence of PS. This also explained why only one PS molecule was actually binding to a GLA domain. The other phospholipids nearby were also interacting with the clotting factor, but more weakly.
The finding explained another mystery that had long daunted researchers. A different type of membrane lipid, phosphatidylcholine (PC), which has a very large head-group and is most abundant on the outer surface of cells, was known to block any association between the membrane and the clotting factor, even in the presence of PS.

Follow-up experiments showed that any phospholipid but PC enhanced the binding of PS to the GLA domain. This led to the “ABC” hypothesis: when PS is present, the GLA domain will interact with “Anything But Choline.”
“This is the first real insight at an atomic level of how most of the blood-clotting proteins interact with membranes, an interaction that’s known to be essential to blood clotting,” Morrissey said. The findings offer new targets for the development of drugs to regulate blood clotting, he said.
Morrissey and Tajkhorshid have their primary appointments in the U. of I. College of Medicine. Tajkhorshid also is an affiliate of the Beckman Institute at Illinois.
The National Heart, Lung and Blood Institute and the National Institute for General Medical Sciences provided funding for this study.

Source University of Illinois

Thursday, May 26, 2011

Iowa State physicists explain the long, useful lifetime of carbon-14

AMES, Iowa - The long, slow decay of carbon-14 allows archaeologists to accurately date the relics of history back as far as 60,000 years.

And while the carbon dating technique is well known and understood (the ratio of carbon-14 to other carbon isotopes is measured to determine the age of objects containing the remnants of any living thing), the reason for carbon-14's slow decay has not been understood. Why, exactly, does carbon-14 have a half-life of nearly 6,000 years while other light atomic nuclei have half-lives of minutes or seconds? (Half-life is the time it takes for the nuclei in a sample to decay to half the original amount.)
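The dating arithmetic follows directly from the half-life. Using the conventional value of about 5,730 years, the age t of a sample follows from the fraction of carbon-14 remaining:

    t = (t_half / ln 2) × ln(N0 / N)

where N0/N is the ratio of original to remaining carbon-14. A sample retaining one-eighth of its carbon-14 is three half-lives, or roughly 17,200 years, old; after ten half-lives (about 57,300 years, near the technique's 60,000-year limit) less than a thousandth of the original carbon-14 remains.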
"This has been a very significant puzzle to nuclear physicists for several decades," said James Vary, an Iowa State University professor of physics and astronomy. "And the underlying reason turned out to be a fairly exotic one."

The reason involves the strong three-nucleon forces (a nucleon is either a neutron or a proton) within each carbon-14 nucleus. It's all about the simultaneous interactions among any three nucleons and the resulting influence on the decay of carbon-14. And it's no easy task to simulate those interactions.
In this case, it took about 30 million processor-hours on the Jaguar supercomputer at Oak Ridge National Laboratory in Tennessee. Jaguar has a peak performance of 2.3 quadrillion calculations per second, a speed that topped the list of the world's top 500 supercomputers when the carbon-14 simulations were run.
The research project's findings were recently published online by the journal Physical Review Letters.
Vary and Pieter Maris, an Iowa State research staff scientist in physics and astronomy, are the lead authors of the paper. Collaborating on the paper are Petr Navratil of TRIUMF (Canada's National Laboratory for Particle and Nuclear Physics in Vancouver) and the Lawrence Livermore National Laboratory in California; Erich Ormand of Lawrence Livermore National Lab; plus Hai Ah Nam and David Dean of Oak Ridge National Lab. The research was supported by contracts and grants from the U.S. Department of Energy Office of Science.

Vary, in explaining the findings, likes to remind people that two subatomic particles with different charges will attract each other, while particles with the same charges repel each other. But what happens when three particles interact simultaneously, in ways that go beyond the simple addition of their pairwise interactions?
The strong three-nucleon interactions are complicated, but it turns out that a lot happens to extend the decay of carbon-14 atoms.

"The whole story doesn't come together until you include the three-particle forces," said Vary. "The elusive three-nucleon forces contribute in a major way to this fact of life that carbon-14 lives so long."
Maris said the three-particle forces work together to cancel the effects of the pairwise forces governing the decay of carbon-14. As a result, the carbon-14 half-life is extended by many orders of magnitude. And that's why carbon-14 is a very useful tool for determining the age of objects.
To get that answer, Maris said researchers needed a billion-by-billion matrix and a computer capable of handling its 30 trillion non-zero elements. They also needed to develop a computer code capable of simulating the entire carbon-14 nucleus, including the roles of the three-nucleon forces. Furthermore, they needed to perform the corresponding simulations for nitrogen-14, the daughter nucleus of the carbon-14 decay. And, they needed to figure out how the computer code could be scaled up for use on the Jaguar petascale supercomputer.
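Those two figures imply an extraordinarily sparse matrix, which is what makes the problem tractable at all. A back-of-the-envelope check in Python, using only the numbers quoted above:

    n = 10**9            # dimension of the billion-by-billion matrix
    nnz = 30 * 10**12    # non-zero elements quoted above

    density = nnz / n**2               # fraction of entries that are non-zero
    print(f"density = {density:.0e}")  # 3e-05: about 0.003% of the
                                       # 10^18 entries need to be stored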

"It was six months of work pressed into three months of time," Maris said.
But it was enough for the nuclear physicists to explain the long half-life of carbon-14. And now they say there are more puzzles to solve:
"Everybody now knows about these three-nucleon forces," Vary said. "But what about four-nucleon forces? This does open the door for more study."

Source Iowa State University

Tuesday, May 17, 2011

Sharpening the Nanofocus: Berkeley Lab Researchers Use Nanoantenna to Enhance Plasmonic Sensing

Highly coveted technical capabilities, such as the observation of single catalytic processes in nanoreactors or the optical detection of low concentrations of biochemical agents and gases, are an important step closer to fruition. Researchers with the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab), in collaboration with researchers at the University of Stuttgart in Germany, report the first experimental demonstration of antenna-enhanced gas sensing at the single-particle level. By placing a palladium nanoparticle on the focusing tip of a gold nanoantenna, they were able to clearly detect changes in the palladium’s optical properties upon exposure to hydrogen.

Top figure shows hydrogen (red) absorbed on a palladium nanoparticle, resulting in weak light scattering and barely detectable spectral changes. Bottom figure shows a gold antenna enhancing light scattering and producing an easy-to-detect spectral shift. (Image courtesy of Alivisatos group)

“We have demonstrated resonant antenna-enhanced single-particle hydrogen sensing in the visible region and presented a fabrication approach to the positioning of a single palladium nanoparticle in the nanofocus of a gold nanoantenna,” says Paul Alivisatos, Berkeley Lab’s director and the leader of this research. “Our concept provides a general blueprint for amplifying plasmonic sensing signals at the single-particle level and should pave the road for the optical observation of chemical reactions and catalytic activities in nanoreactors, and for local biosensing.”

Alivisatos, who is also the Larry and Diane Bock Professor of Nanotechnology at the University of California, Berkeley, is the corresponding author of a paper in the journal Nature Materials describing this research. The paper is titled “Nanoantenna-enhanced gas sensing in a single tailored nanofocus.” Co-authoring the paper with Alivisatos were Laura Na Liu, Ming Tang, Mario Hentschel and Harald Giessen.

One of the hottest new fields in technology today is plasmonics – the confinement of electromagnetic waves in dimensions smaller than half the wavelength of the incident photons in free space. Typically this is done at the interface between metallic nanostructures, usually gold, and a dielectric, usually air. The confinement of the electromagnetic waves in these metallic nanostructures generates electronic surface waves called “plasmons.” A matching of the oscillation frequency between plasmons and the incident electromagnetic waves gives rise to a phenomenon known as localized surface plasmon resonance (LSPR), which can concentrate the electromagnetic field into a volume less than a few hundred cubic nanometers. Any object brought into this locally confined field – referred to as the nanofocus – will influence the LSPR in a manner that can be detected via dark-field microscopy.

“Nanofocusing has immediate implications for plasmonic sensing,” says Laura Na Liu, lead author of the Nature Materials paper who was at the time the work was done a member of Alivisatos’ research group but is now with Rice University. “Metallic nanostructures with sharp corners and edges that form a pointed tip are especially favorable for plasmonic sensing because the field strengths of the electromagnetic waves are so strongly enhanced over such an extremely small sensing volume.”

Scanning electron microscopy image showing a palladium nanoparticle with a gold antenna to enhance plasmonic sensing. (Image courtesy of Alivisatos group)

Plasmonic sensing is especially promising for the detection of flammable gases such as hydrogen, where sensors that require electrical measurements pose safety issues because of the potential threat of sparking. Hydrogen, for example, can ignite or explode at concentrations of only four percent. Palladium was seen as a prime candidate for the plasmonic sensing of hydrogen because it readily and rapidly absorbs hydrogen, which alters its electrical and dielectric properties. However, the LSPRs of palladium nanoparticles yield broad spectral profiles that make detecting changes extremely difficult.

“In our resonant antenna-enhanced scheme, we use double electron-beam lithography in combination with a double lift-off procedure to precisely position a single palladium nanoparticle in the nanofocus of a gold nanoantenna,” Liu says. “The strongly enhanced gold-particle plasmon near-fields can sense the change in the dielectric function of the proximal palladium nanoparticle as it absorbs or releases hydrogen. Light scattered by the system is collected by a dark-field microscope with attached spectrometer and the LSPR change is read out in real time.”

Alivisatos, Liu and their co-authors found that the antenna enhancement effect could be controlled by changing the distance between the palladium nanoparticle and the gold antenna, and by changing the shape of the antenna.

“By amplifying sensing signals at the single-particle level, we eliminate the statistical and average characteristics inherent to ensemble measurements,” Liu says. “Moreover, our antenna-enhanced plasmonic sensing technique comprises a noninvasive scheme that is biocompatible and can be used in aqueous environments, making it applicable to a variety of physical and biochemical materials.”

For example, by replacing the palladium nanoparticle with other nanocatalysts, such as ruthenium, platinum, or magnesium, Liu says their antenna-enhanced plasmonic sensing scheme can be used to monitor the presence of numerous other important gases in addition to hydrogen, including carbon dioxide and the nitrous oxides. This technique also offers a promising plasmonic sensing alternative to the fluorescent detection of catalysis, which depends upon the challenging task of finding appropriate fluorophores. Antenna-enhanced plasmonic sensing also holds potential for the observation of single chemical or biological events.

“We believe our antenna-enhanced sensing technique can serve as a bridge between plasmonics and biochemistry,” Liu says. “Plasmonic sensing offers a unique tool for optically probing biochemical processes that are optically inactive in nature. In addition, since plasmonic nanostructures made from gold or silver do not bleach or blink, they allow for continuous observation, an essential capability for in-situ monitoring of biochemical behavior.”

This research was supported by the DOE Office of Science and the German Ministry of Research.
Lawrence Berkeley National Laboratory addresses the world’s most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab’s scientific expertise has been recognized with 12 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy’s Office of Science. For more, visit www.lbl.gov.
Additional information:
For more information about the research of Paul Alivisatos, visit the Website at http://www.cchem.berkeley.edu/pagrp/

Source Berkeley Lab

Sunday, May 15, 2011

Oxygen oases saved first animals from asphyxiation

Oxygen-rich microbial mats may have triggered the evolution of animals that could move.
The oldest known animal burrows are in 600-million-year-old rocks from the Ediacaran period. Their discovery surprised geologists, because oxygen levels in the oceans at the time were around one-tenth of today's levels – too low to support energetic activity.
To work out how the animals avoided suffocation, Murray Gingras of the University of Alberta in Edmonton, Canada, explored modern-day, low-oxygen lagoons in the Los Roques archipelago, Venezuela. He found that microbial mats on the lagoon floors contained four times as much oxygen as the virtually lifeless water above – enough to support a community of worms and insect larvae.
Gingras says the burrows these animals leave are similar to those found in the 600-million-year-old rocks. The ancient rocks also contain fossil microbial mats, which suggests the mats produced enough oxygen to allow animals to become mobile for the first time, despite the generally low oxygen conditions of the Ediacaran.
"This is a really neat solution to an old problem," says Ediacaran researcher Jim Gehling of the South Australian Museum in Adelaide. But he points out that animals in the Ediacaran might have struggled to survive at night, when the microbes stopped photosynthesising and oxygen levels fell.

Source  New Scientist

Wednesday, May 11, 2011

Proton dripping tests a fundamental force in nature

Like gravity, the strong interaction is a fundamental force of nature. It is the essential "glue" that holds atomic nuclei—composed of protons and neutrons—together to form atoms, the building blocks of nearly all the visible matter in the universe. Despite its prevalence in nature, researchers are still searching for the precise laws that govern the strong force. However, the recent discovery of an extremely exotic, short-lived nucleus called fluorine-14 in laboratory experiments may indicate that scientists are gaining a better grasp of these rules.

Fluorine-14 comprises nine protons and five neutrons. It exists for a tiny fraction of a second before a proton "drips" off, leaving an oxygen-13 nucleus behind. A team of researchers led by James Vary, a professor of physics at Iowa State University, first predicted the properties of fluorine-14 with the help of scientists in Lawrence Berkeley National Laboratory's (Berkeley Lab's) Computational Research Division, as well as supercomputers at the National Energy Research Scientific Computing Center (NERSC) and the Oak Ridge Leadership Computing Facility. These fundamental predictions served as motivations for experiments conducted by Vladilen Goldberg's team at Texas A&M's Cyclotron Institute, which achieved the first sightings of fluorine-14.
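The bookkeeping of that decay is easy to check: with nine protons and five neutrons, the mass number is A = 14, and shedding a single proton leaves Z = 8 (oxygen) with A = 13:

\[
{}^{14}_{9}\mathrm{F} \;\longrightarrow\; {}^{13}_{8}\mathrm{O} + p.
\]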

"This is a true testament to the predictive power of the underlying theory," says Vary. "When we published our theory a year ago, fluorine-14 had never been observed experimentally. In fact, our theory helped the team secure time on their newly commissioned cyclotron to conduct their experiment. Once their work was done, they saw virtually perfect agreement with our theory."

 
This graph shows the fluorine-14 supercomputer predictions (far left) and experimental results (center). The striking similarities between them indicate that researchers are gaining a better understanding of the precise laws that govern the strong force.

He notes that the ability to reliably predict the properties of exotic nuclei with supercomputers helps pave the way for researchers to cost-effectively improve designs of nuclear reactors, to predict results from next-generation accelerator experiments that will produce rare and exotic isotopes, and to better understand phenomena such as supernovae and neutron stars.
"We will never be able to travel to a neutron star and study it up close, so the only way to gain insights into its behavior is to understand how exotic nuclei like fluorine-14 behave and scale up," says Vary.

Developing a Computer Code to Simulate the Strong Force

Including fluorine-14, researchers have so far discovered about 3,000 nuclei in laboratory experiments and suspect that 6,000 more could still be created and studied. Understanding the properties of these nuclei will give researchers insights into the strong force, which could in turn be applied to develop and improve future energy sources.
With these goals in mind, the Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) program brought together theoretical physicists, applied mathematicians, computer scientists and students from universities and national laboratories to create the Universal Nuclear Energy Density Functional (UNEDF) project, which uses supercomputers to predict and understand the behavior of a wide range of nuclei, including their reactions, and to quantify uncertainties. Fluorine-14, in fact, was simulated with a code called Many Fermion Dynamics–nuclear (MFDn) that is part of the UNEDF project.
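MFDn itself is a large, highly optimized parallel code, but the configuration-interaction strategy behind it can be sketched in a few lines: express the nuclear Hamiltonian as a very sparse symmetric matrix in a many-body basis, then extract only the lowest few eigenvalues with a Lanczos-type iterative solver. The toy below is purely illustrative – a random sparse matrix stands in for a real nuclear Hamiltonian:

```python
# Toy sketch of the configuration-interaction approach used by codes like
# MFDn (illustrative only; a random matrix stands in for the Hamiltonian).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

dim = 5000        # real many-body bases reach billions of states
density = 1e-3    # nuclear Hamiltonians are extremely sparse

# Build a random sparse symmetric "Hamiltonian".
h = sp.random(dim, dim, density=density, random_state=42, format="csr")
h = (h + h.T) * 0.5
h = h + sp.diags(np.random.default_rng(0).normal(size=dim))  # diagonal terms

# A Lanczos-type solver returns just the few lowest eigenpairs, which is why
# the method scales to huge bases on supercomputers.
energies, states = eigsh(h, k=4, which="SA")
print("four lowest 'energies':", energies)
```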

According to Vary, much of this code was developed on NERSC systems over the past two decades. "We started by calculating how two or three neutrons and protons interact, then built up our interactions from there to predict the properties of exotic nuclei like fluorine-14 with nine protons and five neutrons," says Vary. "We actually had these capabilities for some time, but were waiting for computing power to catch up. It wasn't until the past three or four years that computing power became available to make the runs."
Through the SciDAC program, Vary's team partnered with Esmond Ng and other scientists in Berkeley Lab's Computational Research Division (CRD), who brought discrete and numerical mathematics expertise to improve a number of aspects of the code. "The prediction of fluorine-14 would not have been possible without SciDAC. Before our collaboration, the code had some bottlenecks, so performance was an issue," says Ng, who heads Berkeley Lab's Scientific Computing Group. Vary and Ng lead teams that are part of the UNEDF collaboration.

"We would not have been able to solve this problem without help from Esmond and the Berkeley Lab collaborators, or the initial investment from NERSC, which gave us the computational resources to develop and improve our code," says Vary. "It just would have taken too long. These contributions improved performance by a factor of three and helped us get more precise numbers."
He notes that without the improvements implemented with the Berkeley Lab team's help, a single simulation of fluorine-14 would have taken 18 hours on 30,000 processor cores. Thanks to the SciDAC collaboration, each final run required only 6 hours on the same number of processors. The final runs were performed on the Jaguar system at the Oak Ridge Leadership Computing Facility with an Innovative and Novel Computational Impact on Theory and Experiment (INCITE) allocation from the Department of Energy's Office of Advanced Scientific Computing Research (ASCR).
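A quick sanity check of those figures in core-hours confirms the quoted factor-of-three improvement:

```python
cores = 30_000
before = 18 * cores   # core-hours per run without the optimizations
after = 6 * cores     # core-hours per run after the SciDAC improvements
print(before, after, before / after)   # 540000 180000 3.0
```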

Source EurekaAlert!

Monday, May 9, 2011

Fundamental question on how life started solved

German and US researchers calculate a carbon nucleus of crucial importance

Their results appear in an upcoming issue of the journal Physical Review Letters.

"Attempts to calculate the Hoyle state have been unsuccessful since 1954," said Professor Dr. Ulf-G. Meißner (Helmholtz-Institut für Strahlen- und Kernphysik der Universität Bonn). "But now, we have done it!" The Hoyle state is an energy-rich form of the carbon nucleus. It is the mountain pass over which all roads from one valley to the next lead: From the three nuclei of helium gas to the much larger carbon nucleus. This fusion reaction takes place in the hot interior of heavy stars. If the Hoyle state did not exist, only very little carbon or other higher elements such as oxygen, nitrogen and iron could have formed. Without this type of carbon nucleus, life probably also would not have been possible.

The search for the "slave transmitter"

The Hoyle state had been verified by experiments as early as 1954, but calculating it had always failed. This form of carbon consists of only three very loosely linked helium nuclei - more a diffuse cloud than a compact carbon nucleus - and it does not occur on its own, only together with other forms of carbon. "This is as if you wanted to analyze a radio signal whose main transmitter and several slave transmitters are interfering with each other," explained Prof. Dr. Evgeny Epelbaum (Institute of Theoretical Physics II at Ruhr-Universität Bochum). The main transmitter is the stable carbon nucleus from which humans - among others - are made. "But we are interested in one of the unstable, energy-rich carbon nuclei; so we have to separate the weaker radio transmitter somehow from the dominant signal by means of a noise filter."

What made this possible was a new, improved computational approach that allowed the forces between several nuclear particles to be calculated more precisely than ever before. JUGENE, the supercomputer at Forschungszentrum Jülich, proved the suitable tool: the calculation took it almost a week. The results matched the experimental data so well that the researchers can be certain they have indeed calculated the Hoyle state.

More about how the Universe came into existence

"Now we can analyze this exciting and essential form of the carbon nucleus in every detail," explained Prof. Meißner. "We will determine how big it is, and what its structure is. And it also means that we can now take a very close look at the entire chain of how elements are formed."

In the future, this may even allow philosophical questions to be answered scientifically. For decades, the Hoyle state was a prime example of the argument that natural constants must have precisely their experimentally determined values, and not any different ones, since otherwise we would not be here to observe the Universe (the anthropic principle). "For the Hoyle state this means that it must have exactly the amount of energy it has, or else, we would not exist," said Prof. Meißner. "Now we can calculate whether - in a changed world with other parameters - the Hoyle state would indeed have a different energy when comparing the mass of three helium nuclei." If this is so, this would confirm the anthropic principle.
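The mass comparison Meißner alludes to can be made concrete with standard tabulated values (textbook numbers, not results of the study). Three helium-4 nuclei outweigh carbon-12 by

\[
3\,m({}^{4}\mathrm{He}) - m({}^{12}\mathrm{C}) \approx (3 \times 4.002602 - 12)\,\mathrm{u} \times 931.5\ \mathrm{MeV/u} \approx 7.27\ \mathrm{MeV},
\]

while the Hoyle state lies at about 7.65 MeV of excitation energy - a margin of only roughly 0.4 MeV. The narrowness of that margin is what gives the anthropic argument its force.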


The study was jointly conducted by the University of Bonn, Ruhr-Universität Bochum, North Carolina State University, and Forschungszentrum Jülich.

Source EurekaAlert!

Consumption, carbon emissions and international trade

Palo Alto, CA— Accurately calculating the amount of carbon dioxide emitted in the process of producing and bringing products to our doorsteps is nearly impossible, but still a worthwhile effort, two Carnegie researchers claim in a commentary published online this week by Proceedings of the National Academy of Sciences. The Global Ecology department's Ken Caldeira and Steven Davis commend the work of industrial ecologist Glen Peters and colleagues, published in the same journal late last month, and use that team's data to do additional analysis on the disparity between emissions and consumption in different parts of the world.

Caldeira and Davis point out that carbon is released at many stages of the production process, including in generating the energy used to create each component of a product, in making the manufacturing equipment, and in transporting factory workers to and from their jobs.

"Very quickly, we see that nothing exists in isolation and that to understand how much emission can be related to any particular action, we must have a reasonable accounting system that allocates total CO2 emissions to specific actions," Caldeira said. "The accounting system must conform to our intuitions about how responsibility should be shared among participants in complex systems."

Caldeira and Davis say Peters and his team are leaders in asking how much of the CO2 associated with consumption in the United States and other developed countries – used here to signify nations that made commitments under the Kyoto Protocol – is actually emitted in developing countries.

The earlier PNAS-published study looked at the impact of goods and services that were consumed in developed countries but produced in developing ones. Peters and his team found decreased emissions in the former since 1990 and increased emissions in the latter. But when emissions from the production of goods were assigned to the place where the goods were consumed, the trend in developed countries was reversed.
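The accounting identity underlying this transfer is simple, and a toy calculation makes the sign flip easy to see. The numbers below are made up purely for illustration – they are not the Peters data:

```python
# Consumption-based emissions = territorial emissions
#                               + emissions embodied in imports
#                               - emissions embodied in exports.
# All figures are hypothetical, for illustration only (Gt CO2).
territorial = {"developed": 12.0, "developing": 14.0}

# Emissions embodied in traded goods, keyed (exporter, importer).
embodied_trade = {
    ("developing", "developed"): 1.5,
    ("developed", "developing"): 0.4,
}

def consumption_based(region):
    imports = sum(v for (src, dst), v in embodied_trade.items() if dst == region)
    exports = sum(v for (src, dst), v in embodied_trade.items() if src == region)
    return territorial[region] + imports - exports

for region in territorial:
    print(region, "territorial:", territorial[region],
          "consumption-based:", round(consumption_based(region), 2))
# developed territorial: 12.0 consumption-based: 13.1
# developing territorial: 14.0 consumption-based: 12.9
```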

The Carnegie scientists took this data and broke it down in per-capita and per-dollar-of-GDP terms. They found that, on a per-capita basis, the average person in developed countries is responsible for more CO2 emissions than his or her counterpart in the developing world, and that the amount of CO2 emitted per dollar of GDP is improving at similar rates in the two categories.

Caldeira and Davis concluded that "the focus on territorial emissions … has perhaps led us to underemphasize the role of consumption of goods and services in driving these emissions. It is important to look at all drivers of emissions, as everyone along the supply chain has a vested interest in the benefits that accrue from our fossil-fueled global economy."


The Department of Global Ecology was established in 2002 to help build the scientific foundations for a sustainable future. The department is located on the campus of Stanford University, but is an independent research organization funded by the Carnegie Institution. Its scientists conduct basic research on a wide range of large-scale environmental issues, including climate change, ocean acidification, biological invasions, and changes in biodiversity.

The Carnegie Institution for Science (carnegiescience.edu) is a private, nonprofit organization headquartered in Washington, D.C., with six research departments throughout the U.S. Since its founding in 1902, the Carnegie Institution has been a pioneering force in basic scientific research. Carnegie scientists are leaders in plant biology, developmental biology, astronomy, materials science, global ecology, and Earth and planetary science.

Source EurekaAlert!

Wednesday, May 4, 2011

New evidence that caffeine is a healthful antioxidant in coffee

Scientists are reporting an in-depth analysis of how the caffeine in coffee, tea, and other foods seems to protect against conditions such as Alzheimer's disease and heart disease on the most fundamental levels. The report, which describes the chemistry behind caffeine's antioxidant effects, appears in ACS' The Journal of Physical Chemistry B.
Annia Galano and Jorge Rafael León-Carmona describe evidence suggesting that coffee is one of the richest sources of healthful antioxidants in the average person's diet. Some of the newest research points to caffeine (also present in tea, cocoa, and other foods) as the source of powerful antioxidant effects that may help protect people from Alzheimer's and other diseases. However, scientists know little about exactly how caffeine works in scavenging the so-called free radicals that have damaging effects in the body. And those few studies sometimes have reached contradictory conclusions.
In an effort to strengthen scientific knowledge about caffeine, they present detailed theoretical calculations on caffeine's interactions with free radicals. Their conclusions show "excellent" consistency with results that other scientists have reported from animal and other experiments, bolstering the likelihood that caffeine is, indeed, a source of healthful antioxidant activity in coffee.

Source   EurekaAlert!