Saturday, May 11, 2013

Sifting through atmospheres of far-off worlds

One breakthrough of recent years is the direct imaging of exoplanets. Ground-based telescopes have begun taking infrared pictures of planets posing near their stars in family portraits. But to astronomers, a picture is worth even more than a thousand words if its light can be broken apart into a rainbow of different wavelengths.

Those wishes are coming true as researchers begin to install infrared cameras equipped with spectrographs on ground-based telescopes. Spectrographs are instruments that spread an object's light apart, revealing the signatures of molecules. Project 1640, partly funded by NASA's Jet Propulsion Laboratory, Pasadena, Calif., recently accomplished this goal using the Palomar Observatory near San Diego.

"In just one hour, we were able to get precise composition information about four planets around one overwhelmingly bright star," said Gautam Vasisht of JPL, co-author of the new study appearing in the Astrophysical Journal. "The star is a hundred thousand times as bright as the planets, so we've developed ways to remove that starlight and isolate the extremely faint light of the planets."

Along with ground-based infrared imaging, other strategies for combing through the atmospheres of giant planets are being actively pursued. For example, NASA's Spitzer and Hubble space telescopes monitor planets as they cross in front of their stars and then disappear behind them. NASA's upcoming James Webb Space Telescope will use a comparable strategy to study the atmospheres of planets only slightly larger than Earth.

In the new study, the researchers examined HR 8799, a large star orbited by at least four known giant, red planets. Three of the planets were among the first ever directly imaged around a star, thanks to observations from the Gemini and Keck telescopes on Mauna Kea, Hawaii, in 2008. The fourth planet, the closest to the star and the hardest to see, was revealed in images taken by the Keck telescope in 2010.

That alone was a tremendous feat considering that all planet discoveries up until then had been made through indirect means, for example by looking for the wobble of a star induced by the tug of planets.

Those images weren't enough, however, to reveal any information about the planets' chemical composition. That's where spectrographs are needed -- to expose the "fingerprints" of molecules in a planet's atmosphere. Capturing a distant world's spectrum requires gathering even more planet light, and that means further blocking the glare of the star.

Project 1640 accomplished this with a collection of instruments, which the team installs on the ground-based telescopes each time they go on "observing runs." The instrument suite includes a coronagraph to mask out the starlight; an advanced adaptive optics system, which removes the blur of our moving atmosphere by making millions of tiny adjustments to two deformable telescope mirrors; an imaging spectrograph that records 30 images in a rainbow of infrared colors simultaneously; and a state-of-the-art wave front sensor that further adjusts the mirrors to compensate for scattered starlight.
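
To make the contrast problem concrete, here is a toy numerical sketch in Python with invented numbers; it is not the Project 1640 pipeline, which works from calibrated detector data with far more sophisticated modeling. The sketch subtracts an assumed model of the residual starlight halo from each of the 30 infrared channels and converts the quoted 100,000-to-1 contrast into an astronomical magnitude difference.

    # Toy illustration of starlight subtraction and spectral extraction.
    # All values are invented for demonstration purposes only.
    import numpy as np

    n_channels = 30                     # the imaging spectrograph records 30 infrared channels
    rng = np.random.default_rng(0)

    star_flux = 1.0e5                   # star ~100,000 times brighter than the planet
    planet_spectrum = 1.0 + 0.2 * np.sin(np.linspace(0, np.pi, n_channels))  # fake planet spectrum

    # Simulated measurement in each channel: residual starlight halo + planet + noise
    halo_model = star_flux * np.full(n_channels, 1.0e-3)   # assumed halo left after the coronagraph
    measured = halo_model + planet_spectrum + rng.normal(0.0, 0.05, n_channels)

    # "Remove that starlight": subtract the halo model channel by channel
    recovered = measured - halo_model

    delta_mag = 2.5 * np.log10(star_flux)                  # a 100,000:1 contrast is ~12.5 magnitudes
    print(f"A 100,000:1 contrast corresponds to ~{delta_mag:.1f} magnitudes")
    print("Recovered planet spectrum:", np.round(recovered, 2))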

"It's like taking a single picture of the Empire State Building from an airplane that reveals a bump on the sidewalk next to it that is as high as an ant," said Ben R. Oppenheimer, lead author of the new study and associate curator and chair of the Astrophysics Department at the American Museum of Natural History, N.Y., N.Y.

Their results revealed that all four planets, though nearly the same in temperature, have different compositions. Some, unexpectedly, do not have methane in them, and there may be hints of ammonia or other compounds that would also be surprising. Further theoretical modeling will help to understand the chemistry of these planets.

Meanwhile, the quest to obtain more and better spectra of exoplanets continues. Other researchers have used the Keck telescope and the Large Binocular Telescope near Tucson, Ariz., to study the emission of individual planets in the HR 8799 system. In addition to the HR 8799 system, only two others have yielded images of exoplanets. The next step is to find more planets ripe for giving up their chemical secrets. Several ground-based telescopes are being prepared for the hunt, including Keck, Gemini, Palomar and Japan's Subaru Telescope on Mauna Kea, Hawaii.

Ideally, the researchers want to find young planets that still have enough heat left over from their formation, and thus more infrared light for the spectrographs to see. They also want to find planets located far from their stars, and out of the blinding starlight. NASA's infrared Spitzer and Wide-field Infrared Survey Explorer (WISE) missions, and its ultraviolet Galaxy Evolution Explorer, now led by the California Institute of Technology, Pasadena, have helped identify candidate young stars that may host planets meeting these criteria.

"We're looking for super-Jupiter planets located faraway from their star," said Vasisht. "As our technique develops, we hope to be able to acquire molecular compositions of smaller, and slightly older, gas planets."

Still lower-mass planets, down to the size of Saturn, will be targets for imaging studies by the James Webb Space Telescope.

"Rocky Earth-like planets are too small and close to their stars for the current technology, or even for James Webb to detect. The feat of cracking the chemical compositions of true Earth analogs will come from a future space mission such as the proposed Terrestrial Planet Finder," said Charles Beichman, a co-author of the P1640 result and executive director of NASA's Exoplanet Science Institute at Caltech.

Though the larger, gas planets are not hospitable to life, the current studies are teaching astronomers how the smaller, rocky ones form.

"The outer giant planets dictate the fate of rocky ones like Earth. Giant planets can migrate in toward a star, and in the process, tug the smaller, rocky planets around or even kick them out of the system. We're looking at hot Jupiters before they migrate in, and hope to understand more about how and when they might influence the destiny of the rocky, inner planets," said Vasisht.

NASA's Exoplanet Science Institute manages time allocation on the Keck telescope for NASA. JPL manages NASA's Exoplanet Exploration program office. Caltech manages JPL for NASA.

A visualization from the American Museum of Natural History showing where the HR 8799 system is in relation to our solar system is online at http://www.youtube.com/watch?v=yDNAk0bwLrU .

More information about exoplanets and NASA's planet-finding program is at http://planetquest.jpl.nasa.gov .


TAG:Extrasolar Planets NASA Pluto Astronomy Space Exploration Space Telescopes

Carbon dioxide at NOAA's Mauna Loa Observatory reaches new milestone: Tops 400 parts per million

May 10, 2013 — On May 9, the daily mean concentration of carbon dioxide in the atmosphere of Mauna Loa, Hawaii, surpassed 400 parts per million (ppm) for the first time since measurements began in 1958. Independent measurements made by both NOAA and the Scripps Institution of Oceanography have been approaching this level during the past week. It marks an important milestone because Mauna Loa, as the oldest continuous carbon dioxide (CO2) measurement station in the world, is the primary global benchmark site for monitoring the increase of this potent heat-trapping gas.

Carbon dioxide pumped into the atmosphere by fossil fuel burning and other human activities is the most significant greenhouse gas (GHG) contributing to climate change. Its concentration has increased every year since scientists started making measurements on the slopes of the Mauna Loa volcano more than five decades ago. The rate of increase has accelerated since the measurements started, from about 0.7 ppm per year in the late 1950s to 2.1 ppm per year during the last 10 years.

"That increase is not a surprise to scientists," said NOAA senior scientist Pieter Tans, with the Global Monitoring Division of NOAA's Earth System Research Laboratory in Boulder, Colo. "The evidence is conclusive that the strong growth of global CO2 emissions from the burning of coal, oil, and natural gas is driving the acceleration."

Before the Industrial Revolution in the 19th century, global average CO2 was about 280 ppm. During the last 800,000 years, CO2 fluctuated between about 180 ppm during ice ages and 280 ppm during interglacial warm periods. Today's rate of increase is more than 100 times faster than the increase that occurred when the last ice age ended.
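
A quick back-of-the-envelope check of the "more than 100 times faster" comparison (the deglacial figures below are assumed round numbers for illustration, not values from the article):

    # Rough rate comparison; the deglacial numbers are approximate assumptions.
    modern_rate = 2.1                 # ppm per year over the last decade (from the article)
    deglacial_rise_ppm = 80.0         # assumed rise of roughly 180 -> 260 ppm after the last ice age
    deglacial_duration_yr = 7000.0    # assumed duration of that rise, in years

    deglacial_rate = deglacial_rise_ppm / deglacial_duration_yr
    print(f"Deglacial rate ~{deglacial_rate:.3f} ppm/yr")
    print(f"Modern rate     {modern_rate:.1f} ppm/yr")
    print(f"Ratio          ~{modern_rate / deglacial_rate:.0f}x faster")   # comfortably above 100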

It was researcher Charles David Keeling of the Scripps Institution of Oceanography, UC San Diego, who began measuring carbon dioxide at Mauna Loa in 1958, initiating what is now known as the "Keeling Curve." His son, Ralph Keeling, also a geochemist at Scripps, has continued the Scripps measurement record since his father's death in 2005.

"There's no stopping CO2 from reaching 400 ppm," said Ralph Keeling. "That's now a done deal. But what happens from here on still matters to climate, and it's still under our control. It mainly comes down to how much we continue to rely on fossil fuels for energy."

NOAA scientists with the Global Monitoring Division have made around-the-clock measurements there since 1974. Having two programs independently measure the greenhouse gas provides confidence that the measurements are correct.

Moreover, similar increases of CO2 are seen all over the world by many international scientists. NOAA, for example, which runs a global, cooperative air sampling network, reported last year that all Arctic sites in its network reached 400 ppm for the first time. These high values were a prelude to what is now being observed at Mauna Loa, a site in the subtropics, this year. Sites in the Southern Hemisphere will follow during the next few years. The increase in the Northern Hemisphere is always a little ahead of the Southern Hemisphere because most of the emissions driving the CO2 increase take place in the north.

Once emitted, CO2 remains in the atmosphere and oceans for thousands of years. Thus, climate changes forced by CO2 depend primarily on cumulative emissions, making it progressively more difficult to avoid further substantial climate change.

On the Web:

  • NOAA carbon dioxide data: http://www.esrl.noaa.gov/gmd/ccgg/trends/weekly.html
  • Scripps Institution of Oceanography carbon dioxide data: http://www.keelingcurve.ucsd.edu/
  • NOAA's Mauna Loa Observatory: http://www.esrl.noaa.gov/gmd/obop/mlo/
  • ANIMATION (carbon dioxide levels over 800,000 years): http://www.esrl.noaa.gov/gmd/ccgg/trends/history.html
  • IMAGES: http://www.esrl.noaa.gov/gmd/Photo_Gallery/Field_Sites/MLO/


TAG:Global Warming Climate Air Quality Environmental Issues Environmental Policy Forest

Potential flu pandemic lurks: Influenza viruses circulating in pigs, birds could pose risk to humans

A new study from MIT reveals that there are many strains of H3N2 circulating in birds and pigs that are genetically similar to the strain that caused the 1968 pandemic and have the potential to generate a pandemic if they leap to humans. The researchers, led by Ram Sasisekharan, the Alfred H. Caspary Professor of Biological Engineering at MIT, also found that current flu vaccines might not offer protection against these strains.

"There are indeed examples of H3N2 that we need to be concerned about," says Sasisekharan, who is also a member of MIT's Koch Institute for Integrative Cancer Research. "From a pandemic-preparedness point of view, we should potentially start including some of these H3 strains as part of influenza vaccines."

The study, which appears in the May 10 issue of the journal Scientific Reports, also offers the World Health Organization and public-health agencies insight into viral strains that should raise red flags if detected.

Influenza evolution

In the past 100 years, influenza viruses that emerged from pigs or birds have caused several notable flu pandemics. When one of these avian or swine viruses gains the ability to infect humans, it can often evade the immune system, which is primed to recognize only strains that commonly infect humans.

Strains of H3N2 have been circulating in humans since the 1968 pandemic, but they have evolved to a less dangerous form that produces a nasty seasonal flu. However, H3N2 strains are also circulating in pigs and birds.

Sasisekharan and his colleagues wanted to determine the risk of H3N2 strains re-emerging in humans, whose immune systems would no longer recognize the more dangerous forms of H3N2. This type of event has a recent precedent: In 2009, a strain of H1N1 emerged that was very similar to the virus that caused the 1918 pandemic that killed 50 million to 100 million people.

"We asked if that could happen with H3," Sasisekharan says. "You would think it's more readily possible with H3 because we observe that there seems to be a lot more mixing of H3 between humans and swine."

Genetic similarities

In the new study, the researchers compared the 1968 H3N2 strain and about 1,100 H3 strains now circulating in pigs and birds, focusing on the gene that codes for the viral hemagglutinin (HA) protein.

After comparing HA genetic sequences in five key locations that control the viruses' interactions with infected hosts, the researchers calculated an "antigenic index" for each strain. This value indicates the percentage of these genetic regions identical to those of the 1968 pandemic strain and helps determine how well an influenza virus can evade a host's immune response.

The researchers also took into account the patterns of attachment of the HA protein to sugar molecules called glycans. The virus' ability to attach to glycan receptors found on human respiratory-tract cells is key to infecting humans.

Seeking viruses with an antigenic index of at least 49 percent and glycan-attachment patterns identical to those of the 1968 virus, the research team identified 581 H3 viruses isolated since 2000 that could potentially cause a pandemic. Of these, 549 came from birds and 32 from pigs.
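
The screening logic can be illustrated with a short sketch: score each strain by the fraction of positions in five key regions that match a reference, then keep strains whose index is at least 49 percent and whose glycan-attachment pattern matches. Everything below -- sequences, region boundaries, helper names -- is hypothetical; the published method is far more detailed.

    # Toy version of the screen described above. Sequences, regions and
    # patterns are invented placeholders, not data from the study.

    REFERENCE = "MKTIIALSYIFCLALGQDLPGND"                       # made-up stand-in for the 1968 HA regions
    REGIONS = [(0, 5), (5, 10), (10, 14), (14, 19), (19, 23)]   # five hypothetical key regions

    def antigenic_index(seq, ref=REFERENCE):
        """Percent of positions within the key regions that are identical to the reference."""
        positions = [i for start, end in REGIONS for i in range(start, end)]
        matches = sum(1 for i in positions if i < len(seq) and seq[i] == ref[i])
        return 100.0 * matches / len(positions)

    def could_pose_risk(seq, glycan_pattern, ref_pattern="human-like"):
        """Two filters from the article: index >= 49% and a matching glycan-attachment pattern."""
        return antigenic_index(seq) >= 49.0 and glycan_pattern == ref_pattern

    strains = {   # hypothetical strains
        "swine_A": ("MKTIIALSYIFCLALGQDLPGND", "human-like"),   # identical -> flagged
        "avian_B": ("MKTIVALSYIFALALGQELPGSD", "human-like"),   # similar -> flagged
        "avian_C": ("AAAAAAAAAAAAAAAAAAAAAAA", "avian-like"),   # dissimilar -> not flagged
    }
    for name, (seq, pattern) in strains.items():
        flag = "flagged" if could_pose_risk(seq, pattern) else "not flagged"
        print(f"{name}: antigenic index {antigenic_index(seq):.0f}% -> {flag}")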

The researchers then exposed some of these strains to antibodies provoked by the current H3 seasonal-flu vaccines. As they predicted, these antibodies were unable to recognize or attack these H3 strains. Of the 581 HA sequences, six swine strains already contain the standard HA mutations necessary for human adaptation, and are thus capable of entering the human population either directly or via genetic reassortment, Sasisekharan says.

"One of the amazing things about the influenza virus is its ability to grab genes from different pools," he says. "There could be viral genes that mix among pigs, or between birds and pigs."

Sasisekharan and colleagues are now doing a similar genetic study of H5 influenza strains. The H3 study was funded by the National Institutes of Health and the National Science Foundation.


TAG:Bird Flu Influenza Cold and Flu Bird Flu Research Microbes and More Virology

What Nook could buy Microsoft

Microsoft has bid $1 billion to buy the digital assets of Nook Media, the e-book business led by Barnes & Noble, according to a report published late Wednesday. The news sparked immediate enthusiasm on Wall Street, where the bookseller's stock price was up 18% as of early Thursday afternoon. But, despite the excitement, an all-important question remains: If Microsoft goes on to consummate the rumored deal, what will it get out of the acquisition?

TechCrunch broke the news, claiming it had acquired internal documents that revealed not only Microsoft's offer, but also plans to phase out Nook's Android-based tablets by the end of 2014. An anonymous source told The New York Times that the documents are authentic and only a few weeks old. According to the Times, which noted that Nook was valued at $1.8 billion as recently as December, it is not yet clear if a deal will be closed. Microsoft has so far declined to comment, but the Times source claimed any announcements are at least weeks away.

Click to read the rest of this story on InformationWeek.


TAG:Barnes and Noble Microsoft Nook E Reader

Kestrels, other urban birds are stressed by human activity

The peer-reviewed paper "Reproductive failure of a human-tolerant species, the American kestrel, is associated with stress and human disturbance," was published in the British Ecological Society's Journal of Applied Ecology (May 10, 2013). Boise State University graduate student Erin Strasser, now with the Rocky Mountain Bird Observatory, and Julie Heath, a professor in the Boise State Department of Biological Sciences and Raptor Research Center, conducted the research.

Strasser and Heath conducted research along one of Idaho's major expressways, Interstate 84, and in suburban and rural areas south of the state's capital city of Boise. Since 1987, researchers from Boise State and the U.S. Geological Survey have monitored a number of nest boxes located along the area's roadways, in people's back yards, and in sagebrush-steppe habitat.

In this study, Strasser and Heath were interested in understanding how human-dominated landscapes affect breeding kestrels, with particular attention paid to the link between disturbance, stress and nest failure. The two monitored the boxes to determine nest fate, and collected a small blood sample from adult birds. The researchers were looking at corticosterone levels, which indicate stress levels (the equivalent in humans is cortisol). Corticosterone can lead to behavioral and physiological changes that allow individuals to cope with stressful situations, while suppressing other activities such as reproduction.

The data showed that female kestrels nesting in areas with high human activity, such as along noisy roadways, have higher corticosterone levels, but males do not. This could be because females spend more time in the nesting box and thus are exposed more often to stressors such as vehicle noise. Too much ambient noise may make it difficult for them to assess the level of danger, leading to higher stress levels and increased vigilance behavior, decreased parental care or the decision to abandon their nest. Kestrels nesting in high disturbance areas were almost 10 times more likely to abandon their nest than those in more isolated areas, and this effect lessened the further a nest was from the road.

"We hypothesized that this was a mechanism for how humans are impacting wildlife," Heath said. "To birds, areas with human activity may be perceived as a high-risk environment."

Given that the vast majority of land in the continental United States is within a mile of a road, wildlife increasingly are exposed to chronic levels of road noise. The resulting increase in stress levels could cause fundamental changes in physiology and behavior across species inhabiting human-dominated environments, which over time could lead to population declines.

As scientists continue to connect the dots between human disturbances and the resulting long-term effects on wildlife, changes already are yielding positive results. Research conducted in preserve areas, such as state parks, has led to reduced speeds and attempts to limit noise. Noise mitigation, however, while locally effective, may not protect widespread populations such as kestrels from the pervasive threat of traffic noise.

The study concludes that until regulations or economic incentives are developed to encourage engineering innovations that result in quieter roads, projects in areas of human activity with favorable habitat should be discouraged to decrease the risk of ecological traps. In the meantime, Boise State's nesting boxes have been moved from freeway locations to more suitable areas.

"Birds evolved in an environment that was not dominated by humans," Heath noted. "In recent history, human roads and structures have left few areas untouched. We're just starting to understand the real consequences."


TAG:Birds Nature Pollution Environmental Issues Urbanization Land Management

Markets erode moral values

May 10, 2013 — Many people express objections to child labor, exploitation of the workforce or meat production involving cruelty against animals. At the same time, however, people ignore their own moral standards when acting as market participants, searching for the cheapest electronics, fashion or food. Thus, markets reduce moral concerns. This is the main result of an experiment conducted by economists from the Universities of Bonn and Bamberg.

The results are presented in the latest issue of the journal Science.

Prof. Dr. Armin Falk from the University of Bonn and Prof. Dr. Nora Szech from the University of Bamberg, both economists, have shown in an experiment that markets erode moral concerns. In comparison to non-market decisions, moral standards are significantly lower if people participate in markets.

In markets, people ignore their individual moral standards

"Our results show that market participants violate their own moral standards," says Prof. Falk. In a number of different experiments, several hundred subjects were confronted with the moral decision between receiving a monetary amount and killing a mouse versus saving the life of a mouse and foregoing the monetary amount. "It is important to understand what role markets and other institutions play in moral decision making. This is a question economists have to deal with," says Prof. Szech.

"To study immoral outcomes, we studied whether people are willing to harm a third party in exchange to receiving money. Harming others in an intentional and unjustified way is typically considered unethical," says Prof. Falk. The animals involved in the study were so-called "surplus mice," raised in laboratories outside Germany. These mice are no longer needed for research purposes. Without the experiment, they would have all been killed. As a consequence of the study many hundreds of young mice that would otherwise all have died were saved. If a subject decided to save a mouse, the experimenters bought the animal. The saved mice are perfectly healthy and live under best possible lab conditions and medical care.

Simple bilateral markets affect moral decisions

A subgroup of subjects decided between life and money in a non-market decision context (individual condition). This condition allows for eliciting moral standards held by individuals. The condition was compared to two market conditions in which either only one buyer and one seller (bilateral market) or a larger number of buyers and sellers (multilateral market) could trade with each other. If a market offer was accepted, a trade was completed, resulting in the death of a mouse. Compared to the individual condition, a significantly higher number of subjects were willing to accept the killing of a mouse in both market conditions. This is the main result of the study. Thus, markets result in an erosion of moral values. "In markets, people face several mechanisms that may lower their feelings of guilt and responsibility," explains Nora Szech. In market situations, people focus on competition and profits rather than on moral concerns. Guilt can be shared with other traders. In addition, people see that others violate moral norms as well.

"If I don't buy or sell, someone else will."

In addition, in markets with many buyers and sellers, subjects may justify their behavior by stressing that their impact on outcomes is negligible. "This logic is a general characteristic of markets," says Prof. Falk. Excuses or justifications appeal to the saying, "If I don't buy or sell now, someone else will." For morally neutral goods, however, such effects are of minor importance. Nora Szech explains: "For goods without moral relevance, differences in decisions between the individual and the market conditions are small. The reason is simply that in such cases the need to share guilt or excuse behavior is absent."



TAG:Consumer Behavior Behavior Perception Ethics Bioethics Economics

Huawei CEO dismisses security, spying concerns

The founder and CEO of Chinese networking equipment manufacturer Huawei, in his first-ever media interview, Thursday dismissed allegations that backdoors may have been built into the company's products to facilitate Chinese espionage.

"Huawei has no connection to the cybersecurity issues the U.S. has encountered in the past, current and future," Huawei CEO Ren Zhengfei, 68, told local reporters -- through an interpreter -- while on a visit to New Zealand this week, according to news reports.

Since founding the company 26 years ago, Ren had previously refused to conduct media interviews. But during his visit this week to New Zealand, he agreed to meet with reporters from four of the country's news outlets.

Click to read the rest of this article on Information Week.
TAG:security China Huawei ZTE backdoor cyber spy telecommunications cyber espionage

Epson AR glasses aim at industry

PORTLAND, Ore. -- When Epson first announced its augmented reality (AR) head-mounted display (HMD) called Moverio last year, it was aimed at the same consumer applications as the Google Glass research project, only Epson's solution was already available. However, if Epson's experience is any indicator, Google Glass will be a failure. Now Epson is one generation ahead of Google Glass, since it is readying a second-generation Moverio, but this time its aim is the industrial market, where it has already signed up professional customers such as Scope Technologies (Edmonton, Alberta).

"We immediately saw the potential for Epson's Moverio for industrial applications, such as eliminating the need for service manuals by showing technicians exactly how to perform maintenance tasks with heads up information displayed right on the device being repaired," said founder of Scope Technologies founder, Scott Montgomerie.

Scope Technologies uses Moverio to project tactical information -- such as showing virtual tools performing the necessary task right on the equipment to be repaired. Thus instead of consulting a service manual, the technician merely dons the Moverio glasses and looks at the device to be repaired, with the location of the faults and the steps to repair them appearing to be projected right onto the parts.


Epson's Moverio augmented reality (AR) glasses make the service manual obsolete by directing personnel with arrows and text added right on top of the device being serviced. SOURCE: Epson

"We are currently working with our industrial partners like Scope Technologies, to bring this wearable see-through technology to the professional market," said Eric Mizufuka, new markets product manager at Epson.
TAG:EETimes NextGenLog Electronics

Earliest archaeological evidence of human ancestors hunting and scavenging

May 10, 2013 — A recent Baylor University research study has shed new light on the diet and food acquisition strategies of some of the earliest human ancestors in Africa.

Beginning around two million years ago, early stone tool-making humans, known scientifically as Oldowan hominins, started to exhibit a number of physiological and ecological adaptations that required greater daily energy expenditures, including an increase in brain and body size, heavier investment in their offspring and significant home-range expansion. Demonstrating how these early humans acquired the extra energy they needed to sustain these shifts has been the subject of much debate among researchers.

A recent study led by Joseph Ferraro, Ph.D., assistant professor of anthropology at Baylor, offers new insight in this debate with a wealth of archaeological evidence from the two million-year-old site of Kanjera South (KJS), Kenya. The study's findings were recently published in PLOS One.

"Considered in total, this study provides important early archaeological evidence for meat eating, hunting and scavenging behaviors -cornerstone adaptations that likely facilitated brain expansion in human evolution, movement of hominins out of Africa and into Eurasia, as well as important shifts in our social behavior, anatomy and physiology," Ferraro said.

Located on the shores of Lake Victoria, KJS contains "three large, well-preserved, stratified" layers of animal remains. The research team worked at the site for more than a decade, recovering thousands of animal bones and rudimentary stone tools.

According to researchers, hominins at KJS met their new energy requirements through an increased reliance on meat eating. Specifically, the archaeological record at KJS shows that hominins acquired an abundance of nutritious animal remains through a combination of both hunting and scavenging behaviors. The KJS site is the earliest known archaeological evidence of these behaviors.

"Our study helps inform the 'hunting vs. scavenging' debate in Paleolithic archaeology. The record at KJS shows that it isn't a case of either/or for Oldowan hominins two million years ago. Rather hominins at KJS were clearly doing both," Ferraro said.

The fossil evidence for hominin hunting is particularly compelling. The record shows that Oldowan hominins acquired and butchered numerous small antelope carcasses. These animals are well represented at the site by most or all of their bones, from the tops of their heads to the tips of their hooves, indicating to researchers that they were transported to the site as whole carcasses.

Many of the bones also show evidence of cut marks made when hominins used simple stone tools to remove animal flesh. Some bones also bear evidence that hominins used fist-sized stones to break them open to acquire bone marrow.

In addition, modern studies in the Serengeti--an environment similar to KJS two million years ago--have also shown that predators completely devour antelopes of this size within minutes of their deaths. As a result, hominins could only have acquired these valuable remains on the savanna through active hunting.

The site also contains a large number of isolated heads of wildebeest-sized antelopes. In contrast to small antelope carcasses, the heads of these somewhat larger animals remain consumable several days after death and could be scavenged, as even the largest African predators, such as lions and hyenas, were unable to break them open to access their nutrient-rich brains.

"Tool-wielding hominins at KJS, on the other hand, could access this tissue and likely did so by scavenging these heads after the initial non-human hunters had consumed the rest of the carcass," Ferraro said. "KJS hominins not only scavenged these head remains, they also transported them some distance to the archaeological site before breaking them open and consuming the brains. This is important because it provides the earliest archaeological evidence of this type of resource transport behavior in the human lineage."

Other contributing authors to the study include: Thomas W. Plummer of Queens College & NYCEP; Briana L. Pobiner of the National Museum of Natural History, Smithsonian Institution; James S. Oliver of Illinois State Museum and Liverpool John Moores University; Laura C. Bishop of Liverpool John Moores University; David R. Braun of George Washington University; Peter W. Ditchfield of University of Oxford; John W. Seaman III, Katie M. Binetti and John W. Seaman Jr. of Baylor University; Fritz Hertel of California State University and Richard Potts of the National Museum of Natural History, Smithsonian Institution and National Museums of Kenya.

The research was supported by funding from the National Science Foundation, Leakey Foundation, Wenner-Gren Foundation, National Geographic Society, The Leverhulme Trust, University of California, Baylor University and the City University of New York. Additional logistical support was provided by the Smithsonian Institution's Human Origins Program and the Peter Buck Fund for Human Origins Research, the British Institute of Eastern Africa and the National Museums of Kenya.



TAG:Environmental Policy Global Warming Sustainability Cultures Ancient Civilizations Lost Treasures

Friday, May 10, 2013

TSMC's sales boom in April


LONDON – Leading foundry chip maker Taiwan Semiconductor Manufacturing Co. Ltd. has seen its sales in 2013 running 25 percent higher than in 2012, with the prospect of a strong second half to the year.

TSMC has reported net sales for April of NT$50.07 billion (about $1.69 billion), an increase of 13.5 percent from March 2013 and up 23.5 percent over April 2012.

The company's revenues for the first four months of the year were NT$182.83 billion (about $6.16 billion), an increase of 25.1 percent compared to the same period in 2012.

At the most recent analysts' conference call, Morris Chang, chairman and CEO, said that in 2013 TSMC would grow at a much higher rate than the 10 percent he forecast for the foundry sector as a whole. Chang takes a top-down view: TSMC's estimate of global GDP growth for 2013 remains unchanged at about 2.6 percent, while the semiconductor market is forecast to grow at about 4 percent, the fabless industry at 9 percent and the foundry sector at 10 percent.
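
For readers who want to check the arithmetic, the growth percentages quoted above pin down the implied year-ago figures and the NT$/US$ rate in use. The sketch below simply inverts the reported percentages; the derived values are inferences, not numbers from TSMC's release.

    # Back out implied figures from the values quoted in the article.
    april_2013_ntd = 50.07e9          # NT$50.07 billion, April 2013 net sales
    april_2013_usd = 1.69e9           # "about $1.69 billion"
    ytd_2013_ntd = 182.83e9           # NT$182.83 billion, January-April 2013

    implied_fx = april_2013_ntd / april_2013_usd     # roughly NT$29.6 per US$
    implied_april_2012 = april_2013_ntd / 1.235      # reported +23.5% year over year
    implied_march_2013 = april_2013_ntd / 1.135      # reported +13.5% month over month
    implied_ytd_2012 = ytd_2013_ntd / 1.251          # reported +25.1% year over year

    print(f"Implied exchange rate:      ~NT${implied_fx:.1f} per US$")
    print(f"Implied April 2012 sales:   ~NT${implied_april_2012 / 1e9:.1f} billion")
    print(f"Implied March 2013 sales:   ~NT${implied_march_2013 / 1e9:.1f} billion")
    print(f"Implied Jan-Apr 2012 sales: ~NT${implied_ytd_2012 / 1e9:.1f} billion")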

Much of TSMC's current success is based on its commanding position in the supply of 28-nm CMOS and the demand for it for mobile applications. In the call Chang said that TSMC would triple production and revenue from 28-nm wafers in 2013 compared with 2012. The high-K metal gate variant will start shipping in higher volume than the oxynitride version of 28-nm in third quarter of 2013, Chang said.

April sales at rival foundry United Microelectronics Corp. were flat sequentially, and its year-to-date sales of NT$10.28 billion (about $350 million) were up by 3.9 percent compared to the same period in 2012. Manufacturing capacity utilization at UMC was 78 percent in 1Q13, down from 80 percent in 4Q12.


Related links and articles:

TSMC races up MEMS foundry ranking

ARM continues to outperform market

TSMC posts strong outlook

TSMC starts FinFETs in 2013, tries EUV at 10 nm

TAG:Morris Chang TSMC foundry sales April semiconductor UMC

IBM analytics solving cancer

PORTLAND, Ore. -- IBM's medical diagnostics analytics are not exactly a 'cure' for cancer, but they are aiming to lower the cost of a promising new remedy for destroying existing tumors. The new technique uses a high-energy particle accelerator -- a room-sized version of the gigantic accelerators used to unravel physics -- that directs a proton beam to precisely kill the cancer cells inside tumors, leaving adjacent tissue untouched. IBM Research (Austin, Texas) is aiming to reduce the cost of this promising new therapy, using software analytics running on a Power7 cluster supercomputer.

The stakes are huge. Last year there were over 12 million cancer patients worldwide receiving various therapies, and that number is predicted to increase to 21 million by 2030. Unfortunately, today there are only 10 centers offering proton therapy, though 17 new ones are currently under construction at a cost of over $200 million each. Over $3.4 billion is being invested in current-generation proton accelerators, and smaller, less expensive accelerators are also being designed to make the technique more affordable.

The big bottleneck, however, is the computational workload required to utilize proton therapy.

Today proton therapy requires a long, involved preparation process, starting with a magnetic resonance imaging (MRI) or computed tomography (CT) scan to identify the tumor's location, after which a bevy of doctors and technicians spend over a week mapping out exactly how to use a proton beam to destroy it. Unlike traditional radiation therapy, proton beams do not affect human tissue as they pass through it, releasing their energy only at the very end of the path, which must be carefully plotted to end precisely within the tumor.


Today it takes a team of doctors and technicians a week to map out the path for a proton beam to kill a tumor, but in the meantime the tumor grows, making success less likely. IBM's analytics running on a Power 730 cluster computer map out the same proton beam path in 15 minutes. SOURCE: IBM
"The protons are accelerated to half the speed of light, but do not loose that energy until they hit a threshold, after which they release burst of kinetic energy," said IBM Research scientist, Sani Nassif. "Therefore, they can go deep into the body -- almost six inches deep -- where they create a very dynamic hot spot while not touching anything between the skin and that point."
TAG:Medical Cancer Tumor Cure EETimes NextGenLog Electronics

Tower upbeat despite loss-making Q1


LONDON – Tower Semiconductor Ltd., the specialty foundry that trades as TowerJazz, made a net loss of $23.2 million on revenues of $112.6 million in the first quarter of 2013. Revenues were down 23.7 percent from $147.6 million in the previous quarter and down 33.0 percent from the same quarter a year before.

The decline in revenue was on the low side of previously given guidance for the quarter of $110 million to $120 million. The steep drop in revenue was predicted to come from the phasing out of supply contracts with Micron Technology Inc. at a wafer fab in Nishiwaki, Japan. Tower (Migdal Haemek, Israel) acquired the fab from Micron and is in the process of bringing up replacement customers.

Tower said it expects 2Q13 revenue to be in the range $122 million to $132 million with a mid-point up 13 percent on first quarter revenue.

"We leave first quarter confident in our tactics and strategies, demonstrated by quarter over quarter double digit guidance growth, and projected quarter over quarter growth throughout the year. This growth is driven by a record number of full mask tape-outs into our 8-inch facilities in Newport Beach, Migdal HaEmek and Nishiwaki and strong and increasing demand in our Israeli 6-inch factory," said Russell Ellwanger, CEO of Tower, in a statement.


Related links and articles:

Tower sees soft Q1, then growth

Tower, On Semi team on display chip

Tower in talks over Micron fab

India fab decision likely this quarter



TAG:Tower TowerJazz semiconductor financial results

Ice-free Arctic may be in our future

"While existing geologic records from the Arctic contain important hints about this time period, what we are presenting is the most continuous archive of information about past climate change from the entire Arctic borderlands. As if reading a detective novel, we can go back in time and reconstruct how the Arctic evolved with only a few pages missing here and there," says Brigham-Grette.

Results of analyses that provide "an exceptional window into environmental dynamics" never before possible were published this week in Science and have "major implications for understanding how the Arctic transitioned from a forested landscape without ice sheets to the ice- and snow-covered land we know today," she adds.

Their data come from analyzing sediment cores collected in the winter of 2009 from ice-covered Lake El'gygytgyn, the oldest deep lake in the northeast Russian Arctic, located 100 km north of the Arctic Circle. "Lake E" was formed 3.6 million years ago when a meteorite, perhaps a kilometer in diameter, hit the Earth and blasted out an 11-mile (18 km) wide crater. It has been collecting sediment layers ever since. Luckily for geoscientists, it lies in one of the few Arctic areas not eroded by continental ice sheets during ice ages, so a thick, continuous sediment record was left remarkably undisturbed. Cores from Lake E reach back in geologic time nearly 25 times farther than Greenland ice cores that span only the past 140,000 years.

"One of our major findings is that the Arctic was very warm in the middle Pliocene and Early Pleistocene [~ 3.6 to 2.2 million years ago] when others have suggested atmospheric CO2 was not much higher than levels we see today. This could tell us where we are going in the near future. In other words, the Earth system response to small changes in carbon dioxide is bigger than suggested by earlier climate models," the authors state.

Important to the story are the fossil pollen found in the core, including Douglas fir and hemlock. These allow the reconstruction of vegetation around the lake in the past, which in turn paints a picture of past temperatures and precipitation.

Another significant finding is documentation of sustained warmth in the Middle Pliocene, with summer temperatures of about 59 to 61 degrees F [15 to 16 degrees C], about 14.4 degrees F [8 degrees C] warmer than today, and regional precipitation three times higher. "We show that this exceptional warmth well north of the Arctic Circle occurred throughout both warm and cold orbital cycles and coincides with a long interval of 1.2 million years when other researchers have shown the West Antarctic Ice Sheet did not exist," Brigham-Grette notes. Hence both poles share some common history, but the pace of change differed.

Her co-authors, Martin Melles of the University of Cologne and Pavel Minyuk of Russia's Northeast Interdisciplinary Scientific Research Institute, Magadan, led research teams on the project. Robert DeConto, also at UMass Amherst, led climate modeling efforts. These data were compared with ecosystem reconstructions performed by collaborators at universities of Berlin and Cologne.

The Lake E cores provide a terrestrial perspective on the stepped pacing of several portions of the climate system through the transition from a warm, forested Arctic to the first occurrence of land ice, Brigham-Grette says, and the eventual onset of major glacial/interglacial cycles. "It is very impressive that summer temperatures during warm intervals even as late as 2.2 million years ago were always warmer than in our pre-Industrial reconstructions."

Minyuk notes that they also observed a major drop in Arctic precipitation at around the same time large Northern Hemisphere ice sheets first expanded and ocean conditions changed in the North Pacific. This has major implications for understanding what drove the onset of the ice ages.

The sediment core also reveals that even during the first major "cold snap" to show up in the record, 3.3 million years ago, temperatures in the western Arctic were similar to recent averages of the past 12,000 years. "Most importantly, conditions were not 'glacial,' raising new questions as to the timing of the first appearance of ice sheets in the Northern Hemisphere," the authors add.

This week's paper is the second article published in Science by these authors using data from the Lake E project. Their first, in July 2012, covered the period from the present to 2.8 million years ago, while the current work addresses the record from 2.2 to 3.6 million years ago. Melles says, "This latest paper completes our goal of providing an overview of new knowledge of the evolution of Arctic change across the western borderlands back to 3.6 million years and places this record into a global context with comparisons to records in the Pacific, the Atlantic and Antarctica."

The new Lake E paleoclimate reconstructions and climate modeling are consistent with estimates made by other research groups that support the idea that Earth's climate sensitivity to CO2 may well be higher than suggested by the 2007 report of the Intergovernmental Panel on Climate Change.


TAG:Climate Global Warming Ice Ages Early Climate Fossils Origin of Life

Heady mathematics: Describing popping bubbles in a foam

May 9, 2013 — Bubble baths and soapy dishwater, the refreshing head on a beer and the luscious froth on a cappuccino. All are foams, beautiful yet ephemeral as the bubbles pop one by one.

Two University of California, Berkeley, researchers have now described mathematically the successive stages in the complex evolution and disappearance of foamy bubbles, a feat that could help in modeling industrial processes in which liquids mix or in the formation of solid foams such as those used to cushion bicycle helmets.

Applying these equations, they created mesmerizing computer-generated movies showing the slow and sedate disappearance of wobbly foams one burst bubble at a time.

The applied mathematicians, James A. Sethian and Robert I. Saye, will report their results in the May 10 issue of Science. Sethian, a UC Berkeley professor of mathematics, leads the mathematics group at Lawrence Berkeley National Laboratory (LBNL). Saye will graduate from UC Berkeley this May with a PhD in applied mathematics.

"This work has application in the mixing of foams, in industrial processes for making metal and plastic foams, and in modeling growing cell clusters," said Sethian. "These techniques, which rely on solving a set of linked partial differential equations, can be used to track the motion of a large number of interfaces connected together, where the physics and chemistry determine the surface dynamics."

The problem with describing foams mathematically has been that the evolution of a bubble cluster a few inches across depends on what's happening in the extremely thin walls of each bubble, which are thinner than a human hair.

"Modeling the vastly different scales in a foam is a challenge, since it is computationally impractical to consider only the smallest space and time scales," Saye said. "Instead, we developed a scale-separated approach that identifies the important physics taking place in each of the distinct scales, which are then coupled together in a consistent manner."

Saye and Sethian discovered a way to treat different aspects of the foam with different sets of equations that worked for clusters of hundreds of bubbles. One set of equations described the gravitational draining of liquid from the bubble walls, which thin out until they rupture. Another set of equations dealt with the flow of liquid inside the junctions between the bubble membranes. A third set handled the wobbly rearrangement of bubbles after one pops.

Using a fourth set of equations, the mathematicians solved the physics of a sunset reflected in the bubbles, taking account of thin film interference within the bubble membranes, which can create rainbow hues like an oil slick on wet pavement. Solving the full set of equations of motion took five days using supercomputers at the LBNL's National Energy Research Scientific Computing Center (NERSC).
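
To give a flavor of the smallest-scale piece -- gravitational drainage of liquid within a bubble wall -- here is a minimal sketch that integrates a standard one-dimensional thin-film (lubrication) drainage equation, dh/dt = -d/dx[rho*g*h^3/(3*mu)], with an explicit finite-difference scheme. This is a generic textbook model with arbitrary parameters, chosen to illustrate the kind of equation involved; it is not the equation set used by Saye and Sethian.

    # Gravity-driven drainage of a thin vertical liquid film (lubrication approximation,
    # rigid interfaces). Generic illustrative model; parameters are arbitrary.
    import numpy as np

    rho, g, mu = 1000.0, 9.81, 1.0e-3      # water-like density, gravity, viscosity (SI units)
    nx, film_height = 200, 1.0e-2          # 200 cells spanning a 1 cm tall film
    dx = film_height / nx
    h = np.full(nx, 1.0e-6)                # initial thickness: 1 micron everywhere

    dt, t_end = 1.0e-3, 2.0                # explicit time stepping over 2 seconds of drainage
    for _ in range(int(t_end / dt)):
        q = rho * g * h**3 / (3.0 * mu)            # downward volume flux per unit width
        q_in = np.concatenate(([0.0], q[:-1]))     # no liquid enters at the top of the film
        h = h + dt * (q_in - q) / dx               # mass balance: inflow minus outflow per cell
        h = np.maximum(h, 1.0e-9)                  # floor as a crude stand-in for rupture

    print(f"Top of the film thins from 1000 nm to ~{h.min() * 1e9:.0f} nm after {t_end:.0f} s")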

The mathematicians next plan to look at manufacturing processes for small-scale new materials.

"Foams were a good test that all the equations coupled together," Sethian said. "While different problems are going to require different physics, chemistry and models, this sort of approach has applications to a wide range of problems."

The work is supported by the Department of Energy, National Science Foundation and National Cancer Institute.

Video: http://www.youtube.com/watch?feature=player_embedded&v=ciciWBz8m_Y



TAG:Nature of Water Chemistry Physics Computer Modeling Math Puzzles Computer Science

Moon and Earth have common water source

Water inside the Moon's mantle came from primitive meteorites, new research finds, the same source thought to have supplied most of the water on Earth. The findings raise new questions about the process that formed the Moon.

The Moon is thought to have formed from a disc of debris left when a giant object hit Earth 4.5 billion years ago, very early in Earth's history. Scientists have long assumed that the heat from an impact of that size would cause hydrogen and other volatile elements to boil off into space, meaning the Moon must have started off completely dry. But recently, NASA spacecraft and new research on samples from the Apollo missions have shown that the Moon actually has water, both on its surface and beneath.

By showing that water on the Moon and on Earth came from the same source, this new study offers yet more evidence that the Moon's water has been there all along.

"The simplest explanation for what we found is that there was water on the proto-Earth at the time of the giant impact," said Alberto Saal, associate professor of Geological Sciences at Brown University and the study's lead author. "Some of that water survived the impact, and that's what we see in the Moon."

The research was co-authored by Erik Hauri of the Carnegie Institution of Washington, James Van Orman of Case Western Reserve University, and Malcolm Rutherford from Brown and published online in Science Express.

To find the origin of the Moon's water, Saal and his colleagues looked at melt inclusions found in samples brought back from the Apollo missions. Melt inclusions are tiny dots of volcanic glass trapped within crystals called olivine. The crystals prevent water from escaping during an eruption and enable researchers to get an idea of what the inside of the Moon is like.

Research from 2011 led by Hauri found that the melt inclusions have plenty of water -- as much water in fact as lavas forming on Earth's ocean floor. This study aimed to find the origin of that water. To do that, Saal and his colleagues looked at the isotopic composition of the hydrogen trapped in the inclusions. "In order to understand the origin of the hydrogen, we needed a fingerprint," Saal said. "What is used as a fingerprint is the isotopic composition."

Using a Cameca NanoSIMS 50L multicollector ion microprobe at Carnegie, the researchers measured the amount of deuterium in the samples compared to the amount of regular hydrogen. Deuterium is an isotope of hydrogen with an extra neutron. Water molecules originating from different places in the solar system have different amounts of deuterium. In general, things formed closer to the sun have less deuterium than things formed farther out.

Saal and his colleagues found that the deuterium/hydrogen ratio in the melt inclusions was relatively low and matched the ratio found in carbonaceous chondrites, meteorites originating in the asteroid belt near Jupiter and thought to be among the oldest objects in the solar system. That means the source of the water on the Moon is primitive meteorites, not comets as some scientists thought.
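
The "fingerprint" is commonly expressed as a delta-D value: the per-mil deviation of a sample's deuterium-to-hydrogen ratio from the ocean-water standard (VSMOW). A minimal illustration follows; the sample ratios are hypothetical round numbers, not the study's measurements.

    # delta-D relative to the VSMOW ocean-water standard.
    # Sample ratios below are hypothetical, for illustration only.
    VSMOW_D_H = 155.76e-6                  # D/H ratio of Vienna Standard Mean Ocean Water

    def delta_d(sample_d_h):
        """delta-D in per mil relative to VSMOW."""
        return (sample_d_h / VSMOW_D_H - 1.0) * 1000.0

    hypothetical_samples = {
        "lunar melt inclusion (hypothetical)": 140.0e-6,   # lower D/H -> negative delta-D
        "carbonaceous-chondrite-like water":   145.0e-6,   # similar, also negative
        "Oort-cloud-comet-like water":         310.0e-6,   # high D/H -> strongly positive
    }
    for name, ratio in hypothetical_samples.items():
        print(f"{name:38s} delta-D = {delta_d(ratio):+7.1f} per mil")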

Comets, like meteorites, are known to carry water and other volatiles, but most comets formed in the far reaches of the solar system in a formation called the Oort Cloud. Because they formed so far from the sun, they tend to have high deuterium/hydrogen ratios -- much higher ratios than in the Moon's interior, where the samples in this study came from.

"The measurements themselves were very difficult," Hauri said, "but the new data provide the best evidence yet that the carbon-bearing chondrites were a common source for the volatiles in the Earth and Moon, and perhaps the entire inner solar system."

Recent research, Saal said, has found that as much as 98 percent of the water on Earth also comes from primitive meteorites, suggesting a common source for water on Earth and water on the Moon. The easiest way to explain that, Saal says, is that the water was already present on the early Earth and was transferred to the Moon.

The finding is not necessarily inconsistent with the idea that the Moon was formed by a giant impact with the early Earth, but presents a problem. If the Moon is made from material that came from Earth, it makes sense that the water in both would share a common source. However, there's still the question of how that water was able to survive such a violent collision.

"The impact somehow didn't cause all the water to be lost," Saal said. "But we don't know what that process would be."

It suggests, the researchers say, that there are some important processes we don't yet understand about how planets and satellites are formed.

"Our work suggests that even highly volatile elements may not be lost completely during a giant impact," said Van Orman. "We need to go back to the drawing board and discover more about what giant impacts do, and we also need a better handle on volatile inventories in the Moon."

Funding for the research came from NASA's Cosmochemistry and LASER programs and the NASA Lunar Science Institute.


TAG:Moon Asteroids Comets and Meteors Solar System Water Near Earth Object Impacts Environmental Issues

Dust in the clouds: Cirrus clouds form around mineral dust and metallic particles

May 9, 2013 — At any given time, cirrus clouds -- the thin wisps of vapor that trail across the sky -- cover nearly one-third of the globe. These clouds coalesce in the upper layers of the troposphere, often more than 10 miles above the Earth's surface.

Cirrus clouds influence global climate, cooling the planet by reflecting incoming solar radiation and warming it by trapping outgoing heat. Understanding the mechanisms by which these clouds form may help scientists better predict future climate patterns.

Now an interdisciplinary team from MIT, the National Oceanic and Atmospheric Administration (NOAA), and elsewhere has identified the major seeds on which cirrus clouds form. The team sampled cirrus clouds using instruments aboard high-altitude research aircraft, analyzing particles collected during multiple flights over a nine-year period. They found that the majority of cloud particles freeze, or nucleate, around two types of seeds: mineral dust and metallic aerosols.

The absence of certain particles in the clouds also proved interesting. While scientists have observed that substances like black carbon and fungal spores readily form cloud particles in the lab, the team detected barely a trace of these particles in the upper atmosphere.

"We think we're really looking at the seed, the nucleus of these ice crystals," says Dan Cziczo, an associate professor of atmospheric chemistry at MIT. "These results are going to allow us to better understand the climatic implications of these clouds in the future."

Cziczo and his colleagues have published their results this week in Science.

Up in the air

Cirrus clouds typically form at altitudes higher than most commercial planes fly. To sample at such heights, the team enlisted three high-altitude research aircraft from NASA and the National Science Foundation (NSF): a B-57 bomber, a DC-8 passenger jet, and a G-V business jet, all of which were repurposed to carry scientific instruments.

From 2002 to 2011, the team conducted four flight missions in regions of North America and Central America where cirrus clouds often form. Before takeoff, the team received weather forecasts, including information on where and when clouds might be found.

"More often than not, the forecast is solid, and it's up to the pilot to hit a cloud," Cziczo says. "If they find a good spot, they can call back on a satellite phone and tell us if they're inside a cloud, and how thick it is."

For each mission, Cziczo and Karl Froyd, of NOAA's Earth System Resource Laboratory, mounted one or two instruments to the nose of each plane: a single particle mass spectrometer and a particle collector.

Each flight followed essentially the same protocol: As a plane flew through a cloud, ice particles flowed through a specialized inlet into the nose of the plane. As they flowed in, the particles thawed, evaporating most of the surrounding ice. What was left was a tiny kernel, or seed, which was then analyzed in real time by the onboard mass spectrometer for size and composition. The particle collector stored the seeds for further analysis in the lab.

A human effect on cloud formation

After each flight, Cziczo and his colleagues analyzed the collected particles in the lab using high-resolution electron microscopy. They compared their results with analyses from the onboard mass spectrometer and found the two datasets revealed very similar cloud profiles: More than 60 percent of cloud particles consisted of mineral dust blown into the atmosphere, as well as metallic aerosols.

Cziczo notes that while mineral dust is generally regarded as a natural substance originating from dry or barren regions of the Earth, agriculture, transportation and industrial processes also release dust into the atmosphere.

"Mineral dust is changing because of human activities," Cziczo says. "You may think of dust as a natural particle, but some percentage of it is manmade, and it really points to a human ability to change these clouds."

He adds that some global-modeling studies predict higher dust concentrations in the future as a result of desertification, land-use change and shifts in rainfall patterns driven by human-induced climate effects.

Cziczo's team also identified a "menagerie of metal compounds," including lead, zinc and copper, that may point to a further human effect on cloud formation. "These things are very strange metal particles that are almost certainly from industrial activities, such as smelting and open-pit burning of electronics," Cziczo adds. Lead is also emitted in the exhaust of small planes.

Contrary to what many lab experiments have found, the team observed very little evidence of biological particles, such as bacteria or fungi, or black carbon emitted from automobiles and smokestacks. Froyd says knowing what particles are absent in clouds is just as important as knowing what's present: Such information, he says, can be crucial in developing accurate models for climate change.

"There's been a lot of research efforts spent on looking at how these particle types freeze under various conditions," Froyd says. "Our message is that you can ignore those, and can instead look at mineral dust as the dominant driving force for the formation of this type of cloud."

The group's experimental approach was an impressive feat in itself, says Brian Toon, a professor of atmospheric and oceanic sciences at the University of Colorado. In a typical cirrus cloud, only one particle in 100,000 forms an ice crystal, making the odds of capturing such crystals slim at best.

"This group has used an instrument in a plane flying at speeds of two football fields per second to catch individual ice crystals, evaporate them and measure the composition of the tiny remnant particles -- an amazing technological achievement," says Toon, who was not involved in the research. "Now these measurements need to be repeated over a wide range of locations to be sure they are general."

This research was funded by NASA and the NSF.



TAG:Atmosphere Climate Global Warming Storms Severe Weather Air Pollution

Using bacteria to stop malaria

Using bacteria to stop malaria

A study in the current issue of Science shows that the transmission of malaria via mosquitoes to humans can be interrupted by establishing a strain of the bacterium Wolbachia in the insects. In a sense, Wolbachia would act as a vaccine of sorts for mosquitoes, protecting them from malaria parasites. Treating mosquitoes in this way would prevent them from transmitting the disease to humans; in 2010, malaria affected 219 million people and caused an estimated 660,000 deaths.

"Wolbachia-based malaria control strategy has been discussed for the last two decades," said Zhiyong Xi, MSU assistant professor of microbiology and molecular genetics. "Our work is the first to demonstrate Wolbachia can be stably established in a key malaria vector, the mosquito species Anopheles stephensi, which opens the door to use Wolbachia for malaria control."

First, Xi's team demonstrated that Wolbachia can be carried by this malaria mosquito vector and that the insects can spread the bacteria throughout an entire mosquito population. Second, the researchers showed that the bacteria can prevent those mosquitoes from transmitting malaria parasites to humans.

"We developed the mosquito line carrying a stable Wolbachia infection," Xi said. "We then seeded them into uninfected populations and repeatedly produced a population of predominantly Wolbachia-infected mosquitoes."

The basis for Xi's latest findings is connected to the success of his earlier work using Wolbachia to halt dengue fever. In that research, Xi focused on the mosquito species Aedes albopictus and Aedes aegypti. That work helped launch a global effort to develop Wolbachia-based strategies to eliminate dengue and other diseases.

The key to the malaria research was identifying the correct strain of Wolbachia -- wAlbB -- and then injecting it into mosquito embryos. Out of the thousands of embryos injected by research associate Guowu Bian, one developed into a female that carried Wolbachia. The mosquito line derived from this female has maintained the wAlbB infection at a 100 percent infection frequency through 34 generations. That number could grow, since 34 is simply the latest generation the researchers have bred thus far, Xi said.

The team then introduced various ratios of Wolbachia-infected females into an uninfected mosquito population. In each case, the entire population carried the bacteria within eight generations or fewer.

This promising approach to tackling malaria, the deadliest vector-borne disease, gives scientists and world health officials another important tool in the fight against it.

Once Wolbachia has been released into a mosquito population, it may never need to be reapplied, which would make it more economical than methods such as pesticides or human vaccines. That is especially valuable given that most malaria-endemic areas also struggle with poverty, Xi said.


TAG:Malaria Infectious Diseases Dentistry Bacteria Pests and Parasites Insects (including Butterflies)

Coral reefs suffering, but collapse not inevitable

Coral reefs suffering, but collapse not inevitable

"People benefit by reefs' having a complex structure -- a little like a Manhattan skyline, but underwater," said Peter Mumby of The University of Queensland and University of Exeter. "Structurally complex reefs provide nooks and crannies for thousands of species and provide the habitat needed to sustain productive reef fisheries. They're also great fun to visit as a snorkeler or diver. If we carry on the way we have been, the ability of reefs to provide benefits to people will seriously decline."

To predict the reefs' future, the researchers spent two years constructing a computer model of how reefs work, building on hundreds of studies conducted over the last 40 years. They then combined their reef model with climate models to make predictions about the balance between forces that will allow reefs to continue growing their complex calcium carbonate structures and those such as hurricanes and erosion that will shrink them.

Ideally, Mumby said, the goal is a carbonate budget that remains in the black for the next century at least. Such a future is possible, the researchers' model shows, but only with effective local protection and assertive action on greenhouse gases.
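
To make the budget idea concrete, the toy calculation below tracks a hypothetical reef's cumulative carbonate balance under assumed rates of accretion, bioerosion and storm loss. It is a minimal sketch for intuition only, not the researchers' reef model; the function name and every number in it are invented for the example.

# Toy illustration of a reef carbonate budget (not the researchers' model).
# All rates are hypothetical placeholders, in kg of calcium carbonate per square meter per year.
def project_budget(years, accretion, bioerosion, storm_loss_per_event, storms_per_decade):
    balance = 0.0
    history = []
    for year in range(1, years + 1):
        balance += accretion - bioerosion                            # steady growth minus steady erosion
        balance -= storm_loss_per_event * storms_per_decade / 10.0   # episodic storm damage, averaged per year
        history.append((year, balance))
    return history

# A budget "in the black": net gains outweigh erosion and storm damage over a century.
for year, balance in project_budget(100, accretion=4.0, bioerosion=2.5,
                                    storm_loss_per_event=5.0, storms_per_decade=2)[::25]:
    print("year %3d: cumulative balance %+5.1f kg/m^2" % (year, balance))

With those made-up numbers the balance stays positive; raise the storm frequency or the bioerosion rate and it slips into the red, which is the trade-off the real model explores in far more detail.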

"Business as usual isn't going to cut it," he said. "The good news is that it does seem possible to maintain reefs -- we just have to be serious about doing something. It also means that local reef management -- efforts to curb pollution and overfishing -- are absolutely justified. Some have claimed that the climate change problem is so great that local management is futile. We show that this viewpoint is wrongheaded."

Mumby and his colleagues also stress the importance of reef function in addition to reef diversity. Those functions include providing habitat for fish, acting as a natural breakwater that reduces the size of waves reaching the shore, and more. In very practical terms, hundreds of millions of people depend directly on reefs for their food, livelihoods, and even building materials.

"If it becomes increasingly difficult for people in the tropics to make their living on coral reefs, then this may well increase poverty," said the study's first author, Emma Kennedy. It's in everyone's best interest to keep that from happening.


TAG:Extinction Fisheries Coral Reefs Ecology Ocean Policy Environmental Policies

Rejuvenating hormone found to reverse symptoms of heart failure

Rejuvenating hormone found to reverse symptoms of heart failure

May 9, 2013 — Heart failure is one of the most debilitating conditions linked to old age, and there are no specific therapies for the most common form of this condition in the elderly. A study published by Cell Press May 9th in the journal Cell reveals that a blood hormone known as growth differentiation factor 11 (GDF11) declines with age, and old mice injected with this hormone experience a reversal in signs of cardiac aging. The findings shed light on the underlying causes of age-related heart failure and may offer a much-needed strategy for treating this condition in humans.

"There has been evidence that circulating bloodstream factors exist in mammals that can rejuvenate tissues, but they haven't been identified. This study found the first factor like this," says senior study author Richard Lee of the Harvard Stem Cell Institute and Brigham and Women's Hospital.

Heart failure is a condition in which the heart can't pump enough blood to meet the body's needs, causing shortness of breath and fatigue, and it is becoming increasingly prevalent in the elderly. The most common form of age-related heart failure involves thickening of heart muscle tissue. But until now, the molecular causes and potential treatment strategies for this condition have been elusive.

To identify molecules in the blood responsible for age-related heart failure, a team led by Lee and Amy Wagers of the Harvard Stem Cell Institute and Joslin Diabetes Center used a well-established experimental technique: they surgically joined pairs of young and old mice so that their blood circulatory systems merged into one. After being exposed to the blood of young mice, old mice experienced a reversal in the thickening of heart muscle tissue. The researchers then screened the blood for molecules that change with age, discovering that levels of the hormone GDF11 were lower in old mice compared with young mice.

Moreover, old mice treated with GDF11 injections experienced a reversal in signs of cardiac aging. Heart muscle cells became smaller, and the thickness of the heart muscle wall resembled that of young mice. "If some age-related diseases are due to loss of a circulating hormone, then it's possible that restoring levels of that hormone could be beneficial," Wagers says. "We're hoping that some day, age-related human heart failure might be treated this way."



TAG:Heart Disease Stroke Prevention Healthy Aging Vioxx Cholesterol Diseases and Conditions

Mapping the embryonic epigenome: How genes are turned on and off during early human development

Mapping the embryonic epigenome: How genes are turned on and off during early human development

After an egg has been fertilized, it divides repeatedly to give rise to every cell in the human body -- from the patrolling immune cell to the pulsing neuron. Each functionally distinct generation of cells subsequently differentiates itself from its predecessors in the developing embryo by expressing only a selection of its full complement of genes, while actively suppressing others. "By applying large-scale genomics technologies," explains Bing Ren, PhD, Ludwig Institute member and a professor in the Department of Cellular and Molecular Medicine at the UC San Diego School of Medicine, "we could explore how genes across the genome are turned on and off as embryonic cells and their descendant lineages choose their fates, determining which parts of the body they would generate."

One way cells regulate their genes is by DNA methylation, in which a molecule known as a methyl group is tacked onto cytosine -- one of the four DNA bases that write the genetic code. Another is through scores of unique chemical modifications to proteins known as histones, which form the scaffolding around which DNA winds in the nucleus of the cell. One such silencing modification, called H3K27me3, involves the highly specific addition of three methyl groups to a type of histone named H3. "People have generally not thought of these two 'epigenetic' modifications as being very different in terms of their function," says Ren.

The current study puts an end to that notion. The researchers found in their analysis of those modifications across the genome -- referred to, collectively, as the epigenome -- that master genes that govern the regulation of early embryonic development tend largely to be switched off by H3K27me3 histone methylation. Meanwhile, those that orchestrate the later stages of cellular differentiation, when cells become increasingly committed to specific functions, are primarily silenced by DNA methylation.

"You can sort of glean the logic of animal development in this difference," says Ren. "Histone methylation is relatively easy to reverse. But reversing DNA methylation is a complex process, one that requires more resources and is much more likely to result in potentially deleterious mutations. So it makes sense that histone methylation is largely used to silence master genes that may be needed at multiple points during development, while DNA methylation is mostly used to switch off genes at later stages, when cells have already been tailored to specific functions, and those genes are less likely to be needed again."

The researchers also found that the human genome is peppered with more than 1,200 large regions that are consistently devoid of DNA methylation throughout development. It turns out that many of the genes considered master regulators of development are located in these regions, which the researchers call DNA methylation valleys (DMVs). Further, the team found that the DMVs are abnormally methylated in colon cancer cells. While it has long been known that aberrant DNA methylation plays an important role in various cancers, these results suggest that changes to the cell's DNA methylation machinery itself may be a major step in the evolution of tumors.
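
For intuition about what identifying such regions involves, the sketch below scans per-window methylation levels for long stretches that stay nearly unmethylated. It is a simplified illustration only; the window size, threshold, minimum length and input format are assumptions made for the example, not the study's actual analysis pipeline.

# Toy sketch: flag long, consistently unmethylated stretches ("valleys") along one chromosome.
# Window size, threshold and minimum length are illustrative assumptions, not the study's values.
def find_low_methylation_regions(window_means, window_size_bp=1000,
                                 max_methylation=0.15, min_length_bp=5000):
    # window_means: mean methylation fraction (0 to 1) for each consecutive window.
    regions, run_start = [], None
    for i, mean in enumerate(window_means):
        if mean < max_methylation:
            if run_start is None:
                run_start = i                       # a low-methylation run begins here
        elif run_start is not None:
            if (i - run_start) * window_size_bp >= min_length_bp:
                regions.append((run_start * window_size_bp, i * window_size_bp))
            run_start = None                        # the run ends at a methylated window
    if run_start is not None and (len(window_means) - run_start) * window_size_bp >= min_length_bp:
        regions.append((run_start * window_size_bp, len(window_means) * window_size_bp))
    return regions

# Ten low windows embedded in otherwise methylated sequence form one 10-kilobase valley.
print(find_low_methylation_regions([0.8, 0.7] + [0.05] * 10 + [0.9, 0.85]))   # [(2000, 12000)]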

Further, the researchers catalogued the regulation of DNA sequences known as enhancers, which, when activated, boost the expression of genes. They identified more than 103,000 possible enhancers and charted their activation and silencing in six cell types. Researchers will in all likelihood continue to sift through the data generated by this study for years to come, putting the epigenetic phenomena into biological context to investigate a variety of cellular functions and diseases.

"These data are going to be very useful to the scientific community in understanding the logic of early human development," says Ren. "But I think our main contribution is the creation of a major information resource for biomedical research. Many complex diseases have their roots in early human development."

Laboratories led by Michael Zhang, at the University of Texas at Dallas, and Wei Wang, at the University of California, San Diego, contributed extensively to the computational analysis of the data generated by the epigenetic mapping.


TAG:Genes Human Biology Epigenetics Developmental Biology Epigenetics Research Biotechnology

Social connections drive the 'upward spiral' of positive emotions and health

Social connections drive the 'upward spiral' of positive emotions and health

The research, led by Barbara Fredrickson of the University of North Carolina at Chapel Hill and Bethany Kok of the Max Planck Institute for Human Cognitive and Brain Sciences, also found that it is possible for a person to self-generate positive emotions in ways that make him or her physically healthier.

"People tend to liken their emotions to the weather, viewing them as uncontrollable," says Fredrickson. "This research shows not only that our emotions are controllable, but also that we can take the reins of our daily emotions and steer ourselves toward better physical health."

To study the bodily effects of up-regulating positive emotions, the researchers zeroed in on vagal tone, an indicator of how a person's vagus nerve is functioning. The vagus nerve helps regulate heart rate and is also a central component of a person's social-engagement system.

Because people who have higher vagal tone tend to be better at regulating their emotions, the researchers speculated that having higher vagal tone might lead people to experience more positive emotions, which would then boost perceived positive social connections. Having more social connections would in turn increase vagal tone, thereby improving physical health and creating an "upward spiral."
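
One way to picture the hypothesized loop is as a simple positive-feedback system. The toy simulation below is purely illustrative: its linear form, coefficients and starting values are invented and stand in for, rather than reproduce, the study's statistical analysis. It only shows how small, mutually reinforcing nudges can compound into an upward trend.

# Purely illustrative toy of the hypothesized upward spiral (not the study's statistical model).
# Each day, each quantity nudges the next a little; the gain and decay values are made-up numbers.
def simulate_upward_spiral(days, vagal_tone=1.0, emotion=1.0, connection=1.0,
                           gain=0.02, decay=0.01):
    for _ in range(days):
        emotion += gain * vagal_tone - decay * emotion        # tone supports positive emotion
        connection += gain * emotion - decay * connection     # emotion supports felt social connection
        vagal_tone += gain * connection - decay * vagal_tone  # connection feeds back on vagal tone
    return vagal_tone, emotion, connection

print(simulate_upward_spiral(61))   # 61 days, matching the study's daily-reporting window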

To see whether people might be able to harness this upward spiral to steer themselves toward better health, Kok, Fredrickson, and their colleagues conducted a longitudinal field experiment.

Half of the study participants were randomly assigned to attend a 6-week loving-kindness meditation (LKM) course in which they learned how to cultivate positive feelings of love, compassion, and goodwill toward themselves and others. They were asked to practice meditation at home, but how often they meditated was up to them. The other half of the participants remained on a waiting list for the course.

Each day, for 61 consecutive days, participants in both groups reported their "meditation, prayer, or solo spiritual activity," their emotional experiences, and their social interactions within the last day. Their vagal tone was assessed twice, once at the beginning and once at the end of the study.

The data provided clear evidence to support the hypothesized upward spiral, with perceived social connections serving as the link between positive emotions and health.

Participants in the LKM group who entered the study with higher vagal tone showed steeper increases in positive emotions over the course of the study. As participants' positive emotions increased, so did their reported social connections. And, as social connections increased, so did vagal tone. In contrast, participants in the wait-list group showed virtually no change in vagal tone over the course of the study.

"The daily moments of connection that people feel with others emerge as the tiny engines that drive the upward spiral between positivity and health," Fredrickson explains.

These findings add another piece to the physical health puzzle, suggesting that positive emotions may be an essential psychological nutrient that builds health, just like getting enough exercise and eating leafy greens.

"Given that costly chronic diseases limit people's lives and overburden healthcare systems worldwide, this is a message that applies to nearly everyone, citizens, educators, health care providers, and policy-makers alike," Fredrickson observes.

This work was supported by National Institute of Mental Health Grant MH59615.


TAG:Health Policy Mental Health Research Psychology Perception Social Issues Public Health