Monday, August 26, 2013

Nature resets body’s clock

After a week in the wild, people went to bed — and got up — earlier


A short camping trip could help people rise and shine, researchers report. After a week living in tents in Colorado’s Rockies, campers’ internal clocks shifted about two hours earlier. The shift transformed even night owls into early birds.
“It’s a clever study, and it makes a dramatic point,” says Katherine Sharkey. A sleep researcher and physician at Brown University in Providence, R.I., she did not work on the new study. People get much more light outside than they do indoors, she notes. And that can reset their internal body clocks.
A master clock in the brain controls the release of melatonin. This hormone prepares the body for sleep. Melatonin levels rise in the early evening and then taper off in the morning before a person wakes up.
But many people today spend their days indoors and their nights bathed in the glow of electric lights (including the light emitted by TVs and computers). Too little early morning light and too much evening lighting can throw the body’s clock out of sync. This unnatural lighting can trigger the body to ramp up melatonin levels later at night. It can also lead the hormone levels to fall later than normal in the morning — often after a person has woken up. Lingering levels of this sleep hormone can make people groggy.
Kenneth Wright Jr., a sleep researcher at the University of Colorado, Boulder, and colleagues whisked eight volunteers away for a summer camping trip. After nightfall, the campers used only campfires for lighting. No flashlights (or cellphones) allowed.
Each day, the campers soaked up four times as much light as they got indoors. They also went to sleep and naturally woke up more than an hour earlier than they had before the trip.
Tests done after the volunteers got home again showed that their melatonin levels now climbed around sunset. They also petered out at sunrise — two hours earlier than before they had gone camping. Wright’s team published its findings August 1 in Current Biology.
People might not even need to rough it to nudge their internal clocks back. Typical office and school lighting is less than one percent as bright as a midsummer day. So even brief stints outside might help. This would be especially true if people encountered outdoor light early in the morning. That’s when the body’s clock is most susceptible to resetting.
“Start your day off with a morning walk, and open the [window] shades to expose yourself to sunlight,” Wright advises.
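That “less than one percent” figure is easy to check against typical illuminance values. Here is a minimal sketch of the comparison, assuming textbook lux levels rather than anything measured in the study:

# Rough comparison of indoor and outdoor light levels. The lux values are
# typical textbook figures, assumed for illustration; they are not
# measurements from Wright's study.
office_lux = 500             # assumed: typical office or classroom lighting
midsummer_day_lux = 100_000  # assumed: direct sunlight on a bright summer day

ratio = office_lux / midsummer_day_lux
print(f"Indoor light is roughly {ratio:.1%} of midsummer daylight")
# prints: Indoor light is roughly 0.5% of midsummer daylight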

Fake memories

A flash of light in the brain plants false memories


A little light in just the right spot: That’s all it takes to get a mouse to remember something that never happened. The spot that needs to be “enlightened” lies deep inside the brain. It’s within a seahorse-shaped region called the hippocampus. By tweaking cells here, scientists have created fake memories — in mice.
Errors frequently plague our memories. The new study shows not only that memory can be unreliable but also that memories can be deliberately altered, at least in the lab.
“It’s fairly astounding,” Mark Mayford told Science News. Mayford, who studies the brain and nervous system at the Scripps Research Institute in La Jolla, Calif., was not involved in the new tests. He says the new study shows that stimulating “a small amount of cells can put a thought into an animal’s head.”
Scientists want to know how memories form and falter. To find out, neuroscientist Susumu Tonegawa and his coworkers went straight to the hippocampus. It’s known to play a role in making memories. The researchers were looking for traces of memory stored in brain cells. Scientists refer to these traces as engrams. Understanding how those cells work — or misfire — is difficult without studying them in animals, Tonegawa told Science News. He is a neuroscientist at the Massachusetts Institute of Technology in Cambridge.
To explore memories, his team added genes to memory-making cells in a mouse’s hippocampus. Proteins are the molecules that keep a cell working correctly. And genes are the instructions that tell cells which proteins to make. The bonus genes that Tonegawa’s team added tell certain brain cells in mice to make a new protein — one that is sensitive to light. The team also added a chemical “tag” to the affected cells; it would help identify those cells later.
Now, when a treated mouse made a memory, affected brain cells also started making the new, light-sensitive protein. The scientists could find these memory cells by shining a light (from a tiny optical fiber embedded in the mouse’s brain). Only cells with the new protein would respond to the light, becoming active. This use of light to study genes is known as optogenetics.
In one experiment, mice explored a new room. As they created memories of this experience, the special protein tagged those memory cells. The next day, the mice explored a different room. After a while, mice exploring it received a small shock to their feet.
At the same time the mice received that shock, the scientists shined a light through the optical fibers. This flash lit the hippocampus cells holding a memory of the first room. Switched on, those memory cells relayed their stored information, so the mice recalled the first room just as they felt the shock.
The next day, each mouse returned to the first room. And immediately it froze; it seemed terrified. Even though none of the mice had ever received a shock here, the animals wanted nothing to do with this room. This showed that the light flash had altered their memory of that room. The scientists concluded that they had successfully planted a false memory of being shocked there.
Tonegawa told Science News that about 30,000 cells were involved in making the memories. That may seem like a large number, but the hippocampus contains millions of cells.
“It is really surprising that these things can be done by stimulating this relatively small amount of cells,” he said.
The study may help scientists better understand how memory fails. But Tonegawa notes that understanding what will happen in people is not as simple as looking for the same cells in them. He notes that memory-making is probably more complex in people than in mice.

Feasting black hole

A huge gas cloud is being stretched, shredded and destroyed by the black hole at the center of the Milky Way


The giant black hole at the center of our galaxy soon will shred a giant cloud of gas. And thanks to powerful telescopes, astronomers on Earth will be watching closely when the black hole’s gravity begins ripping the cloud apart, sometime in 2014.
Astronomers first noticed the cloud in 2011. Called G2, it was dangerously close to the black hole at the center of the Milky Way. A black hole is a region in space where nothing that gets too close — not even light — can escape its gravity.
Even in 2011, the black hole’s gravity had begun to stretch and squash G2. By April 2013, astronomers had captured more images of the cloud with the Very Large Telescope in Chile’s Atacama Desert. Astronomers now find that the cloud’s leading edge already has been whisked to the black hole’s far side.
“If you think of the cloud as a roller coaster train, the first carriage has already swung by the black hole,” Stefan Gillessen told Science News. “The main part of the train is still in approach.” An astronomer, Gillessen works at the Max Planck Institute for Extraterrestrial Physics in Garching, Germany. His team’s findings will appear soon in Astrophysical Journal.
Because of the black hole’s pull, G2 now races through space at 100 times Earth’s speed around the sun. That pull also has stretched the cloud to twice the length it had a year earlier. Astronomers predict that most of the cloud will meet the black hole sometime before the summer of 2014. They expect to use telescopes around the world to watch the spectacle.
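For a sense of scale, that orbital figure converts to an absolute speed with one multiplication. A quick sketch, using Earth’s mean orbital speed — a standard astronomical value, not a number from Gillessen’s paper:

# Converting "100 times Earth's speed around the sun" into familiar units.
EARTH_ORBITAL_SPEED_KM_S = 29.8   # standard value for Earth's mean orbital speed
SPEED_OF_LIGHT_KM_S = 299_792

g2_speed = 100 * EARTH_ORBITAL_SPEED_KM_S
print(f"G2 moves at ~{g2_speed:,.0f} km/s")                             # ~2,980 km/s
print(f"That is ~{g2_speed / SPEED_OF_LIGHT_KM_S:.1%} of light speed")  # ~1.0%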
As the cloud stretches, it may vanish from view — perhaps even before Jan. 1, 2014, Dimitrios Giannios told Science News. This astrophysicist at Purdue University in West Lafayette, Ind., did not work on the new study. The black hole may take an additional few years to suck in the last shreds of G2. And when it does, a spectacular cosmic fireworks display may erupt.
Explains Giannios, “It would be a last echo of the death of this cloud.”

Teen fighting may harm IQ

Blows to the head may explain these effects on the brain


It’s not a topic many people want to talk about, but youth violence is common. “No community — affluent, poor, urban, suburban or rural — is immune from the devastating effects of youth violence,” notes the U.S. Centers for Disease Control and Prevention. It reports that each year U.S. emergency rooms treat more than 692,000 people between the ages of 10 and 24 for injuries from violent assaults. A new study now concludes that some of those injuries — ones due to teen fighting — can cause a type of harm that no hospital can cure: a lowered IQ.
Although boys sustain more fighting-related injuries each year, girls appear more vulnerable to an IQ drop from fighting. That’s one of the new findings being reported by Joseph Schwartz and Kevin Beaver. As criminologists at Florida State University in Tallahassee, they study issues related to crime.
For their new study, the pair has just analyzed data from the long-running National Longitudinal Study of Adolescent Health. Between 1994 and 2002, it collected information about 20,000 U.S. adolescents and young adults. Funded by the U.S. government, this study asked questions about health and behavior. It started when the participants were in middle and high school. Most were followed for eight years, until some were as old as 25. On several occasions, the boys and girls took an IQ test. They also were asked, at that time, if they had been hurt badly enough in a fight, during the past year, to need treatment from a doctor.
During the study, at least 1 in 10 males and nearly 1 in 20 females reported being the victim of such serious violence at least once. Some participants reported many such injuries. Levels of violence among U.S. teens have been falling in recent decades. Still, current rates “remain staggeringly high,” Beaver told Science News for Kids.
He and Schwartz compared IQ scores for the study participants over time. And those IQ scores dropped among people who had reported being victims of serious fighting-related injuries. The pair’s findings will be published soon in the Journal of Adolescent Health.
On average, each serious injury from fighting was linked to a drop of not quite 2 IQ points, they found. But the drop differed by gender. Among boys, each injury logged during the study was linked to a drop of 1.62 IQ points. Girls experienced a drop almost twice that for each serious fighting-related injury that they reported. The girls’ higher vulnerability may reflect their bodies having less protection from injury, Beaver and Schwartz say.
How big a deal is a 2-point drop in IQ? “That’s a good question,” Beaver says. “If I took away one of your IQ points, would you be the same person? Yeah. If I took away four, would you? Probably. But if I took away six or eight?” Now, he says, that change could well be big enough to show an obvious difference in an individual’s cognition. (That’s the ability to think and reason.) And in the new study, participants who reported having sustained 10 or more serious injuries from fighting tended to experience a roughly 19-point drop in IQ over an 8-year span.
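The reported numbers describe a roughly linear relationship: each serious injury adds a similar decrement. Here is a minimal sketch of that arithmetic, using the per-injury figures quoted above (the girls’ value is approximated as twice the boys’, per “almost twice”):

# Back-of-envelope check of the reported IQ figures. This illustrates the
# arithmetic only; it is not the researchers' statistical model.
AVG_DROP_PER_INJURY = 1.9          # "not quite 2 IQ points" per serious injury
BOYS_DROP_PER_INJURY = 1.62        # reported for boys
GIRLS_DROP_PER_INJURY = 2 * 1.62   # assumed: "almost twice" the boys' figure

def cumulative_drop(n_injuries: int, per_injury: float) -> float:
    """Estimated total IQ-point drop if each injury adds a similar decrement."""
    return n_injuries * per_injury

print(cumulative_drop(10, AVG_DROP_PER_INJURY))    # 19.0 -- matches the ~19-point
                                                   # drop reported for 10+ injuries
print(cumulative_drop(10, BOYS_DROP_PER_INJURY))   # 16.2 -- boys' figure alone
print(cumulative_drop(10, GIRLS_DROP_PER_INJURY))  # 32.4 -- girls' approximate figure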
While IQ can affect a student’s grades, Beaver points out that adolescent IQ also predicts “a wide range of things that most people care about.” He says that “it predicts whether you’ll go to college, whether you will graduate with a high grade-point average, what your salary will likely be in adulthood — even whether you’ll come in contact with the criminal justice system [and be arrested].” It doesn’t predict these things with certainty, he says, but it does offer a good gauge.
Which injuries most harmful to IQ?
The data used by the Florida State team did not log the particular type of serious injury that each victim sustained. Some might have suffered broken bones, bruised ribs or cuts that needed stitching. But there’s no reason to suspect such injuries should affect IQ, Beaver says. Instead, “Our general interpretation is that the IQ effect will have been the result of a hit in the head.”

But Thomas W. McAllister says that’s not a safe assumption. He’s a psychiatrist at Indiana University School of Medicine, in Indianapolis. The data analyzed by the Florida State researchers “did not distinguish brain injury from other body injuries,” he notes. Even some head injuries, such as cuts, would not be expected to cause brain injury. Moreover, he observes: “Fighting can be associated with a variety of other issues that can impact cognition.” Among these mental threats to thinking and learning that can be triggered by fighting, he says, are depression, drug abuse and post-traumatic stress disorder.
So using the 1994 to 2002 data to determine that fighting harmed IQ through brain injury “would be difficult,” concludes McAllister.
Other sources of head injuries also can affect IQ or cognition. Among them: car crashes and concussions from football and other contact sports. In fact, McAllister’s group reported troubling signs of such problems in a study published last year in the journal Neurology.
His group compared scores on several tests of cognition in two groups of college athletes: those who played football or ice hockey and those who performed in track events, rowed (in crewing events) or skied. The first two types of sports are “contact” sports, where athletes may often and deliberately knock into each other. The other, “non-contact” sports involve no intentional collisions. After one season, those students who reported head impacts during contact sports were more likely to have subtle learning and memory problems.
The good news, McAllister’s group reported: No dramatic changes emerged. And there were even hints that subtle problems might repair themselves in the off-season.
But another important point to consider: Unlike kids who get into teen fights, football and hockey players wear helmets designed to protect them from head injury.
Also worrying…
At least one British study recently showed that IQ may change — even dramatically — during the teen years. Brain scans attributed those changes to increases or losses of gray matter. This tissue in the brain processes information (in contrast to white matter, which serves more as information highways for the brain). If findings from both the new study and the British study hold up, that may suggest that fighting can damage gray matter or impair the body’s ability to preserve its function.
Especially troubling: Most violent crimes affecting U.S. teens and young adults are never reported to the police. For instance, between 2002 and 2010 (the latest data available), victims failed to tell police about nearly three out of every four assaults. Assault is a legal term used to describe the threat of physical violence. The findings emerged in a December 2012 report issued by the U.S. Department of Justice.
“When I was in high school and two kids got into a fight, it was often dismissed as ‘boys being boys,’” recalls Beaver. “But if our study is to be believed, one of the consequences of such fights could be some type of [IQ] decline.”

Camels linked to mystery disease

Livestock may be a “reservoir” of germs that can infect people




A mysterious and deadly virus has sickened 94 people — killing 46 — in parts of the Middle East, Europe and northern Africa. A new study finds that camels (the one-humped type) may have introduced the new disease to people.
The germ responsible is a virus that lives in people’s lungs, throats and noses. Scientists recently named the disease it causes Middle East respiratory syndrome, or MERS.
Scientists discovered it after a few people became sick with severe pneumonia. This condition inflames and damages lungs. After examining the germ’s DNA, researchers discovered that the virus is related to some that infect bats. But no one with the disease had any known contact with bats.
Now researchers find that 50 retired racing camels from the Middle East nation of Oman carry antibodies against the MERS virus in their blood. Antibodies are proteins made by the immune system. They help identify or destroy foreign substances in the blood. They also serve as a marker of which particular foreign substance their host had encountered.
Finding antibodies in the blood of dromedary camels suggests the animals had been exposed to MERS. (Dromedary camels are the one-humped type common in North Africa and the Middle East.) An international team of researchers described their findings August 9 in the medical journal Lancet Infectious Diseases.
The researchers also found low levels of antibodies against the MERS virus in the blood of dromedary camels from the Canary Islands, off of Africa’s northwest coast.
None of the exposed camels appeared sick. And neither Oman nor the Canary Islands has reported human cases of MERS. But unconfirmed reports suggest some people with MERS in other countries may have been around camels or goats before falling ill.
The results suggest that camels, and perhaps other livestock such as goats, may be a link in a chain of infection that can sicken people. It might also be that a virus similar to MERS has been in camels for a long time but only recently gained the ability to infect people.
Camels are a common livestock species in the Middle East and North Africa, where they are used for racing. They also are a source of meat and milk. So there are many ways people might come into contact with infected animals, the researchers note.

Quakes cause faraway sloshing

The massive 2011 quake off Japan rocked deep inland bays in Norway



In March 2011, a killer earthquake shook the seafloor off the eastern coast of Japan. It triggered powerful tsunami waves. Some towered more than 40 meters (131 feet) by the time they hit the coast. Where the land was especially flat, those waves roared more than 10 kilometers (6 miles) into Japan, killing thousands of people. But Asian waters weren’t the only ones that got sloshed around.
Right after the magnitude-9 quake, scientists knew that its tremors had set distant waters in northern Europe rocking and rolling. Now, in Geophysical Research Letters, they explain how ground motions caused this sloshing in some parts of Norway.
Long ago, ancient glaciers carved long, narrow inlets into northern coastlines. Norway hosts many of these inlets, called fjords (fee-YORDZ). The water in some of them started sloshing back and forth soon after the quake. This phenomenon, called a seiche (SAYSH), wasn’t caused by some tsunami arriving from Japan. Instead, quake vibrations spread globally, through rock. These vibrations also traveled much faster than any tsunami could.
Waters in the fjords along Norway’s western coast were completely calm in the wee hours of March 11, 2011. But soon after 7:00 a.m., people living along some shorelines witnessed an abrupt and frightening change: The water in some fjords suddenly began rolling back and forth, notes Stein Bondevik. He’s a geologist at Sogn og Fjordane University College in Sogndal, Norway. Some people, he recalls, “said ‘The sea was boiling.’” He recalls others saying that it seemed “the fjord changed between high and low tide continuously.”
People snapped videos of the amazing sloshing water on their cell phones. Some outdoor security cameras captured the unexpected water motions as well. Together, these recordings gave scientists plenty of data about the seiches, such as their size and their timing.
For instance, the time needed for one forward-and-back slosh ranged between 67 and 100 seconds. The length, depth and shape of a fjord determined the timing of these seiche cycles, says Bondevik. At one site, a video showed the water rising and falling along a ladder attached to a pier. Based on that footage, his team now estimates that the water there rose and fell between 1.2 and 1.5 meters (4 and 5 feet) during each slosh. In some fjords, these seiches continued nonstop for nearly 3 hours.
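The link between a basin’s dimensions and its slosh period can be illustrated with the classic Merian formula for an idealized rectangular basin. The fjord width and depth below are invented for illustration; they are not measurements from Bondevik’s paper:

import math

def merian_period(basin_width_m: float, depth_m: float) -> float:
    """Fundamental seiche period, T = 2L / sqrt(g*h), for an idealized basin."""
    g = 9.81  # gravitational acceleration, m/s^2
    return 2 * basin_width_m / math.sqrt(g * depth_m)

# Illustrative numbers only: a fjord ~1.2 km across and ~100 m deep,
# sloshing side to side (so the relevant length is the fjord's width).
print(f"{merian_period(1200, 100):.0f} s")  # ~77 s, within the observed 67-100 s range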
One surprise that Bondevik’s group found: These seiches were triggered by a type of ground vibration that causes smaller movements than most.
Quakes cause three major types of seismic vibrations, he explains. The ones that travel through Earth’s crust fastest are called “P” waves. Similar to the sound waves that travel through air, these pressure waves vibrate very quickly. Another type of ground motion — called “S” waves — travels more slowly. The first S waves typically shake the ground back and forth sideways once every 50 to 60 seconds. The slowest ground motions, known as surface waves, travel along Earth’s surface just like tsunamis crossing the ocean.
Although sluggish, surface waves typically move the ground farther back and forth than P waves and S waves. So most scientists had assumed surface waves triggered the seiches. But the video from Norway clearly showed that water in the fjords started sloshing long before the arrival of any surface waves from the Japanese quake. It now appears S waves triggered the seiches, the geologist says.
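A rough arrival-time comparison shows why the timing points away from surface waves. The wave speeds below are assumed ballpark averages, not values from the paper, and real seismic travel times follow curved paths through a layered Earth:

# Very rough arrival-time comparison for the ~8,300 km path from Japan to Norway.
distance_km = 8300
s_wave_speed = 6.5        # km/s, assumed effective average for S waves through the mantle
surface_wave_speed = 3.5  # km/s, assumed average for surface waves along the crust

s_arrival_min = distance_km / s_wave_speed / 60
surface_arrival_min = distance_km / surface_wave_speed / 60

print(f"S waves:       ~{s_arrival_min:.0f} minutes after the quake")        # ~21
print(f"Surface waves: ~{surface_arrival_min:.0f} minutes after the quake")  # ~40
# S waves lead by roughly 20 minutes, so sloshing that began well before the
# surface waves arrived points to S waves as the trigger.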
Where the seiches occurred provides even more support for that notion, he adds.
According to both cameras and eyewitness reports, the seiches affected just six locations. At five of these spots, an imaginary line down the middle of the fjord runs from southwest to northeast. On a globe, that line points directly toward the spot where the Japanese quake struck, more than 8,300 kilometers (5,160 miles) away. That’s no coincidence, Bondevik concludes. The reason: “S” waves arriving from that direction would tend to shake the landscape from side to side, or back and forth from the northwest to the southeast. That would send water directly from one side of the fjord to the other. In fjords pointed in other directions, water wouldn’t pile up against the shores so readily. Instead, it would flow up and down the fjord and not be so noticeable.
Seiches triggered by quakes have been reported many times before. But which type of seismic waves caused the sloshing water “hasn’t been nailed down in detail before,” says Daniel McNamara. He’s a geophysicist (a scientist who studies Earth, including its motions and energy transfer within the ground) at the U.S. Geological Survey in Golden, Colo. “People just assumed [seiches] were caused by the largest ground motions.”
Other things can cause seiches, McNamara notes. For instance, when strong storms move across an area, winds can blow some of the water in large lakes from one side to the other. That shoves water from the upwind side, he explains, boosting water levels on the downwind side. Then, when the winds slow or the storm moves on, the high water on the downwind side begins to “run downhill,” back to where it had been. Like a big splash in a bathtub, this causes sloshing. In a large enclosed body of water, like Lake Michigan, such a seiche can move back and forth for hours, if not for a day or more.
Human activity can cause seiches too, especially in small, narrow bodies of water. A couple of years ago, McNamara studied seiches in Panama’s Lake Gatun. That long, narrow lake is part of the Panama Canal system that links the Atlantic and Pacific Oceans. Because sloshing in the lake is more pronounced during the daytime hours, he now believes that ships passing through the canal are triggering some of the region’s small seiches.
All bodies of water can experience seiches. Indeed, McNamara points out, even the water in swimming pools can slosh around in response to passing seismic waves.

Climate change: The long reach

Earth may face far warmer temperatures than previous estimates had indicated



Earth is warming. Sea levels are rising. There’s more carbon in the air, and Arctic ice is melting faster than at any time in recorded history. Scientists who study the environment to better gauge Earth’s future climate now argue that these changes may not reverse for a very long time. Think millennia.
People burn fossil fuels like coal and oil for energy. That burning releases carbon dioxide, a colorless gas. In the air, this gas traps heat at Earth’s surface. And the more carbon dioxide released, the more the planet warms. If current consumption of fossil fuels doesn’t slow, the long-term climate impacts could last thousands of years — and be more severe than scientists had been expecting. Climatologist Richard Zeebe of the University of Hawaii at Manoa offers this conclusion in a new paper. It appeared August 5 in the Proceedings of the National Academy of Sciences.
Most climate-change studies look at what’s going to happen in the next century or so. During that time, changes in the planet’s environment could nudge global warming even higher. For example: Snow and ice reflect sunlight back into space. But as these melt, sunlight can now reach — and warm — the exposed ground. This extra heat raises the air temperature even more, causing even more snow to melt. This type of rapid, self-reinforcing amplification is called a “fast feedback.”
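In schematic form, a fast feedback behaves like a converging loop: each round of warming triggers a further, smaller round. Here is a toy sketch of that idea, with an invented feedback factor that is not a number from Zeebe’s study:

# Toy model of a "fast feedback": each increment of warming returns a further,
# smaller increment. Both numbers below are illustrative assumptions.
direct_warming = 1.0   # deg C of warming from added CO2 alone (illustrative)
feedback_factor = 0.4  # assumed: fraction of each increment returned as new warming

total, increment = 0.0, direct_warming
for _ in range(50):          # iterate the loop; it converges quickly
    total += increment
    increment *= feedback_factor

print(round(total, 2))                                   # ~1.67 deg C
print(round(direct_warming / (1 - feedback_factor), 2))  # same answer, via the
                                                         # geometric-series shortcut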
Zeebe says it’s important to look at fast feedbacks. However, he adds, they’re limited. From a climate change perspective, “This century is the most important time for the next few generations,” he told Science News. “But the world is not ending in 2100.”
For his new study, Zeebe now focuses on “slow feedbacks.” While fast feedback events unfold over decades or centuries, slow feedbacks can take thousands of years. Melting of continental ice sheets and the migration of plant life — as they relocate to more comfortable areas — are two examples.
Zeebe gathered information from previously published studies investigating how such processes played out over thousands of years during past dramatic changes in climate. Then he came up with a forecast for the future that accounts for both slow and fast feedback processes.
Climate forecasts that use only fast feedbacks predict a 4.5 degree Celsius (8.1 degree Fahrenheit) change by the year 3000. But slow feedbacks added another 1.5° C — for a 6° total increase, Zeebe reports. He also found that slow feedback events will cause global warming to persist for thousands of years after people run out of fossil fuels to burn.
“This study uses our understanding of how the climate works to build an idea of what might happen in the future,” Ana Christina Ravelo told Science News. Ravelo is a climate scientist at the University of California, Santa Cruz. She pointed out that Zeebe’s study also is conservative — which means it might greatly underestimate the true boost in Earth’s temperature.

Sleepyheads prefer junk food

A night without sleep changes the brain and how appetizing people find high-calorie foods




Pulling an all-nighter does a number on the brain, a new study finds. People who lost a night of sleep also lost much of their willpower to eat right. This connection could help explain why people who don’t regularly get a good night’s sleep are more likely to be obese.
For the new study, scientists recruited 23 adult volunteers to take part in tests at a sleep lab on two separate nights. On night one, the men and women were encouraged to sleep normally. On the other night (at least one week before or after the other), the scientists kept their recruits awake all night long.
After each night in the lab, the people reported how hungry they were. That didn’t differ after each night. But what foods the recruits found appetizing did — a lot.
The scientists showed the volunteers 80 pictures of food while they underwent brain scans. After seeing each picture, the men and women rated how much they desired that food. The brain scans recorded which parts of the brain became active during the viewing of each picture.
Low-calorie foods, like carrots, looked equally appetizing to volunteers after sleeping soundly or not at all. But junk foods (such as doughnuts and potato chips) looked a lot yummier after volunteers stayed awake all night. And the sleepier the recruits felt, the better those sweet and fatty foods looked.
The brain scans also showed that a night without sleep triggered changes in a test subject’s brain activity. Areas involved in making decisions about what to eat became less active. At the same time, activity rose in an area thought to promote eating.
Stephanie Greer of the University of California, Berkeley, and her coworkers reported their findings August 6 in Nature Communications.
At the University of Chicago, Eve Van Cauter has shown a link between appetite and sleep. She found that blood levels of hormones that tell the body how hungry or full we are — as well as food preferences — differ, depending on how well-rested people are. Almost a decade ago, she studied people who were allowed to sleep only four hours for two nights in a row. Her sleep-deprived recruits ate more — almost 25 percent more calories — than when they had gotten a full night’s sleep. And the sleepyheads selected mostly high-calorie foods.
Think that amount of sleep is crazy low? Not for many teens. A survey of high school students a few years ago by the Centers for Disease Control and Prevention found that about 10 percent of teens reported sleeping an average of just five hours per night. Almost 6 percent claimed to regularly get no more than four hours of shut-eye a night.
People are the only animals to voluntarily ignore their sleep needs, Van Cauter told Science News. They stay up to play, work, hang out with friends or surf the Web. But doing so, she says, is fighting our biology “because we are not wired for sleep deprivation.”

Kepler telescope can’t be fixed


Since its launch on March 6, 2009, the $600 million Kepler space telescope has been hunting for planets outside Earth’s solar system. And to date, it’s turned up thousands. But in July 2012, the National Aeronautics and Space Administration announced that one important part on the spacecraft had failed. In May of this year, a sister part failed. NASA initially hoped it might fix the broken parts. No more. On August 15, space-agency officials announced that Kepler’s damage was beyond repair.
The spacecraft relies on four “reaction wheels” to help turn the telescope toward the stars that scientists want to target. Two of those reaction wheels no longer work correctly. By losing the ability to precisely point the spacecraft toward targeted stars, the telescope can no longer detect the small dips in starlight that signify the existence of distant planets.
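Those “small dips in starlight” come from transit photometry: when a planet crosses in front of its star, the light drops by roughly the square of the planet-to-star radius ratio. A minimal sketch with generic Sun-and-Earth values, not any particular Kepler target:

# Transit photometry in a nutshell: a crossing planet blocks a fraction of the
# star's disk, dimming it by about (planet radius / star radius)^2.
# Radii below are standard values for the Sun and Earth, used as a generic example.
R_SUN_KM = 696_000
R_EARTH_KM = 6_371

def transit_depth(planet_radius_km: float, star_radius_km: float) -> float:
    """Fractional dip in starlight during a transit (ignoring limb darkening)."""
    return (planet_radius_km / star_radius_km) ** 2

print(f"{transit_depth(R_EARTH_KM, R_SUN_KM):.4%}")
# ~0.0084% -- a dip this small is why losing precise pointing ends the planet hunt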
Last month, engineers forced each of the faulty wheels back into action, one at a time. But as each spun, it encountered unexpectedly high friction. This resistance to spin is a death sentence for telescopes that rely on reaction wheels.
Earlier this month, engineers tried to direct the telescope using the remaining two healthy wheels and the better of its two troubled ones. All seemed to work fine for about six hours. But then the telescope automatically turned itself off. The reason: The faulty wheel again had encountered too much friction.
“The wheels are sufficiently damaged that they cannot sustain spacecraft pointing control” — at least not for long, reported Charles Sobeck in a telephone briefing for reporters. He’s Kepler’s deputy project manager and works at the NASA Ames Research Center in northern California.
The good news: The spacecraft is not dead. In fact, Kepler scientists are now exploring what the telescope might be able to do with just its two undamaged reaction wheels. NASA had planned to spend roughly $18 million on Kepler experiments this year. Soon, the space agency will decide whether to go ahead and spend all or part of that money for a reduced mission. It will, however, be a tough sell: Kepler’s precision focus is what made it an unprecedented scientific asset.
Prior to the Kepler mission, astronomers had identified an estimated 350 exoplanets — planets beyond the solar system. In just four years, the Kepler telescope turned up more than 3,000 additional candidates.
Those numbers boosted the case for funding NASA’s next exoplanet-hunting mission. Called the Transiting Exoplanet Survey Satellite, or TESS, it’s scheduled for a 2017 launch. Unlike Kepler, which fixed its gaze on distant stars, TESS will focus on bright, nearby stars. If TESS finds planets around them, powerful telescopes like the upcoming James Webb Space Telescope will be able to probe their atmospheres.
The $200-million telescope on TESS will not be as sensitive as Kepler’s is. Still, the Kepler telescope was so successful at finding exoplanets that TESS scientists are hopeful theirs will uncover plenty of planets in our neighborhood, including a handful of Earth-sized worlds.

The Crisis of Big Science




Last year physicists commemorated the centennial of the discovery of the atomic nucleus. In experiments carried out in Ernest Rutherford’s laboratory at Manchester in 1911, a beam of electrically charged particles from the radioactive decay of radium was directed at a thin gold foil. It was generally believed at the time that the mass of an atom was spread out evenly, like a pudding. In that case, the heavy charged particles from radium should have passed through the gold foil, with very little deflection. To Rutherford’s surprise, some of these particles bounced nearly straight back from the foil, showing that they were being repelled by something small and heavy within gold atoms. Rutherford identified this as the nucleus of the atom, around which electrons revolve like planets around the sun.
This was great science, but not what one would call big science. Rutherford’s experimental team consisted of one postdoc and one undergraduate. Their work was supported by a grant of just £70 from the Royal Society of London. The most expensive thing used in the experiment was the sample of radium, but Rutherford did not have to pay for it—the radium was on loan from the Austrian Academy of Sciences.
Nuclear physics soon got bigger. The electrically charged particles from radium in Rutherford’s experiment did not have enough energy to penetrate the electrical repulsion of the gold nucleus and get into the nucleus itself. To break into nuclei and learn what they are, physicists in the 1930s invented cyclotrons and other machines that would accelerate charged particles to higher energies. The late Maurice Goldhaber, former director of Brookhaven Laboratory, once reminisced:
The first to disintegrate a nucleus was Rutherford, and there is a picture of him holding the apparatus in his lap. I then always remember the later picture when one of the famous cyclotrons was built at Berkeley, and all of the people were sitting in the lap of the cyclotron.

1.

After World War II, new accelerators were built, but now with a different purpose. In observations of cosmic rays, physicists had found a few varieties of elementary particles different from any that exist in ordinary atoms. To study this new kind of matter, it was necessary to create these particles artificially in large numbers. For this physicists had to accelerate beams of ordinary particles like protons—the nuclei of hydrogen atoms—to higher energy, so that when the protons hit atoms in a stationary target their energy could be transmuted into the masses of particles of new types. It was not a matter of setting records for the highest-energy accelerators, or even of collecting more and more exotic species of particles, like orchids. The point of building these accelerators was, by creating new kinds of matter, to learn the laws of nature that govern all forms of matter. Though many physicists preferred small-scale experiments in the style of Rutherford, the logic of discovery forced physics to become big.
In 1959 I joined the Radiation Laboratory at Berkeley as a postdoc. Berkeley then had the world’s most powerful accelerator, the Bevatron, which occupied the whole of a large building in the hills above the campus. The Bevatron had been built specifically to accelerate protons to energies high enough to create antiprotons, and to no one’s surprise antiprotons were created. What was surprising was that hundreds of types of new, highly unstable particles were also created. There were so many of these new types of particles that they could hardly all be elementary, and we began to doubt whether we even knew what was meant by a particle being elementary. It was all very confusing, and exciting.
After a decade of work at the Bevatron, it became clear that to make sense of what was being discovered, a new generation of higher-energy accelerators would be needed. These new accelerators would be too big to fit into a laboratory in the Berkeley hills. Many of them would also be too big as institutions to be run by any single university. But if this was a crisis for Berkeley, it wasn’t a crisis for physics. New accelerators were built, at Fermilab outside Chicago, at CERN near Geneva, and at other laboratories in the US and Europe. They were too large to fit into buildings, but had now become features of the landscape. The new accelerator at Fermilab was four miles in circumference, and was accompanied by a herd of bison, grazing on the restored Illinois prairie.
By the mid-1970s the work of experimentalists at these laboratories, and of theorists using the data that were gathered, had led us to a comprehensive and now well-verified theory of particles and forces, called the Standard Model. In this theory, there are several kinds of elementary particles. There are strongly interacting quarks, which make up the protons and neutrons inside atomic nuclei as well as most of the new particles discovered in the 1950s and 1960s. There are more weakly interacting particles called leptons, of which the prototype is the electron.
There are also “force carrier” particles that move between quarks and leptons to produce various forces. These include (1) photons, the particles of light responsible for electromagnetic forces; (2) closely related particles called W and Z bosons that are responsible for the weak nuclear forces that allow quarks or leptons of one species to change into a different species—for instance, allowing negatively charged “down quarks” to turn into positively charged “up quarks” when carbon-14 decays into nitrogen-14 (it is this gradual decay that enables carbon dating); and (3) massless gluons that produce the strong nuclear forces that hold quarks together inside protons and neutrons.
Successful as the Standard Model has been, it is clearly not the end of the story. For one thing, the masses of the quarks and leptons in this theory have so far had to be derived from experiment, rather than deduced from some fundamental principle. We have been looking at the list of these masses for decades now, feeling that we ought to understand them, but without making any sense of them. It has been as if we were trying to read an inscription in a forgotten language, like Linear A. Also, some important things are not included in the Standard Model, such as gravitation and the dark matter that astronomers tell us makes up five sixths of the matter of the universe.
So now we are waiting for results from a new accelerator at CERN that we hope will let us make the next step beyond the Standard Model. This is the Large Hadron Collider, or LHC. It is an underground ring seventeen miles in circumference crossing the border between Switzerland and France. In it two beams of protons are accelerated in opposite directions to energies that will eventually reach 7 TeV in each beam, that is, about 7,500 times the energy in the mass of a proton. The beams are made to collide at several stations around the ring, where detectors with the mass of World War II cruisers sort out the various particles created in these collisions.
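That “7,500 times” figure is a unit conversion one can verify directly. The proton’s rest-mass energy, about 0.938 GeV, is a standard physical constant rather than a number from this essay; a quick check:

# Checking "about 7,500 times the energy in the mass of a proton".
BEAM_ENERGY_GEV = 7000.0        # 7 TeV per beam at the LHC's design energy
PROTON_REST_ENERGY_GEV = 0.938  # standard value for the proton's rest-mass energy

print(round(BEAM_ENERGY_GEV / PROTON_REST_ENERGY_GEV))  # 7463 -- "about 7,500"

# The same conversion turns the Higgs figure quoted below (133 proton masses)
# into the familiar value in giga-electron-volts:
print(round(133 * PROTON_REST_ENERGY_GEV))  # 125 (GeV)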
Some of the new things to be discovered at the LHC have long been expected. The part of the Standard Model that unites the weak and electromagnetic forces, presented in 1967–1968, is based on an exact symmetry between these forces. The W and Z particles that carry the weak nuclear forces and the photons that carry electromagnetic forces all appear in the equations of the theory as massless particles. But while photons really are massless, the W and Z are actually quite heavy. Therefore, it was necessary to suppose that this symmetry between the electromagnetic and weak interactions is “broken”—that is, though an exact property of the equations of the theory, it is not apparent in observed particles and forces.
The original and still the simplest theory of how the electroweak symmetry is broken, the one proposed in 1967–1968, involves four new fields that pervade the universe. A bundle of the energy of one of these fields would show up in nature as a massive, unstable, electrically neutral particle that came to be called the Higgs boson. All the properties of the Higgs boson except its mass are predicted by the 1967–1968 electroweak theory, but so far the particle has not been observed. This is why the LHC is looking for the Higgs—if found, it would confirm the simplest version of the electroweak theory. In December 2011 two groups reported hints that the Higgs boson has been created at the LHC, with a mass 133 times the mass of the proton, and signs of a Higgs boson with this mass have since then turned up in an analysis of older data from Fermilab. We will know by the end of 2012 whether the Higgs boson has really been seen.
The discovery of the Higgs boson would be a gratifying verification of present theory, but it will not point the way to a more comprehensive future theory. We can hope, as was the case with the Bevatron, that the most exciting thing to be discovered at the LHC will be something quite unexpected. Whatever it is, it’s hard to see how it could take us all the way to a final theory, including gravitation. So in the next decade, physicists are probably going to ask their governments for support for whatever new and more powerful accelerator we then think will be needed.

2.

That is going to be a very hard sell. My pessimism comes partly from my experience in the 1980s and 1990s in trying to get funding for another large accelerator.
In the early 1980s the US began plans for the Superconducting Super Collider, or SSC, which would accelerate protons to 20 TeV, three times the maximum energy that will be available at the CERN Large Hadron Collider. After a decade of work, the design was completed, a site was selected in Texas, land bought, and construction begun on a tunnel and on magnets to steer the protons.
Then in 1992 the House of Representatives canceled funding for the SSC. Funding was restored by a House–Senate conference committee, but the next year the same happened again, and this time the House would not go along with the recommendation of the conference committee. After the expenditure of almost two billion dollars and thousands of man-years, the SSC was dead.
One thing that killed the SSC was an undeserved reputation for over-spending. There was even nonsense in the press about spending on potted plants for the corridors of the administration building. Projected costs did increase, but the main reason was that, year by year, Congress never supplied sufficient funds to keep to the planned rate of spending. This stretched out the time and hence the cost to complete the project. Even so, the SSC met all technical challenges, and could have been completed for about what has been spent on the LHC, and completed a decade earlier.
Spending for the SSC had become a target for a new class of congressmen elected in 1992. They were eager to show that they could cut what they saw as Texas pork, and they didn’t feel that much was at stake. The cold war was over, and discoveries at the SSC were not going to produce anything of immediate practical importance. Physicists can point to technological spin-offs from high-energy physics, ranging from synchrotron radiation to the World Wide Web. For promoting invention, big science in this sense is the technological equivalent of war, and it doesn’t kill anyone. But spin-offs can’t be promised in advance.
Ernest Rutherford holding the apparatus he used to disintegrate a nitrogen nucleus, circa 1917
What really motivates elementary particle physicists is a sense of how the world is ordered—it is, they believe, a world governed by simple universal principles that we are capable of discovering. But not everyone feels the importance of this. During the debate over the SSC, I was on the Larry King radio show with a congressman who opposed it. He said that he wasn’t against spending on science, but that we had to set priorities. I explained that the SSC was going to help us learn the laws of nature, and I asked if that didn’t deserve a high priority. I remember every word of his answer. It was “No.”
What does motivate legislators is the immediate economic interests of their constituents. Big laboratories bring jobs and money into their neighborhood, so they attract the active support of legislators from that state, and apathy or hostility from many other members of Congress. Before the Texas site was chosen, a senator told me that at that time there were a hundred senators in favor of the SSC, but that once the site was chosen the number would drop to two. He wasn’t far wrong. We saw several members of Congress change their stand on the SSC after their states were eliminated as possible sites.
Another problem that bedeviled the SSC was competition for funds among scientists. Working scientists in all fields generally agreed that good science would be done at the SSC, but some felt that the money would be better spent on other fields of science, such as their own. It didn’t help that the SSC was opposed by the president-elect of the American Physical Society, a solid-state physicist who thought the funds for the SSC would be better used in, say, solid-state physics. I took little pleasure from the observation that none of the funds saved by canceling the SSC went to other areas of science.
All these problems will emerge again when physicists go to their governments for the next accelerator beyond the LHC. But it will be worse, because the next accelerator will probably have to be an international collaboration. We saw recently how a project to build a laboratory for the development of controlled thermonuclear power, ITER, was nearly killed by the competition between France and Japan to be the laboratory’s site.
There are things that can be done in fundamental physics without building a new generation of accelerators. We will go on looking for rare processes, like an extremely slow conjectured radioactive decay of protons. There is much to do in studying the properties of neutrinos. We get some useful information from astronomers. But I do not believe that we can make significant progress without also pushing back the frontier of high energy. So in the next decade we may see the search for the laws of nature slow to a halt, not to be resumed again in our lifetimes.
Funding is a problem for all fields of science. In the past decade, the National Science Foundation has seen the fraction of grant proposals that it can fund drop from 33 percent to 23 percent. But big science has the special problem that it can’t easily be scaled down. It does no good to build an accelerator tunnel that only goes halfway around the circle.

3.

Astronomy has had a very different history from physics, but it has wound up with much the same problems. Astronomy became big science early, with substantial support from governments, because it was useful in a way that, until recently, physics was not. Astronomy was used in the ancient world for geodesy, navigation, time-keeping, and making calendars, and in the form of astrology it was imagined to be useful for predicting the future. Governments established research institutes: the Museum of Hellenistic Alexandria; the House of Wisdom of ninth-century Baghdad; the great observatory in Samarkand built in the 1420s by Ulugh Beg; Uraniborg, Tycho Brahe’s observatory, built on an island given by the king of Denmark for this purpose in 1576; the Greenwich Observatory in England; and later the US Naval Observatory.
In the nineteenth century rich private individuals began to spend generously on astronomy. The third Earl of Rosse used a huge telescope called Leviathan in his home observatory to discover that the nebulae now known as galaxies have spiral arms. In America observatories and telescopes were built carrying the names of donors such as Lick, Yerkes, and Hooker, and more recently Keck, Hobby, and Eberly.
But now astronomy faces tasks beyond the resources of individuals. We have had to send observatories into space, both to avoid the blurring of images caused by the earth’s atmosphere and to observe radiation at wavelengths that cannot penetrate the atmosphere. Cosmology has been revolutionized by satellite observatories such as the Cosmic Background Explorer, the Hubble Space Telescope, and the Wilkinson Microwave Anisotropy Probe, working in tandem with advanced ground-based observatories. We now know that the present phase of the Big Bang started 13.7 billion years ago. We also have good evidence that, before that, there was a phase of exponentially fast expansion known as inflation.
But cosmology is in danger of becoming stuck, in much the same sense as elementary particle physics has been stuck for decades. The discovery in 1998 that the expansion of the universe is now accelerating can be accommodated in various theories, but we don’t have observations that would point to the right theory. The observations of microwave radiation left over from the early universe have confirmed the general idea of an early era of inflation, but do not give detailed information about the physical processes involved in the expansion. New satellite observatories will be needed, but will they be funded?
The recent history of the James Webb Space Telescope, planned as the successor to Hubble, is disturbingly reminiscent of the history of the SSC. At the funding level requested by the Obama administration last year, the project would continue, but at a level that would not allow the telescope ever to be launched into orbit. In July the House Appropriations Committee voted to cancel the Webb telescope altogether. There were complaints about cost increases, but as was the case with the SSC, most of the increase came because year by year the project was not adequately funded. Funding for the telescope has recently been restored, but the prognosis for future funding is not bright. The project is no longer under the authority of NASA’s Science Mission Directorate. The technical performance of the Webb project has been excellent, and billions have already been spent, but the same was true of the SSC, and did not save it from cancellation.
Meanwhile, in the past few years funding has dropped for astrophysics at NASA. In 2010 the National Research Council carried out a survey of opportunities for astronomy in the next ten years, setting priorities for new observatories that would be based in space. The highest priorities went first to WFIRST, an infrared survey telescope; next to Explorer, a program of mid-sized observatories similar in scale to the Wilkinson Microwave Anisotropy Probe; then to LISA, a gravitational wave observatory; and finally to an international X-ray observatory. No funds are in the budget for any of these.
Some of the slack in big science is being taken up by Europe, as for instance with the LHC and a new microwave satellite observatory named Planck. But Europe has worse financial problems than the US, and the European Union Commission is now considering the removal of large science projects from the EU budget.
Space-based astronomy has a special problem in the US. NASA, the government agency responsible for this work, has always devoted more of its resources to manned space flight, which contributes little to science. All of the space-based observatories that have contributed so much to astronomy in recent years have been unmanned. The International Space Station was sold in part as a scientific laboratory, but nothing of scientific importance has come from it. Last year a cosmic ray observatory was carried up to the Space Station (after NASA had tried to remove it from the schedule for shuttle flights), and for the first time significant science may be done on the Space Station, but astronauts will have no part in its operation, and it could have been developed more cheaply as an unmanned satellite.
The International Space Station was partly responsible for the cancellation of the SSC. Both came up for a crucial vote in Congress in 1993. Because the Space Station would be managed from Houston, both were seen as Texas projects. After promising active support for the SSC, in 1993 the Clinton administration decided that it could only support one large technological project in Texas, and it chose the Space Station. Members of Congress were hazy about the difference. At a hearing before a House committee, I heard a congressman say that he could see how the Space Station would help us to learn about the universe, but he couldn’t understand that about the SSC. I could have cried. As I later wrote, the Space Station had the great advantage that it cost about ten times more than the SSC, so that NASA could spread contracts for its development over many states. Perhaps if the SSC had cost more, it would not have been canceled.

4.

Big science is in competition for government funds, not only with manned space flight, and with various programs of real science, but also with many other things that we need government to do. We don’t spend enough on education to make becoming a teacher an attractive career choice for our best college graduates. Our passenger rail lines and Internet services look increasingly poor compared with what one finds in Europe and East Asia. We don’t have enough patent inspectors to process new patent applications without endless delays. The overcrowding and understaffing in some of our prisons amount to cruel and unusual punishment. We have a shortage of judges, so that civil suits take years to be heard.
The Securities and Exchange Commission, moreover, doesn’t have enough staff to win cases against the corporations it is charged to regulate. There aren’t enough drug rehabilitation centers to treat addicts who want to be treated. We have fewer policemen and firemen than before September 11. Many people in America cannot count on adequate medical care. And so on. In fact, many of these other responsibilities of government have been treated worse in the present Congress than science. All these problems will become more severe if current legislation forces an 8 percent sequestration—or reduction, in effect—of nonmilitary spending after this year.
We had better not try to defend science by attacking spending on these other needs. We would lose, and would deserve to lose. Some years ago I found myself at dinner with a member of the Appropriations Committee of the Texas House of Representatives. I was impressed when she spoke eloquently about the need to spend money to improve higher education in Texas. What professor at a state university wouldn’t want to hear that? I naively asked what new source of revenue she would propose to tap. She answered, “Oh, no, I don’t want to raise taxes. We can take the money from health care.” This is not a position we should be in.
It seems to me that what is really needed is not more special pleading for one or another particular public good, but for all the people who care about these things to unite in restoring higher and more progressive tax rates, especially on investment income. I am not an economist, but I talk to economists, and I gather that dollar for dollar, government spending stimulates the economy more than tax cuts. It is simply a fallacy to say that we cannot afford increased government spending. But given the anti-tax mania that seems to be gripping the public, views like these are political poison. This is the real crisis, and not just for science.