A look at Rwanda’s genocide helps explain why ordinary people kill their neighbors

A string of state-directed, targeted mass killings left a bloody stain on the 20th century. A genocide more recent than the Holocaust is providing new insights into why some people join in such atrocities.

Adolf Hitler’s many accomplices in his campaign to exterminate Jews throughout Europe have justifiably attracted the attention of historians and social scientists. But a 100-day spasm of unprecedented violence in 1994 that wiped out about three-quarters of the ethnic Tutsi population in the African nation of Rwanda has the potential to reveal much about how mass killings unfold at ground level.
There is no guarantee that a better, although inevitably incomplete, understanding of why certain members of Rwanda’s majority Hutu population nearly eliminated a Tutsi minority will prevent future large-scale slaughters. The research is worth the effort, though, especially in a 21st century already marked by massacres of hundreds of thousands of people in western Sudan’s Darfur region and in Syria.

Researchers have an advantage in Rwanda. When hostilities ended, Rwanda’s government gathered extensive data on genocide victims and suspected perpetrators through a national survey. And local courts tried more than 1 million cases of alleged involvement in the violence, making the case documents available to researchers.

Genocide studies have often split offenders into organizers — mainly political and community leaders — and “ordinary men” who kill out of blind obedience to central or local authorities and hatred of those deemed enemies. But the extensive data from Rwanda tell a different story: An individual’s willingness to take part in genocidal violence depends on many personal and social factors that influence whether and how deeply a person participates, says sociologist and Rwanda genocide researcher Hollie Nyseth Brehm of Ohio State University in Columbus.

Nyseth Brehm’s findings may not apply to some of Rwanda’s most avid killers, who eluded capture and fled the country as soon as hostilities stopped. But when it comes to the ordinary citizens swept up in the deadly campaign, involvement was not primarily about following political leaders’ orders to eliminate Tutsis.

New reports by Nyseth Brehm and others fuel skepticism about the popular idea that regular folks tend to do as they’re told by authorities. And a fresh look at a famous 1960s psychology study adds further doubt that people will blindly follow orders to harm or kill others.
In reality, only about 20 percent of Hutu men, an estimated 200,000, seriously injured or killed at least one person during the genocidal outbreak, estimates Rwanda genocide researcher Omar McDoom of the London School of Economics and Political Science.

“Why did four in five Hutu men not engage in the killing?” McDoom asks. That puzzle goes against the ordinary man thesis that “implies there are no individual differences in genocide participation,” he says. He suspects participation hinged on personal motivations, such as wanting to defend Rwanda from enemies or make off with a Tutsi neighbor’s possessions. Social circumstances, such as living in high-violence areas or having friends or family members who had already murdered Tutsis, probably played a role too. Nyseth Brehm agrees.

Local triggers
Genocides often fester before exploding. In Rwanda, Tutsi rebels attacked the Hutu-led government and set off a civil war several years before mass killings started. A turning point came when unidentified forces killed Rwanda’s president, shooting down his plane on April 6, 1994. Over the next three months, the government orchestrated a massacre of Tutsis and any Hutus deemed friendly or helpful to Tutsis. Most scholars place the death toll at around 800,000, although estimates range from 500,000 to 1.2 million. Bands of Hutus scoured the countryside for their sworn enemies. Killings took place at roadblocks and in raids on churches, schools and other community facilities. Hutu women killed on a much smaller scale than men did, although they often aided those involved in the carnage.

In many parts of Rwanda, local authorities appointed by the national government recruited Hutu men into groups that burned and looted homes of their Tutsi neighbors, killing everyone they encountered, says political scientist Scott Straus of the University of Wisconsin–Madison. In his 2016 book Fundamentals of Genocide and Mass Atrocity Prevention, Straus describes how Rwandan recruitment efforts coalesced into a killing machine. Politicians, business people, soldiers and others encouraged Hutu farmers to kill an enemy described as “cockroaches” in need of extermination. Similarly, Nazis portrayed Jews as cockroaches and vermin.

Despite the Rwandan state’s best efforts to encourage nationwide Tutsi annihilation, local conditions shaped how the 1994 genocide unfolded, Nyseth Brehm reported in February in Criminology. She looked at 142 of the nation’s 145 municipalities, known as communes. Some experienced as few as 71 killings, while in others, as many as 54,700 people were murdered, she found.

Communes with the fewest killings were those that had the highest marriage and employment rates, Nyseth Brehm says. In those settings, mainly farming communities where people knew and trusted each other, most citizens valued a peaceful status quo and discouraged a descent into mass killing, she suspects.
Curiously, violence was worse in areas with the largest numbers of educated people. That points to the effectiveness of anti-Tutsi teachings in Rwandan schools, Nyseth Brehm suggests.

Her study relied on data from a postgenocide survey, published in 2004 by Rwanda’s government, intended to document every person killed during the atrocity. Citizens throughout Rwanda told interviewers about individuals in their communities who had been killed during the outburst of slaughter. Reported and confirmed deaths were checked against records of human remains linked to the 1994 genocide. Comparisons were also made to Rwanda’s 1991 census.

However, any data on killings during mass violence, including from the Rwandan survey, will be incomplete, Nyseth Brehm cautions. So she also analyzed data from 1,068,192 genocide-related cases tried in local Rwandan courts from 2002 to 2012. Of particular note, although most nongenocidal murders in Rwanda are carried out by men in their 20s, the average age of accused genocide perpetrators was 34.7 years old, Nyseth Brehm reported in the November 2016 Criminology.

Hutu men in their 30s joined the genocidal fray as a way to fulfill adult duties by defending their communities against an outside threat, she suggests.

Perpetrators also tended to cluster in families: the court data indicated that Rwandans who had siblings convicted of genocide killings were especially likely to have murdered Tutsis themselves, and if one of several brothers killed Tutsis, the others were far more likely to follow suit. In earlier interviews of 130 Rwandans, some who had killed Tutsis and others who hadn't, McDoom found the same family pattern.

Missing murderers
Unfortunately, the Rwandan genocide’s most prolific players have eluded both the law and science, says political scientist Cyanne Loyle of Indiana University Bloomington. Investigators have so far interviewed only a handful of the powerful “big fish” who orchestrated the genocide, plus several hundred people tried and imprisoned for genocide participation. Survey and court data are limited to killers who either stayed in Rwanda after atrocities ended or were caught trying to flee the country.

But perpetrators with the most blood on their hands traveled in bands, wiping out tens of thousands of people at a time before hiding abroad, Loyle says. For instance, local officials lured large numbers of Tutsis to a school near the town of Murambi, where Hutu militias used machine guns, explosives and other weapons to kill more than 40,000 people in just three days.

“Scholars have studied Rwandans who killed on the sidelines while a larger and deadlier campaign was under way,” Loyle says. “They have mistaken a sideshow for the main event.”

Perpetrators of colossal atrocities at Murambi and elsewhere were less powerful than the government’s genocide masterminds, Loyle says. These “murderers in the middle,” however, were better equipped and far more effective at killing than common folk who got caught up in events, she contends.

There are no good estimates of how many members of large-scale killing squads escaped Rwanda and now live elsewhere. From 15,000 to 22,000 members of the Rwandan army and local militia groups were at large in the Democratic Republic of the Congo, near Rwanda’s border, in January 2003, according to a report by the International Crisis Group, a nonprofit organization.

Nyseth Brehm acknowledges the difficulty of accounting for genocide perpetrators who eluded justice. She and others, including Straus, have interviewed genocide offenders who stayed in Rwanda, often imprisoned for their crimes. Many of those who fled must have traveled in groups that murdered on a grand scale, she says. Those mass killers represent crucial missing data on who participates in genocide, and for what reasons.

Vicious virtue
In interviews by Nyseth Brehm, McDoom and others, perpetrators listed many reasons for joining the 1994 killing spree — hatred of Tutsis, a perceived need to protect nation and family, a desire to claim a neighbor’s property or a decision to join a suddenly popular cause, to name a few. Blind obedience to brutal leaders was far from the only reason cited.

That finding conflicts with the late psychologist Stanley Milgram’s interpretation of his famous “obedience to authority” experiments. Milgram described those trials, in which volunteers were told to administer increasingly intense shocks to another person, as a demonstration of people’s frequent willingness to follow heinous commands. He saw the experiments as approximating the more extreme situations in which Germans had participated in the Holocaust.
On closer inspection, though, Milgram’s study aligns closely with what’s known about Rwandan genocide perpetrators, says S. Alexander Haslam, a psychologist at the University of Queensland in Australia.
In Milgram’s experiments, as in Rwanda and Nazi Germany, “those willing to harm others were not so much passive ciphers as motivated instruments of a collective cause,” Haslam says. “They perceived themselves as acting virtuously and doing good things.”

Although Milgram’s tests upset some volunteers, most participants identified with his scientific mission to understand human behavior and wanted to prove themselves worthy of the project, Haslam and psychologist Stephen Reicher of the University of St. Andrews in Fife, Scotland, conclude in a research review scheduled to appear in the 2017 Annual Review of Law and Social Science.

Milgram conducted 23 obedience experiments with New Haven, Conn., residents in 1961 and 1962 (SN: 9/21/13, p. 30). Most attention has focused on only one of those experiments. Volunteers designated as “teachers” were asked by an experimenter to continue upping the intensity of what they thought were electric shocks to a “learner” — who was actually in league with Milgram — who erred time and again on a word-recall test. Through screams, shouts and eventually dead silence from the learner, 26 of 40 volunteers, or 65 percent, administered shocks all the way to a maximum of 450 volts.

But experiments that undermined participants’ identification with the scientific mission lowered their willingness to deliver the harshest shocks, Haslam and Reicher say. Fewer volunteers shocked to the bitter end if, for instance, the study was conducted in an office building rather than a university laboratory or if the experimenter was not physically present. An analysis of data available from 21 of the 23 experiments finds that 43.6 percent of 740 volunteers shocked learners to the limit.
Participants were most compliant when an experimenter encouraged them to continue shocking for the sake of the experiment (by saying, “The experiment requires that you continue”), the psychologists add. Participants never followed the order: “You have no choice, you must continue.”

Milgram’s archives at Yale University contain letters and survey responses from former participants reporting high levels of support for Milgram’s project and for science in general. Many former volunteers told Milgram that they administered shocks out of a duty to collaborate on what they viewed as important research, even if it caused them distress at the time. Still, Milgram’s recruits often admitted having had suspicions during the experiments that learners were not really being zapped.

Milgram was right that his experiments applied to real-world genocides, Haslam concludes, but erred in assuming that obedience to authority explained his results. From Milgram’s laboratory to Rwanda’s killing squads and Nazi concentration camps, orders to harm others are carried out by motivated followers, not passive conformists, he asserts.

If anything, that makes genocide all the more horrifying.

Why are the loops in the sun’s atmosphere so neat and tidy?

When the Aug. 21 solar eclipse unveils the sun’s normally dim atmosphere, the corona will look like an intricate, orderly network of loops, fans and streamers. These features trace the corona’s magnetic field, which guides coronal plasma to take on the shape of tubes and sheets.

These wispy coronal structures arise from the magnetic field on the sun’s visible surface, or its photosphere. Unlike the corona, the photosphere’s magnetism is a complete mess.
“It’s not a static surface like the ground, it’s more like an ocean,” says solar physicist Amir Caspi of the Southwest Research Institute in Boulder, Colo. “And not just an ocean. It’s like a boiling ocean.”

Because the corona’s loops and streamers all originate in the turbulent photosphere, their roots should get twisted and turned around.

“And yet these structures in the corona are not tangled and snarled and matted like kelp or seaweed in the ocean,” Caspi says. “They seem to still be these organized, smooth loops. Nobody understands why.”

To unknot the photosphere’s tangled mats, the corona must release some of the energy stored there, Caspi says. So during the eclipse, he and his colleagues will be looking for the release valves that set the corona free.
One possibility is that wave motion in the corona’s magnetic field lines helps untie the snarls. Magnetic waves in plasma, called Alfvén waves, are thought to ripple through the sun’s magnetic field lines like vibrations in a guitar string. Researchers have directly observed Alfvén waves in the lower corona, within about half a solar radius of the surface (SN: 4/11/09, p. 12), but not farther out where similar waves with higher amplitudes would travel. Those close-in waves were too weak to explain the corona’s features, but perhaps more distant waves could shake things up enough.
Another option is that little hypothetical spurts of magnetic energy could help release the tangles. These nanoflares and nanojets would be like solar flares but with a billionth of the energy. By going off all the time, nanoflares and nanojets could collectively release enough energy to give the corona some structure, simulations have shown.

“Both are symptoms of tiny rearrangements of the magnetic field — magnetic reconnection,” says solar physicist Craig DeForest, also at the Southwest Research Institute. Solar flares and bigger outbursts called coronal mass ejections are also signs of magnetic reconnection, but they’re not frequent enough to account for the corona’s smoothness. “Nanojets and/or nanoflares in the middle corona would be a smoking gun that would explain why the corona is so organized,” DeForest says.

No one has actually seen any nanoflares or nanojets. Theories suggest that they’re too small and quick to see individually — but they should be visible as a cacophony of little pops when the solar eclipse reveals the lower corona.

The shaking from Alfvén waves and the flickers of nanoflares could not only loosen up the tangled skein of magnetism, but also transfer heat high up into the corona. Caspi, DeForest and their colleagues hope to see both effects on August 21, when they fly a pair of telescopes on twin NASA WB-57 high-altitude research jets along the path of the eclipse (SN Online: 8/14/17).

“We’re taking high-speed movies of the sun and analyzing them for things that look like waves,” Caspi says. “We’re just overall looking at the structure of the corona.”

On social media, privacy is no longer a personal choice

Some people might think that online privacy is a, well, private matter. If you don’t want your information getting out online, don’t put it on social media. Simple, right?

But keeping your information private isn’t just about your own choices. It’s about your friends’ choices, too. Results from a study of a now-defunct social media site show that the inhabitants of the digital age may need to stop and think about just how much they control their personal information, and where the boundaries of their privacy are.

When someone joins a social network, the first order of business is, of course, to find friends. To aid the process, many apps offer to import contact lists from someone’s phone or e-mail or Facebook, to find matches with people already in the network.

Sharing those contact lists seems innocuous, notes David Garcia, a computational social scientist at the Complexity Science Hub Vienna in Austria. “People giving contact lists, they’re not doing anything wrong,” he says. “You are their friend. You gave them the e-mail address and phone number.” Most of the time, you probably want to stay in touch with the person, possibly even via the social media site.

But the social network then has that information — whether or not the owner of it wanted it shared.

Social platforms’ ability to collect and curate this extra information into what are called shadow profiles first came to light with a Facebook bug in 2013. The bug inadvertently shared the e-mail addresses and phone numbers of some 6 million users with all of their friends, even when the information wasn’t public.

Facebook immediately addressed the bug. But afterward, some users noticed that the phone numbers on their Facebook profiles had still been filled in — even though they had not given Facebook their digits. Instead, Facebook had collected the numbers from the contact lists innocently provided by their friends, and filled in the missing information for them. A shadow profile had become reality.
It’s no surprise that a social platform could take names, e-mail addresses and phone numbers and match them up with other people on the same platform. But Garcia wondered if these shadow profiles could be extended to people not on the social platform at all.

He turned to a now-defunct social network called Friendster. A precursor to sites like MySpace and Facebook, Friendster launched in 2002. In 2008, the social site boasted more than 115 million users. But by 2009 people began to jump ship for other sites, and in 2015 Friendster closed for good. Millions of abandoned public profiles vanished into the ether.

Or did they? The Internet Archive — a nonprofit library — has an archive of more than 200 billion web pages, including Friendster. Garcia was able to use that repository to get data on 100 million public accounts from the social media site. Garcia dug through the records in a process he calls Internet Archaeology, after a satirical video from The Onion in which an internet archaeologist announces that he has, ironically, discovered Friendster. “The time scale of online media is very fast, but it’s still studying things in society that don’t exist anymore,” he explains.

Garcia hunted for patterns in the data. Most people don’t have a random assortment of friends. Married people tend to be friends with other married people, for example. But people also have connections that complicate the ability to predict who’s connected to whom. People who identified as gay men were more likely to be friends with other gay men, but also likely to be friends with women. Straight women were more likely to be friends with men.

Using this information, Garcia was able to show that he could predict characteristics such as the marital status and sexual orientation of users’ friends who were not on the social media network. And the more people in the network who shared their own personal information, the more information the network received about their contacts, and the better the predictions about people outside the network became.
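The core of this kind of inference can be illustrated with a toy sketch. This is not Garcia's actual model, and the names and attributes below are invented for illustration: if a non-member shows up in the contact lists uploaded by several members whose attributes are known, even a simple majority vote over those contacts yields a homophily-based guess about the outsider.

```python
from collections import Counter

# Hypothetical data for illustration only (not Garcia's method or data).
# Members of a network have shared contact lists; some member attributes
# (e.g., marital status) are known from their public profiles.
members = {  # member -> known attribute
    "alice": "married",
    "bob": "married",
    "carol": "single",
}

contact_lists = {  # member -> contacts they uploaded (may include non-members)
    "alice": ["dave@example.com", "bob"],
    "bob": ["dave@example.com", "carol"],
    "carol": ["dave@example.com"],
}

def predict_attribute(outsider):
    """Guess an outsider's attribute by majority vote over the known
    attributes of members whose uploaded contact lists include them."""
    votes = [attr for member, attr in members.items()
             if outsider in contact_lists.get(member, [])]
    return Counter(votes).most_common(1)[0][0] if votes else None

print(predict_attribute("dave@example.com"))  # vote among alice, bob, carol
```

The sketch also shows why more sharing means better predictions: each additional member who uploads a contact list adds another vote about the outsiders on it.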

“You are not in full control of your privacy,” he concludes. If your friend is on a social platform, so are you. And you don’t have a choice in the matter. Garcia published his findings August 4 in Science Advances.

This does not mean that social platforms are creating shadow profiles of your social media–averse friends, Garcia notes. But with the information people give to social networks and with the platforms’ computing abilities, they certainly could. To keep his own data from being used that way, Garcia worked only with the most basic, public information. He didn’t predict anything about specific people; he only checked to see whether such predictions were possible, and he kept the power of his predictions low and very general. He was also careful not to construct an algorithm that could actually build a shadow profile, so that others cannot misuse the findings.

But the results do show that information from your friends on a social network could accurately predict your marital status, location, sexual orientation or political affiliation — information that you may not want anyone to know, let alone in a social network you’re not even on.

“It’s a good illustration of an issue we have in society, which is that we no longer have control over what people can infer about us,” says Elena Zheleva, a computer scientist at the University of Illinois at Chicago. “If I decide not to participate in a certain social network, that doesn’t mean that people won’t be able to find things about me on that network.”

This means we might need to think differently about what privacy means. “We’re used to thinking of having a private space,” Garcia says. “We think we’ve got a room with keys and we let some people in.” But a better image, he argues, might be to imagine ourselves covered in the wet paint of our personal information. If we touch someone else, we leave a handprint. “The more you touch other people, the more you leave on them,” he explains. Touch enough people, and anyone who looks at those people and their paint-covered sleeves will be able to pick out your personal shade of teal.

And because we are no longer in full control of our privacy, Garcia notes, it also means that protecting privacy isn’t something any one person can do. “In some sense it resembles climate change,” he says. “It’s not something you can solve on your own. It’s everyone’s problem or it’s no one’s problem.”

Invasive earthworms may be taking a toll on sugar maples

Earthworms are great for soil, right? Well, not always. In places where there have been no earthworms for thousands of years, foreign worms can wreak havoc on soils. And that can cause a cascade of problems throughout an area’s food web. Now comes evidence that invader worms in the Upper Great Lakes may be stressing the region’s sugar maples.

There are native earthworms in North America, but not in regions that had been covered in glaciers during the Ice Age. Once the ice melted, living things returned. Earthworms don’t move that quickly, though, and even after 10,000 years, they’ve only made small inroads into the north on their own.

But people have inadvertently intervened. Sometimes they’ve dumped their leftover bait in worm-free zones. Or they’ve accidentally brought worms or eggs in the soil stuck to cars or trucks. And the worms took up residence as far north as Alberta’s boreal forests.

Earthworms “are not really supposed to be in some of these areas,” says Tara Bal, a forest health scientist at Michigan Technological University in Houghton. “In a garden, they’re good,” she notes. They help to mix soil. But that isn’t a good thing in a northern forest where soil is naturally stratified and nutrients tend to be found only in the uppermost layer near the leaf litter. “That’s what the trees have been used to,” Bal says. Those trees include sugar maples, which have shallow roots to get those nutrients. But worms mix up the soils and take away that nutrient-rich layer.
Bal didn’t start out studying worms in northern regions. She and her colleagues were brought in to address a problem that sugar maple growers were experiencing. Some of the trees appeared to be stressed out. They were experiencing what’s called dieback, when whole branches die, fall off and regrow. This is worrisome because if enough of the tree dies off, “it’s a slow spiral from there,” Bal says — the whole tree eventually dies.
To investigate, the researchers collected data on trees and anything that could be affecting them, from soil type to slope to insects. They looked at trees in 120 plots in Michigan, Wisconsin and Minnesota. And they compared trees that were on growers’ land with those on public land, thinking that how the trees were managed might have some effect.
When the researchers analyzed the data, “the same thing that kept coming up over and over again was the forest floor condition,” Bal says. “That is directly related to the presence of earthworms.” They didn’t go out to look for the worms, but they could see signs of them in the amount of carbon in the soil and in changes in the ground cover. Wildflowers, for instance, were replaced by grasses and sedges, the researchers report July 26 in Biological Invasions.

Bal and her team can’t say what this means for maple syrup production. It might not mean anything at all. But “worms are ecosystem engineers,” she notes. “They’re changing the food chain.” Everything from insects to birds to salamanders could be affected by the arrival of worms.

Even if the sugar maples take a hit, though, there could be an upside, Bal says. These trees are often grown with few other types of trees around. Such a grove is naturally less resilient to climate change and extreme weather. So replacing some of those sugar maples with other trees could result in a healthier, more resilient forest in the future, Bal says.

Zika could one day help combat deadly brain cancer

Zika’s damaging neurological effects might someday be enlisted for good — to treat brain cancer.

In human cells and in mice, the virus infected and killed the stem cells that become a glioblastoma, an aggressive brain tumor, but left healthy brain cells alone. Jeremy Rich, a regenerative medicine scientist at the University of California, San Diego, and colleagues report the findings online September 5 in the Journal of Experimental Medicine.

Previous studies had shown that Zika kills stem cells that generate nerve cells in developing brains (SN: 4/2/16, p. 26). Because of similarities between those neural precursor cells and stem cells that turn into glioblastomas, Rich’s team suspected the virus might also target the cells that cause the notoriously deadly type of cancer. In the United States, about 12,000 people are expected to be diagnosed with glioblastoma in 2017. (It’s the type of cancer U.S. Senator John McCain was found to have in July.) Even with treatment, most patients live only about a year after diagnosis, and tumors frequently recur.
In cultures of human cells, Zika infected glioblastoma stem cells and halted their growth, Rich and colleagues report. The virus also infected full-blown glioblastoma cells, but at a lower rate, and didn’t infect normal brain tissue. In mice with glioblastoma, Zika infection shrank tumors or slowed their growth compared with uninfected animals. The virus-infected mice lived longer, too. In one trial, almost half of the mice survived more than six weeks after being infected with Zika, while all of the mice given a placebo died within two weeks.

Using a virus to knock out cancer isn’t a completely new idea. Treatments that rely on modified polioviruses to target tumors such as glioblastomas are already in clinical trials in the United States, and there’s a modified herpesvirus approved by the U.S. Food and Drug Administration for treating melanoma.

These cancer-fighting viruses seem to work in two ways, says Andrew Zloza, head of surgical oncology research at the Rutgers Cancer Institute of New Jersey in New Brunswick. First, the viruses infect and kill cancer cells. Then, as those cancer cells split open, previously hidden tumor fragments become visible to the immune system, which can recognize and fight them.

“Right now we don’t know what kind of viruses are best” for fighting cancer, Zloza says — whether it’s more effective to use a common virus that many people have been exposed to or something more unusual. But now, Zika is yet another candidate.

Further testing is needed to determine whether the virus is safe and effective in humans. Since Zika’s effects are more harmful in developing brains, a Zika-based cancer therapy might be safe only in adults. And the virus would need to be genetically modified to make it safer and less transmissible.
Rich and colleagues are now testing in mice whether combining Zika with traditional cancer treatments such as chemotherapy is more effective than either treatment by itself. Because Zika targets the cells that generate tumor cells, it might prevent tumors from recurring.

Chong Liu one-ups plant photosynthesis

For Chong Liu, asking a scientific question is something like placing a bet: You throw all your energy into tackling a big and challenging problem with no guarantee of a reward. As a student, he bet that he could create a contraption that photosynthesizes like a leaf on a tree — but better. For the now 30-year-old chemist, the gamble is paying off.

“He opened up a new field,” says Peidong Yang, a chemist at the University of California, Berkeley, who was Liu’s Ph.D. adviser. Liu was among the first to combine bacteria with metals or other inorganic materials to replicate the energy-generating chemical reactions of photosynthesis, Yang says. Liu’s approach to artificial photosynthesis may one day be especially useful in places without extensive energy infrastructure.

Liu first became interested in chemistry during high school, and majored in the subject at Fudan University in Shanghai. He recalls feeling frustrated in school when he would ask questions and be told that the answer was beyond the scope of what he needed to know. Research was a chance to seek out answers on his own. And the problem of artificial photosynthesis seemed like something substantial to throw himself into — challenging enough “so [I] wouldn’t be jobless in 10 or 15 years,” he jokes.
Photosynthesis is a simple but powerful process: Sunlight helps transform carbon dioxide and water into chemical energy stored in the chemical bonds of sugar molecules. But in nature, the process isn’t particularly efficient, converting just 1 percent of solar energy into chemical energy. Liu thought he could do better with a hybrid system.
The efficiency of natural photosynthesis is limited by light-absorbing pigments in plants or bacteria, he says. People have designed materials that absorb light far more efficiently. But when it comes to transforming that light energy into fuel, bacteria shine.

“By taking a hybrid approach, you leverage what each side is better at,” says Dick Co, managing director of the Solar Fuels Institute at Northwestern University in Evanston, Ill.

Liu’s early inspiration was an Apollo-era attempt at a life-support system for manned space missions. The idea was to use inorganic materials with specialized bacteria to turn astronauts’ exhaled carbon dioxide into food. But early attempts never went anywhere.

“The efficiency was terribly low, way worse than you’d expect from plants,” Liu says. And the bacteria kept dying — probably because other parts of the system were producing molecules that were toxic to the bacteria.

As a graduate student, Liu decided to use his understanding of inorganic chemistry to build a system that would work alongside the bacteria, not against them. He first designed a system that uses nanowires coated with bacteria. The nanowires collect sunlight, much like the light-absorbing layer on a solar panel, and the bacteria use the energy from that sunlight to carry out chemical reactions that turn carbon dioxide into a liquid fuel such as isopropanol.

As a postdoctoral fellow in the lab of Harvard University chemist Daniel Nocera, Liu collaborated on a different approach. Nocera had been working on a “bionic leaf” in which solar panels provide the energy to split water into hydrogen and oxygen gases. Then, Ralstonia eutropha bacteria consume the hydrogen gas and pull in carbon dioxide from the air. The microbes are genetically engineered to transform the ingredients into isopropanol or another liquid fuel. But the project faced many of the same problems as other bacteria-based artificial photosynthesis attempts: low efficiency and lots of dead bacteria.

“Chong figured out how to make the system extremely efficient,” Nocera says. “He invented biocompatible catalysts” that jump-start the chemical reactions inside the system without killing off the fuel-generating bacteria. That advance required sifting through countless scientific papers for clues to how different materials might interact with the bacteria, and then testing many different options in the lab. In the end, Liu replaced the original system’s problem catalysts — which made a microbe-killing, highly reactive type of oxygen molecule — with cobalt-phosphorus, which didn’t bother the bacteria.

Chong is “very skilled and open-minded,” Nocera says. “His ability to integrate different fields was a big asset.”

The team published the results in Science in 2016, reporting that the device was about 10 times as efficient as plants at removing carbon dioxide from the air. With 1 kilowatt-hour of energy powering the system, Liu calculated, it could recycle all the carbon dioxide in more than 85,000 liters of air into other molecules that could be turned into fuel. Using different bacteria but the same overall setup, the researchers later turned nitrogen gas into ammonia for fertilizer, which could offer a more sustainable approach to the energy-guzzling method used for fertilizer production today.
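
A rough back-of-envelope check, not from the paper itself, shows the scale of that figure. Assuming roughly 400 parts per million of carbon dioxide in air by volume, an ideal-gas molar volume of 22.4 liters and a CO2 molar mass of 44 grams per mole (all assumed values), 85,000 liters of air holds only a few dozen grams of carbon dioxide:

```python
# Back-of-envelope estimate: mass of CO2 contained in 85,000 liters of air.
# All constants below are assumptions, not figures from the Science paper.
AIR_LITERS = 85_000
CO2_PPM = 400            # assumed atmospheric CO2 concentration by volume
MOLAR_VOLUME_L = 22.4    # liters per mole of an ideal gas at standard conditions
CO2_MOLAR_MASS = 44.0    # grams per mole of CO2

co2_liters = AIR_LITERS * CO2_PPM / 1_000_000   # liters of pure CO2
co2_grams = co2_liters / MOLAR_VOLUME_L * CO2_MOLAR_MASS
print(f"{co2_liters:.0f} L of CO2, roughly {co2_grams:.0f} g")  # 34 L, ~67 g
```

So one kilowatt-hour of input energy handling that much air corresponds, under these assumptions, to capturing on the order of tens of grams of carbon dioxide.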

Soil bacteria carry out similar reactions, turning atmospheric nitrogen into forms that are usable by plants. Now at UCLA, Liu is launching his own lab to study the way the inorganic components of soil influence bacteria’s ability to run these and other important chemical reactions. He wants to understand the relationship between soil and microbes — not as crazy a leap as it seems, he says. The stuff you might dig out of your garden is, like his approach to artificial photosynthesis, “inorganic materials plus biological stuff,” he says. “It’s a mixture.”

Liu is ready to place a new bet — this time on re-creating the reactions in soil the same way he’s mimicked the reactions in a leaf.

Six-month-old babies know words for common things, but struggle with similar nouns

Around the six-month mark, babies start to get really fun. They’re not walking or talking, but they are probably babbling, grabbing and gumming, and teaching us about their likes and dislikes. I remember this as the time when my girls’ personalities really started making themselves known, which, really, is one of the best parts of raising a kid. After months of staring at those beautiful, bald heads, you start to get a glimpse of what’s going on inside them.

When it comes to learning language, it turns out that a lot has already happened inside those baby domes by age 6 months. A new study finds that babies this age understand quite a bit about words — in particular, the relationships between nouns.

Work in toddlers, and even adults, reveals that people can struggle with word meanings under difficult circumstances. We might briefly falter with “shoe” when an image of a shoe is shown next to a boot, for instance, but not when the shoe appears next to a hat. But researchers wanted to know how early these sorts of word relationships form.

Psychologists Elika Bergelson of Duke University and Richard Aslin, formerly of the University of Rochester in New York and now at Haskins Laboratories in New Haven, Conn., put 51 6-month-olds to a similar test. Outfitted with eye-tracking gear, the babies sat on a parent’s lap and looked at a video screen that showed pairs of common objects. Sometimes the images were closely related: mouth and nose, for instance, or bottle and spoon. Other pairs were unrelated: blanket and dog, or juice and car.

When both objects were on the screen, the parents would say a sentence using one of the words: “Where is the nose?” for instance. If babies spent more time looking at the nose than the other object, researchers inferred that the babies had a good handle on that word.

When the babies were shown tricky pairs of closely related objects, like a cup of juice and a cup of milk, the babies spent nearly equal time looking at both pictures, no matter what word their parents said. But when the images were really distinct (juice and car, for instance), the babies spent more time looking at the object their parents had named.

These babies detected a difference between the “milk-juice” pair and the “juice-car” pair, recognizing that one pair is similar and the other isn’t, the researchers conclude November 20 in the Proceedings of the National Academy of Sciences.
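
The logic of the looking-time measure is simple enough to sketch. This is an illustrative toy, not the study’s actual analysis; the millisecond values and function name are invented:

```python
def target_preference(ms_on_target, ms_on_distractor):
    """Fraction of total looking time spent on the object the parent named."""
    total = ms_on_target + ms_on_distractor
    return ms_on_target / total if total else 0.5  # 0.5 = chance level

# Distinct pair ("juice" vs. "car"): looking skews toward the named object.
print(target_preference(1800, 1200))  # 0.6
# Closely related pair ("juice" vs. "milk"): looking stays near chance.
print(target_preference(1520, 1480))
```

A preference reliably above 0.5 for distinct pairs, but near 0.5 for related pairs, is the pattern the researchers report.
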

To see whether this ability was tied to babies’ everyday lives at home, the researchers sent the babies home with specialized gear: vests with audio recorders and adorable hats outfitted with small video cameras, one just above each ear. A camera on a tripod in a corner of the home also captured snippets of daily life. The resulting video and audio recordings revealed that babies whose caregivers used more nouns for objects in the room were better at the word task in the lab.

That suggests babies learn words best when they can actually see the object being talked about. Hearing “Open your mouth. Here comes the spoon!” as they watch the spoon come flying toward their face makes a bigger vocabulary impression than “Did you like riding in the car yesterday?”

A similar idea came from a recent study on preschoolers. These kids learned best when they saw one picture at a time (or when parents pointed at the relevant object). Babies — and older kids, too — like to see what you’re talking about.

It’s too early for the results to offer advice to parents, says Bergelson, a cognitive and developmental psychologist. “But I think one thing suggested by our work is that parents should consider their young baby to be a real conversational partner,” she says. “Even young infants are listening and learning about words and the world around them before they start talking themselves, and their caregivers make that possible.”

There’s still lots to figure out about how babies soak up vocabulary. And as scientists come up with more ways to peer into the mysterious inner workings of a baby’s mind, those answers might lead to even more interesting conversations with our babies.

When tumors fuse with blood vessels, clumps of breast cancer cells can spread

PHILADELPHIA — If you want to beat them, join them. Some breast cancer tumors may follow that strategy to spread through the body.

Breast cancer tumors can fuse with blood vessel cells, allowing clumps of cancer cells to break away from the main tumor and ride the bloodstream to other locations in the body, suggests preliminary research. Cell biologist Vanesa Silvestri of Johns Hopkins University School of Medicine presented the early work December 4 at the American Society for Cell Biology/European Molecular Biology Organization meeting.

Previous research has shown that cancer cells traveling in clusters have a better chance of spreading than loners do (SN: 1/10/15, p. 9). But how clusters of cells get into the bloodstream in the first place has been a mystery, in part because scientists couldn’t easily see inside tumors to find out.

So Silvestri and colleagues devised a see-through synthetic version of a blood vessel. The vessel ran through a transparent gel studded with tiny breast cancer tumors. A camera attached to a microscope allowed the researchers to record the tumors invading the artificial blood vessel. Sometimes the tumors pinched the blood vessel, eventually sealing it off. But in at least one case, a small tumor merged with the cells lining the faux blood vessel. Then tiny clumps of cancer cells broke away from the tumor and floated away in the fluid flowing through the capillary. More work is needed to confirm that the same process happens in the body, Silvestri said.

How to keep humans from ruining the search for life on Mars

The Okarian rover was in trouble. The yellow Humvee was making slow progress across a frigid, otherworldly landscape when planetary scientist Pascal Lee felt the rover tilt backward. Out the windshield, Lee, director of NASA’s Haughton Mars Project, saw only sky. The rear treads had broken through a crack in the sea ice and were sinking into the cold water.

True, there are signs of water on Mars, but not that much. Lee and his crew were driving the Okarian (named for the yellow Martians in Edgar Rice Burroughs’ novel The Warlord of Mars) across the Canadian Arctic to a research station in Haughton Crater that served in this dress rehearsal as a future Mars post. On a 496-kilometer road trip along the Northwest Passage, crew members pretended they were explorers on a long haul across the Red Planet to test what to expect if and when humans go to Mars.

What they learned in that April 2009 ride may become relevant sooner rather than later. NASA has declared its intention to send humans to Mars in the 2030s (SN Online: 5/24/16). The private sector plans to get there even earlier: In September, Elon Musk announced his aim to launch the first crewed SpaceX mission to Mars as soon as 2024.

“That’s not a typo,” Musk said in Australia at an International Astronautical Congress meeting. “Although it is aspirational.”

Musk’s six-year timeline has some astrobiologists in a panic. If humans arrive too soon, these researchers fear, any chance of finding evidence of life — past or present — on Mars may be ruined.

“It’s really urgent,” says astrobiologist Alberto Fairén of the Center for Astrobiology in Madrid and Cornell University. Humans take whole communities of microorganisms with them everywhere, spreading those bugs indiscriminately.

Planetary geologist Matthew Golombek of NASA’s Jet Propulsion Laboratory in Pasadena, Calif., agrees, adding, “If you want to know if life exists there now, you kind of have to approach that question before you send people.”

A long-simmering debate over how rigorously to protect other planets from Earth life, and how best to protect life on Earth from other planets, is coming to a boil. The prospect of humans arriving on Mars has triggered a flurry of meetings and a spike in research into what “planetary protection” really means.

One of the big questions is whether Mars has regions that might be suitable for life and so deserve special protection. Another is how big a threat Earth microbes might be to potential Martian life (recent studies hint less of a threat than expected). Still, the specter of human biomes mucking up the Red Planet before a life-hunting mission can even launch has raised bitter divisions within the Mars research community.

Mind the gaps
Before any robotic Mars mission launches, the spacecraft are scrubbed, scoured and sometimes scorched to remove Earth microbes. That’s so if scientists discover a sign of life on Mars, they’ll know the life did not just hitchhike from Cape Canaveral. The effort is also intended to prevent the introduction of harmful Earth life that could kill off any Martians, similar to how invasive species edge native organisms out of Earth’s habitats.

“If we send Earth organisms to a place where they can grow and thrive, then we might come back and find nothing but Earth organisms, even though there were Mars organisms there before,” says astrobiologist John Rummel of the SETI Institute in Mountain View, Calif. “That’s bad for science; it’s bad for the Martians. We’d be real sad about that.”

To avoid that scenario, spacefaring organizations have historically agreed to keep spacecraft clean. Governments and private companies alike abide by Article IX of the 1967 Outer Space Treaty, which calls for planetary exploration to avoid contaminating both the visited environment and Earth. In the simplest terms: Don’t litter, and wipe your feet before coming back into the house.

But this guiding principle doesn’t tell engineers how to avoid contamination. So the international Committee on Space Research (called COSPAR) has debated and refined the details of a planetary protection policy that meets the treaty’s requirement ever since. The most recent version dates from 2015 and has a page of guidelines for human missions.

In the last few years, the international space community has started to add a quantitative component to the rules for humans — specifying how thoroughly to clean spacecraft before launch, for instance, or how many microbes are allowed to escape from human quarters.

“It was clear to everybody that we need more refined technical requirements, not just guidelines,” says Gerhard Kminek, planetary protection officer for the European Space Agency and chair of COSPAR’s planetary protection panel, which sets the standards. And right now, he says, “we don’t know enough to do a good job.”

In March 2015, more than 100 astronomers, biologists and engineers met at NASA’s Ames Research Center in Moffett Field, Calif., and listed 25 “knowledge gaps” that need more research before quantitative rules can be written.

The gaps cover three categories: monitoring astronauts’ microbes, minimizing contamination and understanding how matter naturally travels around Mars. Rather than prevent contamination — probably impossible — the goal is to assess the risks and decide what risks are acceptable. COSPAR prioritized the gaps in October 2016 and will meet again in Houston in February to decide what specific experiments should be done.

Stick the landing
The steps required for any future Mars mission will depend on the landing spot. COSPAR currently says that robotic missions are allowed to visit “special regions” on Mars, defined as places where terrestrial organisms are likely to replicate, only if robots are cleaned before launch to 0.03 bacterial spores per square meter of spacecraft. In contrast, a robot going to a nonspecial region is allowed to bring 300 spores per square meter. These “spores,” or endospores, are dormant bacterial cells that can survive environmental stresses that would normally kill the organism.
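
Those cleanliness thresholds translate into a total spore budget proportional to a spacecraft’s surface area. A minimal sketch, using the COSPAR limits quoted above; the 50-square-meter rover is a made-up example, and the function name is mine:

```python
# COSPAR bioburden limits as quoted in the article.
SPECIAL_LIMIT = 0.03      # spores per square meter, for "special regions"
NONSPECIAL_LIMIT = 300.0  # spores per square meter, elsewhere on Mars

def spore_budget(area_m2, special):
    """Total allowed bacterial spores for a craft of the given surface area."""
    return area_m2 * (SPECIAL_LIMIT if special else NONSPECIAL_LIMIT)

# A hypothetical rover with 50 square meters of exposed surface:
print(spore_budget(50, special=True))   # 1.5 spores in total
print(spore_budget(50, special=False))  # 15,000 spores in total
```

The four-orders-of-magnitude gap between the two budgets is why cleaning to special-region standards drives up mission cost so sharply.
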

To date, any special regions are hypothetical, because none have been conclusively identified on Mars. But if a spacecraft finds that its location unexpectedly meets the special criteria, its mission might have to change on the spot.

The Viking landers, which in 1976 brought the first and only experiments to look for living creatures on Mars, were baked in an oven for hours before launch to clean the craft to special region standards.

“If you’re as clean as Viking, you can go anywhere on Mars,” says NASA planetary protection officer Catharine Conley. But no mission since, from the Pathfinder mission in the 1990s to the current Curiosity rover to the upcoming Mars 2020 and ExoMars rovers, has been cleared to access potentially special regions. That’s partly because of cost. A 2006 study by engineer Sarah Gavit of the Jet Propulsion Lab found that sterilizing a rover like Spirit or Opportunity (both launched in 2003) to Viking levels would cost up to 14 percent more than sterilizing it to a lower level. NASA has also backed away from looking for life after Viking’s search for Martian microbes came back inconclusive. The agency shifted focus to seeking signs of past habitability.

Although no place on Mars currently meets the special region criteria, some areas have conditions close enough to be treated with caution. In 2015, geologist Colin Dundas of the U.S. Geological Survey in Flagstaff, Ariz., and colleagues discovered what looked like streaks of salty water that appeared and disappeared in Gale Crater, where Curiosity is roving. Although those streaks were not declared special regions, the Curiosity team steered the rover clear of the area.
But evidence of flowing water on Mars bit the dust. In November, Dundas and colleagues reported in Nature Geoscience that the streaks are more likely to be tiny avalanches of sand. The reversal highlights how difficult it is to tell if a region on Mars is special or not.

However, on January 12 in Science, Dundas and colleagues reported finding eight slopes where layers of water ice were exposed at shallow depths (SN Online: 1/11/18). Those very steep spots would not be good landing sites for humans or rovers, but they suggest that nearby regions might have accessible ice within a meter or two of the surface.

If warm and wet conditions exist, that’s exactly where humans would want to go. Golombek has helped choose every Mars landing site since Pathfinder and has advised SpaceX on where to land its Red Dragon spacecraft, originally planned to bring the first crewed SpaceX mission to Mars. (Since then, SpaceX has announced it will use its BFR spacecraft instead, which might require shifts in landing sites.) The best landing sites for humans have access to water and are as close to the equator as possible, Golombek says. Low latitudes mean warmth, more solar power and a chance to use the planet’s rotation to help launch a rocket back to Earth.

That narrows the options. NASA’s first workshop on human landing sites, held in Houston in October 2015, identified more than 40 “exploration zones” within 50 degrees latitude of the equator, where astronauts could do science and potentially access raw materials for building and life support, including water.

Golombek helped SpaceX whittle its list to a handful of sites, including Arcadia Planitia and Deuteronilus Mensae, which show signs of having pure water ice buried beneath a thin layer of soil.

What makes these regions appealing for humans also makes them more likely to be good places for microbes to grow, putting a crimp in hopes for boots on the ground. But there are ways around the apparent barriers, Conley says. In particular, humans could land a safe distance from special regions and send clean robots to do the dirty work.

That suggestion raises a big question: How far is far enough? To figure out a safe distance, scientists need to know how well Earth microbes would survive on Mars in the first place, and how far those organisms would spread from a human habitat.
The most desirable places on Mars for human visits offer access to water in some form and are near the equator (for increased solar power and to get a boost when launching a return rocket). Rovers and landers have found evidence of a watery Martian past. Planners of future robotic and human missions have potential landing spots in mind. Map excludes polar regions.

A no-grow zone
Initial results suggest that Mars does a good job of sterilizing itself. “I’ve been trying to grow Earth bacteria in Mars conditions for 15 years, and it’s actually really hard to do,” says astrobiologist Andrew Schuerger of the University of Florida in Gainesville. “I think that risk is much lower than the scientific community might think.”

In 2013 in Astrobiology, Schuerger and colleagues published a list of more than a dozen factors that microbes on Mars would have to overcome, including a lot of ultraviolet radiation from the sun; extreme dryness, low pressure and freezing temperatures; and high levels of salts, oxidants and heavy metals in Martian soils.

Schuerger has tried to grow hundreds of species of bacteria and fungi in the cold, low-pressure and low-oxygen conditions found on Mars. Some species came from natural soils in the dry Arctic and other desert environments, and others were recovered from clean rooms where spacecraft were assembled.

Of all those attempts, he has had success with 31 bacteria and no fungi. Seeing how difficult it is to coax these hardy microbes to thrive gives him confidence to say: “The surface conditions on Mars are so harsh that it’s very unlikely that terrestrial bacteria and fungi will be able to establish a niche.”

There’s one factor Schuerger does worry about, though: salts, which can lower the freezing temperature of water. In a 2017 paper in Icarus, Schuerger and colleagues tested the survival of Bacillus subtilis, a well-studied bacterium found in human gastrointestinal tracts, in simulated Martian soils with various levels of saltiness.

B. subtilis can form a tough spore when stressed, which could keep it safe in extreme environments. Schuerger showed that dormant B. subtilis spores were mostly unaffected for up to 28 days in six different soils. But another bacterium that does not form spores was killed off. That finding suggests that spore-forming microbes — including ones that humans carry with them — could survive in soils moistened by briny waters.

The Okarian’s trek across the Arctic offers a ray of hope: Spores might not make it very far from human habitats. At three stops during the journey across the Arctic, Pascal Lee, of the SETI Institute, collected samples from the pristine snow ahead and dirtier snow behind the vehicle, as well as from the rover’s interior. Later, Lee sent the samples to Schuerger’s lab.

The researchers asked, if humans drive over a microbe-free pristine environment, would they contaminate it? “The answer was no,” Schuerger says.

And that was in an Earth environment with only one or two of Schuerger’s biocidal factors (low temperatures and slightly higher UV radiation than elsewhere on Earth) and with a rover crawling with human-associated microbes. The Okarian hosted 69 distinct bacteria and 16 fungi, Schuerger and Lee reported in 2015 in Astrobiology.

But when crew members ventured outside the rover, they barely left a mark. The duo found one fungus and one bacterium on both the rover and two snow sites, one downwind and one ahead of the direction of travel. Other than that, nothing, even though crew members made no effort to contain their microbes — they breathed and ate openly.

“We didn’t see dispersal when conditions were much more conducive to dispersal” than they will be on Mars, Schuerger says.

The International Space Station may be an even better place to study what happens when inhabited space vessels leak microbes. Michelle Rucker, an engineer at NASA’s Johnson Space Center in Houston, and her colleagues are testing a tool for astronauts to swab the outside of their spacesuits and the space station, and collect whatever microbes are already there.

“At this point, no one has defined what the allowable levels of human contamination are,” Rucker says. “We don’t know if we’d meet them, but more importantly, we’ve never checked our human systems to see where we’re at.”

Rucker and colleagues have had astronauts test the swab kit as part of their training on Earth. The researchers plan to present the first results from those tests in March in Big Sky, Mont., at the IEEE Aerospace Conference. If the team gets the tool flight-certified to test it on the ISS, the results could fill a knowledge gap about how much spaceships carrying humans will leak and vent microbes.

A Russian experiment on the ISS may be giving the first clues. In November 2017, Russian cosmonauts told TASS news service that they had found living bacteria on the outside of the ISS. Some of those microbes, swabbed near vents during spacewalks, were not found on the spacecraft’s exterior when it launched.

Blowing in the wind
These results are important, says Conley, but they don’t give enough information alone to write quantitative contamination rules.

That’s partly because of another knowledge gap: how dust and wind move around on Mars. If Martian dust storms carry microbes far enough, the invaders could contaminate potential special regions even if humans land a safe distance away.

To find out, COSPAR’s Kminek suggests sending a fleet of Mars landers to act as meteorological stations at several fixed locations. The landers could measure atmospheric conditions and dust properties over a long time. Such landers would be relatively inexpensive to build, he says, and could launch in advance of humans.

But these weather stations would have to get in line. There’s a launch window between Earth and Mars every two years, and the next few are already booked. Weather stations would have to be stationary, so they couldn’t be added to rover missions like ExoMars or Mars 2020.

That means it’s possible that SpaceX or another company will try to send humans to Mars before the reconnaissance missions necessary to write rules for planetary protection are even built. If COSPAR is the tortoise in this race, SpaceX is the hare, along with a few other private companies. Only SpaceX has a stated timeline. Other contenders, including Washington-based Blue Origin, founded by Amazon executive Jeff Bezos, and United Launch Alliance, based in Colorado, are developing rockets that some analysts say could be part of a mission to the moon or Mars.

Now or never
Those looming launches prompted Fairén and colleagues to make a controversial proposal. In an article in the October 2017 Astrobiology, provocatively titled “Searching for life on Mars before it is too late,” the team suggested sending existing or planned rovers, even those not at the height of cleanliness, to look directly for signs of Martian life.

Given the harsh Martian conditions, rovers are unlikely to contaminate regions that might turn out to be special on a closer look, the group argues. The invasive species argument is misleading, they say: Don’t compare a microbe transfer to taking Asian parrots to the Amazon rainforest, where they could thrive and edge out local parrots. It would be closer to taking them to Antarctica to freeze to death.

Even if Earth microbes did replicate on Mars, the researchers wrote, technology is advanced enough that scientists would be able to distinguish hitchhikers from Earth from true Mars life (SN: 4/30/16, p. 28).

In a sharp rebuttal, published in the same issue of Astrobiology, Rummel and Conley disagreed. “Why would you want to go there with a dirty spacecraft?” says Rummel, who was NASA’s planetary protection officer before Conley. “To spend a billion dollars to go find life from Florida on Mars is both irresponsible and completely scientifically indefensible.”

There’s also concern for the health and safety of future astronauts. Conley says that at a November meeting of epidemiologists who study the risks of Earth-based pandemics, she brought up the idea that scientists shouldn’t worry about getting sick if they encounter Earth organisms on Mars.

“The room burst out laughing,” she says. “This is a room full of medical doctors who deal with Ebola. The idea that we know about Earth organisms, and therefore they can’t hurt us, was literally laughable to them.”

Fairén has already drafted a response for a future issue of Astrobiology: “We acknowledge [that Rummel and Conley’s points] are informed and literate. Unfortunately, they are also unconvincing.”

The issue might come to a head in July in Pasadena, Calif., at the next meeting of COSPAR’s Scientific Assembly. Fairén and colleagues plan to push for more relaxed cleanliness rules.

That’s not likely to happen anytime soon. But with no concrete rules in place for humans, would a human mission even be allowed off the ground, whether NASA or SpaceX was at the helm? Currently, private U.S. companies must apply to the Federal Aviation Administration for a launch license, and for travel to another planet, that agency would probably ask NASA to weigh in.

It’s hard to know if anyone will actually be ready to send humans to Mars in the next decade. “You’d have to actually believe them to be scared,” says Rummel. “There are many unanswered questions about what Elon Musk wants to do. But I think we can calm down about people showing up on Mars unannounced.”

But SpaceX has defied expectations before and may give slow and steady a kick in the pants.

Top 10 papers from Physical Review’s first 125 years

No anniversary list is ever complete. Just last month, for instance, my Top 10 scientific anniversaries of 2018 omitted the publication two centuries ago of Mary Shelley’s Frankenstein. It should have at least received honorable mention.

Perhaps more egregious, though, was overlooking the 125th anniversary of the physics journal Physical Review. Since 1893, the Physical Review has published hundreds of thousands of papers and has long been regarded as the premier repository for reports of advances in humankind’s knowledge of the physical world. In recent decades it has split itself into subjournals (A through E, plus L — for Letters — and also X) to prevent excessive muscle building by librarians and also to better organize papers by physics subfield. (You don’t want to know what sorts of things get published in X.)

To celebrate the Physical Review anniversary, the American Physical Society (which itself is younger, forming in 1899 and taking charge of the journal in 1913) has released a list, selected by the journals’ editors, of noteworthy papers from Physical Review history.

The list comprises more than four dozen papers, oblivious to the concerns of journalists composing Top 10 lists. If you prefer the full list without a selective, arbitrary and idiosyncratic Top 10 filter, you can go straight to the Physical Review journals’ own list. But if you want to know which two papers the journal editors missed, you’ll have to read on.

  1. Millikan measures the electron’s charge, 1913.
    When J.J. Thomson discovered the electron in 1897, it was by proving the rays in cathode ray tubes were made up of a stream of particles. They carried a unit of electrical charge (hence their name). Thomson did not publish in the Physical Review. But Robert Millikan did in 1913 when he measured the strength of the electric charge on a single electron. He used oil drops, measuring how fast they fell through an electric field. Interacting with ions in the air gave each drop more or fewer electric charges, affecting how fast the drops fell. It was easy to calculate the smallest amount of charge consistent with the various changes in speed. (OK, it was not easy at all — it was a tough experiment and the calculations required corrections for all sorts of things.) Millikan’s answer was very close to today’s accepted value, and he won the Nobel Prize in 1923.
  2. Wave nature of electron, Davisson and Germer, 1927.
    J.J. Thomson’s son George also experimented with electrons, and showed that despite his father’s proof that they were particles, they also sometimes behaved like waves. George did not publish in the Physical Review. But Clinton Davisson and Lester Germer did; their paper established what came to be called the wave-particle duality. Their experiment confirmed the suspicions of Louis de Broglie, who had suggested the wave nature of electrons in 1924.
  3. Particle nature of X-rays, Compton, 1923.
    Actually, wave-particle duality was already on the physics agenda before de Broglie’s paper or Davisson and Germer’s experiment, thanks to Arthur Holly Compton. His experiments on X-rays showed that when they collided with electrons, momentum was transferred just as in collisions of particles. Nevertheless X-rays were definitely a form of electromagnetic radiation that moved as a wave, like light. Compton’s result was good news for Einstein, who had long argued that light had particle-like properties and could travel in the form of packets (later called photons).
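The kinematics Compton worked out are captured by his wavelength-shift formula: a photon scattered through angle theta comes out with its wavelength lengthened by (h/mc)(1 - cos theta). A quick sketch, with the physical constants rounded:

```python
import math

h = 6.626e-34    # Planck constant, J*s
m_e = 9.109e-31  # electron mass, kg
c = 2.998e8      # speed of light, m/s

# The natural length scale of the effect, now called the Compton wavelength.
compton_wavelength = h / (m_e * c)  # about 2.43e-12 m

def compton_shift(theta_deg):
    """Wavelength shift of a photon scattered off an electron through theta_deg degrees."""
    theta = math.radians(theta_deg)
    return compton_wavelength * (1 - math.cos(theta))
```

At 90 degrees the shift equals exactly one Compton wavelength; at zero degrees (no deflection) it vanishes, just as a particle-collision picture demands.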
  4. Discovery of antimatter, Carl Anderson, 1933.
    In the late 1920s, in the wake of the arrival of quantum mechanics, English physicist Paul Dirac was also interested in electrons. He applied his mathematical powers to devise an equation to explain them, and he succeeded. But he got out more than he put in. His equation yielded correct answers for an electron’s energy but also contained a negative root. That perplexed him; a negative energy for an electron seemed to make no physical sense. Still, the math was the math, and Dirac couldn’t ignore his own equation’s solutions. After some false steps, he decided that the negative energy implied the existence of a new kind of particle, identical to an electron except with an opposite electric charge (equal in magnitude to the charge that Millikan had measured). Dirac did not publish in the Physical Review. But Carl Anderson, who actually found Dirac’s antimatter electron in 1933, did. In cloud chamber observations of cosmic rays, Anderson spotted tracks of a lightweight positively charged particle, apparently Dirac’s antielectron. He titled his paper “The Positive Electron” and referred to the new particles as positrons. They were the first example of antimatter.
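Dirac’s trouble traces back to the relativistic energy relation, E squared = (pc) squared + (mc squared) squared, which has both a positive and a negative square root. A minimal illustration of the two roots (not Dirac’s derivation, just the arithmetic that perplexed him):

```python
import math

c = 2.998e8      # speed of light, m/s
m_e = 9.109e-31  # electron mass, kg

def electron_energies(p):
    """Both roots of E^2 = (p*c)^2 + (m*c^2)^2 for an electron with momentum p.
    The negative root is the one Dirac could not ignore."""
    e = math.sqrt((p * c) ** 2 + (m_e * c ** 2) ** 2)
    return e, -e

# For an electron at rest, the roots are simply +mc^2 and -mc^2.
e_plus, e_minus = electron_energies(0.0)
```

Reinterpreting that negative branch as a positively charged partner particle is what turned a mathematical embarrassment into the prediction of antimatter.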
  5. How stars shine, Hans Bethe, 1939.
    Since the dawn of science, astronomers had wondered how the sun shines. Some 19th century calculations suggested gravitational contraction as the energy source. But a sun powered by gravitational contraction would have burned itself out long ago. A new option for powering the sun appeared in the 1930s when physicists began to understand the energy released in nuclear reactions. In the simplest such reaction, two protons fused. That made sense as a solar power source, because a proton is the nucleus of a hydrogen atom and stars are made mostly of hydrogen. But at a conference in April 1938, experts including Hans Bethe of Cornell University concluded that proton fusion could not account for the energy output of the brightest stars. On the train back to Cornell, though, Bethe figured out the correct, more complicated nuclear reactions and soon sent a paper to the Physical Review. He asked the journal to delay publishing it so he could enter it in a contest (open to unpublished papers only). Bethe won the contest and then OK’d publication of his paper, which appeared in March 1939. For winning the contest, he received $500. For the published paper, his prize was delayed — until 1967. In that year he got the Nobel Prize: $61,700.
  6. Is quantum mechanics complete? Einstein, Podolsky and Rosen, 1935.
    Einstein was famous for a lot of things, including a stubborn resistance to the implications of quantum mechanics. His main objection was articulated in the Physical Review in May 1935 in a paper coauthored with physicists Nathan Rosen and Boris Podolsky. It presented a complicated argument that is frequently misrepresented or misunderstood (as I’ve discussed here previously), but the gist is he thought quantum mechanics was incomplete. Its math could not describe properties that were simultaneously “real” for two separated particles that had previously interacted. Decades later multiple experiments showed that quantum mechanics was in fact complete; reality is not as simple a concept as Einstein and colleagues would have liked. The “EPR paper” stimulated an enormous amount of interest in the foundations of quantum mechanics, though. And some people continue to believe E, P and R had a point.
  7. Is quantum mechanics complete? (Yes.) Bohr, 1935.
    Here’s one of the missing papers. Physical Review’s editors somehow forgot to include Niels Bohr’s reply to the EPR paper. In October 1935, Bohr published a detailed response in the Physical Review, outlining the misunderstandings that EPR had perpetrated. Later EPR experiments turned out exactly as Bohr would have expected. (An early example from 1982 is among the Physical Review anniversary papers, but not this Top 10 list.) Yet some present-day critics still believe that somehow Bohr was wrong and Einstein was right. He wasn’t.
  8. Gravitational waves detected by LIGO, 2016.
    Einstein was right about gravitational waves. After devising his general theory of relativity to explain gravity, he realized that it implied ripples in the very fabric of spacetime itself. Later he backed off, doubting his original conclusion. But he was right the first time: A mass abruptly changing its speed or direction of movement should emit waves in space. Violent explosions or collisions would create ripples sufficiently strong to be detectable, if you spent a billion dollars or so to build some giant detectors. In a hopeful sign for humankind, the U.S. National Science Foundation put up the money and two black holes provided the collision in 2015, as reported in February 2016 in Physical Review Letters and widely celebrated by bloggers.
  9. Explaining nuclear fission, Bohr and Wheeler, 1939.
    On September 1, 1939, the opening day of World War II, the Physical Review published a landmark paper describing the theory of nuclear fission. It was a quick turnaround, as fission had been discovered only in December 1938, in Germany. While Einstein was writing a letter to warn President Roosevelt of fission’s potential danger in the hands of Nazis, Bohr and John Archibald Wheeler figured out how fission happened. Their paper provided essential theoretical knowledge for the Manhattan Project, which led to the development of the atomic bomb, and later to the use of nuclear energy as a power source.
  10. Oppenheimer and Snyder describe black holes, 1939.
    The process of black hole formation was first described by J. Robert Oppenheimer and Hartland Snyder in the same issue of the Physical Review as Bohr and Wheeler’s fission paper. Of course, the name black hole didn’t exist yet, but Oppenheimer and Snyder thoroughly explained how a massive star contracting under the inward pull of its own gravity would eventually disappear from view. “The star thus tends to close itself off from any communication with a distant observer; only its gravitational field persists,” they wrote. Nobody paid any attention to black holes then, though, because Oppenheimer soon became scientific director of the Manhattan Project’s Los Alamos laboratory (a job requiring him to read Bohr and Wheeler’s paper). It wasn’t until the late 1960s that black holes became a household name, thanks to Wheeler (who eventually got around to reading Oppenheimer and Snyder’s paper). Yet for some reason the Physical Review editors omitted the Oppenheimer-Snyder paper from their list, verifying that no such list is ever complete, even if you have dozens of items instead of only 10.
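The radius at which a collapsing star “closes itself off” is what we now call the Schwarzschild radius, r = 2GM/c squared. A back-of-the-envelope sketch (my illustration, not a calculation from the Oppenheimer-Snyder paper):

```python
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s
M_SUN = 1.989e30 # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Radius within which nothing, light included, can escape: r = 2*G*M/c^2."""
    return 2 * G * mass_kg / c ** 2

# Squeeze one solar mass inside roughly 3 kilometers and it closes itself off.
r_sun = schwarzschild_radius(M_SUN)
```

That a whole sun’s worth of matter must be packed inside a few kilometers helps explain why nobody took the idea seriously for three decades.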