New CRISPR gene editors can fix RNA and DNA one typo at a time

New gene-editing tools can correct typos that account for about half of disease-causing genetic spelling errors.

Researchers have revamped the CRISPR/Cas9 gene editor so that it converts the DNA base adenine to guanine, biological chemist David Liu and colleagues report October 25 in Nature. In a separate study, published October 25 in Science, other researchers led by CRISPR pioneer Feng Zhang re-engineered a gene editor called CRISPR/Cas13 to correct the same typos in RNA instead of DNA.
Together with other versions of CRISPR/Cas9, the new editors offer scientists an expanded set of precision tools for correcting diseases.

CRISPR/Cas9 is a molecular scissors that snips DNA. Scientists can guide the scissors to the place they want to cut in an organism’s genetic instruction book with a guide RNA that matches DNA at the target site. The tool has been used to make mutations or correct them in animals and in human cells, including human embryos (SN: 10/14/17, p. 8).

A variety of innovations allow CRISPR/Cas9 to change genetic instructions without cutting DNA (SN: 9/3/16, p. 22). Earlier versions of these “base editors,” which target typos related to the other half of disease-causing genetic spelling errors, have already been used to alter genes in plants, fish, mice and even human embryos.
Such noncutting gene editors are possibly safer than traditional DNA-cutting versions, says Gene Yeo, an RNA biologist at the University of California, San Diego. “We know there are drawbacks to cutting DNA,” he says. Mistakes often arise when cellular machinery attempts to repair DNA breaks. And although generally accurate, CRISPR sometimes cuts DNA at places similar to the target, raising the possibility of introducing new mutations elsewhere. Such “permanent irreversible edits at the wrong place in the DNA could be bad,” Yeo says. “These two papers have different ways to solve that problem.”
The new editors allow researchers to rewrite all four bases that store information in DNA and RNA. Those four bases are adenine (A), which pairs with thymine (T) in DNA (or with uracil (U) in RNA), and guanine (G), which pairs with cytosine (C). Mutations that change C-G base pairs to T-A pairs happen 100 to 500 times every day in human cells. Most of those mutations are probably benign, but some may alter a protein’s structure and function, or interfere with gene activity, leading to disease. About half of the 32,000 mutations associated with human genetic diseases are this type of C-G to T-A change, says Liu, a Howard Hughes Medical Institute investigator at Harvard University. Until now, there was little anyone could do about it, he says.
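The pairing rules and the C-G to T-A typo can be sketched in a few lines of Python. This is an illustration only; the sequence and the mutation position are invented, not taken from the studies:

```python
# Watson-Crick pairing: each base on one DNA strand fixes its partner.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the complementary strand (read in the same direction)."""
    return "".join(PAIR[base] for base in strand)

healthy = "GATTC"
print(complement(healthy))  # CTAAG -- every C-G and A-T pair intact

# The common typo: a C mutates to T, so its partner G becomes A,
# turning a C-G base pair into a T-A pair.
mutated = "GATTT"
print(complement(mutated))  # CTAAA

# An A-to-G base editor reverses the damage by converting the A on the
# complementary strand back to G, restoring the original C-G pair.
```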

In RNA, DNA’s chemical cousin, some naturally occurring enzymes can reverse this common mutation. Such enzymes chemically convert adenine to inosine (I), which the cell interprets as G. Such RNA editing happens frequently in octopuses and other cephalopods and sometimes in humans (SN: 4/29/17, p. 6).
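As a rough sketch of that read-through effect (the sequence, position and helper names here are invented for illustration), an A-to-I edit works like an A-to-G fix because of how the cell reads inosine:

```python
# Sketch of RNA editing: converting an adenine (A) to inosine (I),
# which the cell's machinery then reads as guanine (G).

def edit_a_to_i(rna: str, position: int) -> str:
    """Swap the adenine at `position` for inosine (hypothetical helper)."""
    if rna[position] != "A":
        raise ValueError("target base is not adenine")
    return rna[:position] + "I" + rna[position + 1:]

def as_read_by_cell(rna: str) -> str:
    """The cell interprets every inosine as if it were guanine."""
    return rna.replace("I", "G")

message = "CAUAG"                 # carries a disease-linked A at index 3
edited = edit_a_to_i(message, 3)  # "CAUIG"
print(as_read_by_cell(edited))    # CAUGG -- effectively an A-to-G repair
```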

Zhang, of the Broad Institute of MIT and Harvard, and colleagues made an RNA-editing enzyme called ADAR2 into a programmable gene-editing tool. The team started with CRISPR/Cas13, molecular scissors that normally cut RNA. Dulling the blades let the tool grasp instead of slice. Zhang and colleagues then bolted the A-to-I converting portion of ADAR2 onto CRISPR/Cas13. Dubbed REPAIR, the conglomerate tool edited from 13 percent to about 27 percent of RNAs of two genes in human cells grown in dishes. The researchers did not detect any undesired changes.

Editing RNA is good for temporary fixes, such as shutting down inflammation-promoting proteins. But fixing many mutations requires permanent DNA repairs, says Liu.

In 2016, Liu’s team made a base editor that converts C to T. Chinese researchers reported in Protein & Cell on September 23 that they used the old base editor in human embryos to repair a mutation that causes the blood disorder beta-thalassemia. But that editor couldn’t make the opposite change, switching A to G.

Unlike with RNA, no enzymes naturally make the A-to-I conversion in DNA. So Nicole Gaudelli in Liu’s lab forced E. coli bacteria to evolve one. Then the researchers bolted the E. coli DNA converter, TadA, to a “dead” version of Cas9, disabled so it couldn’t cut both strands of DNA. The result was a base editor, called ABE, that could switch A-T base pairs into G-C pairs in about 50 percent of human cells tested.

This base editor works more like a pencil than scissors, Liu says. In lab dishes, Liu’s team corrected a mutation in human cells from a patient with an iron-storage blood disorder called hereditary hemochromatosis. The team also re-created beneficial mutations that allow blood cells to keep making fetal hemoglobin. Those mutations are known to protect against sickle cell anemia.

Another group reported in the October Protein & Cell that base editing appears to be safer than traditional cut-and-paste CRISPR/Cas9 editing. Liu’s results seem to support that. His team found that cut-and-paste CRISPR/Cas9 made changes at nine of 12 possible “off-target” sites, about 14 percent of the time. The new A-to-G base editor altered just four of the 12 off-target sites, and only 1.3 percent of the time.

That’s not to say cut-and-paste editing isn’t useful, Liu says. “Sometimes, if your task is to cut something, you’re not going to do that with a pencil. You need scissors.”

Here’s why some water striders have fans on their legs

For an animal already amazing enough to walk on water, what could growing feather fans on its legs possibly add?

These fans have preoccupied Abderrahman Khila of the University of Lyon in France, who keeps some 30 species of bugs called water striders walking on water in tanks in his lab, without getting their long, elegant legs wet.

“Walk” may be too humdrum a word. The 2,200 or so known species of water striders worldwide can zip, skim, skate and streak. Among such damp-defying acrobats, however, only the Rhagovelia genus grows a fan of delicate feathers on the middle pair of its six legs. Even little hatchlings head-banging their way out of underwater eggs have a pair of feathery microfluffs for their perilous swim up to cruise the water’s surface.
A first guess at a function — maybe plumes help support bigger adults — would be wrong, Khila says. The Rhagovelia are not giants among water striders. In a jar of alcohol in his lab, he treasures a specimen of a much bigger species, with a body about the size of a peanut and a leg span that can straddle a CD. Yet this King Kong among striders, found in Vietnam and China, slides over the water as other species do, cushioned by air trapped in dense hydrophobic leg bristles. No froufrou feathers needed.
Fans aren’t required either for water striders’ action-packed, often violent lives. “In the lab, they eat each other all the time,” Khila says. A newly molted strider, still soft and weak after 10 minutes of wriggling out of its old external skeleton, can get mobbed by cannibals. Any other insect, such as a mosquito, that lands on the water surface also triggers a frenzy. Small striders “start to attack the legs of the mosquito,” he says, “and seconds later there are 50 water striders gathered around.” With their tubelike mouthparts, the striders stab holes in the victim and inject enzymes to liquefy flesh into a meat shake to suck out.

For these Rhagovelia, Khila sees the fans as “one of those examples of ‘key evolutionary innovations,’” traits that just “pop up” in evolutionary history with no clear line of precursors or partial forms, he says. Now he and his colleagues have identified a fan benefit. When they removed plumes from the bugs or suppressed genes for fan formation, the mutant striders couldn’t turn as deftly or run upstream against the current as fully fanned Rhagovelia can, the researchers report in the Oct. 20 Science. Striders in a closely related but fanless genus were likewise hampered. The innovative fan opened up new territory, helping the insects navigate flowing water, the researchers conclude.

Fan-maker genes were intriguing in another way. Evolutionary biologists have long debated whether such evolutionary innovations just repurpose and recombine old developmental genes or actually rely on new ones. In the case of the fans, two genes, which the researchers named geisha and mother-of-geisha after geisha fans, are unique to this genus, but three other genes are repurposed. So in a twist on an old debate, Khila says, “neither hypothesis is wrong.”

Humans are driving climate change, federal scientists say

It is “extremely likely” that humans have been driving warming on Earth since the 1950s. That statement — which indicates a 95 to 100 percent confidence in the finding — came in a report released November 3 by the U.S. Global Change Research Program. This interagency effort was established in 1989 by presidential initiative to help inform national science policy.

The 2017 Climate Science Special Report, which lays out the current state of scientific knowledge on climate change, will be rolled into the fourth National Climate Assessment, set to be released in late 2018.
The last national climate assessment, released in 2014, also concluded that recent warming was mostly due to humans, but didn’t give a confidence level (SN Online: 5/6/14). Things haven’t gotten better. Ice sheet melting has accelerated, the 2017 report finds. As a result, projections of possible average global sea level rise by 2100 under a high greenhouse gas emissions scenario (in which emissions rise unabated throughout the 21st century) have increased from 2 meters to as much as 2.6 meters.

In addition, the report notes that three of the warmest years on record — 2014, 2015 and 2016 — occurred since the last report was released; those years also had record-low sea ice extent in the Arctic Ocean in the summer.

The report also notes some still-unresolved questions that have become increasingly active areas of research. One big one: How will climate change alter atmospheric circulation in the mid-latitude areas? Scientists are wrangling with whether and how these changes will affect storm patterns and contribute to extreme weather events, including blizzards and drought.

Six-month-old babies know words for common things, but struggle with similar nouns

Around the six-month mark, babies start to get really fun. They’re not walking or talking, but they are probably babbling, grabbing and gumming, and teaching us about their likes and dislikes. I remember this as the time when my girls’ personalities really started making themselves known, which, really, is one of the best parts of raising a kid. After months of staring at those beautiful, bald heads, you start to get a glimpse of what’s going on inside them.

When it comes to learning language, it turns out that a lot has already happened inside those baby domes by age 6 months. A new study finds that babies this age understand quite a bit about words — in particular, the relationships between nouns.
Work with toddlers, and even adults, reveals that people can struggle with word meanings under difficult circumstances. We might briefly falter over “shoe” when an image of a shoe is shown next to a boot, for instance, but not when the shoe appears next to a hat. But researchers wanted to know how early these sorts of word relationships form.

Psychologists Elika Bergelson of Duke University and Richard Aslin, formerly of the University of Rochester in New York and now at Haskins Laboratories in New Haven, Conn., put 51 6-month-olds to a similar test. Outfitted with eye-tracking gear, the babies sat on a parent’s lap and looked at a video screen that showed pairs of common objects. Sometimes the images were closely related: mouth and nose, for instance, or bottle and spoon. Other pairs were unrelated: blanket and dog, or juice and car.

When both objects were on the screen, the parents would say a sentence using one of the words: “Where is the nose?” for instance. If babies spent more time looking at the nose than the other object, researchers inferred that the babies had a good handle on that word.

When the babies were shown tricky pairs of closely related objects, like a cup of juice and a cup of milk, the babies spent nearly equal time looking at both pictures, no matter what word their parents said. But when the images were really distinct (juice and car, for instance), the babies spent more time looking at the object named by the spoken word.
These babies detected a difference between the “milk-juice” pair and the “juice-car” pair, recognizing that one pair is similar and the other isn’t, the researchers conclude November 20 in the Proceedings of the National Academy of Sciences.
To see whether this ability was tied to domestic life, the researchers sent the babies home with specialized gear: vests with audio recorders and adorable hats outfitted with small video cameras, one just above each ear. A camera on a tripod in a corner of the home also captured snippets of daily life. The resulting video and audio recordings revealed that babies whose caregivers used more nouns for objects in the room were better at the word task in the lab.

That means that babies learn words well when they can actually see the object being talked about. Hearing, “Open your mouth. Here comes the spoon!” as they watch the spoon come flying toward their face makes a bigger vocabulary impression than “Did you like riding in the car yesterday?”

A similar idea came from a recent study on preschoolers. These kids learned best when they saw one picture at a time (or when parents pointed at the relevant object). Babies — and older kids, too — like to see what you’re talking about.

It’s too early for the results to translate into advice for parents, says Bergelson, a cognitive and developmental psychologist. “But I think one thing suggested by our work is that parents should consider their young baby to be a real conversational partner,” she says. “Even young infants are listening and learning about words and the world around them before they start talking themselves, and their caregivers make that possible.”

There’s still lots to figure out about how babies soak up vocabulary. And as scientists come up with more ways to peer into the mysterious inner workings of a baby’s mind, those answers might lead to even more interesting conversations with our babies.

Even a tiny oil spill spells bad news for birds

MINNEAPOLIS — Birds don’t need to be drenched in crude oil to be harmed by spills and leaks.

Ingesting even small amounts of oil can interfere with the animals’ normal behavior, researchers reported November 15 at the annual meeting of the Society of Environmental Toxicology and Chemistry North America. Birds can take in these smaller doses by preening slightly greasy feathers or eating contaminated food, for example.

Big oil spills, such as the 2010 Deepwater Horizon disaster, leave a trail of dead and visibly oily birds (SN: 4/18/15, p. 22). But incidents like last week’s 5,000-barrel spill from the Keystone pipeline — and smaller spills that don’t make national headlines — can also impact wildlife, even if they don’t spur dramatic photos.
To test how oil snacks might affect birds, researchers fed zebra finches small amounts of crude oil or peanut oil for two weeks, then analyzed the birds’ blood and behavior. Birds fed the crude oil were less active and spent less time preening their feathers than birds fed peanut oil, said study coauthor Christopher Goodchild, an ecotoxicologist at Oklahoma State University in Stillwater.

Oil-soaked birds will often preen excessively to try to remove the oil, sometimes at the expense of other important activities such as feeding. But in this case, the birds didn’t have any crude oil on their feathers, so the decrease in preening is probably a sign the birds weren’t feeling well, the researchers say.

Exactly how the oil affects the birds’ activity levels isn’t clear. Researchers suspected that oil might deprive birds of oxygen by affecting hemoglobin, which carries oxygen in the blood. Blood tests didn’t turn up any evidence of damaged hemoglobin proteins but did find some evidence that oil-sipping birds might be anemic, Goodchild said. At the higher of two crude oil doses, birds’ blood contained less hemoglobin per red blood cell, a sign of anemia.
The findings, while preliminary, add to a growing pile of evidence that estimates of the number of animals impacted by oil spills might be too low. For instance, even a light sheen of oil on sandpipers’ wings makes it harder to fly, costing birds more energy, a different group of researchers reported earlier this year. That could affect everything from birds’ daily movements to long-distance migration.

When tumors fuse with blood vessels, clumps of breast cancer cells can spread

PHILADELPHIA — If you want to beat them, join them. Some breast cancer tumors may follow that strategy to spread through the body.

Breast cancer tumors can fuse with blood vessel cells, allowing clumps of cancer cells to break away from the main tumor and ride the bloodstream to other locations in the body, suggests preliminary research. Cell biologist Vanesa Silvestri of Johns Hopkins University School of Medicine presented the early work December 4 at the American Society for Cell Biology/European Molecular Biology Organization meeting.

Previous research has shown that cancer cells traveling in clusters have a better chance of spreading than loners do (SN: 1/10/15, p. 9). But how clusters of cells get into the bloodstream in the first place has been a mystery, in part because scientists couldn’t easily see inside tumors to find out.

So Silvestri and colleagues devised a see-through synthetic version of a blood vessel. The vessel ran through a transparent gel studded with tiny breast cancer tumors. A camera attached to a microscope allowed the researchers to record the tumors invading the artificial blood vessel. Sometimes the tumors pinched the blood vessel, eventually sealing it off. But in at least one case, a small tumor merged with the cells lining the faux blood vessel. Then tiny clumps of cancer cells broke away from the tumor and floated away in the fluid flowing through the capillary. More work is needed to confirm that the same process happens in the body, Silvestri said.

How to keep humans from ruining the search for life on Mars

The Okarian rover was in trouble. The yellow Humvee was making slow progress across a frigid, otherworldly landscape when planetary scientist Pascal Lee felt the rover tilt backward. Out the windshield, Lee, director of NASA’s Haughton Mars Project, saw only sky. The rear treads had broken through a crack in the sea ice and were sinking into the cold water.

True, there are signs of water on Mars, but not that much. Lee and his crew were driving the Okarian (named for the yellow Martians in Edgar Rice Burroughs’ novel The Warlord of Mars) across the Canadian Arctic to a research station in Haughton Crater that served in this dress rehearsal as a future Mars post. On a 496-kilometer road trip along the Northwest Passage, crew members pretended they were explorers on a long haul across the Red Planet to test what to expect if and when humans go to Mars.

What they learned in that April 2009 ride may become relevant sooner rather than later. NASA has declared its intention to send humans to Mars in the 2030s (SN Online: 5/24/16). The private sector plans to get there even earlier: In September, Elon Musk announced his aim to launch the first crewed SpaceX mission to Mars as soon as 2024.

“That’s not a typo,” Musk said in Australia at an International Astronautical Congress meeting. “Although it is aspirational.”

Musk’s six-year timeline has some astrobiologists in a panic. If humans arrive too soon, these researchers fear, any chance of finding evidence of life — past or present — on Mars may be ruined.

“It’s really urgent,” says astrobiologist Alberto Fairén of the Center for Astrobiology in Madrid and Cornell University. Humans take whole communities of microorganisms with them everywhere, spreading those bugs indiscriminately.

Planetary geologist Matthew Golombek of NASA’s Jet Propulsion Laboratory in Pasadena, Calif., agrees, adding, “If you want to know if life exists there now, you kind of have to approach that question before you send people.”

A long-simmering debate over how rigorously to protect other planets from Earth life, and how best to protect life on Earth from other planets, is coming to a boil. The prospect of humans arriving on Mars has triggered a flurry of meetings and a spike in research into what “planetary protection” really means.

One of the big questions is whether Mars has regions that might be suitable for life and so deserve special protection. Another is how big a threat Earth microbes might be to potential Martian life (recent studies hint the threat is smaller than expected). Still, the specter of human biomes mucking up the Red Planet before a life-hunting mission can even launch has raised bitter divisions within the Mars research community.
Mind the gaps
Before any robotic Mars mission launches, the spacecraft are scrubbed, scoured and sometimes scorched to remove Earth microbes. That’s so if scientists discover a sign of life on Mars, they’ll know the life did not just hitchhike from Cape Canaveral. The effort is also intended to prevent the introduction of harmful Earth life that could kill off any Martians, similar to how invasive species edge native organisms out of Earth’s habitats.

“If we send Earth organisms to a place where they can grow and thrive, then we might come back and find nothing but Earth organisms, even though there were Mars organisms there before,” says astrobiologist John Rummel of the SETI Institute in Mountain View, Calif. “That’s bad for science; it’s bad for the Martians. We’d be real sad about that.”

To avoid that scenario, spacefaring organizations have historically agreed to keep spacecraft clean. Governments and private companies alike abide by Article IX of the 1967 Outer Space Treaty, which calls for planetary exploration to avoid contaminating both the visited environment and Earth. In the simplest terms: Don’t litter, and wipe your feet before coming back into the house.

But this guiding principle doesn’t tell engineers how to avoid contamination. So the international Committee on Space Research (called COSPAR) has debated and refined the details of a planetary protection policy that meets the treaty’s requirement ever since. The most recent version dates from 2015 and has a page of guidelines for human missions.

In the last few years, the international space community has started to add a quantitative component to the rules for humans — specifying how thoroughly to clean spacecraft before launch, for instance, or how many microbes are allowed to escape from human quarters.

“It was clear to everybody that we need more refined technical requirements, not just guidelines,” says Gerhard Kminek, planetary protection officer for the European Space Agency and chair of COSPAR’s planetary protection panel, which sets the standards. And right now, he says, “we don’t know enough to do a good job.”

In March 2015, more than 100 astronomers, biologists and engineers met at NASA’s Ames Research Center in Moffett Field, Calif., and listed 25 “knowledge gaps” that need more research before quantitative rules can be written.

The gaps cover three categories: monitoring astronauts’ microbes, minimizing contamination and understanding how matter naturally travels around Mars. Rather than prevent contamination — probably impossible — the goal is to assess the risks and decide what risks are acceptable. COSPAR prioritized the gaps in October 2016 and will meet again in Houston in February to decide what specific experiments should be done.
Stick the landing
The steps required for any future Mars mission will depend on the landing spot. COSPAR currently says that robotic missions are allowed to visit “special regions” on Mars, defined as places where terrestrial organisms are likely to replicate, only if robots are cleaned before launch to 0.03 bacterial spores per square meter of spacecraft. In contrast, a robot going to a nonspecial region is allowed to bring 300 spores per square meter. These “spores,” or endospores, are dormant bacterial cells that can survive environmental stresses that would normally kill the organism.
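Those per-square-meter limits scale with the size of the spacecraft. A back-of-envelope sketch (the surface area used here is a made-up example, not a figure from the COSPAR policy):

```python
# COSPAR cleanliness limits, in dormant bacterial spores per square meter.
SPECIAL_REGION_LIMIT = 0.03   # for missions to "special regions"
NONSPECIAL_LIMIT = 300.0      # for everywhere else on Mars

def allowed_spores(surface_area_m2: float, special_region: bool) -> float:
    """Total spore allowance for a craft with the given exposed surface area."""
    limit = SPECIAL_REGION_LIMIT if special_region else NONSPECIAL_LIMIT
    return surface_area_m2 * limit

rover_area = 50.0  # hypothetical exposed surface, in square meters
print(allowed_spores(rover_area, special_region=True))   # 1.5
print(allowed_spores(rover_area, special_region=False))  # 15000.0
```

The 10,000-fold gap between the two allowances is why Viking-level sterilization, which meets the stricter budget, drives up mission cost.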

To date, any special regions are hypothetical, because none have been conclusively identified on Mars. But if a spacecraft finds that its location unexpectedly meets the special criteria, its mission might have to change on the spot.

The Viking landers, which in 1976 brought the first and only experiments to look for living creatures on Mars, were baked in an oven for hours before launch to clean the craft to special region standards.

“If you’re as clean as Viking, you can go anywhere on Mars,” says NASA planetary protection officer Catharine Conley. But no mission since, from the Pathfinder mission in the 1990s to the current Curiosity rover to the upcoming Mars 2020 and ExoMars rovers, has been cleared to access potentially special regions. That’s partly because of cost. A 2006 study by engineer Sarah Gavit of the Jet Propulsion Lab found that sterilizing a rover like Spirit or Opportunity (both launched in 2003) to Viking levels would cost up to 14 percent more than sterilizing it to a lower level. NASA has also backed away from looking for life after Viking’s search for Martian microbes came back inconclusive. The agency shifted focus to seeking signs of past habitability.

Although no place on Mars currently meets the special region criteria, some areas have conditions close enough to be treated with caution. In 2015, geologist Colin Dundas of the U.S. Geological Survey in Flagstaff, Ariz., and colleagues discovered what looked like streaks of salty water that appeared and disappeared in Gale Crater, where Curiosity is roving. Although those streaks were not declared special regions, the Curiosity team steered the rover clear of the area.
But evidence of flowing water on Mars bit the dust. In November, Dundas and colleagues reported in Nature Geoscience that the streaks are more likely to be tiny avalanches of sand. The reversal highlights how difficult it is to tell if a region on Mars is special or not.


However, on January 12 in Science, Dundas and colleagues reported finding eight slopes where layers of water ice were exposed at shallow depths (SN Online: 1/11/18). Those very steep spots would not be good landing sites for humans or rovers, but they suggest that nearby regions might have accessible ice within a meter or two of the surface.

If warm and wet conditions exist, that’s exactly where humans would want to go. Golombek has helped choose every Mars landing site since Pathfinder and has advised SpaceX on where to land its Red Dragon spacecraft, originally planned to bring the first crewed SpaceX mission to Mars. (Since then, SpaceX has announced it will use its BFR spacecraft instead, which might require shifts in landing sites.) The best landing sites for humans have access to water and are as close to the equator as possible, Golombek says. Low latitudes mean warmth, more solar power and a chance to use the planet’s rotation to help launch a rocket back to Earth.

That narrows the options. NASA’s first workshop on human landing sites, held in Houston in October 2015, identified more than 40 “exploration zones” within 50 degrees latitude of the equator, where astronauts could do science and potentially access raw materials for building and life support, including water.

Golombek helped SpaceX whittle its list to a handful of sites, including Arcadia Planitia and Deuteronilus Mensae, which show signs of having pure water ice buried beneath a thin layer of soil.

What makes these regions appealing for humans also makes them more likely to be good places for microbes to grow, putting a crimp in hopes for boots on the ground. But there are ways around the apparent barriers, Conley says. In particular, humans could land a safe distance from special regions and send clean robots to do the dirty work.

That suggestion raises a big question: How far is far enough? To figure out a safe distance, scientists need to know how well Earth microbes would survive on Mars in the first place, and how far those organisms would spread from a human habitat.
The most desirable places on Mars for human visits offer access to water in some form and are near the equator (for increased solar power and to get a boost when launching a return rocket). Rovers and landers have found evidence of a watery Martian past. Planners of future robotic and human missions have potential landing spots in mind. Map excludes polar regions.

A no-grow zone
Initial results suggest that Mars does a good job of sterilizing itself. “I’ve been trying to grow Earth bacteria in Mars conditions for 15 years, and it’s actually really hard to do,” says astrobiologist Andrew Schuerger of the University of Florida in Gainesville. “I think that risk is much lower than the scientific community might think.”

In 2013 in Astrobiology, Schuerger and colleagues published a list of more than a dozen factors that microbes on Mars would have to overcome, including a lot of ultraviolet radiation from the sun; extreme dryness, low pressure and freezing temperatures; and high levels of salts, oxidants and heavy metals in Martian soils.
Schuerger has tried to grow hundreds of species of bacteria and fungi in the cold, low-pressure and low-oxygen conditions found on Mars. Some species came from natural soils in the dry Arctic and other desert environments, and others were recovered from clean rooms where spacecraft were assembled.

Of all those attempts, he has had success with 31 bacteria and no fungi. Seeing how difficult it is to coax these hardy microbes to thrive gives him confidence to say: “The surface conditions on Mars are so harsh that it’s very unlikely that terrestrial bacteria and fungi will be able to establish a niche.”

There’s one factor Schuerger does worry about, though: salts, which can lower the freezing temperature of water. In a 2017 paper in Icarus, Schuerger and colleagues tested the survival of Bacillus subtilis, a well-studied bacterium found in human gastrointestinal tracts, in simulated Martian soils with various levels of saltiness.

B. subtilis can form a tough spore when stressed, which could keep it safe in extreme environments. Schuerger showed that dormant B. subtilis spores were mostly unaffected for up to 28 days in six different soils. But another bacterium that does not form spores was killed off. That finding suggests that spore-forming microbes — including ones that humans carry with them — could survive in soils moistened by briny waters.

The Okarian’s trek across the Arctic offers a ray of hope: Spores might not make it very far from human habitats. At three stops during the journey across the Arctic, Pascal Lee, of the SETI Institute, collected samples from the pristine snow ahead and dirtier snow behind the vehicle, as well as from the rover’s interior. Later, Lee sent the samples to Schuerger’s lab.

The researchers asked, if humans drive over a microbe-free pristine environment, would they contaminate it? “The answer was no,” Schuerger says.

And that was in an Earth environment with only one or two of Schuerger’s biocidal factors (low temperatures and slightly higher UV radiation than elsewhere on Earth) and with a rover crawling with human-associated microbes. The Okarian hosted 69 distinct bacteria and 16 fungi, Schuerger and Lee reported in 2015 in Astrobiology.

But when crew members ventured outside the rover, they barely left a mark. The duo found one fungus and one bacterium on both the rover and two snow sites, one downwind and one ahead of the direction of travel. Other than that, nothing, even though crew members made no effort to contain their microbes — they breathed and ate openly.

“We didn’t see dispersal when conditions were much more conducive to dispersal” than they will be on Mars, Schuerger says.

The International Space Station may be an even better place to study what happens when inhabited space vessels leak microbes. Michelle Rucker, an engineer at NASA’s Johnson Space Center in Houston, and her colleagues are testing a tool for astronauts to swab the outside of their spacesuits and the space station, and collect whatever microbes are already there.

“At this point, no one has defined what the allowable levels of human contamination are,” Rucker says. “We don’t know if we’d meet them, but more importantly, we’ve never checked our human systems to see where we’re at.”

Rucker and colleagues have had astronauts test the swab kit as part of their training on Earth. The researchers plan to present the first results from those tests in March in Big Sky, Mont., at the IEEE Aerospace Conference. If the team gets the tool flight-certified to test it on the ISS, the results could fill a knowledge gap about how much spaceships carrying humans will leak and vent microbes.

A Russian experiment on the ISS may be giving the first clues. In November 2017, Russian cosmonauts told TASS news service that they had found living bacteria on the outside of the ISS. Some of those microbes, swabbed near vents during spacewalks, were not found on the spacecraft’s exterior when it launched.

Blowing in the wind
These results are important, says Conley, but on their own they don’t give enough information to write quantitative contamination rules.

That’s partly because of another knowledge gap: how dust and wind move around on Mars. If Martian dust storms carry microbes far enough, the invaders could contaminate potential special regions even if humans land a safe distance away.

To find out, COSPAR’s Kminek suggests sending a fleet of Mars landers to act as meteorological stations at several fixed locations. The landers could measure atmospheric conditions and dust properties over a long time. Such landers would be relatively inexpensive to build, he says, and could launch in advance of humans.

But these weather stations would have to get in line. There’s a launch window between Earth and Mars every two years, and the next few are already booked. Weather stations would have to be stationary, so they couldn’t be added to rover missions like ExoMars or Mars 2020.

That means it’s possible that SpaceX or another company will try to send humans to Mars before the reconnaissance missions necessary to write rules for planetary protection are even built. If COSPAR is the tortoise in this race, SpaceX is the hare, along with a few other private companies. Only SpaceX has a stated timeline. Other contenders, including Washington-based Blue Origin, founded by Amazon executive Jeff Bezos, and United Launch Alliance, based in Colorado, are developing rockets that some analysts say could be part of a mission to the moon or Mars.

Now or never
Those looming launches prompted Fairén and colleagues to make a controversial proposal. In an article in the October 2017 Astrobiology, provocatively titled “Searching for life on Mars before it is too late,” the team suggested sending existing or planned rovers, even those not at the height of cleanliness, to look directly for signs of Martian life.

Given the harsh Martian conditions, rovers are unlikely to contaminate regions that might turn out to be special on a closer look, the group argues. The invasive species argument is misleading, they say: Don’t compare a microbe transfer to taking Asian parrots to the Amazon rainforest, where they could thrive and edge out local parrots. It would be closer to taking them to Antarctica to freeze to death.

Even if Earth microbes did replicate on Mars, the researchers wrote, technology is advanced enough that scientists would be able to distinguish hitchhikers from Earth from true Mars life (SN: 4/30/16, p. 28).

In a sharp rebuttal, published in the same issue of Astrobiology, Rummel and Conley disagreed. “Why would you want to go there with a dirty spacecraft?” says Rummel, who was NASA’s planetary protection officer before Conley. “To spend a billion dollars to go find life from Florida on Mars is both irresponsible and completely scientifically indefensible.”

There’s also concern for the health and safety of future astronauts. Conley says that at a November meeting of epidemiologists who study the risks of Earth-based pandemics, she floated the idea that scientists shouldn’t worry about getting sick if they encounter Earth organisms on Mars.

“The room burst out laughing,” she says. “This is a room full of medical doctors who deal with Ebola. The idea that we know about Earth organisms, and therefore they can’t hurt us, was literally laughable to them.”

Fairén has already drafted a response for a future issue of Astrobiology: “We acknowledge [that Rummel and Conley’s points] are informed and literate. Unfortunately, they are also unconvincing.”

The issue might come to a head in July in Pasadena, Calif., at the next meeting of COSPAR’s Scientific Assembly. Fairén and colleagues plan to push for more relaxed cleanliness rules.

That’s not likely to happen anytime soon. But with no concrete rules in place for humans, would a human mission even be allowed off the ground, whether NASA or SpaceX was at the helm? Currently, private U.S. companies must apply to the Federal Aviation Administration for a launch license, and for travel to another planet, that agency would probably ask NASA to weigh in.

It’s hard to know if anyone will actually be ready to send humans to Mars in the next decade. “You’d have to actually believe them to be scared,” says Rummel. “There are many unanswered questions about what Elon Musk wants to do. But I think we can calm down about people showing up on Mars unannounced.”

But SpaceX has defied expectations before and may give slow and steady a kick in the pants.

‘Machines That Think’ predicts the future of artificial intelligence

Movies and other media are full of mixed messages about the risks and rewards of building machines with minds of their own. For every manipulative automaton like Ex Machina’s Ava (SN: 5/16/15, p. 26), there’s a helpful Star Wars droid. And while some tech titans such as Elon Musk warn of the threats artificial intelligence presents, others, including Mark Zuckerberg, dismiss the doomsayers.

AI researcher Toby Walsh’s Machines That Think is for anyone who has heard the hype and is seeking a critical assessment of what the technology can do — and what it might do in the future. Walsh’s conversational style is welcoming to nonexperts while his endnotes point readers to opportunities for deeper dives into specific aspects of AI.

Walsh begins with a history of AI, from Aristotle’s foundation of formal logic to modern facial-recognition systems. Excerpts from computer-composed poetry and tales of computers trouncing humans at strategy games (SN: 11/11/17, p. 13) are a testament to how far AI has come. But Walsh also highlights weaknesses, such as machine-learning algorithms’ reliance on so much data to master a single task.

This 30,000-foot view of AI research packs a lot of history, as well as philosophical and technical explanation. Walsh personalizes the account with stories of his own programming experiences, anecdotes about AI in daily life — like his daughter’s use of Siri — and his absolute, unapologetic love of puns.

Later in the book, Walsh speculates about technical hurdles that may curb further AI development and legal limits that society may want to impose. He also explores the societal impact that increasingly intelligent computers may have.

For instance, Walsh evaluates how likely various jobs are to be outsourced to AI. Some occupations, like journalist, will almost certainly be automated, he argues. Others, like oral surgeon, are probably safe. For future job security, Walsh recommends pursuing careers that require programming acumen, emotional intelligence or creativity.

AI also has the potential to revolutionize warfare. “Like Moore’s law, we are likely to see exponential growth in the capabilities of autonomous weapons,” Walsh writes. “I have named this ‘Schwarzenegger’s law’ to remind us of where it will end.” Walsh isn’t resigned to a Terminator-like future, though. If governments ban killer robots and arms developers use automation to enhance defensive equipment, he believes military AI could actually save many lives.

In fact, Walsh argues, all aspects of AI’s future impacts are in our hands. “Artificial intelligence can lead us down many different paths, some good and some bad,” he writes. “Society must choose which path to take.”

Top 10 papers from Physical Review’s first 125 years

No anniversary list is ever complete. Just last month, for instance, my Top 10 scientific anniversaries of 2018 omitted the publication two centuries ago of Mary Shelley’s Frankenstein. It should have at least received honorable mention.

Perhaps more egregious, though, was overlooking the 125th anniversary of the physics journal Physical Review. Since 1893, the Physical Review has published hundreds of thousands of papers and has long been regarded as the premier repository for reports of advances in humankind’s knowledge of the physical world. In recent decades it has split itself into subjournals (A through E, plus L — for Letters — and also X) to prevent excessive muscle building by librarians and also better organize papers by physics subfield. (You don’t want to know what sorts of things get published in X.)

To celebrate the Physical Review anniversary, the American Physical Society (which itself is younger, forming in 1899 and taking charge of the journal in 1913) has released a list, selected by the journals’ editors, of noteworthy papers from Physical Review history.

The list comprises more than four dozen papers, oblivious to the concerns of journalists composing Top 10 lists. If you prefer the full list without a selective, arbitrary and idiosyncratic Top 10 filter, you can go straight to the Physical Review journals’ own list. But if you want to know which two papers the journal editors missed, you’ll have to read on.

  1. Millikan measures the electron’s charge, 1913.
    When J.J. Thomson discovered the electron in 1897, it was by proving the rays in cathode ray tubes were made up of a stream of particles. They carried a unit of electrical charge (hence their name). Thomson did not publish in the Physical Review. But Robert Millikan did in 1913 when he measured the strength of the electric charge on a single electron. He used oil drops, measuring how fast they fell through an electric field. Interacting with ions in the air gave each drop more or fewer electric charges, affecting how fast the drops fell. It was easy to calculate the smallest amount of charge consistent with the various changes in speed. (OK, it was not easy at all — it was a tough experiment and the calculations required corrections for all sorts of things.) Millikan’s answer was very close to today’s accepted value, and he won the Nobel Prize in 1923.
  2. Wave nature of electron, Davisson and Germer, 1927.
    J.J. Thomson’s son George also experimented with electrons, and showed that despite his father’s proof that they were particles, they also sometimes behaved like waves. George did not publish in the Physical Review. But Clinton Davisson and Lester Germer did; their paper established what came to be called the wave-particle duality. Their experiment confirmed the suspicions of Louis de Broglie, who had suggested the wave nature of electrons in 1924.
  3. Particle nature of X-rays, Compton, 1923.
    Actually, wave-particle duality was already on the physics agenda before de Broglie’s paper or Davisson and Germer’s experiment, thanks to Arthur Holly Compton. His experiments on X-rays showed that when they collided with electrons, momentum was transferred just as in collisions of particles. Nevertheless X-rays were definitely a form of electromagnetic radiation that moved as a wave, like light. Compton’s result was good news for Einstein, who had long argued that light had particle-like properties and could travel in the form of packets (later called photons).
  4. Discovery of antimatter, Carl Anderson, 1933.
    In the late 1920s, in the wake of the arrival of quantum mechanics, English physicist Paul Dirac was also interested in electrons. He applied his mathematical powers to devise an equation to explain them, and he succeeded. But he got out more than he put in. His equation yielded correct answers for an electron’s energy but also contained a negative root. That perplexed him; a negative energy for an electron seemed to make no physical sense. Still, the math was the math, and Dirac couldn’t ignore his own equation’s solutions. After some false steps, he decided that the negative energy implied the existence of a new kind of particle, identical to an electron except with an opposite electric charge (equal in magnitude to the charge that Millikan had measured). Dirac did not publish in the Physical Review. But Carl Anderson, who actually found Dirac’s antimatter electron in 1933, did. In cloud chamber observations of cosmic rays, Anderson spotted tracks of a lightweight positively charged particle, apparently Dirac’s antielectron. He titled his paper “The Positive Electron” and referred to the new particles as positrons. They were the first example of antimatter.
  5. How stars shine, Hans Bethe, 1939.
    Since the dawn of science, astronomers had wondered how the sun shines. In the 19th century, physicists proposed gravitational contraction as the power source. But a sun powered by gravitational contraction would have burned itself out long ago. A new option for powering the sun appeared in the 1930s when physicists began to understand the energy released in nuclear reactions. In the simplest such reaction, two protons fused. That made sense as a solar power source, because a proton is the nucleus of a hydrogen atom and stars are made mostly of hydrogen. But at a conference in April 1938, experts including Hans Bethe of Cornell University concluded that proton fusion could not create the temperatures observed in the brightest stars. On the train back to Cornell, though, Bethe figured out the correct, more complicated nuclear reactions and soon sent a paper to the Physical Review. He asked the journal to delay publishing it so he could enter it in a contest (open to unpublished papers only). Bethe won the contest and then OK’d publication of his paper, which appeared in March 1939. For winning the contest, he received $500. For the published paper, his prize was delayed — until 1967. In that year he got the Nobel Prize: $61,700.
  6. Is quantum mechanics complete? Einstein, Podolsky and Rosen, 1935.
    Einstein was famous for a lot of things, including a stubborn resistance to the implications of quantum mechanics. His main objection was articulated in the Physical Review in May 1935 in a paper coauthored with physicists Nathan Rosen and Boris Podolsky. It presented a complicated argument that is frequently misrepresented or misunderstood (as I’ve discussed here previously), but the gist is he thought quantum mechanics was incomplete. Its math could not describe properties that were simultaneously “real” for two separated particles that had previously interacted. Decades later multiple experiments showed that quantum mechanics was in fact complete; reality is not as simple a concept as Einstein and colleagues would have liked. The “EPR paper” stimulated an enormous amount of interest in the foundations of quantum mechanics, though. And some people continue to believe E, P and R had a point.
  7. Is quantum mechanics complete? (Yes.) Bohr, 1935.
    Here’s one of the missing papers. Physical Review’s editors somehow forgot to include Niels Bohr’s reply to the EPR paper. In October 1935, Bohr published a detailed response in the Physical Review, outlining the misunderstandings that EPR had perpetrated. Later EPR experiments turned out exactly as Bohr would have expected. (An early example from 1982 is among the Physical Review anniversary papers, but not this Top 10 list.) Yet some present-day critics still believe that somehow Bohr was wrong and Einstein was right. He wasn’t.
  8. Gravitational waves detected by LIGO, 2016.
    Einstein was right about gravitational waves. After devising his general theory of relativity to explain gravity, he realized that it implied ripples in the very fabric of spacetime itself. Later he backed off, doubting his original conclusion. But he was right the first time: A mass abruptly changing its speed or direction of movement should emit waves in space. Violent explosions or collisions would create ripples sufficiently strong to be detectable, if you spent a billion dollars or so to build some giant detectors. In a hopeful sign for humankind, the U.S. National Science Foundation put up the money and two black holes provided the collision in 2015, as reported in February 2016 in Physical Review Letters and widely celebrated by bloggers.
  9. Explaining nuclear fission, Bohr and Wheeler, 1939.
    On September 1, 1939, the opening day of World War II, the Physical Review published a landmark paper describing the theory of nuclear fission. It was a quick turnaround, as fission had been discovered only in December 1938, in Germany. While Einstein was writing a letter to warn President Roosevelt of fission’s potential danger in the hands of Nazis, Bohr and John Archibald Wheeler figured out how fission happened. Their paper provided essential theoretical knowledge for the Manhattan Project, which led to the development of the atomic bomb, and later to the use of nuclear energy as a power source.
  10. Oppenheimer and Snyder describe black holes, 1939.
    The process of black hole formation was first described by J. Robert Oppenheimer and Hartland Snyder in the same issue of the Physical Review as Bohr and Wheeler’s fission paper. Of course, the name black hole didn’t exist yet, but Oppenheimer and Snyder thoroughly explained how a massive star contracting under the inward pull of its own gravity would eventually disappear from view. “The star thus tends to close itself off from any communication with a distant observer; only its gravitational field persists,” they wrote. Nobody paid any attention to black holes then, though, because Oppenheimer soon became director of the Manhattan Project (requiring him to read Bohr and Wheeler’s paper). It wasn’t until the late 1960s that black holes became a household name thanks to Wheeler (who eventually got around to reading Oppenheimer and Snyder’s paper). Yet for some reason the Physical Review editors omitted the Oppenheimer-Snyder paper from their list, verifying that no such list is ever complete, even if you have dozens of items instead of only 10.
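The arithmetic at the heart of Millikan’s inference, finding the largest unit charge of which every drop’s total charge is a whole-number multiple, can be sketched in a few lines. This is a minimal illustration with invented drop charges, not Millikan’s actual data or analysis, which required painstaking corrections for air viscosity and drop size.

```python
# Illustrative sketch, not Millikan's real procedure: given noisy measured
# charges on several oil drops, find the largest candidate unit charge for
# which every drop's charge is close to an integer multiple of it.
# Charges below are invented, in units of 10^-19 coulombs.

def estimate_unit_charge(charges, lo=1.0, hi=2.0, step=0.001, tol=0.05):
    """Scan candidate unit charges from largest to smallest; return the first
    one that divides every drop's charge to within the tolerance."""
    candidate = hi
    while candidate >= lo:
        if all(abs(q / candidate - round(q / candidate)) < tol for q in charges):
            return candidate
        candidate -= step
    return None

# Hypothetical drops carrying 3, 5 and 8 elementary charges, e ~ 1.602:
drops = [4.81, 8.01, 12.82]
e_est = estimate_unit_charge(drops)
```

Scanning from the largest candidate downward matters: every charge is trivially a near-multiple of a tiny unit, so the quantum of charge is the *largest* unit consistent with all the measurements.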

Study debunks fishy tale of how rabbits were first tamed

Domesticated bunnies may need a new origin story.

Researchers thought they knew when rabbits were tamed. An often-cited tale holds that monks in Southern France domesticated rabbits after Pope Gregory issued a proclamation in A.D. 600 that fetal rabbits, called laurices, are fish and therefore can be eaten during Lent.

There’s just one problem: The story isn’t true. Not only does the legend offer little logic for rabbits being fish, but the proclamation itself is bogus, according to a new study of rabbit domestication.

“Pope Gregory never said anything about rabbits or laurices, and there is no evidence they were ever considered ‘fish,’” says Evan Irving-Pease, an archaeologist at the University of Oxford.

He and his colleagues discovered that scientists had mixed up Pope Gregory with St. Gregory of Tours. St. Gregory made a passing reference to a man named Roccolenus who in “the days of holy Lent … often ate young rabbits.” The misattribution somehow led to the story of rabbits’ domestication.

What’s more, DNA evidence can’t narrow rabbit domestication to that time period, Irving-Pease and colleagues report February 14 in Trends in Ecology and Evolution. Rabbit domestication wasn’t a single event, but a process with no distinct beginning, the researchers say. For similar reasons, scientists have found it difficult to pinpoint when and where other animals were first domesticated, too (SN: 7/8/17, p. 20).

Geneticist Leif Andersson of Uppsala University in Sweden agrees that genetic data can’t prove rabbit domestication happened around 600. But he says “it is also impossible to exclude that domestication of rabbits happened around that time period.”

Domestication practices were well known by then, Andersson says, and it’s possible that French monks or farmers in Southern France with a taste for rabbit meat made an effort to round up bunnies that eventually became the founding population for the domestic rabbit.

Ancient DNA from old rabbit bones may one day help settle the debate.