Moms tweak the timbre of their voice when talking to their babies

Voices carry so much information. Joy and anger, desires, comfort, vocabulary lessons. As babies learn about their world, the voice of their mother is a particularly powerful tool. One way mothers wield that tool is by speaking in the often ridiculous, occasionally condescending baby talk.

Also called “motherese,” this is a high-pitched, exaggerated language full of short, slow phrases and big vocal swoops. And when confronted with a tiny human, pretty much everybody — not just mothers, fathers and grandparents — instinctively does it.

Now, a study has turned up another way mothers modulate their voice during baby talk. Instead of focusing on changes such as pitch and rhythm, the researchers focused on timbre, the “color” or quality of a sound.

Timbre is a little bit nebulous, kind of a “know it when you hear it” sort of thing. For instance, the timbre of a reedy clarinet differs from a bombastic trumpet, even when both instruments are hitting the same note. The same is true for voices: When you hear the song “Hurt,” you don’t need to check whether it’s Nine Inch Nails’ Trent Reznor or Johnny Cash singing it. The vocal fingerprints make it obvious.
It turns out that timbre isn’t set in stone. People — mothers, in particular — change their timbre, depending on whether they’re talking to their baby or to an adult, scientists report online October 12 in Current Biology.

For the study, 12 English-speaking moms brought their babies into a Princeton lab. Researchers recorded the women talking to or reading to their 7- to 12-month-old babies, and talking with an adult.
An algorithm sorted through timbre data taken from both baby- and adult-directed speech, and used this input to make a mathematical classifier. Based on snippets of speech, the classifier then could tell whether a mother was talking with an adult or with her baby. The timbre differences between baby- and adult-directed speech were obvious enough that a computer program could tell them apart.
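The study's actual classifier isn't published with the paper, but the idea can be sketched with a toy nearest-centroid classifier. The three-number "timbre" feature vectors below are invented stand-ins for the real spectral summaries extracted from recordings:

```python
import random

random.seed(0)

# Hypothetical "timbre" feature vectors (stand-ins for spectral summaries
# of recorded speech); the centers and spread here are invented.
def sample(center, n=20, spread=0.5):
    return [[c + random.gauss(0, spread) for c in center] for _ in range(n)]

adult_speech = sample([1.0, 0.0, 2.0])  # adult-directed snippets
baby_speech = sample([2.0, 1.5, 0.5])   # baby-directed snippets

def centroid(vectors):
    # Mean feature vector of one class of speech.
    dims = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dims)]

def classify(snippet, centroids):
    # Nearest-centroid rule: label a snippet with the closest class mean.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(snippet, centroids[label]))

centroids = {"adult": centroid(adult_speech), "baby": centroid(baby_speech)}
print(classify([1.9, 1.4, 0.6], centroids))  # a new baby-directed snippet
```

If the two kinds of speech really do occupy different regions of timbre space, even a rule this simple separates them, which is the gist of what the mathematical classifier demonstrated.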

Similar timbre shifts were obvious in other languages, too, the researchers found. These baby-directed shifts happened in 12 different women who spoke Cantonese, French, German, Hebrew, Hungarian, Polish, Russian, Mandarin or Spanish — a consistency that suggests this aspect of baby talk is universal.

Defined mathematically, these timbre shifts were consistent across women and across languages, but it’s still not clear what vocal qualities drove the change. “It likely combines several features, such as brightness, breathiness, purity or nasality,” says study coauthor Elise Piazza, a cognitive neuroscientist at Princeton University. She and her colleagues plan on studying these attributes to see whether babies pay more attention to some of them.

It’s not yet known whether babies perceive and use the timbre information from their mother. Babies recognize their mother’s voice; it’s possible they recognize their mother’s baby-directed timbre, too. Babies can tell timbre differences between musical instruments, so they can probably detect timbre differences in spoken language, Piazza says.
The work “highlights a new cue that mothers implicitly use,” Piazza says. The purpose of this cue isn’t clear yet, but the researchers suspect that the timbre change may emotionally engage babies and help them learn language.

People may not reserve timbre shifts just for babies, Piazza points out. Politicians talking to voters, middle school teachers talking to a classroom, and lovers whispering to each other may all tweak their timbre to convey … something.

19th century painters may have primed their canvases with beer-brewing leftovers

Beer breweries’ trash may have been Danish painters’ treasure.

The base layer of several paintings created in Denmark in the mid-1800s contains remnants of cereal grains and brewer’s yeast, the latter being a common by-product of the beer brewing process, researchers report May 24 in Science Advances. The finding hints that artists may have used the leftovers to prime their canvases.

Records suggest that Danish house painters sometimes created glossy, decorative paint by adding beer, says Cecil Krarup Andersen, a conservator at the Royal Danish Academy in Copenhagen. But yeast and cereal grains had never before been found in a primer.
Andersen had been studying paintings from the Danish Golden Age, an explosion of artistic creativity in the first half of the 19th century, at the National Gallery of Denmark. Understanding these paintings’ chemical compositions is key to preserving them, she says. As part of this work, she and colleagues looked at 10 pieces by Christoffer Wilhelm Eckersberg, considered the father of Danish painting, and his protégé Christen Schiellerup Købke.

Canvas trimmings saved from an earlier conservation effort allowed for an in-depth analysis that wouldn’t otherwise have been possible, since the analysis destroys the samples. In seven paintings, Saccharomyces cerevisiae proteins turned up, as well as various combinations of wheat, barley, buckwheat and rye proteins. All these proteins are involved in beer fermentation (SN: 9/19/17).

Tests of an experimental primer that the researchers whipped up using residual yeast from modern beer brewing showed that the mixture held together and provided a stable painting surface — a primary purpose of a primer. And this concoction worked much better than one made with beer.

Beer was the most common drink in 1800s Denmark, and it was akin to liquid gold. Water needed to be treated before it was safe to drink, and the brewing process took care of that. As a result, plenty of residual yeast would have been available for artists to purchase, the researchers say.

If the beer by-product is found in paintings by other artists, Andersen says, that information can help conservators better preserve the works and better understand the artists’ lives and craftsmanship. “It’s another piece of the puzzle.”

With tools from Silicon Valley, Quinton Smith builds lab-made organs

While volunteering at the University of New Mexico’s Children’s Hospital in Albuquerque, Quinton Smith quickly realized that he could never be a physician.

Then an undergrad at the university, Smith was too sad seeing sick kids all the time. But, he thought, “maybe I can help them with science.”

Smith had picked his major, chemical engineering, because he saw it as “a cooler way to go premed.” Though he ultimately landed in the lab instead of at the bedside, he has remained passionate about finding ways to cure what ails people.

Today, his lab at the University of California, Irvine uses tools often employed in fabricating tiny electronics to craft miniature, lab-grown organs that mimic their real-life counterparts. “Most of the time, when we study cells, we study them in a petri dish,” Smith says. “But that’s not their native form.” Prodding cells to assemble into these 3-D structures, called organoids, can give researchers a new way to study diseases and test potential treatments.

By combining Silicon Valley tech and stem cell biology, scientists are now “making tissues that look and react and function like human tissues,” Smith says. “And that hasn’t been done before.”

The power of stem cells
Smith’s work began in two dimensions. During his undergraduate studies, he spent two summers in the lab of biomedical engineer Sharon Gerecht, then at Johns Hopkins University. His project aimed to develop a device that could control oxygen and fluid flow inside minuscule chambers on silicon wafers, with the goal of mimicking the environment in which a blood vessel forms. It was there that Smith came to respect human induced pluripotent stem cells.

These stem cells are formed from body cells that are reprogrammed to an early, embryonic stage that can give rise to any cell type. “It just blew my mind that you can take these cells and turn them into anything,” Smith says.

Smith ultimately returned to Gerecht’s lab for his Ph.D., exploring how physical and chemical cues can push these stem cells toward becoming blood vessels. Using a technique called micropatterning — where researchers stamp proteins on glass slides to help cells attach — he spurred cells to organize into the beginnings of artificial blood vessels. Depending on the pattern, the cells formed 2-D stars, circles or triangles, showing how cells come together to form such tubular structures.
While a postdoc at MIT, he transitioned to 3-D, with a focus on liver organoids.

Like branching blood vessels, a network of bile ducts carries bile acid throughout the liver. This fluid helps the body digest and absorb fat. But artificial liver tissue doesn’t always re-create ducts that branch the way they do in the body. Cells growing in the lab “need a little bit of help,” Smith says.
To get around the problem, Smith and his team pour a stiff gel around minuscule acupuncture needles to create channels. After the gel solidifies, the researchers seed stem cells inside and douse the cells in chemical cues to coax them to form ducts. “We can create on-demand bile ducts using an engineering approach,” he says.

This approach to making liver organoids is possible because Smith speaks the language of biology and the language of engineering, says biomedical engineer Sangeeta Bhatia, a Howard Hughes Medical Institute investigator at MIT and Smith’s postdoc mentor. He can call on his cell biology knowledge and leverage engineering techniques to study how specific cell types are organized to work together in the body.

For example, Smith’s lab now uses 3-D printing to ensure liver tissues grown in the lab, including blood vessels and bile ducts, organize in the right way. Such engineering techniques could help researchers study and pinpoint the root causes behind some liver diseases, such as fatty liver disease, Smith says. Comparing organoids grown from cells from healthy people with those grown from cells from patients with liver disease — including Hispanic people, who are disproportionately affected — may point to a mechanism.

Looking beyond the liver
But Smith isn’t restricting himself to the liver. He and his trainees are branching out to explore other tissues and diseases as well.

One of those pursuits is preeclampsia, a disease that affects pregnant women, and disproportionately African American women. Women with preeclampsia develop dangerously high blood pressure because the placenta is inflamed and constricting the mother’s blood vessels. Smith plans to examine lab-grown placentas to determine how environmental factors such as physical forces and chemical cues from the organ impact attached maternal blood vessels.

“We’re really excited about this work,” Smith says. It’s only recently that scientists have tricked stem cells into entering an earlier stage of development that can form placentas. These lab-grown placentas even produce human chorionic gonadotropin, the hormone responsible for positive pregnancy tests.

Yet another win for the power of stem cells.

New CRISPR gene editors can fix RNA and DNA one typo at a time

New gene-editing tools can correct typos that account for about half of disease-causing genetic spelling errors.

Researchers have revamped the CRISPR/Cas9 gene editor so that it converts the DNA base adenine to guanine, biological chemist David Liu and colleagues report October 25 in Nature. In a separate study, published October 25 in Science, other researchers led by CRISPR pioneer Feng Zhang re-engineered a gene editor called CRISPR/Cas13 to correct the same typos in RNA instead of DNA.
Together with other versions of CRISPR/Cas9, the new editors offer scientists an expanded set of precision tools for correcting diseases.

CRISPR/Cas9 is a molecular scissors that snips DNA. Scientists can guide the scissors to the place they want to cut in an organism’s genetic instruction book with a guide RNA that matches DNA at the target site. The tool has been used to make mutations or correct them in animals and in human cells, including human embryos (SN: 10/14/17, p. 8).

A variety of innovations allow CRISPR/Cas9 to change genetic instructions without cutting DNA (SN: 9/3/16, p. 22). Earlier versions of these “base editors,” which target typos related to the other half of disease-causing genetic spelling errors, have already been used to alter genes in plants, fish, mice and even human embryos.
Such noncutting gene editors are possibly safer than traditional DNA-cutting versions, says Gene Yeo, an RNA biologist at the University of California, San Diego. “We know there are drawbacks to cutting DNA,” he says. Mistakes often arise when cellular machinery attempts to repair DNA breaks. And although generally accurate, CRISPR sometimes cuts DNA at sites similar to the target, raising the possibility of introducing new mutations elsewhere. Such “permanent irreversible edits at the wrong place in the DNA could be bad,” Yeo says. “These two papers have different ways to solve that problem.”
The new editors allow researchers to rewrite all four bases that store information in DNA and RNA: adenine (A), which pairs with thymine (T) (or uracil (U) in RNA), and guanine (G), which pairs with cytosine (C). Mutations that change C-G base pairs to T-A pairs happen 100 to 500 times every day in human cells. Most of those mutations are probably benign, but some may alter a protein’s structure and function, or interfere with gene activity, leading to disease. About half of the 32,000 mutations associated with human genetic diseases are this type of C-G to T-A change, says Liu, a Howard Hughes Medical Institute investigator at Harvard University. Until now, there was little anyone could do about it, he says.
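Because the bases pair up across the two strands, a single-base change on one strand changes the whole pair. A toy sketch (the six-base sequence is invented, and `point_edit` is just a string substitution, not the enzymatic chemistry) shows how an A-to-G switch turns an A-T pair into a G-C pair:

```python
# Watson-Crick pairing rules from the text: A-T (A-U in RNA) and G-C.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    # The base on the opposite strand, position by position.
    return "".join(PAIR[base] for base in strand)

def point_edit(strand, i, new_base):
    # Illustrative single-base substitution -- not the enzymatic mechanism.
    return strand[:i] + new_base + strand[i + 1:]

before = "GGATCA"                   # hypothetical allele with A at index 2
after = point_edit(before, 2, "G")  # the A-to-G switch a base editor makes

print(before, complement(before))   # GGATCA CCTAGT -> an A-T pair at index 2
print(after, complement(after))     # GGGTCA CCCAGT -> now a G-C pair
```

Running the substitution in reverse (G back to A) is the common disease-causing mutation the story describes; the new editors run it forward.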

In RNA, DNA’s chemical cousin, some naturally occurring enzymes can reverse this common mutation. Such enzymes chemically convert adenine to inosine (I), which the cell interprets as G. Such RNA editing happens frequently in octopuses and other cephalopods and sometimes in humans (SN: 4/29/17, p. 6).

Zhang, of the Broad Institute of MIT and Harvard, and colleagues made an RNA-editing enzyme called ADAR2 into a programmable gene-editing tool. The team started with CRISPR/Cas13, molecular scissors that normally cut RNA. Dulling the blades let the tool grasp instead of slice. Zhang and colleagues then bolted the A-to-I converting portion of ADAR2 onto CRISPR/Cas13. Dubbed REPAIR, the conglomerate tool edited from 13 percent to about 27 percent of RNAs of two genes in human cells grown in dishes. The researchers did not detect any undesired changes.

Editing RNA is good for temporary fixes, such as shutting down inflammation-promoting proteins. But fixing many mutations requires permanent DNA repairs, says Liu.

In 2016, Liu’s team made a base editor that converts C to T. Chinese researchers reported in Protein & Cell on September 23 that they used the old base editor in human embryos to repair a mutation that causes the blood disorder beta-thalassemia. But that editor couldn’t make the opposite change, switching A to G.

Unlike with RNA, no enzymes naturally make the A-to-I conversion in DNA. So Nicole Gaudelli in Liu’s lab forced E. coli bacteria to evolve one. Then the researchers bolted the E. coli DNA converter, TadA, to a “dead” version of Cas9, disabled so it couldn’t cut both strands of DNA. The result was a base editor, called ABE, that could switch A-T base pairs into G-C pairs in about 50 percent of human cells tested.

This base editor works more like a pencil than scissors, Liu says. In lab dishes, Liu’s team corrected a mutation in human cells from a patient with an iron-storage blood disorder called hereditary hemochromatosis. The team also re-created beneficial mutations that allow blood cells to keep making fetal hemoglobin. Those mutations are known to protect against sickle cell anemia.

Another group reported in the October Protein & Cell that base editing appears to be safer than traditional cut-and-paste CRISPR/Cas9 editing. Liu’s results seem to support that. His team found that cut-and-paste CRISPR/Cas9 made changes at nine of 12 possible “off-target” sites, about 14 percent of the time. The new A-to-G base editor altered just four of the 12 off-target sites, and only 1.3 percent of the time.

That’s not to say cut-and-paste editing isn’t useful, Liu says. “Sometimes, if your task is to cut something, you’re not going to do that with a pencil. You need scissors.”

Here’s why some water striders have fans on their legs

For an animal already amazing enough to walk on water, what could growing feather fans on its legs possibly add?

These fans have preoccupied Abderrahman Khila of the University of Lyon in France, who keeps some 30 species of bugs called water striders walking the tanks in his lab without getting their long, elegant legs wet.

“Walk” may be too humdrum a word. The 2,200 or so known species of water striders worldwide can zip, skim, skate and streak. Among such damp-defying acrobats, however, only the Rhagovelia genus grows a fan of delicate feathers on the middle pair of its six legs. Even little hatchlings head-banging their way out of underwater eggs have a pair of feathery microfluffs for their perilous swim up to the water’s surface.
A first guess at a function — maybe plumes help support bigger adults — would be wrong, Khila says. The Rhagovelia are not giants among water striders. In a jar of alcohol in his lab, he treasures a specimen of a much bigger species, with a body about the size of a peanut and a leg span that can straddle a CD. Yet this King Kong among striders, found in Vietnam and China, slides over the water as other species do, cushioned by air trapped in dense hydrophobic leg bristles. No froufrou feathers needed.
Fans aren’t required either for water striders’ action-packed, often violent lives. “In the lab, they eat each other all the time,” Khila says. A newly molted strider, still soft and weak after 10 minutes of wriggling out of its old external skeleton, can get mobbed by cannibals. Any other insect, such as a mosquito, that lands on the water surface also triggers a frenzy. Small striders “start to attack the legs of the mosquito,” he says, “and seconds later there are 50 water striders gathered around.” With their tubelike mouthparts, the striders stab holes in the victim and inject enzymes to liquefy flesh into a meat shake to suck out.

For these Rhagovelia, Khila sees the fans as “one of those examples of ‘key evolutionary innovations,’” traits that just “pop up” in evolutionary history with no clear line of precursors or partial forms, he says. Now he and his colleagues have identified a fan benefit. When they removed plumes from the bugs or suppressed genes for fan formation, the mutant striders couldn’t turn as deftly or run upstream against the current as fully fanned Rhagovelia can, the researchers report in the Oct. 20 Science. Striders in a closely related but fanless genus were likewise hampered. The innovative fan opened up new territory, helping the insects navigate flowing water, the researchers conclude.

Fan-maker genes were intriguing in another way. Evolutionary biologists have long debated whether such evolutionary innovations just repurpose and recombine old developmental genes or actually rely on new ones. In the case of the fans, two genes, which the researchers named geisha and mother-of-geisha after geisha fans, are unique to this genus, but three other genes are repurposed. So in a twist on an old debate, Khila says, “neither hypothesis is wrong.”

Humans are driving climate change, federal scientists say

It is “extremely likely” that humans have been driving warming on Earth since the 1950s. That statement — which indicates 95 to 100 percent confidence in the finding — came in a report released November 3 by the U.S. Global Change Research Program. This interagency effort was established in 1989 by presidential initiative to help inform national science policy.

The 2017 Climate Science Special Report, which lays out the current state of scientific knowledge on climate change, will be rolled into the fourth National Climate Assessment, set to be released in late 2018.
The last national climate assessment, released in 2014, also concluded that recent warming was mostly due to humans, but didn’t give a confidence level (SN Online: 5/6/14). Things haven’t gotten better. Ice sheet melting has accelerated, the 2017 report finds. As a result, projections of possible average global sea level rise by 2100 under a high greenhouse gas emissions scenario (in which emissions rise unabated throughout the 21st century) have increased from 2 meters to as much as 2.6 meters.

In addition, the report notes that three of the warmest years on record — 2014, 2015 and 2016 — occurred since the last report was released; those years also had record-low sea ice extent in the Arctic Ocean in the summer.

The report also notes some still-unresolved questions that have become increasingly active areas of research. One big one: How will climate change alter atmospheric circulation in the mid-latitude areas? Scientists are wrangling with whether and how these changes will affect storm patterns and contribute to extreme weather events, including blizzards and drought.

When tumors fuse with blood vessels, clumps of breast cancer cells can spread

PHILADELPHIA — If you want to beat them, join them. Some breast cancer tumors may follow that strategy to spread through the body.

Breast cancer tumors can fuse with blood vessel cells, allowing clumps of cancer cells to break away from the main tumor and ride the bloodstream to other locations in the body, suggests preliminary research. Cell biologist Vanesa Silvestri of Johns Hopkins University School of Medicine presented the early work December 4 at the American Society for Cell Biology/European Molecular Biology Organization meeting.

Previous research has shown that cancer cells traveling in clusters have a better chance of spreading than loners do (SN: 1/10/15, p. 9). But how clusters of cells get into the bloodstream in the first place has been a mystery, in part because scientists couldn’t easily see inside tumors to find out.

So Silvestri and colleagues devised a see-through synthetic version of a blood vessel. The vessel ran through a transparent gel studded with tiny breast cancer tumors. A camera attached to a microscope allowed the researchers to record the tumors invading the artificial blood vessel. Sometimes the tumors pinched the blood vessel, eventually sealing it off. But in at least one case, a small tumor merged with the cells lining the faux blood vessel. Then tiny clumps of cancer cells broke away from the tumor and floated away in the fluid flowing through the capillary. More work is needed to confirm that the same process happens in the body, Silvestri said.

How to keep humans from ruining the search for life on Mars

The Okarian rover was in trouble. The yellow Humvee was making slow progress across a frigid, otherworldly landscape when planetary scientist Pascal Lee felt the rover tilt backward. Out the windshield, Lee, director of NASA’s Haughton Mars Project, saw only sky. The rear treads had broken through a crack in the sea ice and were sinking into the cold water.

True, there are signs of water on Mars, but not that much. Lee and his crew were driving the Okarian (named for the yellow Martians in Edgar Rice Burroughs’ novel The Warlord of Mars) across the Canadian Arctic to a research station in Haughton Crater that served in this dress rehearsal as a future Mars post. On a 496-kilometer road trip along the Northwest Passage, crew members pretended they were explorers on a long haul across the Red Planet to test what to expect if and when humans go to Mars.

What they learned in that April 2009 ride may become relevant sooner rather than later. NASA has declared its intention to send humans to Mars in the 2030s (SN Online: 5/24/16). The private sector plans to get there even earlier: In September, Elon Musk announced his aim to launch the first crewed SpaceX mission to Mars as soon as 2024.

“That’s not a typo,” Musk said in Australia at an International Astronautical Congress meeting. “Although it is aspirational.”

Musk’s six-year timeline has some astrobiologists in a panic. If humans arrive too soon, these researchers fear, any chance of finding evidence of life — past or present — on Mars may be ruined.

“It’s really urgent,” says astrobiologist Alberto Fairén of the Center for Astrobiology in Madrid and Cornell University. Humans take whole communities of microorganisms with them everywhere, spreading those bugs indiscriminately.

Planetary geologist Matthew Golombek of NASA’s Jet Propulsion Laboratory in Pasadena, Calif., agrees, adding, “If you want to know if life exists there now, you kind of have to approach that question before you send people.”

A long-simmering debate over how rigorously to protect other planets from Earth life, and how best to protect life on Earth from other planets, is coming to a boil. The prospect of humans arriving on Mars has triggered a flurry of meetings and a spike in research into what “planetary protection” really means.

One of the big questions is whether Mars has regions that might be suitable for life and so deserve special protection. Another is how big a threat Earth microbes might be to potential Martian life (recent studies hint less of a threat than expected). Still, the specter of human biomes mucking up the Red Planet before a life-hunting mission can even launch has raised bitter divisions within the Mars research community.
Mind the gaps
Before any robotic Mars mission launches, the spacecraft are scrubbed, scoured and sometimes scorched to remove Earth microbes. That’s so if scientists discover a sign of life on Mars, they’ll know the life did not just hitchhike from Cape Canaveral. The effort is also intended to prevent the introduction of harmful Earth life that could kill off any Martians, similar to how invasive species edge native organisms out of Earth’s habitats.

“If we send Earth organisms to a place where they can grow and thrive, then we might come back and find nothing but Earth organisms, even though there were Mars organisms there before,” says astrobiologist John Rummel of the SETI Institute in Mountain View, Calif. “That’s bad for science; it’s bad for the Martians. We’d be real sad about that.”

To avoid that scenario, spacefaring organizations have historically agreed to keep spacecraft clean. Governments and private companies alike abide by Article IX of the 1967 Outer Space Treaty, which calls for planetary exploration to avoid contaminating both the visited environment and Earth. In the simplest terms: Don’t litter, and wipe your feet before coming back into the house.

But this guiding principle doesn’t tell engineers how to avoid contamination. So the international Committee on Space Research (called COSPAR) has debated and refined the details of a planetary protection policy that meets the treaty’s requirement ever since. The most recent version dates from 2015 and has a page of guidelines for human missions.

In the last few years, the international space community has started to add a quantitative component to the rules for humans — specifying how thoroughly to clean spacecraft before launch, for instance, or how many microbes are allowed to escape from human quarters.

“It was clear to everybody that we need more refined technical requirements, not just guidelines,” says Gerhard Kminek, planetary protection officer for the European Space Agency and chair of COSPAR’s planetary protection panel, which sets the standards. And right now, he says, “we don’t know enough to do a good job.”

In March 2015, more than 100 astronomers, biologists and engineers met at NASA’s Ames Research Center in Moffett Field, Calif., and listed 25 “knowledge gaps” that need more research before quantitative rules can be written.

The gaps cover three categories: monitoring astronauts’ microbes, minimizing contamination and understanding how matter naturally travels around Mars. Rather than prevent contamination — probably impossible — the goal is to assess the risks and decide what risks are acceptable. COSPAR prioritized the gaps in October 2016 and will meet again in Houston in February to decide what specific experiments should be done.
Stick the landing
The steps required for any future Mars mission will depend on the landing spot. COSPAR currently says that robotic missions are allowed to visit “special regions” on Mars, defined as places where terrestrial organisms are likely to replicate, only if robots are cleaned before launch to 0.03 bacterial spores per square meter of spacecraft. In contrast, a robot going to a nonspecial region is allowed to bring 300 spores per square meter. These “spores,” or endospores, are dormant bacterial cells that can survive environmental stresses that would normally kill the organism.
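The two standards differ by a factor of 10,000, and the total allowed microbial load is just the cleanliness limit times the spacecraft's exposed surface area. A quick sketch of the arithmetic (the 5-square-meter rover surface is a made-up figure):

```python
# COSPAR cleanliness standards described in the text, in bacterial
# spores allowed per square meter of spacecraft surface.
SPECIAL = 0.03      # for "special regions" where Earth life might replicate
NONSPECIAL = 300.0  # for everywhere else on Mars

def spore_budget(area_m2, spores_per_m2):
    # Total spores allowed = cleanliness limit x exposed surface area.
    return area_m2 * spores_per_m2

# Hypothetical rover with 5 square meters of exposed surface:
special_limit = spore_budget(5, SPECIAL)        # well under one spore total
nonspecial_limit = spore_budget(5, NONSPECIAL)  # 1,500 spores
print(special_limit, nonspecial_limit)
```

In other words, a special-region craft of that size must carry effectively zero viable spores, which is why Viking-level baking is so expensive.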

To date, any special regions are hypothetical, because none have been conclusively identified on Mars. But if a spacecraft finds that its location unexpectedly meets the special criteria, its mission might have to change on the spot.

The Viking landers, which in 1976 brought the first and only experiments to look for living creatures on Mars, were baked in an oven for hours before launch to clean the craft to special region standards.

“If you’re as clean as Viking, you can go anywhere on Mars,” says NASA planetary protection officer Catharine Conley. But no mission since, from the Pathfinder mission in the 1990s to the current Curiosity rover to the upcoming Mars 2020 and ExoMars rovers, has been cleared to access potentially special regions. That’s partly because of cost. A 2006 study by engineer Sarah Gavit of the Jet Propulsion Lab found that sterilizing a rover like Spirit or Opportunity (both launched in 2003) to Viking levels would cost up to 14 percent more than sterilizing it to a lower level. NASA has also backed away from looking for life after Viking’s search for Martian microbes came back inconclusive. The agency shifted focus to seeking signs of past habitability.

Although no place on Mars currently meets the special region criteria, some areas have conditions close enough to be treated with caution. In 2015, geologist Colin Dundas of the U.S. Geological Survey in Flagstaff, Ariz., and colleagues discovered what looked like streaks of salty water that appeared and disappeared in Gale Crater, where Curiosity is roving. Although those streaks were not declared special regions, the Curiosity team steered the rover clear of the area.
But evidence of flowing water on Mars bit the dust. In November, Dundas and colleagues reported in Nature Geoscience that the streaks are more likely to be tiny avalanches of sand. The reversal highlights how difficult it is to tell if a region on Mars is special or not.


However, on January 12 in Science, Dundas and colleagues reported finding eight slopes where layers of water ice were exposed at shallow depths (SN Online: 1/11/18). Those very steep spots would not be good landing sites for humans or rovers, but they suggest that nearby regions might have accessible ice within a meter or two of the surface.

If warm and wet conditions exist, that’s exactly where humans would want to go. Golombek has helped choose every Mars landing site since Pathfinder and has advised SpaceX on where to land its Red Dragon spacecraft, originally planned to bring the first crewed SpaceX mission to Mars. (Since then, SpaceX has announced it will use its BFR spacecraft instead, which might require shifts in landing sites.) The best landing sites for humans have access to water and are as close to the equator as possible, Golombek says. Low latitudes mean warmth, more solar power and a chance to use the planet’s rotation to help launch a rocket back to Earth.

That narrows the options. NASA’s first workshop on human landing sites, held in Houston in October 2015, identified more than 40 “exploration zones” within 50 degrees latitude of the equator, where astronauts could do science and potentially access raw materials for building and life support, including water.

Golombek helped SpaceX whittle its list to a handful of sites, including Arcadia Planitia and Deuteronilus Mensae, which show signs of having pure water ice buried beneath a thin layer of soil.

What makes these regions appealing for humans also makes them more likely to be good places for microbes to grow, putting a crimp in hopes for boots on the ground. But there are ways around the apparent barriers, Conley says. In particular, humans could land a safe distance from special regions and send clean robots to do the dirty work.

That suggestion raises a big question: How far is far enough? To figure out a safe distance, scientists need to know how well Earth microbes would survive on Mars in the first place, and how far those organisms would spread from a human habitat.

The most desirable places on Mars for human visits offer access to water in some form and are near the equator (for increased solar power and to get a boost when launching a return rocket). Rovers and landers have found evidence of a watery Martian past. Planners of future robotic and human missions have potential landing spots in mind. Map excludes polar regions.

A no-grow zone
Initial results suggest that Mars does a good job of sterilizing itself. “I’ve been trying to grow Earth bacteria in Mars conditions for 15 years, and it’s actually really hard to do,” says astrobiologist Andrew Schuerger of the University of Florida in Gainesville. “I think that risk is much lower than the scientific community might think.”

In 2013 in Astrobiology, Schuerger and colleagues published a list of more than a dozen factors that microbes on Mars would have to overcome, including a lot of ultraviolet radiation from the sun; extreme dryness, low pressure and freezing temperatures; and high levels of salts, oxidants and heavy metals in Martian soils.

Schuerger has tried to grow hundreds of species of bacteria and fungi in the cold, low-pressure and low-oxygen conditions found on Mars. Some species came from natural soils in the dry Arctic and other desert environments, and others were recovered from clean rooms where spacecraft were assembled.

Of all those attempts, he has had success with 31 bacteria and no fungi. Seeing how difficult it is to coax these hardy microbes to thrive gives him confidence to say: “The surface conditions on Mars are so harsh that it’s very unlikely that terrestrial bacteria and fungi will be able to establish a niche.”

There’s one factor Schuerger does worry about, though: salts, which can lower the freezing temperature of water. In a 2017 paper in Icarus, Schuerger and colleagues tested the survival of Bacillus subtilis, a well-studied bacterium found in soil and in human gastrointestinal tracts, in simulated Martian soils with various levels of saltiness.

B. subtilis can form a tough spore when stressed, which could keep it safe in extreme environments. Schuerger showed that dormant B. subtilis spores were mostly unaffected for up to 28 days in six different soils. But another bacterium that does not form spores was killed off. That finding suggests that spore-forming microbes — including ones that humans carry with them — could survive in soils moistened by briny waters.

The Okarian’s trek across the Arctic offers a ray of hope: Spores might not make it very far from human habitats. At three stops during the journey across the Arctic, Pascal Lee, of the SETI Institute, collected samples from the pristine snow ahead and dirtier snow behind the vehicle, as well as from the rover’s interior. Later, Lee sent the samples to Schuerger’s lab.

The researchers asked: If humans drive across a pristine, microbe-free environment, will they contaminate it? “The answer was no,” Schuerger says.

And that was in an Earth environment with only one or two of Schuerger’s biocidal factors (low temperatures and slightly higher UV radiation than elsewhere on Earth) and with a rover crawling with human-associated microbes. The Okarian hosted 69 distinct bacteria and 16 fungi, Schuerger and Lee reported in 2015 in Astrobiology.

But when crew members ventured outside the rover, they barely left a mark. The duo found one fungus and one bacterium on both the rover and two snow sites, one downwind and one ahead of the direction of travel. Other than that, nothing, even though crew members made no effort to contain their microbes — they breathed and ate openly.

“We didn’t see dispersal when conditions were much more conducive to dispersal” than they will be on Mars, Schuerger says.

The International Space Station may be an even better place to study what happens when inhabited space vessels leak microbes. Michelle Rucker, an engineer at NASA’s Johnson Space Center in Houston, and her colleagues are testing a tool for astronauts to swab the outside of their spacesuits and the space station, and collect whatever microbes are already there.

“At this point, no one has defined what the allowable levels of human contamination are,” Rucker says. “We don’t know if we’d meet them, but more importantly, we’ve never checked our human systems to see where we’re at.”

Rucker and colleagues have had astronauts test the swab kit as part of their training on Earth. The researchers plan to present the first results from those tests in March in Big Sky, Mont., at the IEEE Aerospace Conference. If the team gets the tool flight-certified to test it on the ISS, the results could fill a knowledge gap about how much spaceships carrying humans will leak and vent microbes.

A Russian experiment on the ISS may be giving the first clues. In November 2017, Russian cosmonauts told TASS news service that they had found living bacteria on the outside of the ISS. Some of those microbes, swabbed near vents during spacewalks, were not found on the spacecraft’s exterior when it launched.

Blowing in the wind
These results are important, says Conley, but on their own they don’t give enough information to write quantitative contamination rules.

That’s partly because of another knowledge gap: how dust and wind move around on Mars. If Martian dust storms carry microbes far enough, the invaders could contaminate potential special regions even if humans land a safe distance away.

To find out, COSPAR’s Kminek suggests sending a fleet of Mars landers to act as meteorological stations at several fixed locations. The landers could measure atmospheric conditions and dust properties over a long time. Such landers would be relatively inexpensive to build, he says, and could launch in advance of humans.

But these weather stations would have to get in line. There’s a launch window between Earth and Mars every two years, and the next few are already booked. Weather stations would have to be stationary, so they couldn’t be added to rover missions like ExoMars or Mars 2020.

That means it’s possible that SpaceX or another company will try to send humans to Mars before the reconnaissance missions necessary to write planetary protection rules are even built. If COSPAR is the tortoise in this race, SpaceX is the hare, along with a few other private companies. Only SpaceX has a stated timeline. Other contenders, including Washington-based Blue Origin, founded by Amazon founder Jeff Bezos, and Colorado-based United Launch Alliance, are developing rockets that some analysts say could be part of a mission to the moon or Mars.

Now or never
Those looming launches prompted Fairén and colleagues to make a controversial proposal. In an article in the October 2017 Astrobiology, provocatively titled “Searching for life on Mars before it is too late,” the team suggested sending existing or planned rovers, even those not at the height of cleanliness, to look directly for signs of Martian life.

Given the harsh Martian conditions, rovers are unlikely to contaminate regions that might turn out to be special on a closer look, the group argues. The invasive species argument is misleading, they say: Don’t compare a microbe transfer to taking Asian parrots to the Amazon rainforest, where they could thrive and edge out local parrots. It would be closer to taking them to Antarctica to freeze to death.

Even if Earth microbes did replicate on Mars, the researchers wrote, technology is advanced enough that scientists would be able to distinguish hitchhikers from Earth from true Mars life (SN: 4/30/16, p. 28).

In a sharp rebuttal, published in the same issue of Astrobiology, Rummel and Conley disagreed. “Why would you want to go there with a dirty spacecraft?” says Rummel, who was NASA’s planetary protection officer before Conley. “To spend a billion dollars to go find life from Florida on Mars is both irresponsible and completely scientifically indefensible.”

There’s also concern for the health and safety of future astronauts. At a November meeting of epidemiologists who study the risks of Earth-based pandemics, Conley says, she raised the idea that scientists needn’t worry about getting sick if they encounter Earth organisms on Mars.

“The room burst out laughing,” she says. “This is a room full of medical doctors who deal with Ebola. The idea that we know about Earth organisms, and therefore they can’t hurt us, was literally laughable to them.”

Fairén has already drafted a response for a future issue of Astrobiology: “We acknowledge [that Rummel and Conley’s points] are informed and literate. Unfortunately, they are also unconvincing.”

The issue might come to a head in July in Pasadena, Calif., at the next meeting of COSPAR’s Scientific Assembly. Fairén and colleagues plan to push for more relaxed cleanliness rules.

That’s not likely to happen anytime soon. But with no concrete rules in place for humans, would a human mission even be allowed off the ground, whether NASA or SpaceX was at the helm? Currently, private U.S. companies must apply to the Federal Aviation Administration for a launch license, and for travel to another planet, that agency would probably ask NASA to weigh in.

It’s hard to know if anyone will actually be ready to send humans to Mars in the next decade. “You’d have to actually believe them to be scared,” says Rummel. “There are many unanswered questions about what Elon Musk wants to do. But I think we can calm down about people showing up on Mars unannounced.”

But SpaceX has defied expectations before and may give slow and steady a kick in the pants.

Top 10 papers from Physical Review’s first 125 years

No anniversary list is ever complete. Just last month, for instance, my Top 10 scientific anniversaries of 2018 omitted the publication two centuries ago of Mary Shelley’s Frankenstein. It should have at least received honorable mention.

Perhaps more egregious, though, was overlooking the 125th anniversary of the physics journal Physical Review. Since 1893, the Physical Review has published hundreds of thousands of papers and has long been regarded as the premier repository for reports of advances in humankind’s knowledge of the physical world. In recent decades it has split itself into subjournals (A through E, plus L — for Letters — and also X) to prevent excessive muscle building by librarians and to better organize papers by physics subfield. (You don’t want to know what sorts of things get published in X.)

To celebrate the Physical Review anniversary, the American Physical Society (which itself is younger, forming in 1899 and taking charge of the journal in 1913) has released a list, selected by the journals’ editors, of noteworthy papers from Physical Review history.

The list comprises more than four dozen papers, oblivious to the concerns of journalists composing Top 10 lists. If you prefer the full list without a selective, arbitrary and idiosyncratic Top 10 filter, you can go straight to the Physical Review journals’ own list. But if you want to know which two papers the journal editors missed, you’ll have to read on.

  1. Millikan measures the electron’s charge, 1913.
    When J.J. Thomson discovered the electron in 1897, it was by proving the rays in cathode ray tubes were made up of a stream of particles. They carried a unit of electrical charge (hence their name). Thomson did not publish in the Physical Review. But Robert Millikan did in 1913 when he measured the strength of the electric charge on a single electron. He used oil drops, measuring how fast they fell through an electric field. Interacting with ions in the air gave each drop more or fewer electric charges, affecting how fast the drops fell. It was easy to calculate the smallest amount of charge consistent with the various changes in speed. (OK, it was not easy at all — it was a tough experiment and the calculations required corrections for all sorts of things.) Millikan’s answer was very close to today’s accepted value, and he won the Nobel Prize in 1923.
  2. Wave nature of electron, Davisson and Germer, 1927.
    J.J. Thomson’s son George also experimented with electrons, and showed that despite his father’s proof that they were particles, they also sometimes behaved like waves. George did not publish in the Physical Review. But Clinton Davisson and Lester Germer did; their paper established what came to be called the wave-particle duality. Their experiment confirmed the suspicions of Louis de Broglie, who had suggested the wave nature of electrons in 1924.
  3. Particle nature of X-rays, Compton, 1923.
    Actually, wave-particle duality was already on the physics agenda before de Broglie’s paper or Davisson and Germer’s experiment, thanks to Arthur Holly Compton. His experiments on X-rays showed that when they collided with electrons, momentum was transferred just as in collisions of particles. Nevertheless X-rays were definitely a form of electromagnetic radiation that moved as a wave, like light. Compton’s result was good news for Einstein, who had long argued that light had particle-like properties and could travel in the form of packets (later called photons).
  4. Discovery of antimatter, Carl Anderson, 1933.
    In the late 1920s, in the wake of the arrival of quantum mechanics, English physicist Paul Dirac was also interested in electrons. He applied his mathematical powers to devise an equation to explain them, and he succeeded. But he got out more than he put in. His equation yielded correct answers for an electron’s energy but also contained a negative root. That perplexed him; a negative energy for an electron seemed to make no physical sense. Still, the math was the math, and Dirac couldn’t ignore his own equation’s solutions. After some false steps, he decided that the negative energy implied the existence of a new kind of particle, identical to an electron except with an opposite electric charge (equal in magnitude to the charge that Millikan had measured). Dirac did not publish in the Physical Review. But Carl Anderson, who actually found Dirac’s antimatter electron in 1933, did. In cloud chamber observations of cosmic rays, Anderson spotted tracks of a lightweight positively charged particle, apparently Dirac’s antielectron. He titled his paper “The Positive Electron” and referred to the new particles as positrons. They were the first example of antimatter.
  5. How stars shine, Hans Bethe, 1939.
    Since the dawn of science, astronomers had wondered how the sun shines. Some experiments in the 19th century suggested gravity. But a sun powered by gravitational contraction would have burned itself out long ago. A new option for powering the sun appeared in the 1930s when physicists began to understand the energy released in nuclear reactions. In the simplest such reaction, two protons fused. That made sense as a solar power source, because a proton is the nucleus of a hydrogen atom and stars are made mostly of hydrogen. But at a conference in April 1938, experts including Hans Bethe of Cornell University concluded that proton fusion could not create the temperatures observed in the brightest stars. On the train back to Cornell, though, Bethe figured out the correct, more complicated nuclear reactions and soon sent a paper to the Physical Review. He asked the journal to delay publishing it so he could enter it in a contest (open to unpublished papers only). Bethe won the contest and then OK’d publication of his paper, which appeared in March 1939. For winning the contest, he received $500. For the published paper, his prize was delayed — until 1967. In that year he got the Nobel Prize: $61,700.
  6. Is quantum mechanics complete? Einstein, Podolsky and Rosen, 1935.
    Einstein was famous for a lot of things, including a stubborn resistance to the implications of quantum mechanics. His main objection was articulated in the Physical Review in May 1935 in a paper coauthored with physicists Nathan Rosen and Boris Podolsky. It presented a complicated argument that is frequently misrepresented or misunderstood (as I’ve discussed here previously), but the gist is he thought quantum mechanics was incomplete. Its math could not describe properties that were simultaneously “real” for two separated particles that had previously interacted. Decades later multiple experiments showed that quantum mechanics was in fact complete; reality is not as simple a concept as Einstein and colleagues would have liked. The “EPR paper” stimulated an enormous amount of interest in the foundations of quantum mechanics, though. And some people continue to believe E, P and R had a point.
  7. Is quantum mechanics complete? (Yes.) Bohr, 1935.
    Here’s one of the missing papers. Physical Review’s editors somehow forgot to include Niels Bohr’s reply to the EPR paper. In October 1935, Bohr published a detailed response in the Physical Review, outlining the misunderstandings that EPR had perpetrated. Later EPR experiments turned out exactly as Bohr would have expected. (An early example from 1982 is among the Physical Review anniversary papers, but not this Top 10 list.) Yet some present-day critics still believe that somehow Bohr was wrong and Einstein was right. He wasn’t.
  8. Gravitational waves detected by LIGO, 2016.
    Einstein was right about gravitational waves. After devising his general theory of relativity to explain gravity, he realized that it implied ripples in the very fabric of spacetime itself. Later he backed off, doubting his original conclusion. But he was right the first time: A mass abruptly changing its speed or direction of movement should emit waves in space. Violent explosions or collisions would create ripples sufficiently strong to be detectable, if you spent a billion dollars or so to build some giant detectors. In a hopeful sign for humankind, the U.S. National Science Foundation put up the money and two black holes provided the collision in 2015, as reported in February 2016 in Physical Review Letters and widely celebrated by bloggers.
  9. Explaining nuclear fission, Bohr and Wheeler, 1939.
    On September 1, 1939, the opening day of World War II, the Physical Review published a landmark paper describing the theory of nuclear fission. It was a quick turnaround, as fission had been discovered only in December 1938, in Germany. While Einstein was writing a letter to warn President Roosevelt of fission’s potential danger in the hands of Nazis, Bohr and John Archibald Wheeler figured out how fission happened. Their paper provided essential theoretical knowledge for the Manhattan Project, which led to the development of the atomic bomb, and later to the use of nuclear energy as a power source.
  10. Oppenheimer and Snyder describe black holes, 1939.
    The process of black hole formation was first described by J. Robert Oppenheimer and Hartland Snyder in the same issue of the Physical Review as Bohr and Wheeler’s fission paper. Of course, the name black hole didn’t exist yet, but Oppenheimer and Snyder thoroughly explained how a massive star contracting under the inward pull of its own gravity would eventually disappear from view. “The star thus tends to close itself off from any communication with a distant observer; only its gravitational field persists,” they wrote. Nobody paid any attention to black holes then, though, because Oppenheimer soon became director of the Manhattan Project (requiring him to read Bohr and Wheeler’s paper). It wasn’t until the late 1960s when black holes became a household name thanks to Wheeler (who eventually got around to reading Oppenheimer and Snyder’s paper). Yet for some reason the Physical Review editors omitted the Oppenheimer-Snyder paper from their list, verifying that no such list is ever complete, even if you have dozens of items instead of only 10.

Study debunks fishy tale of how rabbits were first tamed

Domesticated bunnies may need a new origin story.

Researchers thought they knew when rabbits were tamed. An often-cited tale holds that monks in Southern France domesticated rabbits after Pope Gregory issued a proclamation in A.D. 600 that fetal rabbits, called laurices, are fish and therefore can be eaten during Lent.

There’s just one problem: The story isn’t true. Not only does the legend offer little logic for rabbits being fish, but the proclamation itself is bogus, according to a new study of rabbit domestication.

“Pope Gregory never said anything about rabbits or laurices, and there is no evidence they were ever considered ‘fish,’” says Evan Irving-Pease, an archaeologist at the University of Oxford.

He and his colleagues discovered that scientists had mixed up Pope Gregory with St. Gregory of Tours. St. Gregory made a passing reference to a man named Roccolenus who in “the days of holy Lent … often ate young rabbits.” The misattribution somehow led to the story of rabbits’ domestication.

What’s more, DNA evidence can’t narrow rabbit domestication to that time period, Irving-Pease and colleagues report February 14 in Trends in Ecology and Evolution. Rabbit domestication wasn’t a single event, but a process with no distinct beginning, the researchers say. For similar reasons, scientists have found it difficult to pinpoint when and where other animals were first domesticated, too (SN: 7/8/17, p. 20).

Geneticist Leif Andersson of Uppsala University in Sweden agrees that genetic data can’t prove rabbit domestication happened around 600. But he says “it is also impossible to exclude that domestication of rabbits happened around that time period.”

Domestication practices were well known by then, Andersson says, and it’s possible that French monks or farmers in Southern France with a taste for rabbit meat made an effort to round up bunnies that eventually became the founding population for the domestic rabbit.

Ancient DNA from old rabbit bones may one day help settle the debate.