Zika could one day help combat deadly brain cancer

Zika’s damaging neurological effects might someday be enlisted for good — to treat brain cancer.

In human cells and in mice, the virus infected and killed the stem cells that become a glioblastoma, an aggressive brain tumor, but left healthy brain cells alone. Jeremy Rich, a regenerative medicine scientist at the University of California, San Diego, and colleagues report the findings online September 5 in the Journal of Experimental Medicine.

Previous studies had shown that Zika kills stem cells that generate nerve cells in developing brains (SN: 4/2/16, p. 26). Because of similarities between those neural precursor cells and stem cells that turn into glioblastomas, Rich’s team suspected the virus might also target the cells that cause the notoriously deadly type of cancer. In the United States, about 12,000 people are expected to be diagnosed with glioblastoma in 2017. (It’s the type of cancer U.S. Senator John McCain was found to have in July.) Even with treatment, most patients live only about a year after diagnosis, and tumors frequently recur.
In cultures of human cells, Zika infected glioblastoma stem cells and halted their growth, Rich and colleagues report. The virus also infected full-blown glioblastoma cells, but at a lower rate, and didn’t infect normal brain tissue. In mice with glioblastoma, Zika infection either shrank tumors or slowed their growth compared with uninfected mice. The virus-infected mice lived longer, too. In one trial, almost half of the mice survived more than six weeks after being infected with Zika, while all of the uninfected mice died within two weeks of receiving a placebo.

Using a virus to knock out cancer isn’t a completely new idea. Treatments that rely on modified polioviruses to target tumors such as glioblastomas are already in clinical trials in the United States, and there’s a modified herpesvirus approved by the U.S. Food and Drug Administration for treating melanoma.

These cancer-fighting viruses seem to work in two ways, says Andrew Zloza, head of surgical oncology research at the Rutgers Cancer Institute of New Jersey in New Brunswick. First, the viruses infect and kill cancer cells. Then, as those cancer cells split open, previously hidden tumor fragments become visible to the immune system, which can recognize and fight them.

“Right now we don’t know what kind of viruses are best” for fighting cancer, Zloza says — whether it’s more effective to use a common virus that many people have been exposed to or something more unusual. But now, Zika is yet another candidate.

Further testing is needed to determine whether the virus is safe and effective in humans. Since Zika’s effects are more harmful in developing brains, a Zika-based cancer therapy might be safe only in adults. And the virus would need to be genetically modified to make it safer and less transmissible.
Rich and colleagues are now testing in mice whether combining Zika with traditional cancer treatments such as chemotherapy is more effective than either treatment by itself. Because Zika targets the cells that generate tumor cells, it might prevent tumors from recurring.

Science can’t forecast love

Here’s some heartbreaking news for people pinning their hopes on online matchmaking sites: It’s virtually impossible to forecast a love connection.

Maybe that’s not so shocking to survivors of the dating wars. But now science is weighing in. Extensive background data on two individuals — comparable to that collected by digital dating services — can’t predict whether that pair will romantically click during a four-minute, face-to-face speed date, say psychologist Samantha Joel of the University of Utah in Salt Lake City and colleagues.

People know when an in-person meeting on a speed date has gone smoothly or felt right — and that bodes well for mutual attraction, the investigators report online August 30 in Psychological Science. But on paper, no blend of personal qualities and partner preferences thought to influence mate choices pegged which opposite-sex duos would hit it off, Joel’s group concludes.

Joel expected that, say, a person who reported being attracted to extroverted people would generate the most chemistry with speed daters who reported being extroverted. Or, two people who reported being good-looking and having particularly warm personalities would feel especially attracted to one another after brief dates. But that’s not what Joel and coauthors Paul Eastwick of the University of California, Davis, and Eli Finkel of Northwestern University in Evanston, Ill., found.
The researchers studied 350 heterosexual college students — almost evenly split between males and females — who participated in one of 15 speed-dating events in 2005 or 2007. Participants filled out either 182-item or 112-item questionnaires about their personality traits and preferences in romantic partners. The students then completed about 12 speed dates. Afterward, participants rated their interest in and sexual attraction to each person they met.

Some qualities romance seekers said they wanted — such as extroversion and warmth — predicted individual speed daters’ greater attractiveness to others in general. But a statistical analysis of participants’ responses found that no traits or preferences, or combinations of traits and preferences, predicted how much one person especially desired another person after a speed date.
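
The setup the researchers describe boils down to a prediction problem: given two people’s questionnaire answers, can any model forecast how attracted one will be to the other after a date? Below is a minimal sketch of that kind of dyadic analysis, using invented data and a random forest as a generic stand-in for whatever model a team might choose; the variable names and features are hypothetical, not the study’s actual items.

```python
# Toy dyadic prediction: pair each rater's self-reported traits and stated
# preferences with a partner's traits, then ask a model to predict the rater's
# post-date attraction. All data here are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_dates, n_traits = 2000, 20

rater_traits = rng.normal(size=(n_dates, n_traits))    # rater's self-reports
partner_traits = rng.normal(size=(n_dates, n_traits))  # partner's self-reports
rater_prefs = rng.normal(size=(n_dates, n_traits))     # what the rater says they want

# Dyadic features: both people's reports plus how closely the partner matches
# the rater's stated preferences.
X = np.hstack([rater_traits, partner_traits, np.abs(partner_traits - rater_prefs)])
y = rng.normal(size=n_dates)  # post-date attraction ratings (random here)

model = RandomForestRegressor(n_estimators=200, random_state=0)
print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```

On purely random data like this, the cross-validated R-squared hovers near zero; the study’s finding was that real questionnaire data did little better at predicting who would desire whom in particular, even though some traits did predict general attractiveness.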

Joel’s team has not analyzed evidence from online matchmaking services to see if their questionnaires frequently pair people who generate romantic heat. “But our findings suggest that it’s quite difficult to predict initial romantic attraction using self-report measures before two people have met,” Joel says.

Biological anthropologist Helen Fisher, a senior researcher at Indiana University’s Kinsey Institute in Bloomington, agrees. “You’ve got to meet someone in person to trigger the brain circuitry for romantic love,” Fisher says.

That comes as no surprise to operators of online dating sites, she adds. These sites typically don’t promise customers romantic connections, says Fisher, who is a consultant for online dating site Match.com and founded its affiliated website, Chemistry.com. The aim is to provide an array of potential dates with background and personality traits requested by a customer. The rest is up to those who decide to go on dates.

How bats could help tomato farmers (and the U.S. Navy)

Bats, with their superb ability to echolocate, are inspiring advanced technologies — from better Navy sonar to gadgets that might deliver packages or help farmers manage crops. And engineers aren’t waiting for neuroscientists to work out every detail of how the bats’ brains manage the task.

“We think we have enough information to be useful to us, to develop a bio-inspired sensor,” says research engineer Jason Gaudette of the Naval Undersea Warfare Center Newport Division in Rhode Island. Like bats, the Navy uses sonar to find and visualize objects in the deep. But current versions are far less elegant than the flying mammals’ system.
The Navy’s sonar arrays can be huge, encompassing hundreds of “ears” that listen for sonar pings from atop a submarine’s dome or trailing behind it in a long tail. Bats, Gaudette notes, dodge obstacles and find mosquito-sized meals with just two ears. He and colleagues have developed a bat-inspired prototype device that they hope can perform more like bats do. Mounted on the nose of a half-meter-long, torpedo-shaped autonomous undersea robot, the sonar system has one sound transmitter and three receivers (Gaudette hopes to eventually get that number down to two or even one).
The system uses algorithms inspired by research in bats to interpret returning sonar echoes for navigation. If it works, the system could help the Navy perform sonar imaging using less space and less money while offering sharper images, Gaudette says.
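
Whatever the team’s specific algorithms, a sparse sonar like this works from two basic measurements: the round-trip time of an echo gives range, and the difference in arrival times across receivers gives a rough bearing. The sketch below shows only that bare geometry, with made-up numbers; it is generic sonar math, not the Navy group’s actual processing.

```python
# Generic sonar geometry with one transmitter and two of the receivers: range
# from an echo's round-trip time, bearing from the arrival-time difference
# between receivers. Not the Navy team's algorithm; all numbers are invented.
import math

SOUND_SPEED = 1500.0  # meters per second in seawater (approximate)
BASELINE = 0.3        # meters between two receivers on the robot's nose (assumed)

def echo_range(round_trip_s):
    """Distance to a target from the echo's round-trip travel time."""
    return SOUND_SPEED * round_trip_s / 2.0

def echo_bearing(delay_a_s, delay_b_s):
    """Bearing off the nose, in degrees, from the difference in arrival times
    at two receivers, assuming a distant target (far-field approximation)."""
    path_diff = SOUND_SPEED * (delay_b_s - delay_a_s)
    ratio = max(-1.0, min(1.0, path_diff / BASELINE))  # clamp before arcsine
    return math.degrees(math.asin(ratio))

print(echo_range(0.04))                 # echo back in 40 ms: target ~30 m away
print(echo_bearing(0.039998, 0.040098)) # 0.1 ms arrival gap: ~30 degrees off axis
```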

Researchers in Israel are hoping to help farmers with a bat-inspired kind of sonar. Neuroecologist Yossi Yovel of Tel Aviv University is creating computer algorithms describing how bats might interpret returning echoes to distinguish different plants.
Yovel collaborates with Avital Bechar, a researcher at the Institute of Agricultural Engineering near Rishon LeZion, Israel, who wants to help farmers predict their crops’ yield, which can vary widely from year to year. The same acre of tomato plants, for example, could yield 30 or 120 tons of fruit, Bechar estimates. Such a wide range puts farmers at a disadvantage in negotiating a price for crops and forces the farmers to guess how much equipment and how many pickers they’ll need at harvest time.

Bechar’s sonar system, which emits batlike sounds and records via microphones that mimic bat ears, can penetrate three rows of plants deep — farther than cameras could. Then it calculates the number of leaves and pounds of fruit per plant, based on Yovel’s algorithms. Bechar has mounted the scanner on a prototype robot and plans to affix it to a drone to count fruit in 15-meter-high date palms. The researchers also hope to add weed-detection capability. Bechar expects it to be “a game changer in agriculture, because it will reduce the unknowns.”
At Virginia Tech in Blacksburg, engineer Rolf Mueller is learning tricks from the physical structures of bats’ noses and ears. Certain bat groups, such as horseshoe bats (one species Mueller works with is Rhinolophus ferrumequinum), send out their echolocation calls through their noses, like a snort. Complex, fleshy formations called nose-leaves shape the call as it leaves the nose. And the bats’ ears have more than 20 muscles that let the ears rapidly change shape as the bat listens for echoes, Mueller says. That flexibility gives the animals more information, he suspects: “It’s like seeing the world with a different perspective, at the same time, [from] one echo.”

His group developed a prototype robot with mechanical “nose-leaves” and shape-shifting “ears,” and sent it zooming through forested areas on a zip line to record how the bot perceives trees and branches. Eventually, Mueller envisions an autonomous underwater bot or an airborne drone with a similar sonar setup. The drone could be useful for delivering packages in forested or otherwise complicated areas without crashing.

Ancient boy’s DNA pushes back date of earliest humans

A boy who lived in what’s now South Africa nearly 2,000 years ago has lent a helping genome to science. Using the long-gone youngster’s genetic instruction book, scientists have estimated that humans emerged as a distinct population earlier than typically thought, between 350,000 and 260,000 years ago.

The trick was retrieving a complete version of the ancient boy’s DNA from his skeleton to compare with DNA from people today and from Stone Age Neandertals and Denisovans. Previously documented migrations of West African farmers to East Africa around 2,000 years ago, and then to southern Africa around 1,500 years ago, reshaped Africans’ genetics — and obscured ancient ancestry patterns — more than previously recognized, the researchers report online September 28 in Science.
The ancient boy’s DNA was not affected by those migrations. As a result, it provides the best benchmark so far for gauging when Homo sapiens originated in Africa, evolutionary geneticist Carina Schlebusch of Uppsala University in Sweden and her colleagues conclude.

In line with the new genetically derived age estimate for human origins, another team has proposed that approximately 300,000-year-old fossils found in northwestern Africa belonged to H. sapiens (SN: 7/8/17, p. 6). Some researchers suspect a skull from South Africa’s Florisbad site, dated to around 260,000 years ago, qualifies as H. sapiens. But investigators often place our species’ origins close to 200,000 years ago (SN: 2/26/05, p. 141). There is broad consensus that several fossils from that time represent H. sapiens.

Debate over the timing of human origins will continue despite the new evidence from the child, whose remains came from previous shoreline excavations near the town of Ballito Bay, says Uppsala University evolutionary geneticist and study coauthor Mattias Jakobsson. “We don’t know if early Homo sapiens fossils or the Florisbad individual were genetically related to the Ballito Bay boy,” he says.

Thus, the precise timing of humankind’s emergence, and exact patterns of divergence among later human populations, remain unclear. Researchers have yet to retrieve DNA from fossils dating between 200,000 and 300,000 years old that either securely or possibly belong to H. sapiens.
However early human evolution played out, later mixing and mingling of populations had a big genetic impact. DNA evidence from more recent fossils, including those studied by Schlebusch’s group, increasingly suggests that Stone Age human groups migrated from one part of Africa to another and mated with each other along the way (SN: 10/20/12, p. 9), says Harvard Medical School evolutionary geneticist Pontus Skoglund. In the Sept. 21 Cell, he and his colleagues report that DNA from 16 Africans, whose remains date to between 8,100 and 400 years ago, reveals a shared ancestry among hunter-gatherers from East Africa to South Africa that existed before West African farmers first arrived 2,000 years ago.

That ancient set of common genes still makes up a big, varying chunk of the DNA of present-day Khoisan people in southern Africa, Skoglund’s group found. Earlier studies found that the Khoisan — consisting of related San hunter-gatherer and Khoikhoi herding groups — display more genetic diversity than any other human population.

Schlebusch’s team estimates that a genetic split between the Khoisan and other Africans occurred roughly 260,000 years ago, shortly after humankind’s origins and around the time of the Florisbad individual. Khoisan people then diverged into two genetically distinct populations around 200,000 years ago, the researchers calculate.

Ancient DNA in Schlebusch’s study came from seven individuals unearthed at six South African sites. Three hunter-gatherers, including the Ballito Bay boy, lived about 2,000 years ago. Four farmers lived between 500 and 300 years ago.

Comparisons to DNA from modern populations in Africa and elsewhere indicated that between 9 percent and 30 percent of Khoisan DNA today comes from an East African population that had already interbred with Eurasian people. Those East Africans were likely the much-traveled farmers who started out in West Africa and reached southern Africa around 1,500 years ago, the researchers propose.

Chong Liu one-ups plant photosynthesis

For Chong Liu, asking a scientific question is something like placing a bet: You throw all your energy into tackling a big and challenging problem with no guarantee of a reward. As a student, he bet that he could create a contraption that photosynthesizes like a leaf on a tree — but better. For the now 30-year-old chemist, the gamble is paying off.

“He opened up a new field,” says Peidong Yang, a chemist at the University of California, Berkeley, who was Liu’s Ph.D. adviser. Liu was among the first to combine bacteria with metals or other inorganic materials to replicate the energy-generating chemical reactions of photosynthesis, Yang says. Liu’s approach to artificial photosynthesis may one day be especially useful in places without extensive energy infrastructure.

Liu first became interested in chemistry during high school, and majored in the subject at Fudan University in Shanghai. He recalls feeling frustrated in school when he would ask questions and be told that the answer was beyond the scope of what he needed to know. Research was a chance to seek out answers on his own. And the problem of artificial photosynthesis seemed like something substantial to throw himself into — challenging enough “so [I] wouldn’t be jobless in 10 or 15 years,” he jokes.
Photosynthesis is a simple but powerful process: Sunlight helps transform carbon dioxide and water into chemical energy stored in the chemical bonds of sugar molecules. But in nature, the process isn’t particularly efficient, converting just 1 percent of solar energy into chemical energy. Liu thought he could do better with a hybrid system.
The efficiency of natural photosynthesis is limited by light-absorbing pigments in plants or bacteria, he says. People have designed materials that absorb light far more efficiently. But when it comes to transforming that light energy into fuel, bacteria shine.

“By taking a hybrid approach, you leverage what each side is better at,” says Dick Co, managing director of the Solar Fuels Institute at Northwestern University in Evanston, Ill.

Liu’s early inspiration was an Apollo-era attempt at a life-support system for manned space missions. The idea was to use inorganic materials with specialized bacteria to turn astronauts’ exhaled carbon dioxide into food. But early attempts never went anywhere.

“The efficiency was terribly low, way worse than you’d expect from plants,” Liu says. And the bacteria kept dying — probably because other parts of the system were producing molecules that were toxic to the bacteria.

As a graduate student, Liu decided to use his understanding of inorganic chemistry to build a system that would work alongside the bacteria, not against them. He first designed a system that uses nanowires coated with bacteria. The nanowires collect sunlight, much like the light-absorbing layer on a solar panel, and the bacteria use the energy from that sunlight to carry out chemical reactions that turn carbon dioxide into a liquid fuel such as isopropanol.

As a postdoctoral fellow in the lab of Harvard University chemist Daniel Nocera, Liu collaborated on a different approach. Nocera had been working on a “bionic leaf” in which solar panels provide the energy to split water into hydrogen and oxygen gases. Then, Ralstonia eutropha bacteria consume the hydrogen gas and pull in carbon dioxide from the air. The microbes are genetically engineered to transform the ingredients into isopropanol or another liquid fuel. But the project faced many of the same problems as other bacteria-based artificial photosynthesis attempts: low efficiency and lots of dead bacteria.
“Chong figured out how to make the system extremely efficient,” Nocera says. “He invented biocompatible catalysts” that jump-start the chemical reactions inside the system without killing off the fuel-generating bacteria. That advance required sifting through countless scientific papers for clues to how different materials might interact with the bacteria, and then testing many different options in the lab. In the end, Liu replaced the original system’s problem catalysts — which made a microbe-killing, highly reactive type of oxygen molecule — with cobalt-phosphorus, which didn’t bother the bacteria.

Chong is “very skilled and open-minded,” Nocera says. “His ability to integrate different fields was a big asset.”

The team published the results in Science in 2016, reporting that the device was about 10 times as efficient as plants at removing carbon dioxide from the air. With 1 kilowatt-hour of energy powering the system, Liu calculated, it could recycle all the carbon dioxide in more than 85,000 liters of air into other molecules that could be turned into fuel. Using different bacteria but the same overall setup, the researchers later turned nitrogen gas into ammonia for fertilizer, which could offer a more sustainable approach to the energy-guzzling method used for fertilizer production today.
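
For a sense of scale, that volume of air holds only a small mass of carbon dioxide. A back-of-envelope check, assuming roughly 400 parts per million of CO2 in today’s air (a standard figure, not one taken from the paper):

```python
# Back-of-envelope check on the 85,000-liter figure, using general chemistry
# values rather than numbers from the paper: ~400 ppm CO2 in air by volume,
# ~24.5 liters per mole of gas at room temperature, 44 grams per mole of CO2.
AIR_LITERS = 85_000
CO2_FRACTION = 400e-6
LITERS_PER_MOLE = 24.5
CO2_MOLAR_MASS = 44.0

co2_liters = AIR_LITERS * CO2_FRACTION
co2_grams = co2_liters / LITERS_PER_MOLE * CO2_MOLAR_MASS
print(co2_liters, co2_grams)  # about 34 liters, roughly 60 grams of CO2
```

In other words, the claim works out to recycling on the order of tens of grams of carbon dioxide per kilowatt-hour of electricity.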

Soil bacteria carry out similar reactions, turning atmospheric nitrogen into forms that are usable by plants. Now at UCLA, Liu is launching his own lab to study the way the inorganic components of soil influence bacteria’s ability to run these and other important chemical reactions. He wants to understand the relationship between soil and microbes — not as crazy a leap as it seems, he says. The stuff you might dig out of your garden is, like his approach to artificial photosynthesis, “inorganic materials plus biological stuff,” he says. “It’s a mixture.”

Liu is ready to place a new bet — this time on re-creating the reactions in soil the same way he’s mimicked the reactions in a leaf.

How to make the cosmic web give up the matter it’s hiding

Evidence is piling up that much of the universe’s missing matter is lurking along the strands of a vast cosmic web.

A pair of papers report some of the best signs yet of hot gas in the spaces between galaxy clusters, possibly enough to represent the half of all ordinary matter previously unaccounted for. Previous studies have hinted at this missing matter, but a new search technique is helping to fill in the gaps in the cosmic census where other efforts fell short. The papers were published online at arXiv.org on September 15 and September 29.
Two independent teams stacked images of hundreds of thousands of galaxies on top of one another to reveal diffuse filaments of gas connecting pairs of galaxies across millions of light-years. Measuring how the gas distorted the background light of the universe let the researchers determine the mass of ordinary matter, or baryons, that it held — the protons and neutrons that make up atoms.

“It’s a very important problem,” says Dominique Eckert of the Max Planck Institute for Extraterrestrial Physics in Garching, Germany, who has searched for the missing matter via X-rays emitted by individual strands. “If you want to understand how galaxies form and how everything forms within a galaxy, you have to understand the evolution of the baryon content.” That starts with knowing where it is.

About 85 percent of the matter in the universe is mysterious, invisible stuff called dark matter, which physicists have yet to find (SN Online: 9/6/17). Weirdly, about half of the ordinary matter is also unaccounted for. When astronomers look around at the galaxies in the nearest few billion light-years, they find only about half the baryons that should have been produced in the Big Bang.

The rest is probably hiding in long filaments of gas that connect galaxy clusters in a vast cosmic web (SN: 3/8/14, p. 8). Previous attempts to find the baryons focused on X-rays emitted by gas in the filaments (SN Online: 8/4/15) or on the light of distant quasars filtering through these cobwebby strands (SN: 5/13/00, p. 310). But those efforts were either inconclusive, or were sensitive to such a narrow range of gas temperatures that they missed much of the matter.

Now there might be a way to find the rest. Two groups — cosmologist Hideki Tanimura, who did the work while at the University of British Columbia in Vancouver, and his colleagues, and Anna de Graaff of the University of Edinburgh and her colleagues — have sought the missing matter in a new way. Both teams found a way to look through the gas all the way back to the oldest light in the universe.
“Filamentary gas is very difficult to detect, but now we have a technique to detect it,” says Tanimura, now with the Institute of Space Astrophysics in Orsay, France.

That ancient light, called the cosmic microwave background, was emitted 380,000 years after the Big Bang. When this light passes through clouds of electrons in space — such as those found in filaments of hot gas — it gets deflected and distorted in a specific way. The Planck satellite released an all-sky map of these distortions in 2015 (SN: 3/21/15, p. 7).

Tanimura and de Graaff separately figured that there would be more distortion along the filaments than in empty space. To locate the filaments, both teams chose pairs of galaxies from the Sloan Digital Sky Survey catalog that were at least 20 million light-years apart. De Graaff’s team chose roughly a million pairs, and Tanimura’s team chose 262,864 pairs. Both teams assumed that the galaxies were not part of the same cluster, but that they should be connected by a filament.

The filaments were still too faint to see individually, so the teams used software to layer all the images and subtract out the distortion from electrons in the galaxies themselves to see what was left. Both saw a residual distortion in the cosmic microwave background, which they attribute to the filaments.
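
The power of stacking is that random noise averages away while a signal sitting in the same place in every aligned cutout does not. Here is a toy numerical sketch of that principle, with entirely invented maps and values rather than either team’s actual pipeline:

```python
# Toy stacking demo, not either team's pipeline. Each simulated patch stands in
# for an aligned, rescaled cutout of the Planck distortion (y) map: a faint
# "filament" runs along the central row, buried in noise. Averaging many
# cutouts beats the noise down so the bridge emerges; the real analyses then
# subtract a model of the galaxies' own halos. All values here are invented.
import numpy as np

rng = np.random.default_rng(1)
n_pairs, size = 50_000, 64

filament = np.zeros((size, size))
filament[size // 2, 16:48] = 1e-7  # faint signal between the two galaxy positions

stacked = np.zeros((size, size))
for _ in range(n_pairs):
    stacked += filament + rng.normal(scale=1e-5, size=(size, size))  # noisy cutout
stacked /= n_pairs

# Noise shrinks roughly as 1/sqrt(n_pairs), so the filament row stands out.
print(stacked[size // 2, 16:48].mean())  # close to the injected 1e-7 signal
print(stacked[0].mean())                 # an off-filament row: close to zero
```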

Next, de Graaff’s team calculated that those filaments account for 30 percent of the total baryon content of the universe. That’s surely an underestimate, since they didn’t examine every filament in the universe, the team writes — the rest of the missing matter is probably there too.

“Both groups here took the obvious first step,” says Michael Shull of the University of Colorado Boulder, who was not involved in the new studies. “I think they’re on the right track.” But he worries that the gas they see might have been ejected from galaxies at high speeds, and so not actually the missing matter at all.

Eckert also worries that the gas may belong more to the galaxies than to their intergalactic tethers. Future observations of the composition of the gas, as well as more sensitive X-ray observations, could help solve that part of the puzzle.

Moms tweak the timbre of their voice when talking to their babies

Voices carry so much information. Joy and anger, desires, comfort, vocabulary lessons. As babies learn about their world, the voice of their mother is a particularly powerful tool. One way mothers wield that tool is by speaking in often ridiculous, occasionally condescending baby talk.

Also called “motherese,” this is a high-pitched, exaggerated language full of short, slow phrases and big vocal swoops. And when confronted with a tiny human, pretty much everybody — not just mothers, fathers and grandparents — instinctively does it.

Now, a study has turned up another way mothers modulate their voice during baby talk. Instead of focusing on changes such as pitch and rhythm, the researchers focused on timbre, the “color” or quality of a sound.

Timbre is a little bit nebulous, kind of a “know it when you hear it” sort of thing. For instance, the timbre of a reedy clarinet differs from a bombastic trumpet, even when both instruments are hitting the same note. The same is true for voices: When you hear the song “Hurt,” you don’t need to check whether it’s Nine Inch Nails’ Trent Reznor or Johnny Cash singing it. The vocal fingerprints make it obvious.
It turns out that timbre isn’t set in stone. People — mothers, in particular — change their timbre, depending on whether they’re talking to their baby or to an adult, scientists report online October 12 in Current Biology.

For the study, 12 English-speaking moms brought their babies into a Princeton lab. Researchers recorded the women talking to or reading to their 7- to 12-month-old babies, and talking with an adult.
An algorithm sorted through timbre data taken from both baby- and adult-directed speech, and used this input to make a mathematical classifier. Based on snippets of speech, the classifier then could tell whether a mother was talking with an adult or with her baby. The timbre differences between baby- and adult-directed speech were obvious enough that a computer program could tell them apart.
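
As a rough illustration of how such a classifier can be built, the sketch below summarizes each recording with mel-frequency cepstral coefficients, a standard computational proxy for timbre, and trains a support vector machine on the results. The file names are placeholders and the pipeline is a generic stand-in, not the study’s actual analysis.

```python
# Generic timbre classifier sketch: summarize each clip with mel-frequency
# cepstral coefficients (MFCCs) and train a support vector machine to label it
# as adult-directed (0) or baby-directed (1). File names are placeholders; this
# is an illustration of the approach, not the study's pipeline.
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def timbre_features(path):
    """Summarize a speech clip as its mean MFCC vector, a crude timbre profile."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical labeled clips; a real analysis would use many clips per mother
# and hold out speakers when evaluating.
clips = [("mom01_adult.wav", 0), ("mom01_baby.wav", 1)]  # ... one entry per clip
X = np.array([timbre_features(path) for path, _ in clips])
y = np.array([label for _, label in clips])

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X, y)
print(clf.predict([timbre_features("mom02_unknown.wav")]))  # 0 or 1
```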

Similar timbre shifts were obvious in other languages, too, the researchers found. These baby-directed shifts happened in 12 different women who spoke Cantonese, French, German, Hebrew, Hungarian, Polish, Russian, Mandarin or Spanish — a consistency that suggests this aspect of baby talk is universal.

Defined mathematically, these timbre shifts were consistent across women and across languages, but it’s still not clear what vocal qualities drove the change. “It likely combines several features, such as brightness, breathiness, purity or nasality,” says study coauthor Elise Piazza, a cognitive neuroscientist at Princeton University. She and her colleagues plan on studying these attributes to see whether babies pay more attention to some of them.

It’s not yet known whether babies perceive and use the timbre information from their mother. Babies recognize their mother’s voice; it’s possible they recognize their mother’s baby-directed timbre, too. Babies can tell timbre differences between musical instruments, so they can probably detect timbre differences in spoken language, Piazza says.
The work “highlights a new cue that mothers implicitly use,” Piazza says. The purpose of this cue isn’t clear yet, but the researchers suspect that the timbre change may emotionally engage babies and help them learn language.

People may not reserve timbre shifts just for babies, Piazza points out. Politicians talking to voters, middle school teachers talking to a classroom, and lovers whispering to each other may all tweak their timbre to convey … something.

19th century painters may have primed their canvases with beer-brewing leftovers

Beer breweries’ trash may have been Danish painters’ treasure.

The base layer of several paintings created in Denmark in the mid-1800s contains remnants of cereal grains and brewer’s yeast, the latter being a common by-product of the beer brewing process, researchers report May 24 in Science Advances. The finding hints that artists may have used the leftovers to prime their canvases.

Records suggest that Danish house painters sometimes created glossy, decorative paint by adding beer, says Cecil Krarup Andersen, a conservator at the Royal Danish Academy in Copenhagen. But yeast and cereal grains have never been found in primer.
Andersen had been studying paintings from the Danish Golden Age, an explosion of artistic creativity in the first half of the 19th century, at the National Gallery of Denmark. Understanding these paintings’ chemical compositions is key to preserving them, she says. As part of this work, she and colleagues looked at 10 pieces by Christoffer Wilhelm Eckersberg, considered the father of Danish painting, and his protégé Christen Schiellerup Købke.

Canvas trimmings from an earlier conservation effort allowed for an in-depth analysis that wouldn’t have otherwise been possible, since the analysis destroys the samples. In seven paintings, Saccharomyces cerevisiae proteins turned up, as well as various combinations of wheat, barley, buckwheat and rye proteins. All these proteins are involved in beer fermentation (SN: 9/19/17).

Tests of an experimental primer that the researchers whipped up using residual yeast from modern beer brewing showed that the mixture held together and provided a stable painting surface — a primary purpose of a primer. And this concoction worked much better than one made with beer.

Beer was the most common drink in 1800s Denmark, and it was akin to liquid gold. Water had to be treated before it was safe to drink, and the brewing process took care of that. As a result, plenty of residual yeast would have been available for artists to purchase, the researchers say.

If the beer by-product is found in paintings by other artists, Andersen says, that information can help conservators better preserve the works and better understand the artists’ lives and craftsmanship. “It’s another piece of the puzzle.”

With tools from Silicon Valley, Quinton Smith builds lab-made organs

While volunteering at the University of New Mexico’s Children’s Hospital in Albuquerque, Quinton Smith quickly realized that he could never be a physician.

Then an undergrad at the university, Smith was too sad seeing sick kids all the time. But, he thought, “maybe I can help them with science.”

Smith had picked his major, chemical engineering, because he saw it as “a cooler way to go premed.” Though he ultimately landed in the lab instead of at the bedside, he has remained passionate about finding ways to cure what ails people.

Today, his lab at the University of California, Irvine uses tools often employed in fabricating tiny electronics to craft miniature, lab-grown organs that mimic their real-life counterparts. “Most of the time, when we study cells, we study them in a petri dish,” Smith says. “But that’s not their native form.” Prodding cells to assemble into these 3-D structures, called organoids, can give researchers a new way to study diseases and test potential treatments.

By combining Silicon Valley tech and stem cell biology, scientists are now “making tissues that look and react and function like human tissues,” Smith says. “And that hasn’t been done before.”

The power of stem cells
Smith’s work began in two dimensions. During his undergraduate studies, he spent two summers in the lab of biomedical engineer Sharon Gerecht, then at Johns Hopkins University. His project aimed to develop a device that could control oxygen and fluid flow inside minuscule chambers on silicon wafers, with the goal of mimicking the environment in which a blood vessel forms. It was there that Smith came to respect human induced pluripotent stem cells.

These stem cells are formed from body cells that are reprogrammed to an early, embryonic stage that can give rise to any cell type. “It just blew my mind that you can take these cells and turn them into anything,” Smith says.

Smith ultimately returned to Gerecht’s lab for his Ph.D., exploring how physical and chemical cues can push these stem cells toward becoming blood vessels. Using a technique called micropatterning — where researchers stamp proteins on glass slides to help cells attach — he spurred cells to organize into the beginnings of artificial blood vessels. Depending on the pattern, the cells formed 2-D stars, circles or triangles, showing how cells come together to form such tubular structures.
While a postdoc at MIT, he transitioned to 3-D, with a focus on liver organoids.

Like branching blood vessels, a network of bile ducts carries bile acid throughout the liver. This fluid helps the body digest and absorb fat. But artificial liver tissue doesn’t always re-create ducts that branch the way they do in the body. Cells growing in the lab “need a little bit of help,” Smith says.
To get around this problem, Smith and his team pour a stiff gel around minuscule acupuncture needles to create channels. After the gel solidifies, the researchers seed stem cells inside and douse the cells in chemical cues to coax them to form ducts. “We can create on-demand bile ducts using an engineering approach,” he says.

This approach to making liver organoids is possible because Smith speaks the language of biology and the language of engineering, says biomedical engineer Sangeeta Bhatia, a Howard Hughes Medical Institute investigator at MIT and Smith’s postdoc mentor. He can call on his cell biology knowledge and leverage engineering techniques to study how specific cell types are organized to work together in the body.

For example, Smith’s lab now uses 3-D printing to ensure liver tissues grown in the lab, including blood vessels and bile ducts, organize in the right way. Such engineering techniques could help researchers study and pinpoint the root causes behind some liver diseases, such as fatty liver disease, Smith says. Comparing organoids grown from cells from healthy people with those grown from cells from patients with liver disease — including Hispanic people, who are disproportionately affected — may point to a mechanism.

Looking beyond the liver
But Smith isn’t restricting himself to the liver. He and his trainees are branching out to explore other tissues and diseases as well.

One of those pursuits is preeclampsia, a disease that affects pregnant women, and disproportionately African American women. Women with preeclampsia develop dangerously high blood pressure because the inflamed placenta constricts the mother’s blood vessels. Smith plans to examine lab-grown placentas to determine how environmental factors such as physical forces and chemical cues from the organ impact attached maternal blood vessels.

“We’re really excited about this work,” Smith says. It’s only recently that scientists have tricked stem cells into entering an earlier stage of development that can form placentas. These lab-grown placentas even produce human chorionic gonadotropin, the hormone responsible for positive pregnancy tests.

Yet another win for the power of stem cells.

New CRISPR gene editors can fix RNA and DNA one typo at a time

New gene-editing tools can correct typos that account for about half of disease-causing genetic spelling errors.

Researchers have revamped the CRISPR/Cas9 gene editor so that it converts the DNA base adenine to guanine, biological chemist David Liu and colleagues report October 25 in Nature. In a separate study, published October 25 in Science, other researchers led by CRISPR pioneer Feng Zhang re-engineered a gene editor called CRISPR/Cas13 to correct the same typos in RNA instead of DNA.
Together with other versions of CRISPR/Cas9, the new editors offer scientists an expanded set of precision tools for correcting diseases.

CRISPR/Cas9 is a molecular scissors that snips DNA. Scientists can guide the scissors to the place they want to cut in an organism’s genetic instruction book with a guide RNA that matches DNA at the target site. The tool has been used to make mutations or correct them in animals and in human cells, including human embryos (SN: 10/14/17, p. 8).

A variety of innovations allow CRISPR/Cas9 to change genetic instructions without cutting DNA (SN: 9/3/16, p. 22). Earlier versions of these “base editors,” which target typos related to the other half of disease-causing genetic spelling errors, have already been used to alter genes in plants, fish, mice and even human embryos.
Such noncutting gene editors are possibly safer than traditional DNA-cutting versions, says Gene Yeo, an RNA biologist at the University of California, San Diego. “We know there are drawbacks to cutting DNA,” he says. Mistakes often arise when cellular machinery attempts to repair DNA breaks. And although generally accurate, CRISPR sometimes cuts DNA at places similar to the target site, raising the possibility of introducing new mutations elsewhere. Such “permanent irreversible edits at the wrong place in the DNA could be bad,” Yeo says. “These two papers have different ways to solve that problem.”
The new editors allow researchers to rewrite all four bases that store information in DNA and RNA. Those four bases are adenine (A), which pairs with thymine (T) in DNA or uracil (U) in RNA, and guanine (G), which pairs with cytosine (C). Mutations that change C-G base pairs to T-A pairs happen 100 to 500 times every day in human cells. Most of those mutations are probably benign, but some may alter a protein’s structure and function, or interfere with gene activity, leading to disease. About half of the 32,000 mutations associated with human genetic diseases are this type of C-G to T-A change, says Liu, a Howard Hughes Medical Institute investigator at Harvard University. Until now, there was little anyone could do about it, he says.

In RNA, DNA’s chemical cousin, some naturally occurring enzymes can reverse this common mutation. Such enzymes chemically convert adenine to inosine (I), which the cell interprets as G. Such RNA editing happens frequently in octopuses and other cephalopods and sometimes in humans (SN: 4/29/17, p. 6).

Zhang, of the Broad Institute of MIT and Harvard, and colleagues made an RNA-editing enzyme called ADAR2 into a programmable gene-editing tool. The team started with CRISPR/Cas13, molecular scissors that normally cut RNA. Dulling the blades let the tool grasp instead of slice. Zhang and colleagues then bolted the A-to-I converting portion of ADAR2 onto CRISPR/Cas13. Dubbed REPAIR, the conglomerate tool edited from 13 percent to about 27 percent of RNAs of two genes in human cells grown in dishes. The researchers did not detect any undesired changes.

Editing RNA is good for temporary fixes, such as shutting down inflammation-promoting proteins. But fixing many mutations requires permanent repairs to DNA, Liu says.

In 2016, Liu’s team made a base editor that converts C to T. Chinese researchers reported in Protein & Cell on September 23 that they used the old base editor in human embryos to repair a mutation that causes the blood disorder beta-thalassemia. But that editor couldn’t make the opposite change, switching A to G.

Unlike with RNA, no enzymes naturally make the A-to-I conversion in DNA. So Nicole Gaudelli in Liu’s lab forced E. coli bacteria to evolve one. Then the researchers bolted the E. coli DNA converter, TadA, to a “dead” version of Cas9, disabled so it couldn’t cut both strands of DNA. The result was a base editor, called ABE, that could switch A-T base pairs into G-C pairs in about 50 percent of human cells tested.
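
At the sequence level, the effect of such an editor is easy to picture: near the stretch matched by the guide RNA, adenines inside a short editing window are rewritten as guanines, and the rest of the DNA is left uncut. The toy sketch below captures only that idea; the guide, target and window positions are illustrative, not ABE’s real parameters or a real gene.

```python
# Toy model of A-to-G base editing at the sequence level: locate the site
# matched by the guide, then rewrite any A inside a small editing window as G,
# leaving the rest of the sequence uncut. The guide, target and window are
# illustrative, not ABE's real parameters or a real genomic sequence.
def base_edit_a_to_g(dna, guide, window=(4, 8)):
    """Return dna with A->G substitutions inside the editing window of the
    site matched by `guide` (window positions are offsets into the match)."""
    start = dna.find(guide)
    if start == -1:
        return dna  # no matching site: nothing gets edited
    lo, hi = start + window[0], start + window[1]
    return dna[:lo] + dna[lo:hi].replace("A", "G") + dna[hi:]

site = "TTGACCATAGCAATTCGGA"
guide = "ACCATAGCAATTCGGA"
print(base_edit_a_to_g(site, guide))  # one A in the window becomes G
```

In a cell, the thymine opposite each edited adenine is then replaced with a cytosine during repair or replication, completing the switch from an A-T pair to a G-C pair.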

This base editor works more like a pencil than scissors, Liu says. In lab dishes, Liu’s team corrected a mutation in human cells from a patient with an iron-storage blood disorder called hereditary hemochromatosis. The team also re-created beneficial mutations that allow blood cells to keep making fetal hemoglobin. Those mutations are known to protect against sickle cell anemia.

Another group reported in the October Protein & Cell that base editing appears to be safer than traditional cut-and-paste CRISPR/Cas9 editing. Liu’s results seem to support that. His team found that cut-and-paste CRISPR/Cas9 made changes at nine of 12 possible “off-target” sites, about 14 percent of the time. The new A-to-G base editor altered just four of the 12 off-target sites and only 1.3 percent of the time.

That’s not to say cut-and-paste editing isn’t useful, Liu says. “Sometimes, if your task is to cut something, you’re not going to do that with a pencil. You need scissors.”