Source: Slate by Bryan Walsh
New biotechnology tools enable scientists to create more dangerous viruses for research. What happens if they get out?
The United States Army Medical Research Institute of Infectious Diseases at Maryland’s Fort Detrick works with some of the most dangerous pathogens in the world, including the Ebola virus and smallpox. So it was concerning, to say the least, when news broke earlier this month that the Centers for Disease Control and Prevention had temporarily shut down research at USAMRIID involving “select agents” like Ebola. The CDC found that the lab had failed to periodically recertify personnel working in biocontainment, and that its wastewater decontamination unit had problems, including leaks—although, as a USAMRIID spokesperson helpfully told the New York Times, at least none of the leaks were outside the lab.
USAMRIID was just the latest pathogen research lab to run into serious safety problems. In 2014, USA Today reported that at the CDC itself, as many as 75 workers might have been exposed to live anthrax bacteria after potentially infectious samples were sent to labs that lacked the safety equipment to handle them. According to government statistics, there were more than 700 incidents of the loss or release of “select agents and toxins” from U.S. labs between 2004 and 2010, and in 11 instances lab workers contracted bacterial or fungal infections.
The occasional infection and even death among lab technicians is an occupational hazard of working with virulent pathogens, and it can happen even at laboratories that take the highest precautions. But as I learned when reporting my book End Times: A Brief Guide to the End of the World, that risk rises sharply when labs work with pathogens that are potentially more dangerous than anything found in nature—because scientists made them that way.
“Do you want to risk that really, really, really low-probability but terrible event?” – Marc Lipsitch, Harvard epidemiologist
In 2010 and 2011, the labs of Yoshihiro Kawaoka at the University of Wisconsin–Madison and Ron Fouchier of Erasmus Medical Center in the Netherlands separately announced that they had succeeded in making the deadly H5N1 avian flu virus more transmissible through genetic engineering. Since it first spilled over from poultry to human beings in Hong Kong in 1997, H5N1 has infected and killed hundreds of people in sporadic outbreaks, mostly in Asia. The virus has a roughly 60 percent fatality rate among confirmed cases, but fortunately, H5N1 almost never spreads from person to person. Nearly every infection is due to close contact with infected poultry.
Flu experts, though, worried that H5N1 might mutate and gain the ability to transmit easily from person to person, triggering what could be a disastrous pandemic. Until recently, there was little scientists could do but wait and see what nature might cook up. But biotechnology offered a new strategy, through what is called “gain of function” research. In his lab, Kawaoka introduced mutations in the hemagglutinin gene of an H5N1 virus—the H in H5N1—and combined it with seven genes from the highly transmissible but not very deadly 2009 H1N1 flu virus. Fouchier and his team took an existing H5N1 virus collected in Indonesia and used reverse genetics to introduce mutations that previous research had shown made H5N1 strains more effective in infecting human beings.
In both cases, the modified H5N1 flu viruses were able to spread between ferrets in the lab, a strong sign they could also pass between humans, which indicated that H5N1 did indeed have pandemic potential with the right set of mutations. But in performing the experiments, Kawaoka and Fouchier engineered altered influenza viruses that potentially possessed the worst of both worlds: the virulence of avian flu and the transmissibility of human flu. In the aftermath of their work, the National Science Advisory Board for Biosecurity, for the first time ever, asked scientific journals to hold back on publishing the full details of an experiment, lest potential terrorists use the information as a blueprint for a bioweapon. After some revisions, the two papers were eventually published in Science and Nature, respectively, but the scientific community more broadly was split between those who held the laissez-faire view that scientific information should always be open and those who feared it could be misused. In 2014, the U.S. Department of Health and Human Services put a moratorium on funding such gain-of-function research, while regulators tried to sort out the situation.
Harvard epidemiologist Marc Lipsitch told me that the experiments should never have been conducted. “Is the science so compelling and so important to do that it justifies this kind of risk?” he said. “The answer is no.” Kawaoka and Fouchier—and other respected scientists—obviously disagreed.
But Lipsitch and Tom Inglesby of the Johns Hopkins Center for Health Security pushed further, using lab safety as a focal point. They collaborated on a study in 2014 estimating the chances that a hybrid flu could accidentally infect a lab worker and, from there, spread to the rest of the world. Based on past biosafety statistics, they found that each year of working with the hybrid flu carried a 0.01 percent to 0.1 percent chance of triggering a pandemic. While it’s impossible to know what the fatality rate would be in a hybrid flu pandemic, let’s assume that the modified H5N1, like its wild cousin, would kill 3 out of every 5 people sickened. If about one-third of the global population were infected by the new, far more transmissible virus—not unreasonable, since no one would have immunity—the result could be a death toll as high as 1.4 billion people.
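For readers who want to check that figure, here is a rough back-of-the-envelope sketch in Python. The population figure (about 7 billion), the one-third attack rate, and the 60 percent fatality rate are the assumptions laid out above, not outputs of Lipsitch and Inglesby’s model; only the 0.01 percent to 0.1 percent annual pandemic risk comes from their study.

```python
# Back-of-the-envelope sketch of the worst-case scenario described above.
# Assumptions (from the article's hypothetical, not from the study itself):
# roughly 7 billion people, a one-third attack rate, and a 60 percent
# case fatality rate like wild H5N1.

world_population = 7_000_000_000
attack_rate = 1 / 3    # share of the world infected by a highly transmissible strain
fatality_rate = 0.6    # 3 out of every 5 people sickened die

infected = world_population * attack_rate
deaths = infected * fatality_rate
print(f"Deaths in this scenario: {deaths / 1e9:.1f} billion")  # ~1.4 billion

# The study's estimate that a single year of work with the hybrid flu
# carries a 0.01 to 0.1 percent chance of triggering a pandemic.
annual_pandemic_risk = (0.0001, 0.001)
expected_deaths = [p * deaths for p in annual_pandemic_risk]
print(f"Expected deaths per lab-year: "
      f"{expected_deaths[0]:,.0f} to {expected_deaths[1]:,.0f}")
```

Multiplying the tiny probability by the enormous consequence is the point: even at the low end, each year of such work carries an expected toll in the six figures under these assumptions, which is the “really, really, really low-probability but terrible event” Lipsitch is weighing.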
“There are really big risks,” said Lipsitch. “And do you want to risk that really, really, really low-probability but terrible event?”
Apparently so. In 2017, the National Institutes of Health lifted the moratorium on gain-of-function research, while putting in place new regulations around the work and restricting it to a handful of labs with the highest levels of biocontainment. And so in early 2019, both Kawaoka and Fouchier were given the go-ahead to resume their gain-of-function research on flu after a government review—though that was only revealed to the public thanks to an investigation by Science. “We are glad the United States government weighed the risks and benefits … and developed new oversight mechanisms,” Kawaoka told Science. “We know that it does carry risks. We also believe it is important work to protect human health.”
Both Kawaoka and Fouchier are highly respected scientists with years of experience working with deadly pathogens, including H5N1. The Department of Health and Human Services earlier this year put additional protocols in place for research around such “enhanced potential pandemic pathogens,” which will include a specific review around biosafety and security. Yet as the safety problems at USAMRIID and other highly secure labs demonstrate, errors do happen, even with the surest of hands. And because the viruses being studied have been purposefully enhanced, the ramifications of any error would be that much graver. Nor is there any guarantee that every researcher who takes up such work in the future will have the same track record.
New biotechnology tools like CRISPR are allowing scientists to program the code of life and, in doing so, achieve miracles. But too little attention is being paid to the possibility that along the way they will inadvertently create low-probability but high-consequence existential risks. All it takes is one mistake.