"Rebecca is volunteer in the experimental covid-19 vaccine trial in the US" by avlxyz is licensed under CC BY-NC 2.0
Afton Burrell is a senior majoring in biology and minoring in public health science, and a 2020-21 health care ethics intern at the Markkula Center for Applied Ethics. Views are her own.
On February 17, 2021, health officials in the United Kingdom took an exciting, albeit controversial, step forward in tackling the COVID-19 pandemic. Following ethics committee approval, researchers at Imperial College London were given the green light to conduct the world’s first human challenge trial for COVID-19. In a matter of weeks, volunteers will be deliberately infected with SARS-CoV-2, the virus that causes COVID-19, in an effort to understand the spread of the novel coronavirus and its emerging variants.
For most people, the thought of intentionally exposing oneself to infection (particularly one we’ve worked so hard to avoid) is puzzling to say the least. For physicians bound by the aphoristic expression, “First, do no harm,” the thought is borderline nefarious. And yet, the powerful influence that human challenge trials have had on modern medicine is unrivaled. Past experiments have contributed to some of the most influential scientific breakthroughs in human history. (This is not to say that every human challenge trial was executed with the utmost ethical forethought—quite the opposite, in fact.)
In 1796, physician Edward Jenner showed that a previous cowpox infection could protect against smallpox after inoculating his gardener's eight-year-old son. This led to the world’s first vaccine and the eradication of one of the deadliest diseases on Earth. A century later, pathologist Walter Reed studied yellow fever transmission with mosquito vectors, aiding in the discovery of the first human virus and the first informed consent process used in a human trial. In 1985, physician Barry Marshall drank live H. pylori culture to establish a clear link between the bacteria and stomach ulcers. Today, human challenge trials help us understand the immunological mechanisms of influenza, malaria, and typhoid fever.
Without these continued advancements in human experimentation, scientific progress as we know it would come to a screeching halt. Whether we like it or not, the safety of a vaccine has always depended on the willingness of someone courageous enough to test it.
Navigating the ethics of human challenge trials for COVID-19 requires coming up with a tolerable level of risk amidst a sea of unknowns. The complexity of this undertaking is, in part, caused by the personal and relative nature of risk (barring more extreme examples). In other words, one person’s “dangerous” is another’s “perfectly innocuous.” This raises the question: In a world where generating knowledge oftentimes requires difficult moral trade-offs, how can we minimize the risks of involvement while maximizing the benefits we accrue from participation? Acknowledging that a perfect balance of nonmaleficence (minimizing harm) and beneficence (maximizing benefit) is not always possible in the real world, the ethical concept of “utility” is helpful in this regard. Utility tells us that when we cannot always just help others or just avoid hurting them, we ought to do what yields the best overall outcome when both factors are weighed simultaneously. Let’s see how this plays out with the UK’s human challenge trial.
In traditional vaccine clinical trials, exposure to SARS-CoV-2 is left to chance. Once participants have been randomized to either the placebo or experimental arm of a study, they are asked to go about their normal lives. It is the researchers’ hope that those receiving the test vaccine in the experimental group will fare better against SARS-CoV-2 than those in the placebo group. But until enough subjects acquire an infection, this hope remains largely unsubstantiated.
In contrast, human challenge trials do not rely on natural exposure. Instead, scientists intentionally inoculate both groups with the virus and keep them under strict observation. As a result, researchers are able to pinpoint when, how, and to what degree each participant is exposed to the virus. This allows for a more detailed analysis of disease progression as well as what really matters in the interaction between a pathogen and its host. Plus, the time saved in human challenge trials gives manufacturers a much-needed head start on production and distribution. As we are all well aware, this can be the difference between life and death.
To reduce the likelihood of severe illness, UK researchers have limited their study base to healthy volunteers aged 18-30 with no underlying health conditions. While this criterion may be sufficient to justify which population is selected, it reads more as the lesser of a few evils. In reality, the long-term effects of COVID-19 remain largely unclear. Our ability to acquire truly informed consent, wherein participants are provided with the information they need in order to make informed judgments, is hindered by this information void. Additionally, excluding high-risk populations from a study has the potential to reduce its external validity. It’s a lose-lose situation: If we choose to include vulnerable populations in our challenge trial, we risk subjecting them to unnecessary and unethical harm. On the other hand, if we choose not to include these groups, we risk perpetuating health disparities and running an experiment with no generalizable return on our risk-benefit investment.
It’s worth noting that healthy volunteers may be more vulnerable to exploitation than we might think. Altruistic motivations, while generous, have the potential to cloud one’s decision-making judgment. If people are willing to infect themselves with SARS-CoV-2 for the sake of the “greater good,” they may be more likely to see potential dangers as mere roadblocks to success. As health care professionals, do we have a duty to protect these volunteers from self-harm, beyond that of the informed consent process, or should we learn to simply embrace their generosity? At the very least, additional protections should be used to ensure that a volunteer’s autonomy is not diminished by his or her desire to make a difference.
Rather than relying solely on a risk-benefit analysis of human challenge trials, as we have done thus far, it may be helpful to broaden our scope. What kinds of information are human challenge trials uniquely able to provide for us, and what are we trying to accomplish by instituting such experiments? Dr. Matthew Memoli, the director of the Laboratory of Infectious Diseases Clinical Studies Unit at the National Institutes of Health, says it best: “We’re not trying to solve the problem that we face right now. What we’re trying to do is to be more prepared for the future.” With three FDA-authorized vaccines and more and more Americans becoming eligible to receive a shot, it’s important to remember that vaccine development is a continuous process. The most pressing issues of infectious disease prevention, many of which overlap with those of cancer research, still require our attention.
From this perspective, perhaps the advantage of human challenge trials is not in expediting vaccine development, but in permitting us to practice proactive science rather than reactive science. These trials open a window of opportunity for identifying correlates of protection and biomarkers, both of which help to explain why one person becomes infected and ill with COVID-19 while another escapes scot-free. The same goes for prospective indicators of vaccine success.
As mentioned previously, the safety of a vaccine has always depended on the willingness of someone courageous enough to test it. While the COVID-19 pandemic has taken a significant toll on people all across the world, it has also inspired record-shattering health care innovation and supererogatory behavior. Embracing human challenge trials may not be easy, but we cannot afford to overlook the enormous social benefit that comes with their implementation.