
Pooling multiple models during the COVID-19 pandemic provided more reliable projections about an uncertain future


Emily Howerton, Penn State; Cecile Viboud, National Institutes of Health, and Justin Lessler, University of North Carolina at Chapel Hill

How can anyone decide on the best course of action in a world full of unknowns?

There are few better examples of this challenge than the COVID-19 pandemic, when officials fervently compared potential outcomes as they weighed options like whether to implement lockdowns or require masks in schools. The main tools they used to compare these futures were epidemic models.

But often, models included numerous unstated assumptions and considered only one scenario – for instance, that lockdowns would continue. Chosen scenarios were rarely consistent across models. All this variability made it difficult to compare models, because it was unclear whether the differences between them were due to different starting assumptions or to genuine scientific disagreement.

In response, we came together with colleagues to found the U.S. COVID-19 Scenario Modeling Hub in December 2020. We provide real-time, long-term projections in the U.S. for use by federal agencies such as the Centers for Disease Control and Prevention, local health authorities and the public. We work directly with public health officials to identify which possible futures, or scenarios, would be most helpful to consider as they set policy, and we convene multiple independent modeling teams to make projections of public health outcomes for each scenario. Crucially, having multiple teams address the same question allows us to better envision what could possibly happen in the future.

Since its inception, the Scenario Modeling Hub has generated 17 rounds of projections of COVID-19 cases, hospitalizations and deaths in the U.S. across varying stages of the pandemic. In a recent study published in the journal Nature Communications, we looked back at all these projections and evaluated how well they matched the reality that unfolded. This work provided insights about when and what kinds of model projections are most trustworthy – and, most importantly, supported our strategy of combining multiple models into one ensemble.

Collecting projections from multiple independent models provides a fuller picture of possible futures, as in this graph of potential hospitalizations, and allows researchers to generate an ensemble. COVID-19 Scenario Modeling Hub, CC BY-ND

Multiple models are better than just one

A founding principle of our Scenario Modeling Hub is that multiple models are more reliable than one.

From tomorrow’s temperature on your weather app to predictions of interest rates in the next few months, you likely use the combined results of multiple models all the time. Especially in times like the COVID-19 pandemic when uncertainty abounds, combining projections from multiple models into an ensemble provides a fuller picture of what could happen in the future. Ensembles have become ubiquitous in many fields, primarily because they work.
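
As an illustration of the idea, here is a minimal sketch of one simple way projections could be combined, assuming each team reports the same quantiles of projected weekly hospitalizations. The model names and numbers are invented, and averaging quantiles is just one option; it is not necessarily how the Scenario Modeling Hub builds its ensemble.

```python
# Minimal sketch: combine several models' quantile projections by averaging
# each quantile across models. Model names and values are illustrative only.
import numpy as np

# Each (hypothetical) model reports the same quantiles of projected weekly hospitalizations.
quantile_levels = [0.05, 0.25, 0.5, 0.75, 0.95]
model_projections = {
    "model_a": [1200, 1800, 2400, 3100, 4000],
    "model_b": [900, 1500, 2100, 2900, 4500],
    "model_c": [1400, 2000, 2600, 3300, 4200],
}

# Average each quantile across models so the ensemble reflects every team's
# view of both the central trend and the surrounding uncertainty.
stacked = np.array(list(model_projections.values()))
ensemble = stacked.mean(axis=0)

for level, value in zip(quantile_levels, ensemble):
    print(f"quantile {level:.2f}: {value:,.0f} hospitalizations")
```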

Our analysis of this approach with COVID-19 models resoundingly showed the strong performance of the Scenario Modeling Hub ensemble. Not only did the ensemble give us more accurate predictions of what could happen overall, it was also substantially more consistent than any individual model throughout the different stages of the pandemic. When one model failed, another performed well, and by taking into account results from all of these varying models, the ensemble emerged as more accurate and more reliable.

Researchers have previously shown performance benefits of ensembles for short-term forecasts of influenza, dengue and SARS-CoV-2. But our recent study is among the first to test this effect for long-term projections of alternative scenarios.

A ‘hub’ makes multimodel projections possible

While scientists know combining multiple models into an ensemble improves predictions, it can be tricky to put an ensemble together. For example, in order for an ensemble to be meaningful, model outputs and key assumptions need to be standardized. If one model assumes a new COVID-19 variant will gain steam and another model does not, they will come up with vastly different results. Likewise, a model that projects cases and one that projects hospitalizations would not provide comparable results.

Meeting frequently helps multiple modeling teams stay on the same page. Matteo Chinazzi, CC BY-ND

Many of these challenges are overcome by convening as a “hub.” Our modeling teams meet weekly to make sure we’re all on the same page about the scenarios we model. This way, any differences in what individual models project are the result of things researchers truly do not know. Retaining this scientific disagreement is essential; the success of the Scenario Modeling Hub ensemble arises because each modeling team takes a different approach.

At our hub we work together to design our scenarios strategically and in close collaboration with public health officials. By projecting outcomes under specific scenarios, we can estimate the impact of particular interventions, like vaccination.

For example, a scenario with higher vaccine uptake can be compared with a scenario with current vaccination rates to understand how many lives could potentially be saved. Our projections have informed recommendations of COVID-19 vaccines for children and bivalent boosters for all age groups, both in 2022 and 2023.
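
As a toy illustration of that kind of comparison, the sketch below subtracts projected deaths under a hypothetical higher-uptake scenario from those under the status quo. The numbers are placeholders invented for the example, not actual hub projections.

```python
# Minimal sketch: estimate outcomes averted by comparing two scenarios.
# All figures are hypothetical placeholders, not real projections.
projected_deaths = {
    "current_vaccine_uptake": 45_000,
    "higher_vaccine_uptake": 32_000,
}

deaths_averted = (projected_deaths["current_vaccine_uptake"]
                  - projected_deaths["higher_vaccine_uptake"])
print(f"Estimated deaths averted under higher uptake: {deaths_averted:,}")
```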

In other cases, we design scenarios to explore the effects of important unknowns, such as the impact of a new variant – known or hypothetical. These types of scenarios can help individuals and institutions know what they might be up against in the future and plan accordingly.

Although the hub process requires substantial time and resources, our results showed that the effort has clear payoffs: The information we generate together is more reliable than the information we could generate alone.

What models suggest are likely futures can inform real-world decisions, such as when to run a vaccine clinic. Eric Lee for The Washington Post via Getty Images

Past reliability, confidence for the future

Because Scenario Modeling Hub projections can inform real public health decisions, it is essential that we provide the best possible information. Holding ourselves accountable in retrospective evaluations not only allows us to identify places where the models and the scenarios can be improved, but also helps us build trust with the people who rely on our projections.

Our hub has expanded to produce scenario projections for influenza, and we are introducing projections of respiratory syncytial virus, or RSV. And encouragingly, other groups abroad, particularly in the EU, are replicating our setup.

Scientists around the world can take the hub-based approach that improved reliability during the COVID-19 pandemic and use it to support a comprehensive public health response to important pathogen threats.

Emily Howerton, Postdoctoral Scholar in Biology, Penn State; Cecile Viboud, Senior Research Scientist, National Institutes of Health, and Justin Lessler, Professor of Epidemiology, University of North Carolina at Chapel Hill

This article is republished from The Conversation under a Creative Commons license. Read the original article.
