Since the start of the COVID-19 pandemic, thousands of scientists and volunteers, across dozens of teams, have been tracking in detail the interventions that governments have adopted to curb viral spread — from closing restaurants to mandating the wearing of masks. They hope to deduce which policies are most effective.
Compiling and analysing this data is a mammoth task. At a workshop last month and a public conference this week, scientists involved in 50 of these tracking databases met and discussed the future of their efforts. Peter Klimek, a mathematical physicist at the Complexity Science Hub (CSH) Vienna and the Medical University of Vienna, who is involved in the CSH’s tracking project, explains the scope of the challenge.
How much work has gone into these trackers?
In our tracker alone, more than 40 volunteers and scientists have been involved in assigning codes to more than 11,000 measures in 57 countries. There are many other trackers, and some, such as the CoronaNet consortium and the University of Oxford’s government response tracker, have hundreds of volunteers and researchers each.
Some tracking efforts have received funding, but most are struggling owing to a lack of money, and some have had to stop. It’s also a challenge to keep volunteers motivated. For some, tracking the impact of containment measures is a way to cope with the stress of the pandemic; they find a family of like-minded spirits. But when one resigns, we lose the experience they have accumulated, and delays in data availability can occur in some regions.
How have the trackers been used?
We advise the Austrian government on policy measures to contain the spread of coronavirus and avoid health-system overload. When we’re asked questions, such as why some countries have much lower case numbers than others, the first places we look are the databases tracking government interventions.
We still don’t know the best way to plug the data from the tracking systems into mathematical models. But the trackers are a unique treasure trove that we can use to make epidemiological modelling a data-driven science and to prepare for the next pandemic.
What have they told us so far?
Early in the pandemic, when many countries applied various control measures simultaneously, we knew very little about the effects of individual government interventions. As more data became available, we found that curfews, cancellations of small gatherings, and closures of schools, shops and restaurants were among the most effective policies [1].
But analyses of different trackers agree less on how to rank these measures. For example, it is not certain that highly restrictive measures are automatically more effective than a smart mix of comparatively modest restrictions with better timing of their implementation.
Why is it hard to estimate the effects of interventions?
It is difficult to untangle the effects of any given measure from those of other policy interventions. There are many statistical approaches to disentangling relations in complex systems, but none of them is perfect. To analyse the effects that different measures might have, we must also properly code each measure, which is extremely challenging. For example, sociocultural factors can make social distancing more effective in one country than in another.
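The disentangling problem can be illustrated with a minimal sketch. All data, measure names and effect sizes below are synthetic assumptions for illustration, not tracker output: regressing case growth rates on indicators of which measures were active estimates one coefficient per measure, even when measures overlap in time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panel: each row is a country-week; each column records
# whether a given measure was active (1) or not (0) that week.
measures = ["school_closure", "shop_closure", "mask_mandate"]
X = rng.integers(0, 2, size=(200, len(measures))).astype(float)

# Assumed "true" effects on the weekly growth rate of case numbers
# (negative means the measure slows spread); purely illustrative.
true_effects = np.array([-0.15, -0.10, -0.05])
growth = 0.3 + X @ true_effects + rng.normal(0, 0.05, size=200)

# Ordinary least squares: a baseline term plus one coefficient per
# measure, estimated jointly so that overlapping measures are
# (imperfectly) disentangled.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, growth, rcond=None)

for name, c in zip(measures, coef[1:]):
    print(f"{name}: estimated effect {c:+.3f}")
```

In practice the coding of each measure, confounding between measures that are always adopted together, and country-specific factors make this far harder than the toy regression suggests.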
The effects of interventions also change over time. It is dangerous to compare the first wave of the pandemic with the second or a third wave. The situation has become more complicated as government interventions have become more diverse, and as people adhere less willingly to restrictions. At the same time, the situation is becoming more urgent as new viral variants develop and spread. We need to intensify our tracking activities — even if the task is becoming more daunting.
Why not combine the trackers?
Each tracker has its own focus and perspective. Some do integrate data from different databases, including one maintained by the World Health Organization. But this comes at the expense of some of the granularity of the original databases. From the perspective of data quality and reproducibility of results, merging all the trackers into a single super-database isn’t a good idea. To be able to forecast which policy measures and strategies might work best to contain the virus, we should keep using all the trackers for as long as possible.
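A toy illustration of the granularity problem. The tracker codes below are invented for this example; harmonising two hypothetical coding schemes onto a shared coarse vocabulary silently discards the finer scheme's detail.

```python
# Two hypothetical trackers that code the "same" policy at different
# levels of detail (entry keys and codes are invented).
tracker_a = {"AT-2020-03-16": "ban_on_gatherings_under_10_people"}
tracker_b = {"AT-2020-03-16": "gathering_restrictions"}

# Merging requires mapping onto the coarser shared vocabulary ...
coarse_map = {"ban_on_gatherings_under_10_people": "gathering_restrictions"}
merged = {key: coarse_map.get(code, code) for key, code in tracker_a.items()}

# ... and the size threshold ("under 10 people") is lost for good.
print(merged)
```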
How might this sort of work change in the future?
There is growing societal and political pressure to understand hypothetical scenarios: how not having implemented a certain measure might have changed the course of the pandemic. For example, was it really necessary to close schools? Or will the social and economic costs turn out to have outweighed the health-related benefits?
It can help to compare countries that didn’t adopt a particular measure with those that did, but this is difficult. If a policy is not recorded in a tracker, that might reflect a data-quality problem, or it might mean that countries implemented the policy in a way that eludes the classification scheme that a particular tracker has adopted. Without reliable tracker data, there will be no solid evidence to answer such questions.
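The simplest version of such a comparison can be sketched as follows. The growth rates here are synthetic and purely illustrative; the point is that the naive group comparison is only as trustworthy as the tracker data used to decide which group each country belongs in.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical weekly case growth rates for two groups of countries;
# all numbers are synthetic and purely illustrative.
closed_schools = 0.10 + rng.normal(0, 0.04, size=30)
kept_open = 0.18 + rng.normal(0, 0.04, size=30)

# Naive counterfactual estimate: the difference in mean growth rates.
# A real analysis must adjust for other concurrent measures, timing
# and reporting differences, which is why complete and reliable
# tracker data are essential.
effect = closed_schools.mean() - kept_open.mean()
print(f"naive estimated effect of school closures: {effect:+.3f}")
```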