In the rush toward tech tools to manage COVID-19, societal and ethical risks must be center stage



We are in a defining moment. The coronavirus pandemic has now affected three million people globally, and the world is desperately seeking ways to manage its toll on society. The speed and depth of the pandemic are forcing us to adopt drastic crisis management strategies. Data-driven technologies, artificial intelligence (AI), and health tech applications are incredibly promising, especially when they are cross-fertilized. But low maturity and insufficient understanding of the ethical and societal impacts of these technologies pose risks to democracy and the right to privacy. We need to better understand the dangers of rushing toward these tech solutions without fully considering the societal and ethical implications.

Many are scrambling to find solutions and adequate responses that can save lives and ease suffering, track the spread of the virus, and find a way forward. While it is tempting to rush toward quick tech solutions, we need to think about the long-term threats and implications of the choices we make. We lack the tools to detect, measure, and govern how these tech solutions for COVID-19 are scaling in broader societal and ethical contexts. And we can’t lose sight of potential threats to democracy and the right to privacy in deploying AI surveillance tools to fight the pandemic. Citizens need transparency in how their personal data is collected and used, and assurance that tech solutions which take a more privacy-intrusive surveillance approach to tracking the disease are not normalized in post-crisis times.

Even before the emergence of the novel coronavirus that causes COVID-19, the field of digital health was a highly fragmented ecosystem. Multiple technologies demonstrate incredible promise and potential in the field of health. Smartphones can provide information via apps that help you learn about or track your own health data. Mobile location data can provide valuable information about how a disease spreads, and location information and social media can be used for contact tracing. AI can help identify drugs that could cure a disease, predict its spread, improve the accuracy of diagnosis, or analyze genetic data at scale. Telemedicine enables doctor-patient consultations anywhere in the world. Blockchain (a growing list of records, called blocks, that are linked using cryptography) can help us keep track of medical records, supply chains, and payments. Along with these technologies’ promise, however, comes the allure of data as the new gold that everyone wants to monetize. In digital health, for example, insurance companies are using data-driven technologies and AI without sufficiently considering and understanding the ethical consequences. Furthermore, the tech giants are set up to maximize their profits, and governments are primed to act boldly and fast.

The incentives to pursue these solutions clash with public skepticism and concerns about privacy protections. Four out of five Americans are worried that the pandemic will encourage government surveillance, according to a just-released survey from CyberNews. The survey also revealed 79 percent of Americans were either “worried” or “very worried” that any intrusive tracking measures enacted by the government would extend long after the coronavirus is defeated. Only 27 percent of those surveyed would give an app permission to track their location, and 65 percent said they would disapprove of the government collecting their data or using facial recognition to track their whereabouts.

Lack of governance and transparency will surely lead to an erosion of trust. Companies’ rush to develop technologies to track coronavirus infections is outpacing citizens’ willingness to use them. About half of Americans with smartphones say they’re probably or definitely unwilling to download apps being developed by Google and Apple to alert them when they have come into contact with someone who is infected, according to a Washington Post-University of Maryland poll. That’s primarily because they don’t trust the tech companies to treat their data securely and privately.

We need to find ways to balance smart solutions against the pull of a surveillance economy. We must consider, through an ethical and societal lens, who is benefiting – it may not always be the patient, the nurse, or the doctor. Being thoughtful about the potential ramifications is especially urgent when there is little to no supporting policy or regulatory framework. We need to be careful not to act impulsively and regret it later.

There are ways to approach this ethical dilemma responsibly. For example, researchers at Lund University in Sweden have launched an app (originally developed by doctors in the UK) to help map the spread of infection and increase knowledge of the coronavirus. Called the COVID Symptom Tracker, it makes it possible for the public to report symptoms and thereby provide insights into the national health status. The free app is voluntary, does not collect personal data, and the user’s location is based only on the first two digits of the postal code, to protect the user’s identity. No GPS data is collected, and the app does not in any way attempt to trace the user’s movements. Further, it is used for research, not commercial purposes.
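The data-minimization design described above can be sketched in a few lines. This is a hypothetical illustration, not the app's actual code: a symptom report keeps only the first two digits of the postal code, so location can never be resolved below a coarse region.

```python
# Hypothetical sketch of the data-minimization approach described above
# (not the COVID Symptom Tracker's actual implementation): store only a
# coarse region code, never the full postal code or any GPS coordinate.

def minimize_report(symptoms, postal_code):
    """Return a report containing symptoms and a coarse region code only."""
    region = postal_code.replace(" ", "")[:2]  # keep first two digits only
    return {"symptoms": symptoms, "region": region}

report = minimize_report(["fever", "cough"], "223 62")
print(report)  # {'symptoms': ['fever', 'cough'], 'region': '22'}
```

The design choice here is to discard precision at the point of collection, so even a breach of the research database could not reveal where an individual user lives.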

Another example is the Swedish telecom operator Telia Company, which provides mobility and data insights to cities, with anonymization features designed to protect citizen privacy. The solution can track where the disease is moving, but it is not privacy-intrusive: the data is anonymized and aggregated and does not identify individuals.
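Anonymization by aggregation, as in the example above, typically means publishing only group-level counts and suppressing any group too small to hide an individual. The sketch below is a minimal illustration of that idea under assumed parameters; it is not Telia's actual pipeline, and the threshold of five is a placeholder.

```python
from collections import Counter

# Minimal sketch of privacy-preserving aggregation (hypothetical, not
# Telia's actual system): mobility records are counted per region, and any
# count below a minimum group size is suppressed before release, so no
# individual's movement can be singled out from the published figures.

K_THRESHOLD = 5  # assumed minimum group size for a count to be released

def aggregate_mobility(records, k=K_THRESHOLD):
    """records: iterable of coarse region codes, one per anonymous trip."""
    counts = Counter(records)
    # Drop small cells: releasing a count of 1 or 2 risks re-identification.
    return {region: n for region, n in counts.items() if n >= k}

trips = ["22"] * 12 + ["11"] * 7 + ["33"] * 2  # "33" is too rare to publish
print(aggregate_mobility(trips))  # {'22': 12, '11': 7}
```

Suppressing small cells is the standard trade-off in this kind of release: cities lose a little coverage in sparsely travelled regions in exchange for a guarantee that no published number describes fewer than k people.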

So, what is the best way to use tech to fight COVID-19? There is no panacea, but these recommendations can be helpful in addressing this dilemma going forward.

  • Given the obvious risks – privacy intrusion, bias, and discrimination – companies and other developers should take active measures to protect and preserve privacy, and should use and manage these tools wisely.
  • Companies should be transparent and publicly state how they are – and aren’t – using the data they collect as part of their pandemic response. A higher level of transparency is a growing expectation among employees and consumers alike: a recent digital advertising trends survey by Choozle.com found that 89 percent of consumers wish companies would take additional steps to protect their data.
  • Governments should act swiftly to make these technologies available, but ensure appropriate frameworks and compliance tools are in place to prevent misuse or overuse of data.
  • Companies should explore methods and tools that can help identify and characterize data-driven risks. AISC and MetricStream have launched an AI Sustainability risk-scanning self-assessment tool which does just this.

For more information about AISC and MetricStream’s partnership, and how we jointly offer tools to detect data-driven risks, visit our website.

About the author:

Elaine Grunewald is an expert in the technology sector and the effects of digitalization, as well as the global sustainability and development arena, where she has held leading positions and roles, including Chief Sustainability & Public Affairs Officer at Ericsson. Today she is also a board member of SWECO AB and the Whitaker Peace and Development Initiative. Elaine has worked with digital health initiatives for over ten years, from implementation projects in Africa exploring the most basic use of mobile phones by Community Health Workers, to collecting health data in impoverished rural villages, to using cell phone data to track the spread of Ebola in West Africa, to more recent industry and policy initiatives such as the Broadband Commission for Sustainable Development and the Digital Health Initiative.

Anna Felländer is one of Sweden’s leading experts on the effects of digitalization on organizations, society, and the economy. She recently served as Chief Economist and Digital Economist at Swedbank and has spent 10 years working for the Swedish government. She has been affiliated with the Royal Institute of Technology and has held advisory roles in government, the digital start-up scene, and large organizations, focusing on artificial intelligence and ethics. Anna has served in the Swedish Ministry of Finance and the Prime Minister’s Office in the Crisis Management Coordination Secretariat during several global and national crises, and has been an advisor to the Minister of Digitalization in Sweden.

Both Anna and Elaine have deep knowledge and experience from industry, academia, and policy on the impact of digitalization on society. They are the founders of the AI Sustainability Center. Their full bios are available online at www.aisustainability.org

See the AISC Risk Scanning Offering
See the AISC Risk Scanning demo video
Try the AISC Mini Risk scanning survey