We are in a defining moment. The coronavirus pandemic has now affected nearly 50 million people worldwide, and the world is desperately seeking ways to manage its toll on society. The speed and depth of the pandemic are forcing us to adopt drastic crisis management strategies. Data-driven technologies, artificial intelligence (AI), and health tech applications are incredibly promising, especially when they are cross-fertilized. But the low maturity of these technologies, and our insufficient understanding of their ethical and societal impacts, pose risks to democracy and the right to privacy. We need to better understand the dangers of rushing toward tech solutions without fully considering their societal and ethical implications.
Many are scrambling to find solutions and adequate responses that can save lives, ease suffering, track the spread of the virus, and chart a way forward. While it is tempting to rush toward quick tech solutions, we need to think about the long-term threats and implications of the choices we make. We lack the tools to detect, measure, and govern how these tech solutions for COVID-19 scale in broader societal and ethical contexts. And we cannot lose sight of potential threats to democracy and the right to privacy in deploying AI surveillance tools to fight the pandemic. Citizens need transparency in how their personal data is collected and used, and assurance that tech solutions which take a more privacy-intrusive surveillance approach to tracking the disease are not normalized in post-crisis times.
Even before the emergence of the novel coronavirus that causes COVID-19, the field of digital health was a highly fragmented ecosystem. Multiple technologies demonstrate incredible promise and potential in the field of health. Smartphones can provide information via apps that help you learn about or track your own health data. Mobile location data can provide valuable information as to how a disease spreads, and location information and social media can be used for contact tracing. AI can help identify candidate drugs, predict the course of a disease, improve diagnostic accuracy, and analyze genetic data at big-data scale. Telemedicine enables doctor-patient consultations anywhere in the world. Blockchain (a growing list of records, called blocks, that are linked using cryptography) can help us keep track of medical records, supply chains, and payments. Along with these technologies' promise, however, comes the allure of data as the new gold that everyone wants to monetize. In digital health, for example, insurance companies are using data-driven technologies and AI without sufficiently considering or understanding the ethical consequences. Furthermore, the tech giants are set up to maximize their profits, and governments are under pressure to act boldly and fast.
The incentives to pursue these solutions clash with public skepticism and concerns about privacy protections. Four out of five Americans are worried that the pandemic will encourage government surveillance, according to a just-released survey from CyberNews. The survey also revealed that 79 percent of Americans are either "worried" or "very worried" that any intrusive tracking measures enacted by the government would extend long after the coronavirus is defeated. Only 27 percent of those surveyed would give an app permission to track their location, and 65 percent said they would disapprove of the government collecting their data or using facial recognition to track their whereabouts.
Lack of governance and transparency will surely erode trust. Companies' rush to develop technologies to track coronavirus infections is outpacing citizens' willingness to use them. About half of Americans with smartphones say they are probably or definitely unwilling to download the apps being developed by Google and Apple to alert users who have come into contact with someone who is infected, according to a Washington Post-University of Maryland poll. That is primarily because they don't trust the tech companies to treat their data securely and privately. We need to find ways to balance smart solutions against the pull of the surveillance economy. We must consider, through an ethical and societal lens, who is benefiting – it may not always be the patient, the nurse, or the doctor. Being thoughtful about the potential ramifications is especially urgent with little to no supporting policy or regulatory frameworks. We need to be careful not to act impulsively and regret it later.
There are ways to approach this ethical dilemma responsibly. For example, researchers at Lund University in Sweden have launched an app (originally developed by doctors in the UK) to help map the spread of infection and increase knowledge of the coronavirus. Called the COVID Symptom Tracker, it lets the public report symptoms and thereby provides insight into national health status. The free app is voluntary and does not collect personal data; the user's location is based only on the first two digits of the postal code to protect the user's identity. No GPS data is collected, and the app does not in any way attempt to trace the user's movements. Further, it is used for research, not commercial purposes.
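The data-minimization principle described above can be sketched in a few lines. This is a hypothetical illustration, not the app's actual code; the function and field names are invented for the example. The key idea is that only a coarse region code (the first two postal-code digits) is ever stored.

```python
def minimize_report(symptoms, postal_code):
    """Build a symptom report that keeps only coarse location data.

    Hypothetical sketch of the minimization approach: no GPS, no
    personal identifiers, and only the first two postal-code digits,
    which is enough for regional statistics but far too coarse to
    identify an individual household.
    """
    return {
        "symptoms": list(symptoms),
        "region": postal_code.strip()[:2],  # e.g. "22362" -> "22"
    }

report = minimize_report(["fever", "cough"], "22362")
print(report)  # {'symptoms': ['fever', 'cough'], 'region': '22'}
```

The design choice worth noting is that the truncation happens at collection time, so the fine-grained location never enters the dataset at all.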
Another example is the Swedish telecom operator Telia Company, which provides mobility and data insights to cities, with anonymization features designed to protect citizen privacy. The solution can track where the disease is moving, but it is not privacy intrusive: the data is anonymized and aggregated and does not identify individuals.
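Telia's actual pipeline is not public, but the general anonymize-and-aggregate pattern it describes can be sketched as follows: individual trips are reduced to region-to-region counts, and any flow observed fewer than k times is suppressed so small groups cannot be singled out. Everything here (the function, the threshold, the sample data) is illustrative.

```python
from collections import Counter

def aggregate_movements(trips, k=10):
    """Aggregate individual region-to-region trips into flow counts.

    Flows seen fewer than k times are suppressed (a simple
    k-anonymity-style threshold), so the output describes crowd
    movement without exposing rare, potentially identifying trips.
    """
    counts = Counter((origin, dest) for origin, dest in trips)
    return {flow: n for flow, n in counts.items() if n >= k}

# 12 trips from region A to B, but only 3 from A to C:
trips = [("A", "B")] * 12 + [("A", "C")] * 3
print(aggregate_movements(trips))  # {('A', 'B'): 12} -- the rare A->C flow is dropped
```

The aggregation step is what makes the result useful for epidemiology while remaining uninteresting for surveillance: it answers "how many people moved between regions," never "where did this person go."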
So, what is the best way to use tech to fight COVID-19? There is no panacea, but these recommendations can be helpful in addressing this dilemma going forward.
Companies should explore methods and tools that can help identify and characterize data-driven risks. AISC and MetricStream have launched an AI Sustainability risk scanning self-assessment tool that does just this. For more information about AISC and MetricStream's partnership, and how we jointly offer tools to detect data-driven risks, visit our website.