‘Tech for Good’ Needs a ‘Good Tech’ Approach

Good tech prioritizes processes before outcomes

Published: Tuesday, April 5, 2022 - 12:03

Technology has always been a double-edged sword. While it’s been a major force for progress, it has also been abused and caused harm. From water power to Fordism, history shows that technology is neither good nor bad by itself. It can, of course, be both, depending on how it’s used.

Telecommunications, specifically the internet, and more recently AI, which is estimated to contribute more than 11 trillion euros to the global economy by 2030, are no different.

On the one hand, the internet connects us all and kept us in touch with one another during the pandemic. AI and machine learning can help solve some of the world’s most pressing problems; just a few examples are diagnosing disease, thwarting cyberattacks, and fighting climate change. Yet, if left unchecked, algorithms can also perpetuate biases, create online echo chambers, enable radicalization, and compromise safety and privacy.

This year is poised to bring sweeping changes to digital regulations. The European Parliament approved the Digital Services Act to increase online safety and consumer protection, and is preparing the Artificial Intelligence Act to govern AI. The U.S. Federal Trade Commission has published its guidance on AI, while China has launched a wave of regulations. The Organisation for Economic Co-operation and Development (OECD) currently tracks more than 700 AI policy initiatives across 60 countries.

Meanwhile, for years, the private and nonprofit sectors have rallied behind the Tech for Good movement, which strives to “put digital and technology at the service of humanity.” In its shortest and most sweeping form, it promises that technology can help the world achieve the UN’s Sustainable Development Goals.

But in light of history, we must ask: Can Tech for Good succeed without doing harm? We argue that the answer lies largely in focusing on what we call “good tech.”

Good tech prioritizes processes before outcomes

One problem is that the best intentions are no guarantee of a positive outcome. Therefore, a sole focus on what technology can do is too narrow. We must shift our priority to how we design, implement, and monitor tech, across contexts.

In other words, we need to focus on process.

To leverage the best of AI and tech, and to safeguard our world from their inherent risks, we must build into our activities robust processes that check against abuses, biases, and harmful uses. Drawing on our research on AI, machine learning, and fair process leadership, we call the output of this process-oriented approach to technology innovation and regulation good tech.

How to develop and implement good tech

The goal of good tech is to minimize the possibility that modern technology is abused or causes harm, so that society reaps only the benefits. Good tech demands a rigorous, inclusive process for design, implementation, and monitoring through three components: “good” principles, fair process, and strong oversight.

1. Good tech is inclusive, value-based, and future-proof

After goals are set, high performance starts with defining values; within an organization or team, shared values form a bulwark against abuse and risk.

In recent years, companies such as Google, Microsoft, IBM, BMW, and Telefonica have rallied behind principles for ethical or responsible technology. As of April 2020, the Swiss nonprofit AlgorithmWatch listed 173 such guidelines in its AI Ethics Guidelines Global Inventory.

Of course, we will always need to scrutinize these principles, who creates them, and how they are implemented.

Good tech principles are more than words; they reflect a collaborative process among diverse stakeholders. They can’t be rushed; such principles often take months to deliberate and implement.

The most robust and effective principles, like the UN’s human rights principles or the OECD’s AI Principles, are “values-based” and distilled over time through an inclusive process that seeks input from all stakeholders and minimizes bias. Luckily, we don’t always have to start from scratch. For example, the OECD’s AI framework and the work of the OECD Network of Experts on AI can serve as starting points for organizations developing good tech.

2. Good tech must be governed by ‘fair process’

Goals and principles are fine, but they fall flat if they aren’t implemented, or if they’re ignored when it matters. Implementation remains a key challenge.

Although there are multiple frameworks for responsible tech by design, we need to make sure that they’re also fully aligned with time-tested practices for fair process. This is, in our opinion, critical work.

We believe that a commitment to fair process is instrumental to developing good tech. Decades of research with companies and leaders has correlated fair process with sustainable performance. Fair process, also called “procedural justice” by organizational scientists, is defined by five values, all of which must apply to good tech:
• Clarity and transparency, including of goals, purpose, and “rules”
• Consistency in treating people and issues equally over time, without preference or bias
• Communication that favors listening over telling and that doesn’t sanction people for what they say
• Changeability of views when faced with new evidence
• Culture of truth-seeking and doing the right thing instead of choosing what’s most popular or convenient

Fair process maps out how matters are decided, monitored, and adapted as needed. It’s implementable and measurable. For example, when developing a new technology, it lays out a clear process with stakeholder input at all stages of design, implementation, and evaluation. When situations change or risks flare up, it forces learning and continuous improvement.

For example, we know that gender bias in precision medicine affects patient care, especially when AI is trained on data sets drawn from more men than women. In such instances, fair process demands that data analysis be made gender-agnostic and that systemic checks be established to safeguard against representation biases in data, models, and developers’ teams, as well as in stakeholder views, potentially with the support of technology itself, such as the tools that companies like Tremau are developing.
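
To make this concrete, here is a minimal sketch in Python of what one such systemic check might look like: auditing the gender balance of a training data set before a model is built. The patient records, field names, and tolerance threshold are hypothetical, chosen purely for illustration; a real representation audit would cover data, models, and evaluation far more broadly.

from collections import Counter

# Hypothetical patient records; in practice these would come from the
# clinical data set used to train a precision-medicine model.
patients = [
    {"id": 1, "gender": "female", "diagnosis": "A"},
    {"id": 2, "gender": "male", "diagnosis": "B"},
    {"id": 3, "gender": "male", "diagnosis": "A"},
    {"id": 4, "gender": "male", "diagnosis": "C"},
]

def audit_representation(records, attribute="gender", tolerance=0.15):
    # Flag groups whose share of the data deviates from parity by more
    # than `tolerance`. An illustrative check, not a full fairness audit.
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    parity = 1 / len(counts)  # expected share if all groups were balanced
    flags = {}
    for group, n in counts.items():
        share = n / total
        if abs(share - parity) > tolerance:
            flags[group] = round(share, 2)
    return flags

imbalances = audit_representation(patients)
if imbalances:
    # e.g., {'female': 0.25, 'male': 0.75}: pause and rebalance or
    # reweight the data before training, as fair process requires.
    print("Representation imbalance detected:", imbalances)

A check like this is deliberately simple. Its value comes from being run consistently and transparently at every stage of design, implementation, and evaluation, which is exactly what fair process prescribes.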

3. Good tech requires good leadership and oversight

In the end, good tech will continue to call on values- and mission-driven people and, because of the complexity of the task, on collaborative leadership.

Many organizations have already introduced ethics committees and boards that review and investigate AI risks. Fair process demands impartiality, accountability, and transparency, as well as unbiased leadership, much like that of a judge or governor. Good tech ethics boards should include external, cross-sector experts with sufficient diversity to counter any bias.

Ethics boards must themselves be formed through fair process, or they face these same risks. For example, the AI ethics board at Google evaporated barely a week after it was established, consumed by rising organizational skepticism about its composition and role. Perceived as lacking a clear mandate, it wasn’t set up for success.

In tandem, regulation must be checked for fair process, too. One commendable example of engagement and exploration of issues is the European Union’s practice of publishing white papers that help facilitate open, informed debate among stakeholders.

Most mistakes tend to be repeated, perhaps not in an identical way, but at least following a pattern. Committees and “wise leadership” can spot them.

Can we avoid repeating history’s mistakes?

Technology has always posed risks, and always will. Good tech principles, fair process, and strong oversight can help make our world safer.

By using good tech principles, we may finally, after centuries, have a shot at avoiding technology disasters for years to come. Even merely reducing the odds would be a momentous achievement.

First published March 1, 2022, on INSEAD’s Knowledge blog.

About The Authors

Theodoros Evgeniou

Theodoros Evgeniou is a professor of decision sciences and technology management at INSEAD, and academic director of INSEAD elab, the school’s research and analytics center, which focuses on data analytics for business.

Ludo Van der Heyden

Ludo Van der Heyden is the INSEAD Chaired Professor of Corporate Governance and Emeritus Professor of Technology and Operations Management at INSEAD. He is the founder of the INSEAD Corporate Governance Centre. Professor Van der Heyden is also chairman of a software company in natural resource estimation and is a regular advisor to boards and leadership teams across the world.