Published: Tuesday, April 5, 2022 - 12:03

Technology has always been a double-edged sword. While it has been a major force for progress, it has also been abused and caused harm. From water power to Fordism, history shows that technology is neither good nor bad by itself. It can, of course, be both, depending on how it's used. Telecommunications, specifically the internet, and more recently AI, which is estimated to contribute more than 11 billion euros to the global economy by 2030, are no different. On one hand, the internet connects us all; it kept us in touch with one another during the pandemic. AI and machine learning can help solve some of the world's most pressing problems, from diagnosing disease and thwarting cyberattacks to fighting climate change. Yet, if left unchecked, algorithms can also perpetuate biases, create online echo chambers, enable radicalization, and compromise safety and privacy.

This year is poised to bring sweeping changes to digital regulations. The European Parliament approved the Digital Services Act to increase online safety and consumer protection, and is preparing the Artificial Intelligence Act to govern AI. The U.S. Federal Trade Commission has published its guidance on AI, while China has launched a wave of regulations. The Organisation for Economic Co-operation and Development (OECD) currently tracks more than 700 AI policy initiatives across 60 countries.

Meanwhile, for years, the private and nonprofit sectors have rallied behind the Tech for Good movement, which strives to "put digital and technology at the service of humanity." In its shortest and most sweeping form, it promises that technology can help the world achieve the UN's Sustainable Development Goals. But in light of history, we must ask: Is it possible for Tech for Good to succeed without doing harm? We argue that the answer is largely about focusing on what we call "good tech." One problem is that the best intentions are no guarantee of a positive outcome.
Therefore, a sole focus on what technology can do is too narrow. We must shift our priority to how we design, implement, and monitor tech across contexts. In other words, we need to focus on process. To leverage the best of AI and tech, and to safeguard our world from their inherent risks, we must integrate into our activities robust processes that check against abuses, biases, and harmful uses. Drawing on our research on AI, machine learning, and fair process leadership, we call the output of this process-oriented approach to technology innovation and regulation good tech. The goal of good tech is to minimize the possibility that modern technology is abused or causes harm, so that society reaps only the benefits. Good tech demands a rigorous, inclusive process for design, implementation, and monitoring, built on three components: "good" principles, fair process, and strong oversight.

1. Good tech is inclusive, value-based, and future-proof

After goals are set, high performance starts with defining values; in an organization or team, shared values guard against abuse and risk. In recent years, companies such as Google, Microsoft, IBM, BMW, and Telefonica have rallied behind principles for ethical or responsible technology. As of April 2020, the Swiss nonprofit AlgorithmWatch listed 173 guidelines in its AI Ethics Guidelines Global Inventory. Of course, we will always need to scrutinize these principles, who creates them, and how they are implemented.

Good tech principles are more than words; they reflect a collaborative process among diverse stakeholders. They can't be rushed; these principles often demand months to deliberate and implement. The most robust and effective principles, like the UN's Principles of Human Rights or the OECD's AI Principles, are values-based and distilled over time through an inclusive process that seeks input from all stakeholders and minimizes bias. Luckily, we don't always have to start from scratch.
For example, the OECD's AI Principles and the work of the OECD Network of Experts on AI can serve as starting points for organizations developing good tech.

2. Good tech must be governed by 'fair process'

Goals and principles are fine, but they fall flat if they aren't implemented, or if they are ignored when it matters most. Implementation remains a key challenge. Although there are multiple frameworks for responsible tech by design, we need to make sure that they're also fully aligned with time-tested practices for fair process. This is, in our opinion, critical work.

We believe that a commitment to fair process is instrumental to developing good tech. Decades of research with companies and leaders has correlated fair process with sustainable performance. Fair process, also called "procedural justice" by organizational scientists, is defined by five values, all of which must apply to good tech:

• Clarity and transparency, including of goals, purpose, and "rules"
• Consistency in treating people and issues equally over time, without preference or bias
• Communication that favors listening over telling and that doesn't sanction people for what they say
• Changeability of views when faced with new evidence
• Culture of truth-seeking and doing the right thing instead of choosing what's most popular or convenient

Fair process maps out how matters are decided, monitored, and adapted as needed. It's implementable and measurable. When developing a new technology, for example, it lays out a clear process with stakeholder input at all stages of design, implementation, and evaluation. When situations change or risks flare up, it forces learning and continuous improvement. We know, for example, that gender bias in precision medicine affects patient care, especially if AI uses data sets drawn from more men than women. In such instances, fair process demands that data analysis be made gender-agnostic and establishes systemic checks against representation biases in data, models, and developer teams, as well as in stakeholder views. Such checks can themselves be supported by technology, as companies like Tremau are developing.

3. Good tech requires good leadership and oversight

In the end, good tech will continue to call on values- and mission-driven people and, because of the complexity of the task, on collaborative leadership.
Many organizations have already introduced ethics committees and boards that review and investigate AI risks. Fair process demands impartiality, accountability, and transparency, as well as unbiased leadership, much like that of a judge or governor. Good-tech ethics boards should include external, cross-sector experts with sufficient diversity to counter any bias.

Ethics boards must themselves be formed through fair process, or they face risks. For example, the AI ethics board at Google evaporated barely a week after it was established, consumed by rising organizational skepticism about its composition and role. The perception was that the board lacked a clear mandate; it wasn't set up for success. In tandem, regulation must be checked for fair process, too. One commendable example of engagement and exploration of issues is the European Union's practice of publishing white papers that facilitate open, informed debate among stakeholders. Most mistakes tend to be repeated, perhaps not identically, but following a pattern; committees and wise leadership can spot them.

Can we avoid repeating history's mistakes? Technology has always posed risks, and always will. Good tech principles, fair process, and strong oversight can help make our world safer. By following them, we may finally, after centuries, have a shot at avoiding technology disasters for years to come. Reducing the odds alone would be a momentous achievement.

First published March 1, 2022, on INSEAD's Knowledge blog.
Theodoros Evgeniou is a professor of decision sciences and technology management at INSEAD, and academic director of INSEAD elab, the school's research and analytics center, which focuses on data analytics for business.

Ludo Van der Heyden is the INSEAD Chaired Professor of Corporate Governance and Emeritus Professor of Technology and Operations Management at INSEAD, and the founder of the INSEAD Corporate Governance Centre. He is also chairman of a software company in natural resource estimation and a regular advisor to boards and leadership teams across the world.

‘Tech for Good’ Needs a ‘Good Tech’ Approach
Good tech prioritizes processes before outcomes
How to develop and implement good tech
© 2023 Quality Digest. Copyright on content held by Quality Digest or by individual authors. Contact Quality Digest for reprint information.
“Quality Digest” is a trademark owned by Quality Circle Institute, Inc.