The End of Human Risk Management?

Automation works best when human intuition is involved

Published: Tuesday, December 22, 2015 - 11:39

Eric (not his real name) was under pressure from his sales department. He was hesitant to close a large financing deal with a Chinese corporation but had little beyond his intuition to back up his position.

The company’s stock price had gained a whopping 600 percent in one year. Nevertheless, Eric followed his intuition and ran a software analysis on the company’s trading activity. It didn’t take long for a strange pattern to emerge: There was strong activity at the end of most trading days that was pushing the stock up. He had enough to kill the deal.

A few weeks later, that company’s stock crashed nearly 50 percent in a single day, triggering an extended trading suspension pending an investigation by the local regulator. Unstructured data analysis combined with human intuition had saved Eric’s firm from a severe financial and reputational loss.
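The exact tool Eric used isn’t named, but the underlying check is easy to picture: compare each day’s overall move with the move in the final minutes of trading. The sketch below is a minimal illustration in Python, assuming intraday price bars indexed by timestamp with a `close` column; the column name, the 30-minute closing window, and the 50-percent threshold are assumptions made for the example, not features of any real surveillance product.

```python
import pandas as pd

def end_of_day_drift(bars: pd.DataFrame) -> pd.DataFrame:
    """Flag days where the last half hour accounts for an outsized share of
    the day's gain, a crude proxy for a stock being pushed up into the close."""
    rows = []
    for day, g in bars.groupby(bars.index.date):
        day_return = g["close"].iloc[-1] / g["close"].iloc[0] - 1.0
        tail = g.between_time("15:30", "16:00")  # closing window of a U.S. session
        if tail.empty or day_return <= 0:
            continue
        tail_return = tail["close"].iloc[-1] / tail["close"].iloc[0] - 1.0
        rows.append({"date": day, "day_return": day_return,
                     "tail_return": tail_return,
                     "tail_share": tail_return / day_return})
    if not rows:
        return pd.DataFrame()
    report = pd.DataFrame(rows)
    # A long run of days where most of the gain arrives at the close is the pattern to question.
    report["suspicious"] = (report["tail_return"] > 0) & (report["tail_share"] > 0.5)
    return report
```

A pattern like the one Eric found, where a large share of most days’ gains arrives in the closing minutes, would show up here as a persistent run of suspicious rows.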

This example, far from being isolated, highlights the opportunities of automated data management. The monetary cost of storing data has plummeted during the last few decades, while processing power has dramatically expanded. By now, machine-driven analysis has become as ubiquitous as Amazon or Google.

This has paved the way for automated risk management. Take cybersecurity, for example. It has evolved from passive protection based on antivirus software and similar technologies to real-time monitoring based on behavioral indicators, and then to dynamic cyber-defense. Machines now make decisions that used to be the purview of IT specialists. As the amount of data to be processed in real time increases, the role of humans shrinks.
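As an illustration of what “behavioral indicators” can mean in practice, the sketch below flags activity that departs sharply from an entity’s own recent baseline. It is a deliberately simplified stand-in: real products combine many signals, and the window size and z-score threshold here are assumptions chosen for the example.

```python
from collections import deque
from statistics import mean, stdev

class BehavioralMonitor:
    """Flag observations that deviate sharply from the recent baseline."""

    def __init__(self, window: int = 30, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)  # rolling baseline of recent activity
        self.z_threshold = z_threshold

    def observe(self, events_per_hour: float) -> bool:
        """Return True if the new observation looks anomalous."""
        anomalous = False
        if len(self.history) >= 10:  # need some history before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(events_per_hour - mu) / sigma > self.z_threshold:
                anomalous = True  # a machine might quarantine first, then alert a person
        self.history.append(events_per_hour)
        return anomalous
```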

People power

However, big strategic decisions are still made by people, not machines. Hardly anyone feels passionate about the output of an algorithm. Moreover, very few current senior managers understand concepts like “machine learning.” This creates the perception of a “black box” that is detrimental to the credibility of machine-generated analysis. Raw, or even processed, data need to be converted into something that makes sense to humans, such as stories or pictures.

Data visualization, for example, has become a booming industry. The market is expected to exceed $6 billion by 2019, with a yearly growth rate of 10 percent. Consider the news flow: More than 92,000 articles from major newswires in the United States are posted to the web each day. Naturally, it’s impossible for a person to comprehend or summarize such a dense data feed. Companies such as Amareos analyze the data, distill it into a measure of sentiment, and present the output graphically as heat maps with end-of-day summaries. Humans can then harness their intuition and their imagination to tell stories based on the graphics, a skill still not readily available to machines.
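As a toy illustration of that last step, the snippet below turns per-article sentiment scores into an end-of-day heat map. It is not how Amareos works internally; the column names and the tiny hand-made dataset are assumptions chosen to keep the example self-contained.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Toy input: a sentiment score per article, computed upstream by some model.
articles = pd.DataFrame({
    "date": pd.to_datetime(["2015-12-21"] * 4 + ["2015-12-22"] * 4),
    "topic": ["energy", "banks", "tech", "retail"] * 2,
    "sentiment": [0.2, -0.6, 0.4, 0.1, -0.1, -0.8, 0.5, 0.0],  # scaled to [-1, 1]
})

# Collapse the article stream into one end-of-day score per topic ...
heat = articles.pivot_table(index="topic", columns="date",
                            values="sentiment", aggfunc="mean")

# ... and render it as a heat map a person can read at a glance.
fig, ax = plt.subplots()
im = ax.imshow(heat.values, cmap="RdYlGn", vmin=-1, vmax=1)
ax.set_xticks(range(len(heat.columns)))
ax.set_xticklabels([d.strftime("%Y-%m-%d") for d in heat.columns])
ax.set_yticks(range(len(heat.index)))
ax.set_yticklabels(heat.index)
fig.colorbar(im, label="average sentiment")
plt.show()
```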

More broadly, the key point is to establish a complementary relationship with machines. For example, inside financial institutions, automated processing creates correlations between risk models that may induce systemic instability. Recently, flash crashes have started to affect financial markets on a regular basis. Algorithmic models, often executed at high frequency, are designed to divide large trades into several smaller ones to manage market impact and risk. However, the speed and interconnectedness of algorithmic trading can result in the loss of billions of dollars in a matter of minutes when its cumulative effects reach a tipping point.
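To see how one large trade becomes many small ones, here is a naive time-slicing schedule, roughly the idea behind a TWAP execution. It is only a sketch: real execution algorithms also react to volume, price, and liquidity, and the quantities and intervals below are made up for the example.

```python
from dataclasses import dataclass

@dataclass
class ChildOrder:
    minute: int    # minutes after the parent order is accepted
    quantity: int  # shares to send in this slice

def slice_order(total_quantity: int, horizon_minutes: int,
                interval_minutes: int = 5) -> list[ChildOrder]:
    """Split a large parent order into equal time slices (a TWAP-style schedule)."""
    n_slices = max(1, horizon_minutes // interval_minutes)
    base, remainder = divmod(total_quantity, n_slices)
    schedule = []
    for i in range(n_slices):
        qty = base + (1 if i < remainder else 0)  # spread any remainder evenly
        schedule.append(ChildOrder(minute=i * interval_minutes, quantity=qty))
    return schedule

# Example: work 100,000 shares over two hours in 5-minute slices.
print(slice_order(100_000, horizon_minutes=120)[:3])
```

When thousands of such schedules run side by side and feed on one another’s prices, their combined footprint is what can tip a market into a flash crash.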

This stresses the need for circuit breakers, a role humans can play. Acting as one such circuit breaker, an officer by the name of Stanislav Petrov is said to have prevented World War III in 1983. Petrov was the duty officer for the newly installed Soviet nuclear early-warning system when it reported several incoming nuclear missiles from the United States. Considering a limited nuclear strike to be implausible, Petrov surmised it was a false alarm. His judgment, later proven correct, prevented a potential Soviet counterstrike.
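In software terms, a Petrov-style circuit breaker means the machine may recommend but not act once an alert crosses a severity line. The sketch below illustrates only that hand-off; the callables and the threshold are hypothetical placeholders, not part of any particular system described in this article.

```python
from typing import Callable

def handle_alert(severity: float,
                 automated_response: Callable[[], None],
                 human_review: Callable[[], bool],
                 escalation_threshold: float = 0.8) -> None:
    """Let automation handle routine alerts, but pause and ask a person
    before doing anything irreversible about a high-severity one."""
    if severity < escalation_threshold:
        automated_response()      # routine case: the machine acts on its own
    elif human_review():          # severe case: a person confirms the alarm is real
        automated_response()
    # Otherwise the alert is judged a false alarm, and the system deliberately does nothing.
```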

Beat this

Another issue with machines is their performance in fuzzy environments. Although IBM’s Deep Blue supercomputer scored a victory over chess world champion Garry Kasparov as early as 1997, supercomputers have yet to achieve a similar victory against grandmasters of Go, a traditional Chinese strategy board game played with black and white stones on a grid. While chess is well suited to algorithmic analysis, Go is based on principles that rely more on qualitative judgment and are somewhat easier for humans to visualize. Human intuition is still hard for machines to beat.

This became clear during a 2005 chess tournament open to human players, computers, and teams composed of both. Chess grandmasters and best-in-class machines (similar to Deep Blue) competed. The winner turned out to be a pair of amateur chess players using relatively weak laptops. Their comparative advantage was not that they played chess better than the grandmasters, or that their machines had more raw computing power, but that they deployed computers more efficiently to help them make the right decisions. Kasparov concluded from that tournament that “Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.” That is good news for human risk managers. They are probably not disappearing anytime soon, but their role will be largely redefined into a more strategic partnership with machines.

This article is republished courtesy of INSEAD Knowledge. © INSEAD 2015.


About The Authors


Gilles Hilary

Gilles Hilary is an INSEAD professor of accounting and control and the Mubadala Chaired Professor in Corporate Governance and Strategy. He is also a contributing faculty member to the INSEAD Corporate Governance Initiative. Hilary regularly teaches courses on corporate governance, risk management, financial analysis, decision-making processes, and behavioral finance. He has an MBA from Cornell University, a Ph.D. from the University of Chicago, and a French professional accounting degree.


Arnaud Lagarde

Arnaud Lagarde is the chief risk officer of Mandarin Capital Ltd., a Hong Kong-based asset management company. Lagarde also works on the item-writing program for the Global Association of Risk Professionals. He holds a master of science in applied mathematics for finance and is proficient in three languages: German, French, and English. Lagarde is currently enrolled in INSEAD’s International Directors Program.