The End of Human Risk Management?

Automation works best when human intuition is involved

Published: Tuesday, December 22, 2015 - 11:39

Eric (not his real name) was under pressure from his sales department. He was hesitant to close a large financing deal with a Chinese corporation but had little beyond his intuition to back up his position.

The company’s stock price had gained a whopping 600 percent in one year. Nevertheless, Eric followed his intuition and ran a software analysis on the company’s trading activity. It didn’t take long for a strange pattern to emerge: There was strong activity at the end of most trading days that was pushing the stock up. He had enough to kill the deal.

A few weeks later, that company’s stock crashed nearly 50 percent in a single day, triggering an extended trading suspension pending an investigation by the local regulator. Unstructured data analysis combined with human intuition had saved Eric’s firm from a severe financial and reputational loss.
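
A screen of the kind Eric ran can be approximated in a few lines. The sketch below is a hypothetical illustration, not the firm’s actual tool: assuming minute-level price data are available, it flags days on which a disproportionate share of the daily gain arrives in the last half hour of trading, the signature pattern described above.

```python
import pandas as pd

def flag_end_of_day_marking(minute_bars: pd.DataFrame, threshold: float = 0.5) -> pd.Series:
    """Flag trading days on which the final 30 minutes account for more than
    `threshold` of the day's total price gain -- a rough screen for end-of-day
    price marking. Expects a DatetimeIndex and a 'close' column of
    minute-level prices (hypothetical input format)."""
    def late_share(day: pd.DataFrame) -> float:
        full_move = day["close"].iloc[-1] - day["close"].iloc[0]
        late_move = day["close"].iloc[-1] - day["close"].iloc[-30]
        return late_move / full_move if full_move > 0 else 0.0

    shares = minute_bars.groupby(minute_bars.index.date).apply(late_share)
    return shares > threshold  # True on days that warrant a closer look
```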

Eric’s example, far from being isolated, highlights the opportunities created by automated data management. The cost of storing data has plummeted over the last few decades, while processing power has expanded dramatically. Machine-driven analysis has become as ubiquitous as Amazon or Google.

This has paved the way for automated risk management. Take cybersecurity, for example. It has evolved from passive protection based on antivirus software and similar technologies, to real-time monitoring based on behavioral indicators, and then to dynamic cyber defense. Machines now make decisions that used to be the purview of IT specialists. As the amount of data to be processed in real time increases, the role of humans shrinks.

People power

However, big strategic decisions are still made by people, not machines. Hardly anyone feels passionate about an algorithm’s output. Moreover, very few senior managers understand concepts like “machine learning.” This creates the perception of a “black box,” which undermines the credibility of machine-generated analysis. Raw, or even processed, data need to be converted into something that makes sense to humans, such as stories or pictures.

Data visualization, for example, has become a booming industry; the market is expected to exceed $6 billion by 2019, with a yearly growth rate of 10 percent. Consider news feeds: more than 92,000 articles from major newswires in the United States are posted to the web each day, far more than any person could read, let alone summarize. Companies such as Amareos provide heat maps with end-of-day summaries: they analyze the data to produce a measure of sentiment, then present the output graphically. Humans can then harness their intuition and imagination to tell stories based on the graphics, a skill still not readily available to machines.
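
As a rough illustration of the idea (not Amareos’s actual methodology), the sketch below takes per-article sentiment scores that are assumed to already exist, averages them into a topic-by-date grid, and renders the grid as a heat map a human can scan at a glance.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical input: one sentiment score in [-1, 1] per article, already
# tagged with a publication date and a topic (e.g., a sector or a country).
articles = pd.DataFrame({
    "date":  ["2015-12-01", "2015-12-01", "2015-12-02", "2015-12-02"],
    "topic": ["Energy", "Tech", "Energy", "Tech"],
    "score": [-0.4, 0.2, -0.6, 0.5],
})

# Average the article scores into a topic-by-date grid: the end-of-day summary.
grid = articles.pivot_table(index="topic", columns="date", values="score", aggfunc="mean")

# Render the grid as a heat map.
fig, ax = plt.subplots()
im = ax.imshow(grid.values, cmap="RdYlGn", vmin=-1, vmax=1)
ax.set_xticks(range(len(grid.columns)))
ax.set_xticklabels(grid.columns)
ax.set_yticks(range(len(grid.index)))
ax.set_yticklabels(grid.index)
fig.colorbar(im, label="average sentiment")
plt.show()
```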

More broadly, the key point is to establish a complementary relationship with machines. Inside financial institutions, for example, automated processing creates correlations between risk models that can induce systemic instability. Flash crashes have recently begun to affect financial markets on a regular basis. Algorithmic models, often executed at high frequency, are designed to divide large trades into many smaller ones to manage market impact and risk. However, the speed and interconnectedness of algorithmic trading can result in the loss of billions of dollars in a matter of minutes when their cumulative effects reach a tipping point.
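
The order-slicing idea mentioned above reduces to a very small sketch. This is a schematic, time-weighted split for illustration only, not any particular trading system’s logic.

```python
def slice_order(total_shares: int, n_slices: int) -> list[int]:
    """Split a large parent order into n_slices roughly equal child orders,
    so that no single trade carries the full market impact."""
    base, remainder = divmod(total_shares, n_slices)
    return [base + (1 if i < remainder else 0) for i in range(n_slices)]

# Example: a 100,000-share order worked across 12 intervals.
print(slice_order(100_000, 12))  # twelve child orders of 8,334 or 8,333 shares
```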

Cascades like these stress the need for circuit breakers, and that is a role humans can play. Acting as one, an officer named Stanislav Petrov is said to have prevented World War III in 1983. Petrov was the duty officer for the newly installed Soviet nuclear early-warning system when it reported a handful of nuclear missiles incoming from the United States. Considering a limited nuclear strike implausible, Petrov concluded it was a false alarm. His judgment, later proven correct, prevented a potential Soviet counterstrike.

Beat this

Another issue with machines is their performance in fuzzy environments. Although IBM’s Deep Blue supercomputer scored a victory over chess world champion Garry Kasparov as early as 1997, supercomputers have yet to achieve a similar victory against grandmasters of Go, a traditional Chinese strategy board game played with black and white stones on a grid. While chess is well suited to algorithmic analysis, Go is based on principles that rely more on qualitative judgment and are somewhat easier for humans to visualize. Human intuition is still hard for machines to beat.

This became clear during a 2005 chess tournament open to human players, computers, and teams composed of both. Chess grandmasters and best-in-class machines (similar to Deep Blue) competed. The winners turned out to be a pair of amateur chess players using relatively weak laptops. Their comparative advantage was not that they played chess better than the grandmasters, or enjoyed greater raw computing power, but that they deployed their computers more effectively to help them make the right decisions. Kasparov concluded from that tournament that “Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.” That is good news for human risk managers. They are probably not disappearing anytime soon, but their role will be largely redefined into a more strategic partnership with machines.

This article is republished courtesy of INSEAD Knowledge. © INSEAD 2015.

About The Authors

Gilles Hilary

Gilles Hilary is an INSEAD professor of accounting and control and the Mubadala Chaired Professor in Corporate Governance and Strategy. He is also a contributing faculty member to the INSEAD Corporate Governance Initiative. Hilary regularly teaches courses on corporate governance, risk management, financial analysis, decision-making processes, and behavioral finance. He has an MBA from Cornell University, a Ph.D. from the University of Chicago, and a French professional accounting degree.

Arnaud Lagarde

Arnaud Lagarde is the chief risk officer of Mandarin Capital Ltd., a Hong Kong-based asset management company. Lagarde is also working on the item-writing program for the Global Association of Risk Professionals. He has a master of science in mathematics applied to finance and is proficient in three languages: German, French, and English. Lagarde is currently studying in INSEAD’s International Directors Program.