Wednesday, 25 July 2018

Artificial Intelligence (AI) – Fear or Uncertainty?



During the Q&A session at a recent panel discussion on FinTech, some people in the audience spoke openly about their fear of AI. This appeared to be driven by the potential impact of AI on jobs. Technological progress has often reshaped the labour market, destroying some jobs and creating new ones. Who had even heard of a web designer 25 years ago?

A lesson I take from world events and other experiences is that dismissing people’s fears about AI (or anything else, for that matter) is not usually an effective strategy. Unfortunately, I do not have a crystal ball to predict the future or provide reassurance. Instead, I thought I would take the hard road and write something about the genuine uncertainty that surrounds the progress of AI and its potential value.

It is useful to start by taking a step back and thinking about the key aspects of AI tools. Their main purpose is to make predictions about the future based on historical data and the correlations within it. For anyone trained in economics, building models that use data to generate predictions in support of decision making is nothing new.
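To make that concrete, here is a minimal sketch in Python of the basic pattern: fit a model to historical observations and use the fitted relationship to predict the next period. The sales figures are entirely made up for illustration; real AI tools follow the same logic at vastly larger scale.

    # A minimal sketch, assuming a small series of hypothetical monthly sales figures.
    # An economist might call this a trend regression; AI tools do the same kind of
    # thing with far more data and far more inputs.
    import numpy as np

    months = np.arange(1, 13)                                    # periods 1..12 (historical data)
    sales = 100 + 5 * months + np.random.normal(0, 3, size=12)   # hypothetical observations

    slope, intercept = np.polyfit(months, sales, deg=1)          # learn the historical correlation
    next_month = 13
    forecast = slope * next_month + intercept                    # prediction for the future period
    print(f"Forecast for month {next_month}: {forecast:.1f}")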

AI, however, is different because advances in computer technology facilitate real-time predictions based on an extended range of data sources, including data originating from social media, text and a multitude of devices. Not surprisingly, these enhanced capabilities can uncover dependencies and correlations that might not have been visible to the naked eye or through more traditional research methods and data sources. It is worth noting at this point that, despite its progress, AI shares with more traditional research methods the challenge of distinguishing between correlation and causation.
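The correlation-versus-causation point is easy to illustrate with simulated data. In the sketch below (my own toy example, not drawn from any real dataset), a hidden common driver produces a strong correlation between two series that have no causal link to each other.

    # A minimal sketch, using simulated data, of why correlation is not causation.
    # A hidden confounder ("temperature") drives both ice cream sales and drownings,
    # so the two series are strongly correlated even though neither causes the other.
    import numpy as np

    rng = np.random.default_rng(0)
    temperature = rng.uniform(10, 35, size=200)               # hidden common cause
    ice_cream = 2.0 * temperature + rng.normal(0, 2, 200)     # driven by temperature
    drownings = 0.5 * temperature + rng.normal(0, 2, 200)     # also driven by temperature

    corr = np.corrcoef(ice_cream, drownings)[0, 1]
    print(f"Correlation between ice cream sales and drownings: {corr:.2f}")
    # A prediction tool can exploit this correlation; a causal claim would be wrong.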

Progress in computing also brings the ability to build decision-making tools that transform predictions into actions executed without human intervention. The simplest and most ubiquitous example is predicting the correct spelling as you type. An example in financial services would be an AI tool that forecasts an individual’s or a company’s creditworthiness and then offers a product with no human intervention. However, predictions will remain predictions.
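As a rough illustration of predictions becoming actions, the sketch below uses a hypothetical set of past loan outcomes and an assumed approval threshold; a real credit decision system would, of course, be far more elaborate.

    # A minimal sketch, with entirely hypothetical data and thresholds, of turning a
    # creditworthiness prediction into an automatic product offer with no human in the loop.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical historical records: [income, existing_debt] and whether the loan was repaid.
    X_history = np.array([[45, 5], [80, 10], [30, 20], [60, 2], [25, 15], [90, 5]])
    repaid = np.array([1, 1, 0, 1, 0, 1])

    model = LogisticRegression().fit(X_history, repaid)       # learn from historical outcomes

    def decide(applicant):
        """Prediction becomes action: offer a product automatically if the predicted
        repayment probability clears an (assumed) threshold."""
        p_repay = model.predict_proba([applicant])[0, 1]
        return "offer loan" if p_repay > 0.7 else "refer for review"

    print(decide([55, 8]))   # the decision is executed without human intervention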

Another source of uncertainty about the impact of AI is the breadth of these tools. So far, AI tools have been confined to what are usually described as “narrow” activities, such as the two examples above. The possibility of material employment substitution comes with so-called general artificial intelligence, which operates in broader settings. There has been progress in moving beyond these narrow settings, one example being Google’s AI tool that beat the world champion of the board game Go. However, a breakthrough that gives rise to general artificial intelligence has yet to materialise. There are different views about how close we may be to reaching that point, but the reality is that no one knows. In the end, does it really matter today whether the world will be very different in 100 years?

At the same time, it is worth remembering that in an increasingly complex and interdependent world, enhanced predictions can benefit everyone, for example by preventing diseases and helping people enjoy work more. There are countless stories of this. The latest one I have come across (The Economist, “Diligence Disrupted”, 12 July 2018) relates to the challenge of reviewing a huge number of documents as part of legal due diligence or preparation for court. This is a labour-intensive activity that usually represents a significant cost hurdle. AI tools (and there are many) play a triage role, scanning the documents for specific clauses, specific aspects of clauses, or their absence. They then allow lawyers to conduct a focused review and legal diligence, resulting in greater job satisfaction and, potentially, cost savings for the client. As the article notes, the net impact on employment remains unclear; lower fees could, after all, lead to higher demand and a need for more lawyers.
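For a flavour of that triage role, here is a deliberately simple sketch: the clause list, file names and document text are all invented, and real tools rely on far more sophisticated language analysis than keyword matching.

    # A minimal sketch, with made-up documents and clause keywords, of the triage role
    # described above: flag documents that fail to mention specific clauses so that
    # lawyers can focus their review where it is needed.
    REQUIRED_CLAUSES = ["change of control", "indemnification", "governing law"]

    documents = {
        "contract_a.txt": "This agreement includes an indemnification clause, "
                          "a change of control provision and a governing law clause.",
        "contract_b.txt": "Standard terms apply; see schedule for details.",
    }

    for name, text in documents.items():
        missing = [c for c in REQUIRED_CLAUSES if c not in text.lower()]
        if missing:
            print(f"{name}: flag for review, missing {missing}")
        else:
            print(f"{name}: all expected clauses present")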

So there is uncertainty about the impact of AI tools, and a fair amount of benefit to society can be derived before that uncertainty is resolved. This does not mean that progress with AI will always be smooth; like any other tool, AI tools can be misused, intentionally or unintentionally. This is where applying governance and risk management to AI tools would generate value, but I will leave that discussion for another post.

If you found this post of interest, you may want to subscribe and receive further posts by email. See the box on the right-hand side of the screen or click here.
