Sunday, 14 June 2020

Delegating Decision Making to AI Tools – Choices and Consequences*


Sometimes when I hear about Artificial Intelligence (AI) tools, it seems the discussion is all about the technical details of the model and the data, which are certainly very important. This post is about another important aspect: the operating model into which the AI tool is deployed.

There are many aspects of such an operating model.  Some are practical, such as ensuring that the tools integrate with other parts of the business.   In this post, I am focusing on the delegation of decision making to the AI tool – the choices that exist in most cases and the implications for the control environment.  These are summarised in the figure below.

At one extreme of the delegation of decision making, you have AI tools that operate independently of human intervention.  An example is algorithmic trading: an automated trading system trades without any human intervention, exploiting the speed and data-processing advantages that computers have over a human trader.  Interestingly, this is also one of the few prescriptive examples of PRA intervention: the regulator requires that a human be able to stop the trading system.[1]
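As a minimal sketch of what that human override might look like (the trading loop and names here are hypothetical illustrations, not the PRA's prescription or any real trading system):

    import threading

    # Hypothetical sketch: an automated trading loop with a human-operated
    # kill switch, reflecting the requirement that a human can stop trading.
    stop_event = threading.Event()

    def trading_loop(get_signal, place_order):
        """Trade autonomously until a human sets the stop event."""
        while not stop_event.is_set():
            signal = get_signal()      # model-driven buy/sell/hold signal
            if signal is not None:
                place_order(signal)    # executed without human approval

    def human_kill_switch():
        """Called by a human operator to halt all trading immediately."""
        stop_event.set()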

At the other end of the spectrum, there are AI tools used by experts in a professional environment.  For example, actuaries might use machine learning techniques to undertake experience analysis and support reserving work.
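As an illustration of that end of the spectrum (a minimal sketch only – the file name, data fields and model choice are assumptions, not a recommended approach):

    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier

    # Hypothetical sketch: modelling lapse experience at policy level.
    # The file and column names are illustrative assumptions.
    policies = pd.read_csv("experience_data.csv")
    features = policies[["age", "duration", "annual_premium"]]
    lapsed = policies["lapsed"]  # 1 if the policy lapsed, else 0

    model = GradientBoostingClassifier().fit(features, lapsed)
    policies["expected_lapse"] = model.predict_proba(features)[:, 1]

    # Actual vs expected by duration, supporting a traditional A/E analysis.
    print(policies.groupby("duration")[["lapsed", "expected_lapse"]].mean())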

Between these two examples, you have AI tools that provide a forecast or recommendation for consideration by an analyst.  For example, the AI tool could provide a credit rating that validates a rating derived using more traditional methods.
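A sketch of how such a cross-check might work (the rating scale and the one-notch tolerance are assumptions for illustration):

    # Hypothetical sketch: using the AI tool's rating as a check on an
    # analyst-derived rating. The scale and tolerance are assumptions.
    RATING_SCALE = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC"]

    def notch_gap(model_rating: str, analyst_rating: str) -> int:
        """Number of notches between the model's and the analyst's rating."""
        return abs(RATING_SCALE.index(model_rating)
                   - RATING_SCALE.index(analyst_rating))

    def needs_review(model_rating: str, analyst_rating: str) -> bool:
        """Escalate when the two ratings differ by more than one notch."""
        return notch_gap(model_rating, analyst_rating) > 1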

Another middle-of-the-road alternative is ‘management by exception’.  This means that the AI tool has a degree of autonomy to operate within a ‘norm’, which is inferred from historical data.  Cases that fall outside the norm are then referred to an analyst, whose review both verifies and improves the predictions.
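A minimal sketch of the routing logic (the score file and the percentile band used as the ‘norm’ are assumptions):

    import numpy as np

    # Hypothetical sketch of 'management by exception': the 'norm' is a
    # band inferred from historical model scores; cases outside it are
    # referred to an analyst. The 5th-95th percentile band is an assumption.
    historical_scores = np.loadtxt("historical_scores.csv")  # hypothetical file
    low, high = np.percentile(historical_scores, [5, 95])

    def route_case(score: float) -> str:
        """Auto-decide within the historical norm; refer everything else."""
        if low <= score <= high:
            return "auto-decision"      # AI tool acts autonomously
        return "refer-to-analyst"       # exception: human review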

These are business choices and they in turn have implications for the development process of AI tools.   You would expect controls around data and model documentation in all cases.  But broadly speaking, you would also expect tighter controls and more intense validation for AI tools that operate more independently of human intervention.  This includes the required depth of understanding of the model, including:

  • explainability – why did the model make that decision (see the sketch after this list);
  • transparency – how does the model work;
  • the impact on customers – e.g., the difference in stakes between a Netflix recommendation and a credit card underwriting decision.
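As a small illustration of the explainability point (a sketch using scikit-learn’s permutation importance on synthetic data; this is one simple technique among many, not a prescribed method):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    # Hypothetical sketch: permutation importance as one simple check on
    # which inputs drive the model's decisions ('why did the model do that').
    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, score in enumerate(result.importances_mean):
        print(f"feature {i}: mean importance {score:.3f}")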

The choice of operating model also has important implications for staff training.  Staff who operate AI tools that they were not involved in developing must be trained to the appropriate level to ensure that the tool is used effectively.  For example, where ‘management by exception’ is adopted, staff would need the appropriate knowledge and skills to deal with the exceptions.

There are important choices to make about the operating model into which AI tools are deployed.  These choices have risk management and control implications, and they may change over time.  An AI tool might start out in an advisory capacity; as trust in it grows, the delegated decision making can be increased.

These implications and choices should be considered as part of the model design.

We hope you found this post of interest. You can subscribe and receive further posts by email.



*  This post is based on my contribution to a virtual panel discussion organised by ActuarTech on AI Governance & Risk Management.

[1] Prudential Regulation Authority (PRA), ‘Algorithmic trading’, Supervisory Statement SS5/18, June 2018.

