Sunday, 14 June 2020

Delegating Decision Making to AI Tools – Choices and Consequences*

Discussions of Artificial Intelligence (AI) tools often focus on the technical details of the model and the data, which are certainly very important.  This post is about another important aspect: the operating model into which the AI tool is deployed.

There are many aspects of such an operating model.  Some are practical, such as ensuring that the tools integrate with other parts of the business.   In this post, I am focusing on the delegation of decision making to the AI tool – the choices that exist in most cases and the implications for the control environment.  These are summarised in the figure below.

At one extreme of the delegation of decision making, you have AI tools that operate independently of human intervention.  An example is algorithmic trading: an automated trading system that trades without any human intervention, exploiting the speed and data-processing advantages that computers have over a human trader.  Interestingly, this is also one of the few prescriptive examples of PRA intervention: the PRA requires that a human retains the ability to stop the trading system.[1]

At the other end of the spectrum, there are AI tools used by experts in a professional environment.  For example, actuaries might use machine learning techniques to undertake experience analysis and support reserving work.

Between these two examples, you have AI tools that provide a forecast or recommendation for consideration by an analyst.  For example, the AI tool could provide a credit rating that validates a rating derived using more traditional methods.

Another middle-of-the-road alternative is ‘management by exception’.  This means that the AI tool has a degree of autonomy to operate within a ‘norm’, which is inferred from historical data.  Cases that fall outside the norm are then referred to an analyst for consideration, which serves both to verify and to improve the predictions.
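To make the idea concrete, here is a minimal sketch (not from any particular firm or regulation) of how a ‘management by exception’ triage rule might look, with the norm inferred from historical data as percentile bounds.  The function names, the percentile thresholds and the use of a single numeric score are all illustrative assumptions:

```python
import numpy as np

def fit_norm(historical_scores, lower_pct=5, upper_pct=95):
    """Infer the 'norm' from historical data as a band of percentile bounds."""
    return (np.percentile(historical_scores, lower_pct),
            np.percentile(historical_scores, upper_pct))

def triage(score, norm):
    """Cases inside the norm proceed automatically; the rest go to an analyst."""
    lo, hi = norm
    return "auto" if lo <= score <= hi else "refer to analyst"

# Illustrative use: with scores of 0-99 seen historically,
# extreme new cases fall outside the band and are referred.
norm = fit_norm(list(range(100)))
```

The design choice here is that the tool never decides alone at the tails of its experience: exactly the cases where the historical data offers least support are the ones escalated to a human.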

These are business choices and in turn have implications for the development process of AI tools.  You would expect controls around data and model documentation in all cases.  But broadly speaking, you would also expect tighter controls and more intensive validation for AI tools that operate more independently of human intervention.  This includes the depth of understanding of the model, covering:

  • explainability – why did the model do that;
  • transparency – how does the model work;
  • the impact on customers – e.g., the difference between Netflix recommendations and credit card underwriting.
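As an illustration of the explainability point, one common model-agnostic technique is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy degrades.  This is a sketch of that general technique, not anything prescribed in this post; the toy model and the mean-squared-error setting are assumptions:

```python
import numpy as np

def permutation_importance(predict, X, y, seed=0):
    """For each feature, measure the rise in mean-squared error when that
    feature is shuffled, breaking its link with the outcome."""
    rng = np.random.default_rng(seed)
    base = np.mean((predict(X) - y) ** 2)  # baseline error
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])              # destroy feature j's information
        scores.append(np.mean((predict(Xp) - y) ** 2) - base)
    return scores

# Toy model: the outcome depends only on feature 0, so shuffling
# feature 0 hurts the error badly and shuffling feature 1 does not.
X = np.random.default_rng(1).normal(size=(500, 2))
y = 3 * X[:, 0]
importance = permutation_importance(lambda X: 3 * X[:, 0], X, y)
```

Measures of this kind help answer “why did the model do that” at the portfolio level; they do not by themselves address transparency (how the model works internally) or the customer-impact question.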

The choice of operating model also has important implications for staff training.  Staff who operate AI tools without having been involved in their development must be trained to the appropriate level to ensure that the tools operate effectively.  For example, where ‘management by exception’ is adopted, staff would need the appropriate knowledge and skills to deal with the exceptions.

There are important choices to make about the operating model into which AI tools are deployed.  These choices have risk management and control implications, and they may change over time.  An AI tool might start operating in an advisory capacity; as trust in the AI tool increases, the delegated decision making can be expanded.

These implications and choices should be considered as part of the model design.

We hope you found this post of interest. You can subscribe and receive further posts by email. See the box on the right-hand side of the blog's screen or click here.

*  This post is based on my contribution to a virtual panel discussion organised by ActuarTech on AI Governance & Risk Management.

[1] Prudential Regulation Authority (PRA), Algorithmic trading, Supervisory Statement, 5/18, June 2018.

Wednesday, 26 February 2020

Good risk management is not just about good ideas

One might say that this is stating the obvious and that it is understood that implementation also matters.  A recent FCA enforcement case against Moneybarn would suggest that it is not so obvious after all.

Moneybarn is a lender that provides motor finance for used vehicles to ‘non-standard’ customers.[1] The case against them related to the regulatory expectations for the treatment of, and communication with, customers who fall into financial difficulties, i.e. the exercise and communication of appropriate forbearance by the lender.  Here, we seek to tease out the implications of this case for the risk management activities of FCA-regulated businesses.

1.  Appropriate policy design

As one would expect, policies need to cover the appropriate ground.  This can include articulating the appropriate range of options (in this case, for customer forbearance and resolution), the considerations that would be taken into account and the governance that would apply to different options.

It is worth noting that in this enforcement case, it appears that the FCA had no obvious concerns about the relevant policies and procedures reviewed.  

2.  Implementation

The challenge is how these policies and procedures are translated in the business, e.g. whether the call scripts are consistent with the policies.  In some cases, this means that calls would be far from “linear”: customer service agents would have to consider a range of options and guide the customer.  This has implications for the training of, and the tools available to, customer service agents.

The FCA notes that its review of the sample showed no use of any forbearance option other than customers clearing their arrears over a short period of time, “despite the fact that policies and procedures referred to other available options”.

3.  Monitoring and assurance

There is usually a combination of first line monitoring and oversight by 2nd and 3rd line functions.  To some extent, who provides assurance becomes less important than whether assurance is provided.

It is important to recognise that assurance should be provided over both the processes and the outcomes.  Where the nature of the issue involves considering customers’ individual circumstances in response to financial difficulties, it is important to evidence that the range of options set out in the policy has been delivered.  This is more challenging to monitor than adherence to a process.

It is interesting that in this enforcement note there are no references to assurance or to the role of 2nd and 3rd line functions.

4.  Regulatory relationship management

The FCA’s initial engagement started with a seemingly low-profile review of a “limited number” of files and call records, leading to a visit in July 2016 to assess forbearance and termination practices.  There were then several interactions with the FCA in September 2016 and January 2017, leading to a formal request for the imposition of a requirement in June 2017 and, eventually, enforcement action.  One must wonder whether more proactive engagement with the FCA would have prevented the escalation to enforcement.

It is sometimes argued that proactive engagement with the FCA on the issues raised would have been expensive.  Hindsight may be a powerful tool, but it is hard to see how the cost of proactive engagement could have exceeded the enforcement costs, which ended up being very substantial: a fine of £2.7m, the impact on senior management’s time, and the £30.3m of compensation paid to customers potentially affected by these failings.

This post is part of the materials discussed in episode 3 of RegNut Podcast.  If you found this post of interest, subscribe to RegNut.  You can also subscribe to the blog and receive further posts by email.

[1] Non-standard customers are those that cannot access finance from mainstream lenders because they have a poor or no credit history or past problems with credit due to unemployment, ill health or other adverse events.

Monday, 27 January 2020

Operational Resilience

By Shirley Beglinger, Advisory Board Member, Crescendo Advisors

In today's interconnected financial world, "operational resilience" must be taken to mean much more than just "a fully tested disaster recovery plan". Regulators are requiring boards to see beyond the walls of their own firm and identify its position in the economic, IT and service-delivery ecosystem, with an emphasis on the important services it provides. This is a completely different perspective on risk: boards and CROs need to reconsider many tried and tested risk methodologies and metrics.

In reviewing the drivers of potential operational disruption, the CRO may identify several which are difficult or expensive to address. "Reliance on legacy infrastructure", for example, will likely lead to a lengthy boardroom discussion of the expense and dangers of IT integration projects. Supply chains and data sharing quickly lead to the realisation that, even if the firm's own arrangements are top-notch, there are probably other firms in its ecosystem that may not have the same level of preparedness.

Having identified potential sources of disruption, the board must then quantify the potential costs (internal and external), assess the firm's ability to recover from severe but plausible scenarios of operational disruption, and compare these with the firm's stated tolerance for operational disruption. Where necessary, remediation plans must be put in place.

While no board member wishes to explain to the regulator why their firm was the first domino in the ecosystem to fall over, such far-reaching change needs to be carefully managed.  To implement these requirements firms will benefit from a pilot that enables them to develop an understanding of the steps that would be required.  This will be less disruptive and more beneficial than a firm-wide initiative.

However, the need to scale up means that firms will need to identify or acquire in-house "resilience capabilities". A key aspect of the output from a successful pilot project would be to identify exactly what capabilities are required and how they can best be embedded within the firm's business.
