Sunday, 14 June 2020

Delegating Decision Making to AI Tools – Choices and Consequences*

Sometimes when I hear about Artificial Intelligence (AI) tools, it seems the discussion is all about the technical details of the model and the data, which are certainly very important. This post is about another important aspect: the operating model in which the AI tool will operate.

There are many aspects of such an operating model.  Some are practical, such as ensuring that the tools integrate with other parts of the business.   In this post, I am focusing on the delegation of decision making to the AI tool – the choices that exist in most cases and the implications for the control environment.  These are summarised in the figure below.

At one extreme of the delegation of decision making, you have AI tools that operate independently of human intervention.  An example is algorithmic trading: an automated trading system trades without any human intervention in order to exploit the speed and data-processing advantages that computers have over a human trader.  Interestingly, this is also one of the few prescriptive examples of PRA intervention: the PRA requires that a human must be able to stop the trading system.[1]
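That supervisory expectation can be pictured with a minimal sketch (the class and its behaviour are illustrative assumptions, not any real trading system): an automated loop trades at machine speed but checks, on every iteration, a flag that a human supervisor can set to halt it.

```python
import threading

class TradingSystem:
    """Toy automated trading loop with a human-operable kill switch."""

    def __init__(self):
        self.kill_switch = threading.Event()  # a human sets this to halt trading
        self.orders_sent = 0

    def generate_order(self):
        # Placeholder for the real signal and order logic.
        return {"side": "buy", "qty": 1}

    def run(self, max_iterations=1000):
        for _ in range(max_iterations):
            if self.kill_switch.is_set():
                break  # human intervention stops the system immediately
            self.generate_order()
            self.orders_sent += 1

system = TradingSystem()
system.kill_switch.set()   # supervisor halts the system
system.run()
print(system.orders_sent)  # 0: no orders once the switch is set
```

The point is not the trading logic but the control: however fast the loop runs, there is a path by which a human can stop it.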

At the other end of the spectrum, there are AI tools used by experts in a professional environment.  For example, actuaries might use machine learning techniques to undertake experience analysis and support reserving work.

Between these two examples, you have AI tools that provide a forecast or recommendation for consideration by an analyst.  For example, the AI tool could provide a credit rating that validates a rating derived using more traditional methods.

Another middle-of-the-road alternative is ‘management by exception’.  This means that the AI tool has a degree of autonomy to operate within a ‘norm’ inferred from historical data.  Cases that fall outside the norm are referred to an analyst, whose review verifies and improves the predictions. 
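A ‘management by exception’ setup can be sketched in a few lines (the historical figures and thresholds are invented for illustration): the tool decides automatically for cases within a norm inferred from historical data and refers anything outside it to an analyst.

```python
# Toy 'management by exception' routing: the norm is inferred from
# historical data as a simple mean +/- 3 standard deviations band.
from statistics import mean, stdev

historical = [100, 102, 98, 101, 99, 103, 97, 100]
mu, sigma = mean(historical), stdev(historical)
lower, upper = mu - 3 * sigma, mu + 3 * sigma

def route(case_value):
    """Decide automatically within the norm; refer exceptions to an analyst."""
    if lower <= case_value <= upper:
        return "auto-decide"
    return "refer-to-analyst"

print(route(101))  # within the norm -> handled automatically
print(route(150))  # outside the norm -> referred for human review
```

In practice the norm would come from a fitted model rather than a mean-and-spread band, but the routing decision, and the need for a human to own the exceptions, is the same.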

These are business choices, which in turn have implications for the development process of AI tools.  You would expect controls around data and model documentation in all cases.  But, broadly speaking, you would also expect tighter controls and more intense validation for AI tools that operate more independently of human intervention.  This includes the required depth of understanding of the model, covering:

  • explainability – why did the model do that;
  • transparency – how does the model work;
  • the impact on customers – e.g., the difference between Netflix recommendations and credit card underwriting.
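The first bullet – why did the model do that – can be probed even when the model is a black box. A minimal sketch (the model, data and outcomes are all invented for illustration): permutation importance checks how much accuracy is lost when one input is scrambled, revealing which inputs actually drive the decisions.

```python
import random

random.seed(0)

# A 'black box' model: we can only call it, not inspect it.  In this
# invented example it approves credit purely on income.
def model(income, age):
    return 1 if income > 50 else 0

# Invented cases: (income, age, known_outcome).
cases = [(30, 25, 0), (60, 40, 1), (80, 30, 1), (20, 55, 0), (70, 22, 1)]

def accuracy(shuffle_feature=None):
    incomes = [c[0] for c in cases]
    ages = [c[1] for c in cases]
    if shuffle_feature == "income":
        random.shuffle(incomes)
    elif shuffle_feature == "age":
        random.shuffle(ages)
    hits = sum(model(i, a) == c[2] for i, a, c in zip(incomes, ages, cases))
    return hits / len(cases)

# A large accuracy drop when a feature is shuffled signals that the
# model's decisions depend on it -- a first step towards 'why'.
print(accuracy())          # 1.0 on this toy data
print(accuracy("age"))     # still 1.0: age plays no role in the decisions
print(accuracy("income"))  # typically drops: income drives the decisions
```

This kind of probe does not explain an individual decision, but it is a cheap first check on which inputs a model is actually relying on.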

The choice of operating model also has important implications for staff training.  Staff who operate AI tools without having been involved in their development must be trained to the appropriate level to ensure that the AI tool operates effectively.  For example, where ‘management by exception’ is adopted, staff need the appropriate knowledge and skills to deal with the exceptions.

There are important choices to make about the operating model into which AI tools are deployed.  These choices have risk management and control implications, and they may change over time.  An AI tool might start operating in an advisory capacity; as trust in the AI tool increases, the delegated decision making can be increased.

These implications and choices should be considered as part of the model design.

We hope you found this post of interest. You can subscribe and receive further posts by email. See the box on the right-hand side of the blog's screen or click here.

*  This post is based on my contribution to a virtual panel discussion organised by ActuarTech on AI Governance & Risk Management.

[1] Prudential Regulation Authority (PRA), Algorithmic trading, Supervisory Statement, 5/18, June 2018.

Wednesday, 26 February 2020

Good risk management is not just about good ideas

One might say that this is stating the obvious and that it is understood that implementation also matters.  A recent FCA enforcement case against Moneybarn would suggest that it is not so obvious after all.

Moneybarn is a lender that provides motor finance for used vehicles to ‘non-standard’ customers.[1] The case against them related to regulatory expectations for the treatment of, and communication with, customers who fall into financial difficulties, i.e. the exercise and communication of appropriate forbearance by the lender.  Here, we seek to tease out the implications of this case for the risk management activities of FCA-regulated businesses.

1.  Appropriate policy design

As one would expect, policies need to cover the appropriate ground.  This can include articulating the appropriate range of options (in this case, for customer forbearance and resolution), the considerations to be taken into account and the governance that applies to the different options. 

It is worth noting that in this enforcement case, it appears that the FCA had no obvious concerns about the relevant policies and procedures reviewed.  

2.  Implementation

The challenge is how these policies and procedures are translated into the business, e.g. whether the call scripts are consistent with the policies.  In some cases, this means that calls will be far from “linear”: customer service agents have to consider a range of options and guide the customer.  This has implications for the training and tools available to customer service agents. 

The FCA notes that the review of the sample showed no use of any forbearance option other than clearing arrears over a short period of time, “despite the fact that policies and procedures referred to other available options”.   

3.  Monitoring and assurance

There is usually a combination of first line monitoring and oversight by 2nd and 3rd line functions.  To some extent, who provides assurance becomes less important than whether assurance is provided.

It is important to recognise that assurance should be provided about the processes and about the outcomes.  Where the nature of the issue involves considering customers’ individual circumstances in response to financial difficulties, then it is important to evidence that the range of options set out in the policy have been delivered.   This is more challenging to monitor than following a process. 

It is interesting that in this enforcement note there are no references to assurance or to the role of 2nd and 3rd line functions.

4.  Regulatory relationship management

The FCA's initial engagement started with a seemingly low-profile review of a “limited number” of files and call records, leading to a visit in July 2016 to assess forbearance and termination practices.  There were then several interactions with the FCA in September 2016 and January 2017, leading to a formal request for the imposition of a requirement in June 2017 and eventually enforcement action.  One must wonder whether a more proactive engagement with the FCA would have prevented the escalation to enforcement.

It is often noted that proactive engagement with the FCA on the issues raised would have been expensive.  Hindsight is a powerful tool, but it is hard to see how the cost of proactive engagement could have exceeded the cost of enforcement, which ended up being very substantial – the fine of £2.7m, the impact on senior management’s time, and the £30.3m of compensation paid to customers potentially affected by these failings. 

This post is part of the materials discussed in episode 3 of RegNut Podcast.   If you found this post of interest, subscribe to RegNut.  You can also subscribe to the blog and receive further posts by email. See the box on the right-hand side of the blog's screen or click here.

[1] Non-standard customers are those that cannot access finance from mainstream lenders because they have a poor or no credit history or past problems with credit due to unemployment, ill health or other adverse events.

Monday, 27 January 2020

Operational Resilience

By Shirley Beglinger, Advisory Board Member, Crescendo Advisors

In today's interconnected financial world, "operational resilience" must be taken to mean much more than just "a fully tested disaster recovery plan". Regulators are requiring boards to see beyond the walls of their own firm and identify its position in the economic, IT and service-delivery ecosystem, with an emphasis on the important services it provides. This is a completely different perspective on risk.  Boards and CROs need to reconsider many tried and tested risk methodologies and metrics.

In reviewing the drivers of potential operational disruption, the CRO may identify several which are difficult or expensive to address. "Reliance on legacy infrastructure" for example will likely lead to a lengthy boardroom discussion of the expense and dangers of IT integration projects. Supply chains and data sharing quickly lead to the realisation that even if the firm's own arrangements are top-notch, there are probably other firms in their ecosystem who may not have the same level of preparedness.

Having identified potential sources of disruption, the board must then quantify potential costs (internal and external) and assess the ability to recover from severe and plausible scenarios of operational disruption and compare these with the firm's stated tolerance for operational disruption. Where necessary, remediation plans must be put in place.
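The comparison the board is asked to make can be reduced to a simple check (the service names, tolerances and recovery estimates below are all invented for illustration): for each important service, compare the estimated time to recover under a severe but plausible scenario with the stated tolerance for disruption.

```python
# Toy impact-tolerance check: flag services whose estimated recovery
# time under a severe-but-plausible scenario breaches the stated
# tolerance for disruption (all figures invented for illustration).
services = {
    # service: (tolerance_hours, estimated_recovery_hours)
    "payments": (4, 6),
    "claims_handling": (48, 24),
    "customer_portal": (24, 30),
}

def remediation_needed(portfolio):
    """Return the services that breach their disruption tolerance."""
    return sorted(
        name
        for name, (tolerance, recovery) in portfolio.items()
        if recovery > tolerance
    )

print(remediation_needed(services))  # ['customer_portal', 'payments']
```

The hard work, of course, is in estimating the recovery times and agreeing the tolerances; the comparison itself is the easy part, but making it explicit per service is what drives the remediation plans.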

While no board member wishes to explain to the regulator why their firm was the first domino in the ecosystem to fall over, such far-reaching change needs to be carefully managed.  To implement these requirements firms will benefit from a pilot that enables them to develop an understanding of the steps that would be required.  This will be less disruptive and more beneficial than a firm-wide initiative.

However, the need to scale up means that firms will need to identify or acquire in-house "resilience capabilities". A key aspect of the output from a successful pilot project would be to identify exactly what capabilities are required and how they can best be embedded within the firm's business.


Thursday, 1 August 2019

ERM in Three Lines*

One of the challenges with enterprise risk management (ERM) is how much is written on the subject.  I find it useful to identify the key components.  This provides a structure to sort out the detailed views and comments, though it is also more than that. If you are a busy CRO or senior risk leader, identifying the key components enables you to take stock and think about challenges and improvements that may be relevant to your priorities. 

Here is an attempt to sum up ERM and provide that clarity in three headlines.

1.       A vision of the ERM purpose 

My preference for financial services is ‘protect and enable’. This highlights that risk management is more than just about avoiding the downside; it is about how risk management supports decision making, including the role of the CRO in that decision making. (More on ‘protect and enable’ and different views from practitioners shared on Crescendo Advisors’ blog are available here.)

2.       An articulation of how to deliver and embed ERM in the business 

This is your ERM framework, roles and responsibilities, policies, and risk appetite. They must provide the right balance between the level of detail and clarity to create a durable product and support business implementation.

3.       Evidence of the outcomes of vision and articulation of ERM (1 and 2 above) 

This is the outcome of the ERM, i.e. the assurance that is provided to the Board. This means that a feedback mechanism that supports improvement is in place. This is partly about risk or thematic reviews, but it also represents a wider perspective that involves 1st line and 3rd line as well. I also find that focusing on assurance is more ‘real’ than a discussion on the extent to which processes are implemented or embedded.

At the risk of oversimplifying, here is my own take on the UK insurance business position on these three aspects:
  • The articulation of the ERM vision is progressing but there is still work to be done. There is a sense that, broadly speaking, people operate according to the ‘protect and enable’ vision without articulating it as clearly as it could be.  
  • Good progress has been made articulating how to deliver and embed ERM in businesses; all businesses have ERM frameworks and policies in place.  Some are considering external reviews after the frameworks have been in place for some time.  
  • The biggest challenge ahead is evidencing ERM implementation and providing structured assurance to the Board about ERM expectations. This is a challenge for the risk management function (risk reviews?), the first line (business and control reviews?) and internal audit (coordination with the first and second lines?).  Please get in touch if you want to receive a paper with initial thoughts on this challenge. 

Do you agree with these views about the insurance sector in the UK? How about banking and asset management? How is this seen in other countries?

*  No pun intended about the three lines of defence.


Thursday, 18 July 2019

AI and Risk Management

Earlier this year, I gave a presentation to a group of actuaries - the Network of Consulting Actuaries - on the challenge of adopting Artificial Intelligence tools in Financial Services and how risk management can help.  I have turned the speaking notes into a paper - here.  

Happy reading!


Thursday, 4 July 2019

3+1 Types of Digital Transformations and How to Prioritize Them

A former insurance CEO once said that if you want to understand risk in financial services, you should start by looking at the products you are offering. I have been exploring how incumbents in financial services, and specifically risk management, should change to embrace FinTech. Inevitably then the subject of ‘digital transformation’ comes up. I have been speaking with various colleagues and friends recently and I realised that there are rather different forms of digital transformations with different implications for risk management and the business.  Here is my take on the various types. 

1.       Data-driven

Someone in the business takes the initiative and starts collating, curating and using the many data sources in the business to address specific analytical issues and enhance the quality of decision making.  This represents a bottom-up transformation with potential transformational features. 

In this case, buy-in is unlikely to be an issue. The main risk management challenge may arise from the scaling up of this initiative. For example, scaling up may involve using external data rather than internal data or bringing new technology to store the data, e.g. a data lake, which needs to be integrated into existing systems. It is also important that the consideration of analytical issues in the business factors in the need to maintain (and enhance, where necessary) an understanding of the risk profile of the business. For example, if additional data allows the business to modify its underwriting approach in a significant way, you should also consider how the (different) exposures would be monitored. There are a couple of examples here.

2.       Enhancing Customer Journeys 

This can be about how customers are serviced, given their existing journeys, and might include enhancing the front-end applications or rolling out new IT equipment to service customers. Alternatively, the transformation may be about changing or enhancing aspects of customer journeys. This might include, for example, introducing chat-bots as part of customer journeys (e.g. claims management) or applying an artificial intelligence-based tool to a specific process (e.g. underwriting).

This type of transformation has become the most visible form of digital transformation thanks to the various accelerators that incumbents in financial services have created. The challenge of buy-in is typically addressed by specifying that the accelerator should partner external providers with business leaders for whom the technology may be relevant. The impact on the risk profile of the business is also dependent on the specific transformation and should be considered from the outset. 

3.       IT-enabler

There are cases where the legacy systems become the main challenge and where the adoption of cloud-based services can be part of the answer. There are several approaches here, ranging from incremental steps to a ‘big-bang’ approach. One interesting idea is focusing on reducing the functionality of the legacy system and replicating that outside using new technology. 

These transformations may be motivated by concerns about operational resilience in the short term but might also support the transformations outlined above and enable more effective risk management. 

4.       Digital ‘Non-transformation’

This involves applying new technologies in the context of a new product line where there is no transformation as such. This clearly avoids the transformation in the short term but it can also provide the business with the means to build confidence in specific technologies (AI, blockchain) and the capability to execute and bring on board new technologies.

These types of digital transformations are not mutually exclusive, but it is important to be clear that they are different. Equally, they are not substitutes for each other and the real challenge is prioritising between them. This will inevitably vary between businesses, though I believe that there are standard considerations shaping the priorities such as the need to change the culture in order to mobilize the business for the digital era and the state of the core IT infrastructure, including the need to leverage technology as an enabler.  

What do you think about these categories? 


Monday, 27 May 2019

The New and the Old in Risk Management

I have been writing about the new and the old in risk management over the past year. This starts with the slow pace of adoption of FinTech by incumbents in financial services. I have suggested that an important component of the change needed includes incumbents amending and enhancing risk management frameworks to reflect new FinTech innovations. (See my last post on the subject.)

Recently, I came across an article from McKinsey that makes a similar point in the context of model risk and the adoption of artificial intelligence (AI) and machine learning. It turns out I am in good company! 

McKinsey’s article notes that banks have developed and implemented frameworks to manage model risk, including model validation reflecting specific regulatory frameworks, in this case from the US Federal Reserve (here). They recognise that the implementation of these frameworks is not appropriate to deal with the model risk associated with AI and machine learning. Banks are therefore proceeding cautiously and slowly introducing new modelling approaches even when these are available.

The article then shows how a standard framework for model risk management can be used to identify the extra considerations required for the framework to cover AI and machine learning models appropriately.  The key message is that the challenge of adopting AI and machine learning can be addressed through careful consideration of existing approaches. 

Two further thoughts from McKinsey’s article. Firstly, the article rightly refers to model management rather than validation. It is always useful to reiterate that model validation undertaken by the risk function is just a component of how models are managed in the business. Secondly, model management should not apply only to internal models used to calculate regulatory capital, but should apply more widely to models used in the business such as those used for pricing, valuation of assets and liabilities.

The article ends with a cautionary tale of an unnamed bank where the model risk management function took initial steps to ready itself for machine learning models on the assumption that there were none in the bank. It then discovered that an innovation function had been established and was developing models for fraud detection and cybersecurity.


Monday, 29 April 2019

The Curse of Risk Appetite

In this post, I go back to one of the fundamental aspects of an ERM framework: risk appetite. ‘The Curse of Risk Appetite’ is part of the title of an interesting paper reviewing the misuses of risk appetite.[1] Some of the misuses described in the paper might sound familiar, but perhaps the key point to take away from the paper is that there is a potential for risk appetite to become synonymous with ‘a consideration of risk’. I am not sure this was ever the intention. 

The paper includes several useful suggestions to enhance risk appetite. They are focused on the long-run value of the firm and on the structure of risk appetite statements, reflecting a view that risk is the likelihood of falling below critical levels of performance. However, my attention was really caught by the authors’ suggestion to improve the organisational process for risk management. They suggest that a risk function’s role should be defined to include responsibility for evaluating the combined effect of strategic initiatives and capital budgeting on the firm’s overall risk profile.

On one level, this prescription is consistent with the view that the aim of the risk function should be to ‘protect and enable’, with the emphasis on the ‘enable’ aspect which sometimes gets overshadowed by ‘protect’. I am attracted to this suggestion because it turns a vision into a practical requirement that can be incorporated into an articulation of roles and responsibilities for a CRO or risk function. 

If, however, this was implemented literally in UK financial services, I suspect there would be an issue with regulators’ expectation about the independence of the risk function (second line of defence) from the business (first line). 

A similar outcome could be reached by clarifying that the role of the CRO/risk function includes providing a risk opinion in the early stages of the consideration of major strategic initiatives that have the potential to alter the business’s risk profile. The emphasis on timing is important. Providing a risk opinion only when major strategic initiatives are presented for approval is unlikely to add value. A CRO/risk function opinion in the early stages is likely to support consideration of the details of the initiatives and how they can be shaped to strike the appropriate balance between risk and return.


[1] Alviniussen, Alf and Jankensgård, Håkan, The Risk-Return Tradeoff: A Six-Step Guide to Ending the Curse of Risk Appetite (May 7, 2018). 

Wednesday, 3 April 2019

Risk Management as Infrastructure for Artificial Intelligence and FinTech

During 2018, I wrote several posts about FinTech, Artificial Intelligence (AI) and risk management.  When I was kindly invited to present to the Network of Consulting Actuaries, I chose to use the opportunity to consolidate my views on the subject.  

There were several ideas flowing through my mind.

Firstly, informal evidence suggests that, for all the hype, FinTech and AI have not yet become mainstream in insurance or in financial services more generally.

Secondly, the largest business transformation arising from FinTech and AI is the adoption of these technologies by incumbents.  Indeed, I explored this in the context of banking through the group project at the Oxford FinTech Programme I completed in December 2018.

Thirdly, someone who works for a multinational insurer observed during an InsurTech event in London that, as a regulated entity, the insurer has responsibilities and obligations towards its customers and must follow due process before rolling out new technologies.  There was a hint of an apology to the nimble start-ups in the audience in this observation.

Putting all these thoughts together led me to see the main challenge to the adoption of FinTech by incumbents as governance, including how risk management is applied in practice.  If the aim of risk management is to ‘protect’ or block, then the incumbent does not have an obvious lever to support the introduction of AI tools and FinTech.  

If, on the other hand, the aim of risk management is perceived as to ‘protect and enable’, then risk management can be part of the solution.  Risk management can lead to the creation of necessary infrastructure to ensure that AI tools achieve their transformational potential.  This includes articulating a vision of how a control framework should be leveraged, considering the impact of FinTech and AI on risk management frameworks, focusing on explainable AI, and articulating the implications for the target operating model.  This will facilitate incumbents’ adoption of FinTech and AI.  

Take a look at the presentation I gave (here) for a more detailed articulation of these points.


Friday, 23 November 2018

Integrating Artificial Intelligence in Risk Governance in Financial Services

I have written a number of articles about the interaction between artificial intelligence (AI) and governance and risk management.  My latest piece highlighted the impact that AI tools can have on a firm’s risk profile (here).  I am in the process of gathering my thoughts on how to provide for the ongoing and timely monitoring of AI-based tools, including their integration in business governance and risk management in financial services.

While reading on the subject (and keeping up with the pace of the Oxford FinTech Programme), I came across a succinct and interesting paper from Singapore’s Monetary Authority (here) that articulates how the principles of fairness, ethics, accountability and transparency can - and should - be applied to promote the use of AI in the financial sector. 

The paper highlights four useful points for senior management and Boards considering the strategic use of AI-based tools.

1. It is important to invest time and effort in understanding and monitoring the decision making of AI-based tools – i.e. do not regard AI-based tools as black boxes. 

2.  AI decisions are subject to the same ethical standards that apply to decisions made by staff – i.e. if AI-based tools purport to make decisions as humans would, but without their ‘feelings’, they should be scrutinised like humans.

3.  Governance has an important internal element not just an external one - i.e. firms should not rely on regulatory initiatives to support them delivering appropriate outcomes.

4.  The use of AI-based tools should be transparent to consumers in terms of their use and outcomes – i.e. while there are limits to the explanations that can be provided, transparency cannot be dismissed altogether.

The paper also includes a number of practical illustrations that highlight the principles identified.

All in all, this paper represents a useful starting point for firms considering how to equip themselves to benefit from the opportunities that AI technology offers their businesses.
