
Sunday, 14 June 2020

Delegating Decision Making to AI Tools – Choices and Consequences*


Sometimes when I hear about Artificial Intelligence (AI) tools, it seems the discussion is all about the technical details of the model and the data, which are certainly very important. This post is about another important aspect: the operating model in which the AI tool will operate.

There are many aspects of such an operating model.  Some are practical, such as ensuring that the tools integrate with other parts of the business.   In this post, I am focusing on the delegation of decision making to the AI tool – the choices that exist in most cases and the implications for the control environment.  These are summarised in the figure below.

At one extreme of the delegation of decision making, you have AI tools that operate independently of human intervention.  An example is algorithmic trading: an automated trading system trades without any human intervention, exploiting the speed and data-processing advantages that computers have over a human trader.  Interestingly, this is also one of the few prescriptive examples of PRA intervention: the regulator requires that a human be able to stop the trading system.[1]
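To make the 'human can stop the system' requirement concrete, here is a minimal sketch of an automated trading loop with a human-operated kill switch. All names are illustrative; this is not drawn from the PRA text or any real trading platform.

```python
import threading
import time

class AutomatedTrader:
    """Illustrative automated trading loop with a human-operated kill switch."""

    def __init__(self):
        self._halted = threading.Event()  # set only by a human operator

    def kill_switch(self):
        """Human override: trading stops at the next loop iteration."""
        self._halted.set()

    def run(self):
        while not self._halted.is_set():
            signal = self.generate_signal()  # the model decides; no human in the loop
            self.execute(signal)
            time.sleep(0.001)                # pacing only; real systems are event-driven
        self.cancel_open_orders()            # wind down safely once halted

    # Stubs standing in for the model and the execution venue.
    def generate_signal(self):
        return 0

    def execute(self, signal):
        pass

    def cancel_open_orders(self):
        pass

trader = AutomatedTrader()
threading.Thread(target=trader.run, daemon=True).start()
time.sleep(0.1)
trader.kill_switch()  # the human intervention that must remain possible
```

The design point is simply that the override sits outside the model: however sophisticated the signal generation, the halt condition is checked on every iteration and is controlled by a person.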

At the other end of the spectrum, there are AI tools used by experts in a professional environment.  For example, actuaries might use machine learning techniques to undertake experience analysis and support reserving work.

Between these two examples, you have AI tools that provide a forecast or recommendation for consideration by an analyst.  For example, the AI tool could provide a credit rating that validates a rating derived using more traditional methods.

Another middle-of-the-road alternative is ‘management by exception’.  This means that the AI tool has a degree of autonomy to operate within a ‘norm’, which is inferred from historical data.  Cases that fall outside the norm are referred to an analyst, whose review both verifies the prediction and provides feedback to improve the tool.  A minimal sketch of this routing follows.
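The sketch below illustrates one way the routing could work, assuming the ‘norm’ is defined as the central band containing most of the scores observed historically. The function names and thresholds are my own, not from the post.

```python
import numpy as np

def learn_norm(historical_scores, coverage=0.99):
    """Infer the 'norm' from historical data: the central band that
    contains `coverage` of historically observed scores."""
    tail = (1 - coverage) / 2
    return (np.quantile(historical_scores, tail),
            np.quantile(historical_scores, 1 - tail))

def route(score, norm):
    """Within the norm the AI tool acts autonomously; outside it,
    the case is referred to an analyst."""
    lo, hi = norm
    return "auto" if lo <= score <= hi else "analyst"

rng = np.random.default_rng(0)
norm = learn_norm(rng.normal(size=10_000))  # stand-in for historical scores
print(route(0.3, norm), route(4.2, norm))   # -> auto analyst
```

In practice the ‘score’ could be a model confidence, an anomaly measure or a business metric; the governance question is who sets `coverage` and how the exceptions queue is staffed.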

These are business choices and in turn have implications for the development process of AI tools.  You would expect controls around data and model documentation in all cases.  But broadly speaking, you would also expect tighter controls and more intensive validation for AI tools that operate more independently of human intervention.  This includes the depth of understanding required of the model, covering:

  • explainability – why did the model do that (a simple, model-agnostic check is sketched after this list);
  • transparency – how does the model work;
  • the impact on customers – e.g., the difference between Netflix recommendations and credit card underwriting.
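On the explainability point, permutation importance is one simple, model-agnostic check of which inputs drive a model’s predictions. The sketch below is generic and illustrative; it is not tied to any particular library or to a method described in the post.

```python
import numpy as np

def permutation_importance(predict, X, y, metric, n_repeats=10, seed=0):
    """How much does the metric degrade when each feature is shuffled?
    A larger drop suggests the model leans more heavily on that feature."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break feature j's link to the target
            drops.append(baseline - metric(y, predict(Xp)))
        importances[j] = np.mean(drops)
    return importances  # one score per feature, in input order
```

A check like this answers ‘which inputs mattered’, not ‘how the model works’ – which is precisely the distinction between explainability and transparency drawn above.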

The choice of operating model also has important implications for staff training.  Staff who operate AI tools they were not involved in developing must be trained to the appropriate level to ensure that the tools operate effectively.  For example, where ‘management by exception’ is adopted, staff would need the appropriate knowledge and skills to deal with the exceptions.

In short, there are important choices to make about the operating model into which AI tools are deployed.  These choices have risk management and control implications, and they may change over time: an AI tool might start out in an advisory capacity and, as trust in it grows, be granted greater delegated decision making.

These implications and choices should be considered as part of the model design.




*  This post is based on my contribution to a virtual panel discussion organised by ActuarTech on AI Governance & Risk Management.

[1] Prudential Regulation Authority (PRA), ‘Algorithmic Trading’, Supervisory Statement SS5/18, June 2018.


Wednesday, 3 April 2019

Risk Management as Infrastructure for Artificial Intelligence and FinTech


During 2018, I wrote several posts about FinTech, Artificial Intelligence (AI) and risk management.  When I was kindly invited to present to the Network of Consulting Actuaries, I chose to use the opportunity to consolidate my views on the subject.

There were several ideas flowing through my mind.

Firstly, informal evidence suggests that, for all the hype, FinTech and AI have not yet become mainstream in insurance or in financial services more generally.

Secondly, the largest business transformation arising from FinTech and AI is the adoption of these technologies by incumbents.  Indeed, I explored this in the context of banking through the group project at the Oxford FinTech Programme I completed in December 2018.

Thirdly, someone who works for a multinational insurer observed during an InsurTech event in London that, as a regulated entity, the insurer has responsibilities and obligations towards its customers and must follow due process before rolling out new technologies.  There was a hint of apology in this observation towards the nimble start-ups in the audience.

Putting all these thoughts together led me to see governance – including how risk management is applied in practice – as the main challenge to the adoption of FinTech and AI by incumbents.  If the aim of risk management is to ‘protect’ or block, then the incumbent does not have an obvious lever to support the introduction of AI tools and FinTech.

If, on the other hand, the aim of risk management is perceived as to ‘protect and enable’, then risk management can be part of the solution.  Risk management can lead to the creation of necessary infrastructure to ensure that AI tools achieve their transformational potential.  This includes articulating a vision of how a control framework should be leveraged, considering the impact of FinTech and AI on risk management frameworks, focusing on explainable AI, and articulating the implications for the target operating model.  This will facilitate incumbents’ adoption of FinTech and AI.  

Take a look at the presentation I gave (here) for a more detailed articulation of these points.


Friday, 23 November 2018

Integrating Artificial Intelligence in Risk Governance in Financial Services


I have written a number of articles about the interaction between artificial intelligence (AI) and governance and risk management.  My latest piece highlighted the impact that AI tools can have on a firm’s risk profile (here).  I am in the process of gathering my thoughts on how to provide for the ongoing and timely monitoring of AI-based tools, including their integration in business governance and risk management in financial services.

While reading on the subject (and keeping up with the pace of the Oxford FinTech Programme), I came across a succinct and interesting paper from the Monetary Authority of Singapore (here) that articulates how the principles of fairness, ethics, accountability and transparency can – and should – be applied to promote the use of AI in the financial sector.

The paper highlights four useful points for senior management and Boards considering the strategic use of AI-based tools.

1. It is important to invest time and effort in understanding and monitoring the decision making of AI-based tools – i.e. do not regard AI-based tools as black boxes.

2. AI decisions are subject to the same ethical standards that apply to decisions made by staff – i.e. if AI-based tools purport to make decisions as humans would, but without their ‘feelings’, they should be scrutinised like humans.

3. Governance has an important internal element, not just an external one – i.e. firms should not rely on regulatory initiatives to support them in delivering appropriate outcomes.

4. The use of AI-based tools should be transparent to consumers in terms of both use and outcomes – i.e. while there are limits to the explanations that can be provided, transparency cannot be dismissed altogether.

The paper also includes a number of practical illustrations that highlight the principles identified.

All in all, this paper represents a useful starting point for firms considering how to equip themselves to benefit from the opportunities that AI technology offers their businesses.


Monday, 25 June 2018

An FCA Enforcement Case Or an Example of Board Maturity?


The FCA recently issued an enforcement action against the CEO of Barclays as a result of the CEO’s attempt to identify a whistle-blower.  (Click here for the FCA enforcement notice and here for a short summary of the facts of the case.)  There have been impassioned comments about the appropriateness of the FCA’s response, i.e. a fine imposed on the CEO.  However, I would like to focus on something else.

One of the most revealing aspects of FCA enforcement cases is how the issue comes to the FCA’s attention.  Typically, FCA supervision or thematic work identifies serious shortcomings in a firm that lead to enforcement action.  This case was rather interesting because there was none of that.

Group Compliance conducted an internal investigation into the anonymous letters, which was formally closed on 9 January 2017.  The FCA explained that “early in 2017” the Board became aware of the CEO’s attempt to identify the whistle-blower and that, after conducting its own investigation, the Board decided to refer the CEO to the FCA.  Can you imagine this ten or twenty years ago?  Unlikely, I would say.

There are a number of interpretations one could advance. However, I am inclined to see this as evidence of the significant progress made in corporate governance in recent years and of the maturity boards can achieve in the appropriate environment. I can guess that it may not have been easy for Barclays’ board to refer the CEO to the regulator, but who said that being a board director would be easy?



Thursday, 24 May 2018

Artificial Intelligence (AI) and the Board Risk Committee


The purpose of risk management in financial services is usually defined as to ‘protect and enable’.  The ‘protect’ dimension can refer to the franchise value of the business but is mainly about protecting from regulatory intervention. ‘Enable’ has a perspective of value (however defined) and achievement of company objectives. (Click here to read more about ‘protect and enable’.)

AI-based solutions, leveraging vast amounts of data, are already a reality in the world of financial services, and these solutions are only likely to become more prevalent over the next ten years.  What are the implications of AI developments for a Board Risk Committee?

The simple ‘protect and enable’ approach suggests a number of points for discussion:

  • How would your company evidence that AI systems comply with relevant legislation, e.g. non-discrimination laws?
  • How would the wider data needs of AI systems be reconciled with data protection legislation? What about the so-called ‘right to explanation’? What would be the impact of these wider data needs on cyber-security?
  • What is the business purpose of introducing an AI system? Does the business seek to enhance operational efficiencies? Does it aim to enhance business performance? How would you ensure that this purpose is achieved?  
  • What would be the operational impact of the deployment of specific AI tools in the business? Would it also alter the overall risk profile of the business? The profile of certain risks?
  • What are the implications for risk governance, the risk management function and other oversight functions?

These are not simple questions that can be covered in a single meeting of the Risk Committee. In some cases, the answer may not be clear-cut.  For example, an AI-based underwriting system could be deployed to enhance business performance or to seek operational efficiencies. In other cases, addressing the issues would require the development of appropriate monitoring systems rather than a point-in-time consideration.

However, it is also worth bearing in mind that unless you operate in a start-up, there is likely to be a fair amount of technology available that is not necessarily based on AI and can be applied to improve existing business processes and reflect a (more) customer-centric perspective.  So perhaps the main question about AI systems is really whether there is an adequate understanding of technology in the business to ensure that AI is the appropriate technology.

So where should a Risk Committee start?  It may be useful to treat these as discussions outside the usual calendar of Risk Committee meetings and to develop a programme that considers them over time.
