Monday, 25 June 2018

An FCA Enforcement Case Or an Example of Board Maturity?


The FCA recently issued an enforcement action against the CEO of Barclays as a result of the CEO’s attempt to identify a whistle-blower. (Click here for the FCA enforcement notice and here for a short summary of the facts of the case.) There have been impassioned comments about the appropriateness of the FCA’s response, i.e. the fine imposed on the CEO. However, I would like to focus on something else.

One of the most revealing aspects of FCA enforcement cases is how the issue comes to the FCA’s attention. Typically, it is FCA supervision or thematic work that identifies the serious shortcomings in a firm which lead to enforcement action. This case was rather interesting because there was none of that.

There was an internal investigation of the anonymous letters by Group Compliance, which was formally closed on 9 January 2017. The FCA explained that “early in 2017” the Board became aware of the CEO’s attempt to identify the whistle-blower and that, after conducting its own investigation, the Board decided to refer the CEO to the FCA. Can you imagine this ten or twenty years ago? Unlikely, I would say.

There are a number of interpretations one could advance. However, I am inclined to see this as evidence of the significant progress made in corporate governance in recent years and of the maturity boards can achieve in the appropriate environment. I would guess that it may not have been easy for Barclays’ board to refer the CEO to the regulator, but who said that being a board director would be easy?


If you found this post of interest, you may want to subscribe and receive further posts by email. See the box on the right-hand side of the screen or click here

Thursday, 24 May 2018

Artificial Intelligence (AI) and the Board Risk Committee


The purpose of risk management in financial services is usually defined as to ‘protect and enable’. The ‘protect’ dimension can refer to the franchise value of the business, but is mainly about protection from regulatory intervention. ‘Enable’ is about creating value (however defined) and achieving company objectives. (Click here to read more about ‘protect and enable’.)

AI-based solutions, leveraging vast amounts of data, are already a reality in the world of financial services, and they are only likely to become more prevalent over the next ten years. What are the implications of AI developments for a Board Risk Committee?

The simple ‘protect and enable’ approach suggests a number of points for discussion:

  • How would your company evidence that its AI systems comply with relevant legislation, e.g. non-discrimination laws?
  • How would the wider data needs of an AI system sit with data protection legislation? What about the so-called ‘right to explanation’? What would be the impact of these wider data needs on cyber-security?
  • What is the business purpose of introducing an AI system? Does the business seek to enhance operational efficiencies? Does it aim to enhance business performance? How would you ensure that this purpose is achieved?  
  • What would be the operational impact of the deployment of specific AI tools in the business? Would it also alter the overall risk profile of the business? The profile of certain risks?
  • What are the implications for risk governance, the risk management function and other oversight functions?

These are not simple questions that can be covered in a single meeting of the Risk Committee. In some cases, the answer to the questions may not be clear-cut. For example, an AI-based underwriting system can be deployed to enhance business performance or to seek operational efficiencies. In other cases, addressing some of the issues would require the development of appropriate monitoring systems rather than a point-in-time consideration.
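By way of illustration only, the kind of ongoing monitoring hinted at above (for instance, evidencing that an AI-based underwriting system is not producing discriminatory outcomes) might, in its very simplest form, look something like the Python sketch below. The group labels, the decision data and the four-fifths-style threshold are assumptions made purely for this sketch; they are not a regulatory standard, and a real monitoring system would need to be considerably richer (statistical significance, trends over time, data quality checks, and so on).

    # Hypothetical sketch: monitor approval-rate disparity across groups
    # for an AI-based underwriting model. Names and threshold are illustrative.
    from collections import defaultdict

    def approval_rates(decisions):
        """decisions: list of (group, approved) tuples produced by the model."""
        totals, approvals = defaultdict(int), defaultdict(int)
        for group, approved in decisions:
            totals[group] += 1
            if approved:
                approvals[group] += 1
        return {g: approvals[g] / totals[g] for g in totals}

    def disparity_flag(decisions, threshold=0.8):
        """Flag if the lowest approval rate falls below `threshold` times the
        highest (a simple four-fifths-style screen, for illustration only)."""
        rates = approval_rates(decisions)
        lowest, highest = min(rates.values()), max(rates.values())
        return rates, highest > 0 and (lowest / highest) < threshold

    # One (hypothetical) reporting period of underwriting decisions
    decisions = [("group_a", True), ("group_a", True), ("group_a", False),
                 ("group_b", True), ("group_b", False), ("group_b", False)]
    rates, flagged = disparity_flag(decisions)
    print(rates, "escalate to Risk Committee" if flagged else "within tolerance")

Run periodically and tracked over time, even a crude indicator of this kind turns the questions above from a one-off discussion into something the Risk Committee can actually monitor.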

However, it is also worth bearing in mind that, unless you operate in a start-up, there is likely to be a fair amount of technology already available that is not necessarily based on AI and can be applied to improve existing business processes and reflect a (more) customer-centric perspective. So perhaps the main question about AI systems is really whether there is an adequate understanding of technology in the business to ensure that AI is the appropriate technology.

So where should a Risk Committee start? It may be useful to treat these as discussions outside the usual calendar of Risk Committee meetings and to develop a programme that considers them over time.

If you found this post of interest, you may want to subscribe and receive further posts by email. See the box on the right-hand side of the screen or click here