Friday, 23 November 2018

Integrating Artificial Intelligence in Risk Governance in Financial Services


I have written a number of articles about the interaction between artificial intelligence (AI) and governance and risk management.  My latest piece highlighted the impact that AI tools can have on a firm’s risk profile (here).  I am in the process of gathering my thoughts on how to provide for the ongoing and timely monitoring of AI-based tools, including their integration in business governance and risk management in financial services.

While reading on the subject (and keeping up with the pace of the Oxford FinTech Programme), I came across a succinct and interesting paper from Singapore’s Monetary Authority (here) that articulates how the principles of fairness, ethics, accountability and transparency can - and should - be applied to promote the use of AI in the financial sector. 

The paper highlights four useful points for senior management and Boards considering the strategic use of AI-based tools.

1. It is important to invest time and effort in understanding and monitoring the decision making of AI-based tools – i.e. do not regard AI-based tools as black boxes. 

2.  AI decisions are subject to the same ethical standards that apply to decisions made by staff – i.e. if AI-based tools purport to make decisions as humans would, but without their ‘feelings’, they should be scrutinised like humans.

3.  Governance has an important internal element not just an external one - i.e. firms should not rely on regulatory initiatives to support them delivering appropriate outcomes.

4.  The use of AI-based tools should be transparent to consumers in terms of their use and outcomes – i.e. while there are limits to the explanations that can be provided, transparency cannot be dismissed altogether.

The paper also includes a number of practical illustrations that highlight the principles identified.

All in all, this paper represents a useful starting point for firms to consider how they can equip themselves to benefit from the opportunities that AI technology offers for their businesses.

If you found this post of interest, you may want to subscribe and receive further posts by email. See the box on the right-hand side of the screen or click here

Wednesday, 10 October 2018

This Time is Different - The Digital Revolution

The August issue of Central Banking, a journal, includes my review of a book about the digital revolution by Chris Skinner.  It is a fascinating book that can change preconceived views.  You can read the review here or below.

This Time Is Different: A book review of Digital Human by Chris Skinner, Marshall Cavendish (2018)

Mr Skinner has written two books on FinTech and banking (Digital Bank and ValueWeb), and Digital Human is his third. This represents an opportunity to take a step back and consider some of the bigger questions about FinTech. How much of a change could this represent for banking? For financial services? For society?

His main argument is that digitalisation has reduced the cost associated with a minimum viable product beyond recognition for nearly anything in financial services. One way of looking at FinTech is as ‘one big bucket of finance and technology’ with a range of technologies from InsurTech (based on artificial intelligence) to digital currencies, with mobile wallets and peer-to-peer lending in between. Indeed, one could make the argument that it should be called ‘TechFin’ instead. However, it is possible to make overall sense of these technologies by distinguishing between those that challenge existing business structures and those that create new ones. 

One of the main aspects of the digital revolution with respect to banking is the differential effect between the (developed) West and developing world. Surprisingly, it is not in the direction you might expect. For the West overall, FinTech represents a challenge to existing business structures. Current IT systems took shape in the 1970s and 1980s at a time when now-ubiquitous ATMs were first introduced. While the front-ends of these systems have changed over time, the core architecture has not. As Mr Skinner points out, CEOs invest significantly in systems maintenance to pass on to the next CEO rather than overhauling technology. I was left wondering if this might also be a reflection of misplaced risk aversion that contributes to the relatively short tenure of CEOs.

There also seems to be a potentially systemic issue arising from the natural ageing process of the programmers who can still write code in the language of the legacy systems (COBOL). Mr Skinner observes that more than 50% of COBOL programmers are over 45 years old, so the challenge of maintaining legacy systems is not going to get any easier.

However, the real challenge does not seem to be adopting new technologies but the vertically integrated business model of banking or, as Mr Skinner puts it rather eloquently, being ‘control freaks in a proprietary operation building everything themselves’.  As usual, technology enables the challenge but does not help the incumbent figure out how the business model should evolve and how to remain profitable. Mr Skinner offers two suggestions. The first is leveraging on its capital, history and brands and repositioning the business as a trusted party that can select specialised providers, like Amazon Marketplace. The second is leveraging on the data and focusing on advice and data analytics.

Indeed, there seems to be a change in emphasis in FinTech. Between 2010 and 2014, the focus was on disrupting and unbundling existing banking business models. Since 2014, the focus has shifted to collaboration, with the more dynamic banks leveraging their customer reach and capital.

Perhaps the key point to emphasise is that the regulatory framework has already adapted to some extent, at least in the EU, where Open Banking is already a reality because of EU directives.

If you don’t work full time in FinTech, it is difficult to form an impression about how far these trends could go.  (Yes, I know there are forecasts, but they are merely forecasts.) This is where the other part of the book is particularly useful.  

In the developing world, banking tends to be restricted to affluent clients. FinTech does not challenge major incumbents; rather, it represents more of a development opportunity. FinTech makes it economic to service relatively small transactions (by Western standards), with the low value per transaction compensated by a relatively large number of transactions. In this way, financial inclusion becomes a business and stops being a form of charity.

Mr Skinner illustrates extensively how far and deep these trends are going. In sub-Saharan Africa, mobile banking and e-wallets lead with the overall number of accounts growing fivefold between 2011 and 2016, reaching around 275 million accounts out of 420 million mobile subscribers. Interestingly, use is not evenly spread. Institutional design continues to matter even in the age of FinTech. In some countries, these developments are led by mobile network operators and in others by banks. Some countries actively encourage partnership and agreements to enable domestic and cross-border money transfers cheaply.

This is not just a matter of convenience. If you cannot get paid reliably and must rely on cash, there is a limited number of business opportunities that can thrive. The case study of China’s Ant Financial is therefore fascinating. It starts with a problem of trust between buyers and sellers that limited the development of e-commerce; the solution evolved into what we now call electronic payments. One of the lessons here is the central role that the consumer plays. The business scale is staggering: in 2016, the value of transactions on the peak day (called Singles’ Day) was double the amount transacted on the US’s Thanksgiving Day, Black Friday and Cyber Monday combined. And it is not just payments: there seems to be an emerging pattern that starts with electronic payments and moves on to managing money, and Ant’s money market fund is already larger than JP Morgan’s US Government money market fund.

And what about society? Living longer, 3D printing, the Internet of Things and conquering space may well change how we live. I am sure you have heard before the old dictum that this time is different. Perhaps this time it is indeed, if only for banking because of FinTech.

Sunday, 16 September 2018

Monitoring the Risk and Business Impact of AI-Based Solutions



AI-based solutions can shape how financial services businesses make money, whether or not the business model changes. For an existing financial services business, the motivations may vary, ranging from efficiency to expanding the business. There would be project risk, as with any development, but leaving that important consideration aside, it is worth bearing in mind that AI-based solutions would also affect the risk profile of the business. This may not be the original intention, but it is a likely outcome. The key implication is that implementing an AI-based solution would require a radically different risk oversight approach by the business.

Standard computer algorithms which are not AI-based can, and do, solve complex problems. The main feature of such algorithms is that the problem is defined in advance and an algorithm is developed to solve it; the algorithm will produce the same answer as long as the same inputs are provided. So a credit-scoring mechanism calibrated to capture a certain type of client gives you just that.

The answers offered by an AI-based system may change over time. New data is used to reassess the underlying relationships and recalibrate the relationship between the target variable and the potential explanatory variables. This ‘learning’ can also happen in a standard programme when there is a process of recalibration. The difference is that, in the case of AI, learning happens on a real-time basis; that is the essence of AI.
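
To make the contrast concrete, here is a minimal sketch (in Python, with entirely hypothetical names, weights and data) of the difference between a fixed scoring rule and a model that recalibrates as each new observation arrives:

```python
# Illustrative sketch only: contrasting a fixed scoring rule with a model
# that recalibrates as each new observation arrives. All names, weights and
# data are hypothetical.

def static_score(income, debt):
    """A fixed, rule-based credit score: same inputs, same answer, always."""
    return 0.7 * income - 0.3 * debt

class OnlineScorer:
    """A toy model whose weights shift with every new repayment outcome."""

    def __init__(self, w_income=0.7, w_debt=-0.3, lr=0.01):
        self.w_income, self.w_debt, self.lr = w_income, w_debt, lr

    def score(self, income, debt):
        return self.w_income * income + self.w_debt * debt

    def learn(self, income, debt, repaid):
        # Nudge the weights toward the observed outcome (one gradient step).
        error = repaid - self.score(income, debt)
        self.w_income += self.lr * error * income
        self.w_debt += self.lr * error * debt

model = OnlineScorer()
before = model.score(1.0, 0.5)      # the answer before any new data
model.learn(1.0, 0.5, repaid=1.0)   # a new repayment outcome arrives
after = model.score(1.0, 0.5)       # same inputs, but the answer has moved
```

The static rule gives the same score for the same inputs forever; the learning model gives a slightly different answer after even a single new data point. That drift, however small, is exactly what a monitoring framework has to track.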

Alternatively, with AI a target variable may not have been defined. That is not as unusual as it might sound. For example, algorithms assessing loan or credit card applications may fall in this category because there is no single rule to predict a borrower’s likelihood of repayment. New data can lead to a certain recalibration or can be used to identify new relationships in the data. For example, over time an AI-based system might identify that outstanding debt is a better predictor of the likelihood of repayment than repayment history, and penalise someone with a relatively good track record of timely repayments.

The first type of AI-based solution is called ‘supervised machine learning’ and the second ‘unsupervised machine learning’. The key difference is the extent of autonomy that goes with the learning.
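
A toy illustration of the distinction, using made-up applicant data (the fields, thresholds and helper functions below are illustrative assumptions, not a real underwriting model): supervised learning fits a rule to a known target variable (did the borrower default?), while unsupervised learning has no target and simply groups applicants by similarity.

```python
# A minimal contrast between the two learning modes, with toy data.
# Everything here is an illustrative assumption.

applicants = [
    # (outstanding_debt, missed_payments, defaulted?)
    (0.2, 0, False), (0.8, 3, True), (0.3, 1, False), (0.9, 4, True),
]

# Supervised: the target variable (defaulted?) is known, so fit a rule to it.
def fit_debt_threshold(data):
    """Pick the debt cutoff that best separates defaulters from the rest."""
    candidates = sorted(d for d, _, _ in data)
    def errors(t):
        return sum((d >= t) != defaulted for d, _, defaulted in data)
    return min(candidates, key=errors)

# Unsupervised: no target variable - just group applicants by similarity.
def two_means_1d(values, iters=10):
    """Crude one-dimensional k-means (k=2): split points around two centres."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        near_lo = [v for v in values if abs(v - lo) <= abs(v - hi)]
        near_hi = [v for v in values if abs(v - lo) > abs(v - hi)]
        lo, hi = sum(near_lo) / len(near_lo), sum(near_hi) / len(near_hi)
    return lo, hi

threshold = fit_debt_threshold(applicants)            # learned from labels
centres = two_means_1d([d for d, _, _ in applicants]) # found without labels
```

The supervised routine can be held to account against its labels; the unsupervised one invents its own structure, which is why the autonomy it brings needs closer oversight.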

Consider the potential impact of AI-based tools on conduct risk. One of the expectations from Treating Customers Fairly (TCF) with respect to product governance is that products are designed to meet the needs of identified consumer groups and are targeted accordingly. This requires a clear business strategy, including identification of the target market through a combination of qualitative and quantitative research, and oversight of the business to ensure that it remains aligned with initial expectations of customers and business generated. Take the example of automated investment services covered in a recent FCA review. These providers would rely on some type of AI-based solution, whether supervised or unsupervised machine learning. The possibility of capturing different customers, or of the advice generated being different from what was envisaged, cannot be ruled out. The challenge is to put in place a monitoring approach which ensures that the outcomes and risks which arise are consistent with the expectations in the business plan.

Something similar can apply from the perspective of credit risk, impacting the quality of the portfolio and performance. Suppose you have been targeting retail customers with a specific risk rating for a credit card business. If you roll out an AI-based solution to enhance the efficiency of product underwriting, you would need mechanisms in place to ensure that the credit quality of the portfolio is consistent with your expectations, or else change those expectations. Both options are fine. You may want to keep your target credit rating constant and seek more volume, or perhaps you see AI-based solutions as a more robust tool to support decision making and, in a controlled manner, can relax your target rating. Regardless of your choice, you would need to put in place a credit risk monitoring approach suited to the new AI-based solutions, as well as ensure that the business understands the portfolio implications of the ‘learning’ that is at the core of an AI-based system.
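
By way of illustration only, a monitoring check of this kind might compare the rating mix the AI-based underwriter actually books against the mix assumed in the business plan, and flag any band that drifts beyond an agreed tolerance. The target mix, tolerance and helper names below are hypothetical:

```python
# Hypothetical sketch of a portfolio drift check: compare the rating mix the
# AI-based underwriter actually produced against the mix assumed in the
# business plan, and flag bands that drift beyond tolerance.

TARGET_MIX = {"A": 0.50, "B": 0.35, "C": 0.15}   # assumed business-plan mix
TOLERANCE = 0.05                                  # assumed appetite per band

def rating_mix(portfolio):
    """Share of accounts in each rating band."""
    total = len(portfolio)
    return {band: sum(r == band for r in portfolio) / total
            for band in TARGET_MIX}

def drift_alerts(portfolio):
    """Return the bands whose share has drifted beyond tolerance."""
    mix = rating_mix(portfolio)
    return [band for band, share in mix.items()
            if abs(share - TARGET_MIX[band]) > TOLERANCE]

# After the AI tool has been 'learning' for a quarter, the booked portfolio
# may no longer match the plan:
booked = ["A"] * 40 + ["B"] * 35 + ["C"] * 25   # 40% / 35% / 25%
alerts = drift_alerts(booked)                    # A and C have drifted
```

The point of the sketch is that the check runs continuously against the booked portfolio, not once at launch, which is precisely the shift in monitoring that a learning system demands.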

The salient point to take away is that the roll-out plan of AI-based tools may focus on the launch. However, the greatest challenge may well be the need to provide for the ongoing and timely monitoring of the AI-based tools and their integration in business governance and risk management, which I will cover in the next post.


Wednesday, 25 July 2018

Artificial Intelligence (AI) – Fear or Uncertainty?



During the Q&A session at a recent panel discussion on FinTech, some people in the audience spoke openly about their fear of AI. This appeared to be driven by the potential impact of AI on jobs. Technological progress has often led to changes in the labour market, destroying some jobs and creating new ones. Who had heard of a web designer 25 years ago?

A lesson I take from world events and other experiences is that dismissing people’s fears about AI (or anything else, for that matter) is not usually an effective strategy. Unfortunately, I do not have a crystal ball to predict the future or provide reassurance. Instead, I thought I would take the hard road and write something about the genuine uncertainty that exists about the progress of AI and its potential value.

It is useful to start by taking a step back and thinking about the key aspects of AI tools. Their main purpose is to make predictions about the future based on historical data and the underlying correlations. If you have been trained in economics, building models that employ data to generate predictions that support decision making is nothing new. 

AI, however, is different because advances in computer technology facilitate real-time predictions based on an extended range of data sources, including data originating from social media, text and many devices. Not surprisingly, these enhanced capabilities can uncover dependencies and correlations that might not have been visible to the naked eye or with more traditional research methods or data sources. It is worth noting at this point that despite its progress, AI shares with more traditional research methods the challenge of distinguishing between correlation and causation.

Progress in computing brings with it the ability to build decision-making tools that transform predictions into actions that can be executed without human intervention. The simplest and most ubiquitous example is predicting the correct spelling as you type. An example in financial services would be an AI tool that forecasts an individual’s or a corporation’s creditworthiness, then offers a product with no human intervention. However, predictions will remain predictions.

Another source of uncertainty about the impact of AI is the breadth of these tools. So far, AI tools have been confined to what is usually described as ‘narrow’ activities, such as the two examples above. The possibility of material employment substitution comes with what is called general artificial intelligence, which operates in broader settings. There has been progress in moving beyond these narrow settings, one example being Google’s AI tool that beat the world champion of the board game Go. However, a breakthrough that gives rise to general artificial intelligence remains elusive. There are different views about how close we may be to reaching that point, but the reality is that no one knows. In the end, does it really matter if the world will be very different in 100 years?

At the same time, it is worth remembering that in an increasingly complex and interdependent world, enhanced predictions can benefit everyone, for example by preventing diseases and helping people enjoy work more. There are countless stories of this. The latest one I have come across (The Economist, “Diligence Disrupted”, 12 July 2018) relates to the challenge of legally reviewing a huge number of documents as part of due diligence or preparation for court. This is a labour-intensive activity that usually represents a significant cost hurdle. AI tools (and there are many) play a triage role, scanning the many documents searching for specific clauses, specific aspects of clauses, or their absence. These then allow lawyers to conduct a focused review and legal diligence, resulting in greater job satisfaction and, potentially, cost savings for the client. As the article notes, the net impact on employment remains unclear. However, lower fees could lead to higher demand requiring more lawyers.

So there is uncertainty about the impact of AI tools and a fair amount of benefit to society that can be derived before that uncertainty is resolved. This does not mean that progress with AI will always be smooth; like any other tool, AI tools can be misused intentionally or unintentionally. This is where applying governance and risk management to AI tools would generate value, but I will leave that discussion for another post.


Monday, 25 June 2018

An FCA Enforcement Case Or an Example of Board Maturity?


The FCA recently issued an enforcement action against the CEO of Barclays as a result of the CEO’s attempt to identify a whistle-blower.  (Click here for the FCA enforcement notice and here for a short summary of the facts of the case.) There have been impassioned comments about the appropriateness of the FCA’s response, i.e. a fine imposed on the CEO. However, I would like to focus on something else.  

One of the most revealing aspects of FCA enforcement cases is how the issue comes to the FCA’s attention. Typically, FCA supervision or thematic work would identify serious shortcomings in a firm that lead to enforcement action. This one was rather interesting because there was none of that. 

There was an internal investigation of the anonymous letters by Group Compliance which was formally closed on 9 January 2017. The FCA explained that “early in 2017”, the Board became aware of the CEO’s attempt to identify the whistle-blower and that after conducting its own investigation, the Board decided to refer the CEO to the FCA. Can you imagine this ten or twenty years ago? Unlikely, I would say.

There are a number of interpretations one could advance. However, I am inclined to see this as evidence of the significant progress made in corporate governance in recent years and of the maturity boards can achieve in the appropriate environment. I can guess that it may not have been easy for Barclays’ board to refer the CEO to the regulator, but who said that being a board director would be easy?



Wednesday, 6 June 2018

Why should we consider artificial intelligence (AI) from a risk management perspective?


I have been reading about AI in financial services and thinking about it from a risk management perspective. What is there to be gained from this? 

AI, like computers and other innovations, is a general purpose technology. One of the insights from Tim Harford, an economist and journalist, about the impact of this type of innovation is that it sometimes takes time for innovations to have an impact, because people do not immediately change the mindset associated with the previous technology. The main example he provides of this time lag is electricity. Following his invention of the light bulb in the late 1870s, Edison built electric power stations in the US in 1881, and within a year electricity was available as a commodity. Yet as late as 1910, when electric motors had more to offer, manufacturing still relied on steam power. You can read about this here.

I found Tim’s explanation for this conundrum compelling. Steam-powered factories were arranged in a specific form to benefit from the single steam engine through a central drive shaft that ran through the factory.  Initially, owners changed the source of power to electricity but to fully benefit from it, factories had to be rearranged according to a different logic. In addition, rearranging factories gave workers more autonomy and flexibility, and the way staff was recruited, trained and paid had to change as well. As a result, adopting electricity meant much more than simply substituting one source of power for another and the pace of adopting electricity was slow.

I think this analogy might be relevant in applying AI to financial services. AI offers a new way of powering decision making in businesses. The example of replacing steam power with electricity suggests that to get the full value of AI, financial services need to think about AI as more than enhancing or substituting for existing tools. Risk management requires a broader perspective to support decision making and achieve business objectives.  I would hope that considering AI from this perspective would help financial services business to fully benefit from AI. 

You can also help by submitting questions about AI from the perspective of risk management and governance. Send your questions by email (isaacalfonblog@gmail.com) or leave a comment.  I am not sure I will have answers, but who knows?


Thursday, 24 May 2018

Artificial Intelligence (AI) and the Board Risk Committee


The purpose of risk management in financial services is usually defined as to ‘protect and enable’.  The ‘protect’ dimension can refer to the franchise value of the business but is mainly about protecting from regulatory intervention. ‘Enable’ has a perspective of value (however defined) and achievement of company objectives. (Click here to read more about ‘protect and enable’.)

AI-based solutions, leveraging on vast amounts of data, are already a reality in the world of financial services, and these solutions are only likely to become more prevalent in the next ten years. What are the implications of AI developments for a Board Risk Committee? 

The simple ‘protect and enable’ approach suggests a number of points for discussion:

  • How would your company evidence that AI systems comply with relevant legislation, e.g. non-discrimination laws?
  • How would the wider data needs of AI systems sit with data protection legislation? What about the so-called ‘right to explanation’? What would be the impact of these wider data needs on cyber-security?
  • What is the business purpose of introducing an AI system? Does the business seek to enhance operational efficiencies? Does it aim to enhance business performance? How would you ensure that this purpose is achieved?  
  • What would be the operational impact of the deployment of specific AI tools in the business? Would it also alter the overall risk profile of the business? The profile of certain risks?
  • What are the implications for risk governance, the risk management function and other oversight functions?

These are not simple questions that can be covered in a meeting of the Risk Committee. In some cases, the answer to the questions may not be clear-cut.  For example, an AI-based underwriting system can be deployed to enhance business performance or to seek operational efficiencies. In other cases, addressing some of the issues would require the development of appropriate monitoring systems rather than a point-in-time consideration.

However, it is also worth bearing in mind that unless you operate a start-up, there is a fair amount of technology available that is not necessarily based on AI and can be applied to improve existing business processes and reflect a (more) customer-centric perspective.  So perhaps the main question about AI systems is really whether there is an adequate understanding of technology in the business to ensure that AI is the appropriate choice.

So where should a Risk Committee start?  It may be useful to approach this as a series of discussions outside the usual calendar of Risk Committee meetings and to develop a programme that considers these questions over time.



Monday, 14 May 2018

Lessons from Bank Recovery and Resolution


The latest issue of the Central Banking Journal includes my review of a book about the Euro crisis in Cyprus written by Panicos Demetriades, who was Governor of the Central Bank of Cyprus at the time.   It is a fascinating book with insights about the challenge of bank recovery.   You can read the review here or below.

Book Review: A Diary of the Euro Crisis in Cyprus: Lessons for Bank Recovery and Resolution by Panicos Demetriades, Palgrave Macmillan, 2017

This book is about Panicos Demetriades’ tenure as Governor of the Central Bank of Cyprus between May 2012 and April 2014. It covers the banking crisis that hit Cyprus, the banks’ resolution and the wider lessons learned from the event. Reading this book felt in some ways like a simultaneous reading of Gabriel Garcia Marquez’s novel, Chronicle of a Death Foretold, and an economics-based thriller like Murder at the Margin by Marshall Jevons.

The book begins with Demetriades’ appointment as Governor of the Central Bank of Cyprus. You know from the beginning how it ends: Demetriades resigns as Governor. This is a manifestation of the challenge that Central Bank independence represents; banking resolution is the specific context in which the Central Bank’s independence is tested. In fact, writing this sentence already reveals one of the underlying issues: the only feature of Central Banks’ independence enshrined in European treaties is the independence of the Governor of the Central Bank.

As Demetriades discovered, there are ways to limit the practical independence of the Governor such as appointing (or firing) Deputy Governor(s), creating new Executive Directors with a seat on the Board whose roles are determined by the Board rather than the Governor, and requiring Board approval for bank licensing and amendments to existing licenses. These might look like arcane corporate governance issues, but they do matter, especially when independence is most needed, i.e. in times of financial crisis. Interestingly, the European Central Bank (ECB) and the Commission witnessed these changes but had limited powers to intervene other than expressing concerns through legal opinions.

Demetriades also plays a detective role and explains how the crisis in Cyprus came about. It is interesting that the origin of the crisis is traced back to the country’s business model – an offshore financial centre for wealthy Russians and Eastern Europeans, supported by a network of lawyers and introducers to banks. Like many of you, I have seen the term business model applied to companies, but this is the first time I have seen it applied to a country. This suggests to me that avoiding the crisis would have required a very tough regulatory stance, and that it would have happened sooner or later, regardless of the Euro crisis.

The book identifies the trigger event for the crisis.  Interestingly for me, as someone who works in risk management, the trigger is the decision of Cyprus’ two main banks to invest most of their equity capital in Greek debt in the spring of 2010, when Greece was being downgraded. This resulted in losses in excess of €4 billion.  As Demetriades notes, this decision ignored the fundamental relationship between yields and risk, and the diversification of investments.

There were also challenges for international institutions in the troika. There are a number of references to the IMF analysis of debt sustainability and the assumptions underpinning it. A debt to GDP ratio of 100% was assumed to be sustainable for Cyprus, compared to 120% for Greece. In Demetriades’ view, this made the bail-in for Cyprus larger than might have been necessary. 

Demetriades’ tenure as a Governor of the Central Bank spanned a right-wing and a left-wing government. You might have preconceptions about which government would find the notion of an independent Central Bank more challenging. In fact, both governments found it equally challenging because of national pride and voting considerations. These challenges weigh heavily on Demetriades who concludes the book with a stark warning about the future of the Euro, which is in fact relevant to all the members of the Eurosystem: ‘[P]opulism, if left unchecked, can shake the foundations of the monetary union beyond the point of repair’.

While the book is entitled ‘a diary’, don’t let that word put you off. It is much more than a personal diary.

Just as I did when reading Chronicle of a Death Foretold, I wondered if Demetriades could have done something to maintain the independence of the Central Bank and avoid the clash that led to his resignation. I could not identify anything.




Wednesday, 4 April 2018

Conflicts of Interest: Connecting Enforcement and Supervision



The FCA announced enforcement action against a commercial broker and a fine of £4 million in late 2017 as a result of failures associated with the broker’s management of conflicts of interest. The details of the case are here.

Conflicts of interest can be anywhere, and firms are well aware of that. However, there is a qualitative difference between the conflict of interest that an individual might have with, say, a supplier, and what the FCA identifies as an ‘inherent conflict of interest’ in the business model or ownership structure. This is the risk that commercial intermediaries must manage.[1] It is not static, and it changes as intermediaries take up other activities where they act as an agent of the insurer.

The FCA has also undertaken a thematic review of commercial insurance intermediaries focusing on this issue. (It published the results in 2014 here.) The FCA evidence included a survey of small and medium enterprises (SMEs).[2] This suggests that many SME customers do not fully understand the intermediary’s role and how it may have changed in recent years. For example, four of five SME customers expect an intermediary to get quotes from two insurers, which is not consistent with how intermediaries operate, in particular for micro SME customers (fewer than nine employees).

There are wider messages from this enforcement action for the practical management of inherent conflicts of interest. To begin with, there should be a regular process to identify conflicts of interest.  This might be challenging but following the sources of revenue would enable a robust identification of conflicts and of the impact of changes in the business model.

While a policy on conflict of interest is a regulatory requirement, it needs to be comprehensive enough to enable staff in the business to actually manage conflicts of interest. This would require specific guidance articulating how to deal with customers, including what information to collect, what checks to undertake, and the production of meaningful management information.

Business arrangements such as ‘preferred facilities’ are not ruled out but must be managed and monitored carefully, taking into account links to brokers’ remuneration, how the firm presents itself to SMEs, the existence of ‘Chinese walls’ and customers’ (probably limited) understanding of the intermediary’s role.

Any quality reviews by the first line should be designed to assess how inherent conflicts of interest have been identified, managed and mitigated. The process should be risk-based; applying the same degree of checks to all brokers is unlikely to be appropriate.

Last but not least, as ever, culture is a factor. If statements from senior management do not recognise and support the need to manage inherent conflicts of interest, don’t expect much of the above to be in place.

The FCA will usually say something about how a case was discovered, whether through supervisory activity or internal review. I was puzzled that the FCA was rather vague on this occasion. On reflection, I suspect (but cannot be certain) that there may be a connection with the FCA’s thematic review on conflicts of interest mentioned earlier. If that is the case, it is useful for firms to understand the potential consequences of being unprepared for a thematic review when invited to participate.




[1] This risk is not exclusive to commercial intermediaries. It exists in other parts of financial services and has also been covered in other FCA enforcement activities.
[2] Businesses with fewer than 250 employees.

Wednesday, 14 March 2018

Taking Risks: Lessons from a Politician


In my spare time, I like to read about current affairs. I have an interest in Brexit and its resulting economic impact, which I covered well before the referendum here.  My current reading list is here.  My interests include the Middle East, and it was with that in mind that I picked up a book by the late Shimon Peres, former President of the State of Israel, which he completed just before he passed away in September 2016.  Peres also served as Finance Minister when hyperinflation was one of the main features of Israel’s economy, and he initiated a bold programme that tamed inflation successfully.

I found the title of the book, No Room for Small Dreams, a bit puzzling. I guess I did not expect a book title that reflects on someone’s achievements to start with ‘no’.  In any case, the book was quite interesting, articulating Peres’s role in some of the policy challenges of the State of Israel.  However, I can never stray too far from my professional interests, and I found that the book included a good many observations relevant to the practice of risk management.

The first observation is that often, not taking a risk is a risk in itself.

So many times in our lives, we struggle to confidently leap forward, averse to the possibility that we will fall flat. Yet this fear of taking risks can be the greatest risk of all.

People in risk and compliance functions should bear this in mind when they advise against a course of action.  However, if you want to take risks or are implementing regulatory risk requirements, you will need to consider meaningful options:  

I’d come to believe that when you have two alternatives, the first thing you must do is look for a third—the one you did not think of, that doesn’t yet exist.

I learned about the virtue of imagination and the power of creative decision making. ... We were quick and creative, and boldly ambitious, and in that we found our reward.

The challenge is really about options being meaningful.  That is not straightforward and requires consistent support from leadership:

“We have to use our imagination and examine any idea, as crazy as it may seem,” I insisted to those assembled. “I want to hear the plans you have.”  “We have no plans,” responded one. “Then I want to hear the plans you don’t have,” I replied.

If leaders demand allegiance without encouraging creativity and outside inspiration, the odds of failure vastly increase. … [W]ithout emboldening people to envisage the unlikely, we increase risk rather than diminish it.

Interestingly, it is Peres’s view that leadership also has an obligation to understand the technical details of the subject matter.

I felt it essential to gain a degree of mastery in the science that would be driving the project. In previous endeavours, I have come to understand that in addition to a clear vision and strategy, true leadership requires intricate knowledge—a facility with the granular details of every aspect of the mission.

And finally, a word of caution about learning too much from failures:

It is only after we see failure that we can know if we misjudged the risk. ... But one must avoid the temptation to overlearn specific tactical lessons born out of failure or success. … This is one of the hardest things for some leaders to understand: a decision can be right even if it leads to failure.  

This is something that I have covered here. It is not an easy perspective for politicians and business leaders, though I’d like to think that this is where governance might prove itself valuable.
  
If you found this post of interest, you may want to subscribe and receive further posts by email – see box on the right hand side of the screen or click here

Monday, 5 March 2018

Risk Assurance: The Challenge Ahead


I wrote about risk assurance a while ago (here). More recently, I have had a chance to talk with a few people in banking and consulting about it, and to reflect further on the subject.

By way of background, my working definition of risk assurance is a structured activity undertaken by the risk function (second line) which is aimed at evidencing that risk management is embedded in the business. Feel free to comment on this definition.

Risk assurance matters because it contributes to shifting (or maintaining, if you wish) the appropriate risk culture in the business. What do I mean by this? I hope we can all agree that the appropriate risk culture in financial services is one that includes the following:
  • the business takes into account risks in decision making and can evidence that, including compliance with regulatory requirements; and
  • the risk function provides the parameters for taking risk into account in decision making (risk appetite framework, stress testing, etc.) and aggregates risks.
Truly achieving that is a challenging journey that takes time. Many insurers and banks started the risk management journey as a result of regulatory requirements, namely Solvency II or Basel. In practice, this has sometimes meant that risk functions have taken up activities, such as approvals, that belong to business functions. Risk assurance will generate evidence about how risk management operates in practice. It will also help to shift the focus of the risk function, and in turn the business, in the appropriate direction.

I have worked with a number of clients to implement programmes of risk assurance. Interestingly, these engagements have turned out to be rather different because they must reflect the starting point for the business. In one case, the risk function was well resourced, and the focus was planning. In another case, the focus was a combination of up-skilling and evidencing through pilot risk reviews that the activity can add value.

Leaving aside the considerations associated with implementation, it is important that there be a shared perspective about the overall aim of risk assurance, i.e. ‘integrated assurance’. This reflects two simple observations:
  • internal audit functions already provide assurance about the overall control environment;
  • from a Board perspective, assurance is assurance, regardless of which team/line of defence provides it.
In other words, the aim would be to develop a risk-based assurance plan covering deliverables by the second and third lines of defence (2LOD and 3LOD) in such a way that the Board can understand where independent assurance has been provided.

I would be interested to hear your thoughts.



Tuesday, 13 February 2018

Artificial Intelligence and Machine Learning in Financial Services: Implications for Credit Risk Management


A recent paper from the Financial Stability Board[1] considers the implications of artificial intelligence (AI) and machine learning for a number of financial services sectors, including credit risk.

The paper includes a useful section on background and definitions, and provides a clear reminder that these tools identify patterns and correlations rather than causality. I suspect that we will need to be reminded of this distinction more and more, as these tools are being used to explore complex relationships. 
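To see why the distinction matters, here is a minimal sketch with invented data (not drawn from the FSB paper): two series with no causal link at all appear strongly correlated simply because both ride the same underlying trend.

```python
# Illustrative sketch only: two invented series with no causal link that
# nevertheless correlate strongly because they share a common trend.
import numpy as np

rng = np.random.default_rng(0)

t = np.arange(120)
trend = 0.5 * t  # a shared upward drift, e.g. general economic growth

# Neither series drives the other; both simply follow the same trend.
series_a = trend + rng.normal(0, 3, size=t.size)
series_b = trend + rng.normal(0, 3, size=t.size)

# Correlation in levels is high purely because of the shared trend...
corr_levels = np.corrcoef(series_a, series_b)[0, 1]

# ...but after removing the trend (first differences), it largely vanishes.
corr_diffs = np.corrcoef(np.diff(series_a), np.diff(series_b))[0, 1]

print(f"correlation in levels:      {corr_levels:.2f}")
print(f"correlation in differences: {corr_diffs:.2f}")
```

A pattern-matching tool fed the raw levels would happily "discover" this relationship; nothing in the data tells it that the relationship is spurious.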

When it comes to credit risk scoring, the FSB is clear that AI may help to make lending decisions quicker. However, regulators are not persuaded that AI credit scoring models outperform traditional models – or at least, “it has not been proved”. For example, a recent paper from Moody’s[2] compares the performance of their own credit scoring model for corporates against three machine learning approaches. Moody’s finds that, on average, the accuracy levels of the four models are comparable, and notes that the key to enhancing credit scoring models is data.  
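As a purely illustrative sketch of this kind of comparison (synthetic data, and in no way Moody's model or methodology), one can fit a traditional scorecard-style model and a machine-learning model on the same data and compare their discrimination:

```python
# Illustrative sketch only: synthetic data, not Moody's methodology.
# Compares a scorecard-style model (logistic regression) with a
# machine-learning model (gradient boosting) on the same "loan book".
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic borrowers: 10 features, roughly 10% defaults.
X, y = make_classification(n_samples=5000, n_features=10, n_informative=6,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

aucs = {}
for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("gradient boosting", GradientBoostingClassifier(random_state=0))]:
    model.fit(X_tr, y_tr)
    aucs[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: test AUC = {aucs[name]:.3f}")
```

On simple, well-behaved data the two approaches often land close together, which is consistent with the point that richer data, rather than a fancier algorithm, is usually what moves the needle.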

The FSB notes that the deployment of these AI tools would also allow access to credit for people or businesses whose creditworthiness cannot be reliably assessed through traditional credit scoring models. The FSB believes that this would be a positive development for countries with shallow credit markets (emerging markets?), though less positive for countries with deep credit markets (developed markets?). You have been warned…

Regulators are also concerned with the overall auditability of artificial intelligence models used for credit scoring and the wider impact on credit risk governance. There is an important dimension here about how the model is used in the business. Is it operating with some human oversight? This is an important issue for business culture, as it forces a consideration of who is ultimately in control. I suspect that the distinction between retail and commercial lending in terms of transaction volumes may become important; the volume of retail transactions might make human oversight more challenging.

Where does that leave the CEO, CFO or CRO of a bank contemplating the use of AI tools? Here are a few suggestions: 
1.  Have a shared view of the expected business outcomes from deploying AI tools.
2.  Keep monitoring credit risk exposures and alignment with risk appetite even more intensively, as the AI tool might have unintended effects.
3.  Focus on the auditability of the AI tool and its impact on credit risk governance.
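The second suggestion can be made concrete with a very simple sketch: a routine check of credit exposures against risk-appetite limits, so that drift caused by an AI origination tool is escalated early. All segment names and figures below are invented for illustration.

```python
# Hypothetical sketch of suggestion 2: a routine check of credit exposures
# against risk-appetite limits. All segment names and figures are invented.

RISK_APPETITE = {                 # limits per segment, GBP millions
    "retail_unsecured": 500,
    "sme_lending": 300,
    "commercial_property": 800,
}

def appetite_breaches(exposures):
    """Return (segment, exposure, limit) for each segment over its limit."""
    return [(seg, exp, RISK_APPETITE[seg])
            for seg, exp in exposures.items()
            if exp > RISK_APPETITE.get(seg, float("inf"))]

# Example: an AI-driven origination tool has grown SME lending past appetite.
current = {"retail_unsecured": 420, "sme_lending": 340, "commercial_property": 610}
for seg, exp, limit in appetite_breaches(current):
    print(f"ESCALATE: {seg} at {exp}m exceeds appetite of {limit}m")
```

The point is not the code but the discipline: the limits sit outside the AI tool, and breaches are surfaced to humans regardless of what the model recommends.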

