Crescendo Advisors is a boutique risk management consultancy. There is a lot happening at the crossroads between enterprise risk management, financial regulation and governance.
Thursday, 18 July 2019
AI and Risk Management
Earlier this year, I gave a presentation to a group of actuaries - the Network of Consulting Actuaries - on the challenge of adopting Artificial Intelligence tools in Financial Services and how risk management can help. I have turned the speaking notes into a paper - here.
Happy reading!
If you found this post of interest, you can subscribe and receive further posts by email. See the box on the right-hand side of the blog's screen or click here.
Thursday, 4 July 2019
3+1 Types of Digital Transformations and How to Prioritize Them
A former insurance CEO once said that if you want to
understand risk in financial services, you should start by looking at the
products you are offering. I have been exploring how incumbents in financial
services, and specifically risk management, should change to embrace FinTech. Inevitably
then the subject of ‘digital transformation’ comes up. I have been speaking
with various colleagues and friends recently and I realised that there are rather
different forms of digital transformations with different implications for risk
management and the business. Here is my take on the various types.
1. Data-driven
Someone in the business takes the initiative and starts collating,
curating and using the many data sources in the business to address specific
analytical issues and enhance the quality of decision making. This is a bottom-up
initiative with genuinely transformational potential.
In this case, buy-in is unlikely to be an issue. The main risk
management challenge may arise from the scaling up of this initiative. For example,
scaling up may involve using external data rather than internal data or
bringing new technology to store the data, e.g. a data lake, which needs to be integrated
into existing systems. It is also important that the consideration of
analytical issues in the business factors in the need to maintain (and enhance,
where necessary) an understanding of the risk profile of the business. For
example, if additional data allows the business to modify its underwriting
approach in a significant way, you should also consider how the (different)
exposures would be monitored. There are a couple of examples here.
2. Enhancing Customer Journeys
This can be about how customers are serviced, given their existing
journeys, and might
include enhancing the front-end applications or rolling out new IT equipment to
service customers. Alternatively, the transformation may be about changing or
enhancing aspects of customer journeys. This might include, for example, introducing
chat-bots as part of customer journeys (e.g. claims management) or applying an
artificial intelligence-based tool to a specific process (e.g. underwriting).
This type of transformation has become the most visible form
of digital transformation thanks to the various accelerators that incumbents in
financial services have created. The challenge of buy-in is typically addressed
by specifying that the accelerator should partner external providers with
business leaders for whom the technology may be relevant. The impact on the
risk profile of the business is also dependent on the specific transformation
and should be considered from the outset.
3. IT-enabler
There are cases where the legacy systems become the main
challenge and where the adoption of cloud-based services can be part of the
answer. There are several approaches here, ranging from incremental steps to a ‘big-bang’
approach. One interesting idea is focusing on reducing the functionality of the
legacy system and replicating that outside using new technology.
These transformations may be motivated by concerns about
operational resilience in the short term but might also support the transformations
outlined above and enable more effective risk management.
4. Digital ‘Non-transformation’
This involves applying new technologies in the context of a
new product line where there is no transformation as such. This avoids the disruption
of a transformation in the short term, but it can also provide the business with
the means to build confidence in specific technologies (AI, blockchain) and the
capability to execute and bring on board new technologies.
These types of digital transformations are not mutually
exclusive, but it is important to be clear that they are different. Equally,
they are not substitutes for each other and
the real challenge is prioritising between them. Priorities will inevitably vary between
businesses, though I believe there are standard considerations shaping them, such as the need to change the culture in
order to mobilise the business for the digital era, and the state of the core IT
infrastructure, including the extent to which technology can act as an enabler.
What do you think about these categories?
If you found this post of interest, you can subscribe and receive further posts by email. See the box on the right-hand side of the blog's screen or click here.
Monday, 27 May 2019
The New and the Old in Risk Management
I
have been writing about the new and the old in risk management over the past
year. This starts with the slow pace of adoption of FinTech by incumbents in
financial services. I have suggested that an important component of the change
needed includes incumbents amending and enhancing risk management frameworks to
reflect new FinTech innovations. (See my last post on
the subject.)
Recently,
I came across an article from
McKinsey
that makes a similar point in the context of model risk and the adoption of artificial
intelligence (AI) and machine learning. It turns out I am in good company!
McKinsey’s
article notes that banks have developed and implemented frameworks to manage
model risk, including model validation reflecting specific regulatory
frameworks, in this case from the US Federal Reserve (here). Banks recognise,
however, that these frameworks as implemented are not adequate to deal with the
model risk associated with AI and machine learning. They are therefore proceeding
cautiously, introducing new modelling approaches slowly even when these are
available.
The
article then shows how a standard framework for model risk management can be used
to identify the additional considerations required for that
framework to cover AI and machine learning models appropriately. The key message is that the challenge of adopting
AI and machine learning can be addressed through a careful consideration of
existing approaches.
Two
further thoughts from McKinsey’s article. Firstly, the article rightly refers
to model management rather than validation. It is always useful to reiterate
that model validation undertaken by the risk function is just a component of
how models are managed in the business. Secondly, model management should not
apply only to internal models used to calculate regulatory capital, but should
apply more widely to models used in the business, such as those used for
pricing and for the valuation of assets and liabilities.
The
article ends with a cautionary tale of an unnamed bank where the model risk
management function took initial steps to ready itself for machine learning
models on the assumption that there were none in the bank. It then discovered
that an innovation function had been established and was developing models for
fraud detection and cybersecurity.
If you found this post of interest,
you can subscribe and receive further posts by email. See the box on the
right-hand side of the blog's screen or click here.
Monday, 29 April 2019
The Curse of Risk Appetite
In this post, I go back to one of the fundamental aspects of an ERM framework:
risk appetite. ‘The Curse of Risk Appetite’ is part of the title of an
interesting paper reviewing the misuses of risk appetite.[1]
Some of the misuses described in the paper might sound familiar, but perhaps
the key point to take away from the paper is that there is a potential for risk
appetite to become synonymous with ‘a consideration of risk’. I am not sure
this was ever the intention.
The paper includes
several useful suggestions to enhance risk appetite. They are focused on the
long-run value of the firm and on the structure of risk appetite statements, reflecting
a view that risk is the likelihood of falling below critical levels of
performance. However, my attention was really caught by the authors’ suggestion
to improve the organisational process for risk management. They suggest that a risk
function’s role should be defined to include responsibility for evaluating the
combined effect of strategic initiatives and capital budgeting on the firm’s
overall risk profile.
On one level, this
prescription is consistent with the view that the aim of the risk function should
be to ‘protect and enable’, with the emphasis on the ‘enable’ aspect which
sometimes gets overshadowed by ‘protect’. I am attracted to this suggestion
because it turns a vision into a practical requirement that can be incorporated
into an articulation of roles and responsibilities for a CRO or risk function.
If, however, this was
implemented literally in UK financial services, I suspect there would be an
issue with regulators’ expectation about the independence of the risk function (second
line of defence) from the business (first line).
A similar outcome could
be reached by clarifying that the role of the CRO/risk function includes
providing a risk opinion in the early stages of the consideration of major
strategic initiatives that have the potential to alter the business’s risk
profile. The emphasis on timing is important. Providing a risk opinion only
when major strategic initiatives are presented for approval is unlikely to add
value. A CRO/risk function opinion in the early stages is likely to support
consideration of the details of the initiatives and how they can be shaped to
strike the appropriate balance between risk and return.
If you found this post of interest,
you can subscribe and receive further posts by email. See the box on the
right-hand side of the blog's screen or click here.
[1] Alviniussen, Alf and Jankensgård, Håkan, The Risk-Return Tradeoff: A Six-Step Guide to Ending the Curse of Risk
Appetite (May 7, 2018).
Wednesday, 3 April 2019
Risk Management as Infrastructure for Artificial Intelligence and FinTech
During 2018, I wrote several posts about FinTech, Artificial Intelligence (AI) and risk management. I was kindly invited to present to the Network of Consulting Actuaries and chose to use that opportunity to consolidate my views on the subject.
There were several ideas flowing through my mind.
Firstly, informal evidence suggests that, for all the hype,
FinTech and AI have not yet become mainstream in insurance or in financial
services more generally.
Secondly, the largest business transformation arising from
FinTech and AI is the adoption of these technologies by incumbents. Indeed, I explored this in the context of
banking through the group project at the Oxford FinTech Programme I completed
in December 2018.
Thirdly, someone who works for a multinational insurer observed
during an InsurTech event in London that, as a regulated entity, the insurer has
responsibilities and obligations towards its customers and must follow due
process before rolling out new technologies. There was a hint of apology towards
the nimble start-ups in the audience in this observation.
Putting all these thoughts together led me to see the main
challenge to the adoption of FinTech by incumbents as governance, including how
risk management is applied in practice.
If the aim of risk management is to ‘protect’ or block, then the
incumbent does not have an obvious lever to support the introduction of AI
tools and FinTech.
If, on the other hand, the aim of risk management is
perceived as to ‘protect and enable’, then risk management can be part of the
solution. Risk management can lead to the
creation of necessary infrastructure to ensure that AI tools achieve their
transformational potential. This
includes articulating a vision of how a control framework should be leveraged,
considering the impact of FinTech and AI on risk management frameworks,
focusing on explainable AI, and articulating the implications for the target
operating model. This will facilitate
incumbents’ adoption of FinTech and AI.
Take a look at the presentation I gave (here)
for a more detailed articulation of these points.
If you found this post of interest, you can subscribe and receive further posts by email. See the box on the right-hand side of the screen or click here.
Friday, 23 November 2018
Integrating Artificial Intelligence in Risk Governance in Financial Services
I have written a number of articles about the interaction
between artificial intelligence (AI) and governance and risk management. My latest piece highlighted the impact that
AI tools can have on a firm’s risk profile (here). I am in the process of gathering my thoughts
on how to provide for the ongoing and
timely monitoring of AI-based tools, including their integration in
business governance and risk management in financial services.
While
reading on the subject (and keeping up with the pace of the Oxford FinTech
Programme), I came across a succinct and interesting paper from Singapore’s
Monetary Authority (here) that articulates how the principles of fairness,
ethics, accountability and transparency can - and should - be applied to promote the use of AI
in the financial sector.
The paper highlights four useful points for senior
management and Boards considering the strategic use of AI-based tools.
1. It is important to invest time and effort in understanding and monitoring the decision making of AI-based tools – i.e. do not regard AI-based tools as black boxes.
2. AI decisions are subject to the same ethical standards that apply to decisions made by staff – i.e. if AI-based tools purport to make decisions as humans would, but without their ‘feelings’, they should be scrutinised like humans.
3. Governance has an important internal element, not just an external one – i.e. firms should not rely on regulatory initiatives to ensure they deliver appropriate outcomes.
4. The use of AI-based tools should be transparent to consumers in terms of both their use and their outcomes – i.e. while there are limits to the explanations that can be provided, transparency cannot be dismissed altogether.
The paper also includes a number of practical illustrations
that highlight the principles identified.
If you found this post of interest, you may want to subscribe and receive further posts by email. See the box on the right-hand side of the screen or click here.
Wednesday, 10 October 2018
This Time is Different - The Digital Revolution
The August issue of Central Banking, a journal, includes my review of a book about the digital revolution by Chris Skinner. It is a fascinating book that can challenge preconceived views. You can read the review here or below.
This Time Is Different: A book review
of Digital Human by Chris Skinner, Marshall Cavendish (2018)
Mr
Skinner has written two books on FinTech and banking (Digital Bank and ValueWeb),
and now Digital Human is his third. This represents an opportunity to take
a step back and consider some of the bigger questions about FinTech. How much
of a change could this represent for banking? For financial services? For
society?
His
main argument is that digitalisation has reduced the cost associated with a
minimum viable product beyond recognition for nearly anything in financial
services. One way of looking at FinTech is as ‘one big bucket of finance and
technology’ with a range of technologies from InsurTech (based on artificial
intelligence) to digital currencies, with mobile wallets and peer-to-peer
lending in between. Indeed, one could make the argument that it should be called
‘TechFin’ instead. However, it is possible to make overall sense of these technologies
by distinguishing between those that challenge existing business structures and
those that create new ones.
One
of the main aspects of the digital revolution with respect to banking is the
differential effect between the (developed) West and developing world. Surprisingly,
it is not in the direction you might expect. For the West overall, FinTech
represents a challenge to existing business structures. Current IT systems took
shape in the 1970s and 1980s at a time when now-ubiquitous ATMs were first introduced.
While the front-ends of these systems have changed over time, the core
architecture has not. As Mr Skinner points out, CEOs invest significantly in
systems maintenance to pass on to the next CEO rather than overhauling
technology. I was left wondering if this might also be a reflection of
misplaced risk aversion that contributes to the relatively short tenure of CEOs.
There
also seems to be a potentially systemic issue arising from the natural ageing
process of the programmers who can still write code in the language of the
legacy systems (COBOL). Mr Skinner observes that more than 50% of COBOL programmers
are over 45 years old, so the challenge of maintaining legacy systems is not
going to get any easier.
However,
the real challenge does not seem to be adopting new technologies but the
vertically integrated business model of banking or, as Mr Skinner puts it
rather eloquently, being ‘control freaks in a proprietary operation building
everything themselves’. As usual,
technology enables the challenge but does not help the incumbent figure out how
the business model should evolve and how to remain profitable. Mr Skinner offers
two suggestions. The first is leveraging the bank’s capital, history and brands and
repositioning the business as a trusted party that can select specialised
providers, like Amazon Marketplace. The second is leveraging the data and
focusing on advice and data analytics.
Indeed,
there seems to be a change in emphasis in FinTech. Between 2010 and 2014, the focus
was on disrupting existing banking business models and unbundling. Since 2014,
the focus has shifted to collaboration, with the more dynamic banks leveraging
their customers’ reach and capital.
Perhaps
the key point to emphasise is that the regulatory framework has already adapted
to some extent, at least in the EU where Open Banking is already a reality because
of EU directives.
If
you don’t work full time in FinTech, it is difficult to form an impression
about how far these trends could go. (Yes, I know there are forecasts, but they are
merely forecasts.) This is where the other part of the book is particularly
useful.
In
the developing world, banking tends to be restricted to affluent clients. FinTech
does not challenge major incumbents; rather, it represents more of a
development opportunity. FinTech makes it possible to service relatively small transactions
(by Western standards), compensated for by a relatively large volume
of transactions. In this way, financial inclusion becomes a business and stops
being a form of charity.
Mr
Skinner illustrates extensively how far and deep these trends are going. In sub-Saharan
Africa, mobile banking and e-wallets lead with the overall number of accounts
growing fivefold between 2011 and 2016, reaching around 275 million accounts
out of 420 million mobile subscribers. Interestingly, use is not evenly spread.
Institutional design continues to matter even in the age of FinTech. In some
countries, these developments are led by mobile network operators and in others
by banks. Some countries actively encourage partnership and agreements to
enable domestic and cross-border money transfers cheaply.
This
is not just a matter of convenience. If you cannot get paid reliably and must
rely on cash, there is a limited number of business opportunities that can
thrive. The case study of China’s Ant Financial is therefore fascinating. It
starts with a problem of trust between buyers and sellers that was limiting the
development of e-commerce, and evolves into what we now call electronic
payments. One of the real lessons here is about the central role that the
consumer plays. The scale of the business is staggering: in 2016, the value of
transactions on the peak day (Singles’ Day) was double the amount
transacted in the US on Thanksgiving, Black Friday and Cyber Monday
combined. It’s not just payments: there seems to be an emerging pattern that
starts with electronic payments and moves to managing money, and Ant’s money
market fund is already larger than JP Morgan’s US Government money market fund.
And
what about society? Living longer, 3D printing, the Internet of Things and conquering
space may well change how we live. I am sure you have heard the old
dictum that this time is different. Perhaps this time it is indeed, at least for
banking, because of FinTech.
Sunday, 16 September 2018
Monitoring the Risk and Business Impact of AI-Based Solutions
AI-based
solutions can shape how financial services businesses make money, whether or not the
business model itself changes. For an existing financial services business,
the motivations may vary and range from efficiency to expanding the business. There
would be project risk as with any development, but leaving that important
consideration aside, it is worth bearing in mind that AI-based solutions would
also impact the risk profile of the business. This may or may not be the
original intention, but such an impact becomes increasingly likely. The key implication is that
implementing an AI-based solution would require a radically different risk
oversight approach by the business.
Standard
computer algorithms which are not AI-based can—and do—solve complex problems. The main feature of such
algorithms is that the problem is defined up front and an algorithm is developed to
solve it, producing the same answer as long as the same inputs are
provided. So a credit-scoring mechanism calibrated to capture a certain type of
client gives you just that.
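To make the point concrete, here is a minimal sketch of such a deterministic, rule-based scoring routine; the weights, features and cut-off are entirely hypothetical and are there only to show that identical inputs always produce the identical answer.

```python
# A deliberately simple, rule-based credit score: hypothetical weights and
# cut-off, shown only to illustrate that the output is fully deterministic.

def credit_score(income: float, outstanding_debt: float, missed_payments: int) -> int:
    """Return a score between 0 and 100 for a single applicant."""
    score = 50.0
    score += min(income / 10_000, 30)           # reward higher income, capped
    score -= min(outstanding_debt / 5_000, 30)  # penalise existing debt, capped
    score -= 10 * missed_payments               # penalise missed payments
    return int(max(0, min(100, score)))

def decision(score: int, cut_off: int = 60) -> str:
    """Accept or refer; identical inputs always yield an identical answer."""
    return "accept" if score >= cut_off else "refer"

applicant = {"income": 45_000, "outstanding_debt": 12_000, "missed_payments": 1}
s = credit_score(**applicant)
print(s, decision(s))  # the same applicant always gets the same score and outcome
```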
The
answers offered by an AI-based system may change over time. New data is used to
reassess the underlying relationships and recalibrate the relationship between
the target variable and the potential explanatory variables. This “learning”
can also happen in a standard programme when there is a process of
recalibration. The difference is that in the case of AI, learning would happen
on a real-time basis—that’s
the essence of AI.
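As a rough sketch of that difference (using synthetic data and scikit-learn purely for illustration, not as a description of any particular firm's tooling), an incrementally trained model updates its fitted relationships each time a new batch of observations arrives, rather than waiting for a scheduled recalibration exercise.

```python
# Sketch of incremental ("online") recalibration on synthetic data: each new
# batch of labelled observations nudges the fitted relationship, mimicking the
# continuous learning described above.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # e.g. 0 = repays, 1 = defaults (illustrative labels)

for batch in range(5):
    X = rng.normal(size=(200, 3))                    # hypothetical applicant features
    y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)
    model.partial_fit(X, y, classes=classes)         # recalibrate on the latest data only
    print(f"after batch {batch}: coefficients = {model.coef_.round(2)}")
```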
Alternatively,
with AI a target variable may not have been defined. That’s not as unusual as
it might sound. For example, algorithms supporting loan or credit card
underwriting may fall into this category because there is no single rule to
predict a borrower’s likelihood of repayment. New data can lead to a certain
recalibration or can be used to identify new relationships between certain
data. For example, over time an AI-based system might identify that outstanding
debt is a better predictor of the likelihood of borrower repayment than repayment
history and penalise someone with a relatively good track record of timely repayments.
The
first type of AI-based solution is called “supervised machine learning” and the
second “unsupervised machine learning”. The key difference is the extent
of autonomy that goes with the learning.
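A compact way to illustrate that distinction, again with synthetic data and scikit-learn used purely as an example: the supervised model is trained against a labelled target variable, while the unsupervised model receives only the features and finds its own groupings.

```python
# Supervised vs unsupervised learning in miniature, on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))                 # two hypothetical features per customer
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # a labelled target, e.g. repaid / defaulted

supervised = LogisticRegression().fit(X, y)   # learns the mapping from features to the target
print("supervised predictions:", supervised.predict(X[:3]))

unsupervised = KMeans(n_clusters=2, n_init=10, random_state=1).fit(X)  # no target variable given
print("discovered segments:   ", unsupervised.labels_[:3])             # groupings the model found itself
```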
Consider
the potential impact of AI-based tools on conduct risk. One of the expectations
of Treating Customers Fairly (TCF) with respect to product governance is that
products are designed to meet the needs of identified consumer groups and are
targeted accordingly. This requires a clear business strategy, including
identification of the target market through a combination of qualitative and quantitative
research, and oversight of the business to ensure that the customers captured and the
business generated are aligned with initial expectations. Take the example of
automated investment services covered in a recent FCA review. These providers would rely on some type of
AI-based solution, whether supervised or unsupervised machine learning. The
possibility of capturing different customers or the advice generated being
different from what was envisaged cannot be ruled out. The challenge is how to
put in place a monitoring approach which ensures that the outcomes and risks that arise
are consistent with the expectations in the business plan.
Something
similar can apply from the perspective of credit risk, impacting the quality of
the portfolio and performance. Suppose you have been targeting retail customers
with a specific risk rating for a credit card business. If you roll out an
AI-based solution to enhance the efficiency of product underwriting, you would
need to have in place mechanisms to ensure that the credit quality of the
portfolio is consistent with your expectations—or else change those expectations. Both options are
fine. You may want to keep your target credit rating constant and seek more
volume, or perhaps you see AI-based solutions as a more robust tool to support
decision making and, in a controlled manner, can relax your target rating.
Regardless of your choice, you would need to put in place a credit risk
monitoring approach that is suited to the new AI-based solutions, as well as
ensure that the business understands the portfolio implications of the “learning”
that is at the core of an AI-based solution.
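As a sketch of what such a monitoring mechanism could look like (the rating bands, planned mix and tolerance below are hypothetical), one option is to compare the rating mix of newly underwritten business against the mix assumed in the business plan and flag any material drift.

```python
# Hypothetical monitoring check: compare the rating mix of newly written
# business against the mix assumed in the business plan and flag drift.
EXPECTED_MIX = {"A": 0.20, "B": 0.50, "C": 0.30}   # planned share by rating band
TOLERANCE = 0.10                                    # maximum acceptable absolute drift

def rating_mix(ratings: list[str]) -> dict[str, float]:
    total = len(ratings)
    return {band: ratings.count(band) / total for band in EXPECTED_MIX}

def drift_alerts(new_business: list[str]) -> list[str]:
    """Return the rating bands whose share has drifted beyond tolerance."""
    actual = rating_mix(new_business)
    return [band for band, expected in EXPECTED_MIX.items()
            if abs(actual[band] - expected) > TOLERANCE]

# e.g. a month of new accounts written after the AI-based underwriting roll-out
month = ["A"] * 30 + ["B"] * 40 + ["C"] * 130
print(drift_alerts(month))  # ['B', 'C'] -> escalate, or revisit the target rating
```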
The
salient point to take away is that the roll-out plan of AI-based tools may
focus on the launch. However, the greatest challenge may well be the need to
provide for the ongoing and timely monitoring of the AI-based tools and their
integration in business governance and risk management, which I will cover in
the next post.
Wednesday, 25 July 2018
Artificial Intelligence (AI) – Fear or Uncertainty?
During
the Q&A session at a recent panel discussion on FinTech, some people in the
audience spoke openly about the fear of AI. This appeared to be driven by the
potential impact of AI on jobs. Technological progress has often led to changes
in the labour market, destroying some jobs and creating new ones. Who had heard of a web designer
25 years ago?
A
lesson I take from world events and other experiences is that dismissing
people’s fears about AI (or anything else, for that matter) is not usually an
effective strategy. Unfortunately, I do not have a crystal ball to predict the
future or provide reassurance. Instead, I thought I would take the hard road and
write something about the genuine uncertainty that exists about the progress of
AI and its potential value.
It
is useful to start by taking a step back and thinking about the key aspects of
AI tools. The main purpose of these AI tools is to make predictions about the
future based on historical data and their underlying correlations. If you have
been trained in economics, building models that employ data to generate
predictions that support decision making is nothing new.
AI,
however, is different because advances in computer technology facilitate real-time
predictions based on an extended range of data sources, including data originating
from social media, text and many devices. Not surprisingly, these enhanced
capabilities can uncover dependencies and correlations that might not have been
visible to the naked eye or with more traditional research methods or data
sources. It is worth noting at this point that despite its progress, AI shares
with more traditional research methods the challenge of distinguishing between
correlation and causation.
Progress
in computing brings with it the ability to build decision-making tools that
transform predictions into actions that can be executed without human
intervention. The simplest and most ubiquitous example is predicting the
correct spelling as you type. An example in financial services would be an AI tool that
forecasts an individual’s or a company’s creditworthiness and then offers a product
with no human intervention. However, predictions will remain predictions.
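As a minimal sketch of that prediction-to-action step (the threshold and actions below are hypothetical), a decision rule simply converts the predicted probability into an automated outcome.

```python
# Hypothetical decision rule: a predicted probability of repayment is turned
# directly into an action, with no human in the loop.
def action_from_prediction(p_repayment: float, offer_threshold: float = 0.80) -> str:
    return "offer product" if p_repayment >= offer_threshold else "decline"

for p in (0.92, 0.75, 0.40):
    print(p, "->", action_from_prediction(p))
```

In practice a firm might also add a referral band so that borderline cases are still routed to a human reviewer.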
Another
source of uncertainty about the impact of AI is the breadth of these tools. So
far, AI tools have been confined to what is usually described as “narrow”
activities, such as the two examples above. The possibility of material
employment substitution comes with what is called general artificial
intelligence that operates in broader settings. There has been progress beyond
these narrow settings, one example being Google’s AI tool which beat the
world champion of the board game Go. However, the breakthrough that would give
rise to general artificial intelligence has yet to materialise. There are
different views about how close we may be to reaching that point, but the
reality is that no one knows. In the end, does it really matter if the world will
be very different in 100 years?
At
the same time, it is worth remembering that in an increasingly complex and
interdependent world, enhanced predictions can benefit everyone, for example by
preventing diseases and helping people enjoy work more. There are countless
stories of this. The latest one I have come across (The Economist, “Diligence
Disrupted”, 12 July 2018) relates to the challenge of legally reviewing a huge
number of documents as part of due diligence or preparation for court. This is
a labour-intensive activity that usually represents a significant cost hurdle.
AI tools (and there are many) play a triage role, scanning the many documents
searching for specific clauses, specific aspects of clauses, or their absence.
These tools then allow lawyers to conduct a focused review and due diligence,
resulting in greater job satisfaction and, potentially, cost savings for the
client. As the article notes, the net impact on employment remains unclear:
lower fees could lead to higher demand, requiring more lawyers.
So
there is uncertainty about the impact of AI tools and a fair amount of benefit
to society that can be derived before that uncertainty is resolved. This does
not mean that progress with AI will always be smooth; like any other tool, AI
tools can be misused intentionally or unintentionally. This is where applying
governance and risk management to AI tools would generate value, but I will
leave that discussion for another post.
If you found this post of interest, you may want to subscribe and receive further posts by email. See the box on the right-hand side of the screen or click here.
Monday, 25 June 2018
An FCA Enforcement Case Or an Example of Board Maturity?
The FCA recently issued an enforcement action against the CEO of Barclays – as a result of the CEO’s attempt to identify a whistle-blower. (Click here for the FCA enforcement notice and here for a short summary of the facts of the case.) There have been impassioned comments about the appropriateness of the FCA’s response, i.e. the fine imposed on the CEO. However, I would like to focus on something else.
One of the most revealing aspects of FCA enforcement cases is how the issue comes to the FCA’s attention. Typically, FCA supervision or thematic work would identify serious shortcomings in a firm that lead to enforcement action. This one was rather interesting because there was none of that.
There was an internal investigation of the anonymous letters by Group Compliance which was formally closed on 9 January 2017. The FCA explained that “early in 2017”, the Board became aware of the CEO’s attempt to identify the whistle-blower and that after conducting its own investigation, the Board decided to refer the CEO to the FCA. Can you imagine this ten or twenty years ago? Unlikely, I would say.
There are a number of interpretations one could advance. However, I am inclined to see this as evidence of the significant progress made in corporate governance in recent years and of the maturity boards can achieve in the appropriate environment. I can guess that it may not have been easy for Barclays’ board to refer the CEO to the regulator, but who said that being a board director would be easy?
If you found this
post of interest, you may want to subscribe and receive further posts by email.
See the box on the right-hand side of the screen or click here.