
Tuesday, 29 June 2021

Operational resilience: how much to learn from Covid-19?

One of the questions in the mind of many CROs, COOs and NEDs is how much the industry has learnt from the response to Covid-19.  Crescendo Advisors' survey of insurers' lessons learnt highlighted that the industry has learnt a fair amount.  We have written a separate article about five practical lessons for the implementation of operational resilience for financial services (https://www.crescendo-erm.com/operational-resilience). 

But is there more to learn? To what extent can the industry claim ‘mission accomplished’ because continuity of service has been maintained during Covid-19? These are important questions, and the answer will shape the resources available to implement operational resilience.  

To consider these questions, think about the two main features of the challenge that Covid-19 represents for financial services:  

• it was not characterised by a failure of IT system(s) but by a challenge of access to offices; and 

• it was a generalised issue.

Thinking of these two features, you can easily develop a grid of operational resilience threats – see the examples below:

Scope \ Type of failure | IT systems failure                | Other business failure
Idiosyncratic           | Customer applications unavailable | Transformation project fails
Generalised             | Cloud operators fail              | Covid-19

Covid-19 represents only one of several types of operational resilience threat for financial services.  The grid also hints that different responses might be needed to mitigate customer detriment: remote access to IT systems in the case of Covid-19, for example, as opposed to quickly restoring IT systems that have failed. 

So Covid-19 has allowed a good amount of testing of vulnerabilities.  Some of the lessons learnt, e.g. investing in crisis management, are likely to be transferable across the categories in the table.  However, maintaining an appropriate degree of resilience will require a wider consideration of the potential threats and responses.

Crescendo Advisors (www.crescendo-erm.com) is a boutique risk management consultancy.  We would be happy to discuss our findings from the Covid-19 lessons survey.  We can also support your efforts to both learn lessons from Covid-19 using the tools we developed and implement the regulatory requirements for operational resilience.

Sunday, 14 June 2020

Delegating Decision Making to AI Tools – Choices and Consequences*


Sometimes when I hear about Artificial Intelligence (AI) tools, it seems like it is all about the technical details of the model and the data, which are certainly very important. This post is about another important aspect: the operating model into which the AI tool will be deployed.

There are many aspects of such an operating model.  Some are practical, such as ensuring that the tools integrate with other parts of the business.   In this post, I am focusing on the delegation of decision making to the AI tool – the choices that exist in most cases and the implications for the control environment.  These are summarised below.

At one extreme of the delegation of decision making, you have AI tools that operate independently of human intervention.  An example is algorithmic trading: an automated trading system trades without any human intervention to exploit the speed and data-processing advantages that computers have over a human trader.  Interestingly, this is also one of the few cases of prescriptive PRA intervention: the regulator requires that a human be able to stop the trading system.[1]
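As an illustration of that requirement, here is a minimal sketch, assuming a simple polling loop and hypothetical helper functions, of how a human-operated kill switch might sit around an automated trading system:

```python
import threading
import time

# Hypothetical sketch: a human-operated kill switch around an automated
# trading loop, in the spirit of the PRA expectation that a human must be
# able to stop an algorithmic trading system.
kill_switch = threading.Event()

def trading_loop(get_signal, place_order):
    """Run the automated strategy until a human flips the kill switch."""
    while not kill_switch.is_set():
        signal = get_signal()    # model-generated trading signal (placeholder)
        if signal is not None:
            place_order(signal)  # execution without human review
        time.sleep(0.1)          # pace the loop for this sketch

# A human supervisor can halt all automated trading at any point with:
# kill_switch.set()
```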

At the other end of the spectrum, there are AI tools used by experts in a professional environment.  For example, actuaries might use machine learning techniques to undertake experience analysis and support reserving work.

Between these two examples, you have AI tools that provide a forecast or recommendation for consideration by an analyst.  For example, the AI tool could provide a credit rating that validates a rating derived using more traditional methods.

Another middle-of-the-road alternative is ‘management by exception’.  This means that the AI tool has a degree of autonomy to operate within a ‘norm’, which is inferred from historical data.  Cases outside the norm are referred to an analyst, whose review both verifies and improves the predictions. 
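A minimal sketch of what ‘management by exception’ could look like in code; the scoring function, threshold and handlers are hypothetical, and the point is simply the referral logic:

```python
def handle_case(case, distance_from_norm, auto_decision, refer_to_analyst,
                threshold=3.0):
    """Management by exception (illustrative sketch): the AI tool decides
    routine cases autonomously; cases outside the historical 'norm' are
    referred to an analyst."""
    distance = distance_from_norm(case)      # e.g. a z-score or anomaly score
    if distance <= threshold:
        return auto_decision(case)           # within the norm: the tool decides
    return refer_to_analyst(case, distance)  # exception: a human takes over
```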

These are business choices, and they in turn have implications for the development process of AI tools.   You would expect controls around data and model documentation in all cases.  But broadly speaking, you would also expect tighter controls and more intense validation for AI tools that operate more independently of human intervention.  This extends to the depth at which the model must be understood (see the sketch after this list), including:

  • explainability – why did the model do that;
  • transparency – how does the model work;
  • the impact on customers – e.g., the difference between Netflix recommendations and credit card underwriting.
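To make the first two bullets concrete, here is a hedged sketch using permutation importance from scikit-learn – one common, model-agnostic way of probing why a model does what it does. The data and model below are placeholders rather than a recommended setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Placeholder data: 1,000 cases with 5 features and a binary outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature degrade
# performance?  A crude but model-agnostic window on explainability.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

A report along these lines does not fully answer ‘why did the model do that’ for an individual case, but it gives the validation team a first, auditable view of what drives the model.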

The choices of operating model also have important implications for staff training.  Staff who operate AI tools without having been involved in their development must be trained to the appropriate level to ensure that the AI tool operates effectively.  For example, where ‘management by exception’ is adopted, staff would need the appropriate knowledge and skills to deal with the exceptions.

There are important choices to make about the operating model into which AI tools are deployed.  These choices have risk management and control implications, and they may change over time.  An AI tool might start operating in an advisory capacity; as trust in the tool increases, the delegated decision making can be extended.

These implications and choices should be considered as part of the model design.

We hope you found this post of interest. You can subscribe and receive further posts by email. See the box on the right-hand side of the blog's screen or click here.



*  This post is based on my contribution to a virtual panel discussion organised by ActuarTech on AI Governance & Risk Management.

[1] Prudential Regulation Authority (PRA), Algorithmic trading, Supervisory Statement SS5/18, June 2018.


Thursday, 1 August 2019

ERM in Three Lines*



One of the challenges with enterprise risk management (ERM) is how much is written on the subject.  I find it useful to identify the key components.  This provides a structure to sort out the detailed views and comments, though it is also more than that. If you are a busy CRO or senior risk leader, identifying the key components enables you to take stock and think about challenges and improvements that may be relevant to your priorities. 

Here is an attempt to sum up ERM and provide that clarity in three headlines.

1.       A vision of the ERM purpose 

My preference for financial services is ‘protect and enable’. This highlights that risk management is more than just about avoiding the downside; it is about how risk management supports decision making, including the role of the CRO in that decision making. (More on ‘protect and enable’ and different views from practitioners shared on Crescendo Advisors’ blog are available here.)

2.       An articulation of how to deliver and embed ERM in the business 

This is your ERM framework, roles and responsibilities, policies, and risk appetite. They must strike the right balance between detail and clarity to create a durable product and support business implementation.

3.       Evidence of the outcomes of vision and articulation of ERM (1 and 2 above) 

This is the outcome of ERM, i.e. the assurance that is provided to the Board. This means that a feedback mechanism that supports improvement is in place. This is partly about risk or thematic reviews, but it also reflects a wider perspective that involves the 1st and 3rd lines. I also find that focusing on assurance is more ‘real’ than a discussion of the extent to which processes are implemented or embedded.

At the risk of oversimplifying, here is my own take on where UK insurance businesses stand on these three aspects:
  • The articulation of the ERM vision is progressing but there is still work to be done. There is a sense that, broadly speaking, people operate according to the ‘protect and enable’ vision without articulating it as clearly as they could.  
  • Good progress has been made articulating how to deliver and embed ERM in businesses; all businesses have ERM frameworks and policies in place.  Some are considering external reviews now that the frameworks have been in place for some time.  
  • The biggest challenge ahead is evidencing ERM implementation and providing structured assurance to the Board about ERM expectations. This is a challenge for the risk management function (risk reviews?), the first line (business and control reviews?) and internal audit (coordination with the first and second lines?).  Please get in touch if you want to receive a paper with initial thoughts on this challenge. 

Do you agree with these views about the insurance sector in the UK? How about banking and asset management? How is this seen in other countries?

*  No pun intended about the three lines of defence.

If you found this post of interest, you can subscribe and receive further posts by email. See the box on the right-hand side of the blog's screen or click here.

Thursday, 18 July 2019

AI and Risk Management


Earlier this year, I gave a presentation to a group of actuaries - the Network of Consulting Actuaries - on the challenge of adopting Artificial Intelligence tools in financial services and how risk management can help.  I have transformed the speaking notes into a paper - here.  

Happy reading!

If you found this post of interest, you can subscribe and receive further posts by email. See the box on the right-hand side of the blog's screen or click here.


Monday, 27 May 2019

The New and the Old in Risk Management


I have been writing about the new and the old in risk management over the past year. The starting point is the slow pace of adoption of FinTech by incumbents in financial services. I have suggested that an important component of the change needed is for incumbents to amend and enhance risk management frameworks to reflect new FinTech innovations. (See my last post on the subject.)

Recently, I came across an article from McKinsey that makes a similar point in the context of model risk and the adoption of artificial intelligence (AI) and machine learning. It turns out I am in good company! 

McKinsey’s article notes that banks have developed and implemented frameworks to manage model risk, including model validation reflecting specific regulatory frameworks, in this case from the US Federal Reserve (here). The authors recognise that these frameworks, as implemented, are not appropriate for dealing with the model risk associated with AI and machine learning. Banks are therefore proceeding cautiously, introducing new modelling approaches slowly even when these are available.

The article then shows how a standard framework for model risk management can be used to identify the extra considerations required for the framework to cover AI and machine learning models appropriately.  The key message is that the challenge of adopting AI and machine learning can be addressed through a careful extension of existing approaches. 

Two further thoughts from McKinsey’s article. Firstly, the article rightly refers to model management rather than validation. It is always useful to reiterate that model validation undertaken by the risk function is just one component of how models are managed in the business. Secondly, model management should not apply only to internal models used to calculate regulatory capital; it should apply more widely to models used in the business, such as those used for pricing and the valuation of assets and liabilities.

The article ends with a cautionary tale of an unnamed bank where the model risk management function took initial steps to ready itself for machine learning models on the assumption that there were none in the bank. It then discovered that an innovation function had been established and was developing models for fraud detection and cybersecurity.

If you found this post of interest, you can subscribe and receive further posts by email. See the box on the right-hand side of the blog's screen or click here.

Monday, 29 April 2019

The Curse of Risk Appetite



In this post, I go back to one of the fundamental aspects of an ERM framework: risk appetite. ‘The Curse of Risk Appetite’ is part of the title of an interesting paper reviewing the misuses of risk appetite.[1] Some of the misuses described in the paper might sound familiar, but perhaps the key point to take away from the paper is that there is a potential for risk appetite to become synonymous with ‘a consideration of risk’. I am not sure this was ever the intention. 

The paper includes several useful suggestions to enhance risk appetite. They are focused on the long-run value of the firm and on the structure of risk appetite statements, reflecting a view that risk is the likelihood of falling below critical levels of performance. However, my attention was really caught by the authors’ suggestion to improve the organisational process for risk management. They suggest that a risk function’s role should be defined to include responsibility for evaluating the combined effect of strategic initiatives and capital budgeting on the firm’s overall risk profile.
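That view of risk – the likelihood of falling below a critical level of performance – can be made concrete with a small simulation. The distribution, critical level and limit below are purely illustrative assumptions:

```python
import numpy as np

# Illustrative only: annual earnings modelled as a normal distribution.
rng = np.random.default_rng(42)
simulated_earnings = rng.normal(loc=100.0, scale=30.0, size=100_000)

critical_level = 40.0  # hypothetical 'critical level of performance'
shortfall_prob = np.mean(simulated_earnings < critical_level)
print(f"P(earnings < {critical_level}) = {shortfall_prob:.1%}")

# A risk appetite statement in this spirit might then read:
# "the probability of earnings falling below 40 must not exceed 2.5%".
```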

On one level, this prescription is consistent with the view that the aim of the risk function should be to ‘protect and enable’, with the emphasis on the ‘enable’ aspect which sometimes gets overshadowed by ‘protect’. I am attracted to this suggestion because it turns a vision into a practical requirement that can be incorporated into an articulation of roles and responsibilities for a CRO or risk function. 

If, however, this was implemented literally in UK financial services, I suspect there would be an issue with regulators’ expectation about the independence of the risk function (second line of defence) from the business (first line). 

A similar outcome could be reached by clarifying that the role of the CRO/risk function includes providing a risk opinion in the early stages of the consideration of major strategic initiatives that have the potential to alter the business’s risk profile. The emphasis on timing is important. Providing a risk opinion only when major strategic initiatives are presented for approval is unlikely to add value. A CRO/risk function opinion in the early stages is likely to support consideration of the details of the initiatives and how they can be shaped to strike the appropriate balance between risk and return.

If you found this post of interest, you can subscribe and receive further posts by email. See the box on the right-hand side of the blog's screen or click here




[1] Alviniussen, Alf and Jankensgård, Håkan, The Risk-Return Tradeoff: A Six-Step Guide to Ending the Curse of Risk Appetite (May 7, 2018). 

Wednesday, 3 April 2019

Risk Management as Infrastructure for Artificial Intelligence and FinTech


During 2018, I wrote several posts about FinTech, Artificial Intelligence (AI) and risk management.  When I was kindly invited to present to the Network of Consulting Actuaries, I chose to use the opportunity to consolidate my views on the subject.  

There were several ideas flowing through my mind.

Firstly, informal evidence suggests that, for all the hype, FinTech and AI have not yet become mainstream in insurance or in financial services more generally.

Secondly, the largest business transformation arising from FinTech and AI is the adoption of these technologies by incumbents.  Indeed, I explored this in the context of banking through the group project at the Oxford FinTech Programme I completed in December 2018.

Thirdly, someone who works for a multinational insurer observed during an InsurTech event in London that, as a regulated entity, the insurer has responsibilities and obligations towards its customers and must follow due process before rolling out new technologies.  The observation carried a hint of apology towards the nimble start-ups in the audience.

Putting all these thoughts together led me to see the main challenge to the adoption of FinTech by incumbents as governance, including how risk management is applied in practice.  If the aim of risk management is to ‘protect’ or block, then the incumbent does not have an obvious lever to support the introduction of AI tools and FinTech.  

If, on the other hand, the aim of risk management is perceived as to ‘protect and enable’, then risk management can be part of the solution.  Risk management can lead to the creation of the necessary infrastructure to ensure that AI tools achieve their transformational potential.  This includes articulating a vision of how a control framework should be leveraged, considering the impact of FinTech and AI on risk management frameworks, focusing on explainable AI, and articulating the implications for the target operating model.  All of this will facilitate incumbents’ adoption of FinTech and AI.  

Take a look at the presentation I gave (here) for a more detailed articulation of these points.

If you found this post of interest, you can subscribe and receive further posts by email. See the box on the right-hand side of the screen or click here

Friday, 23 November 2018

Integrating Artificial Intelligence in Risk Governance in Financial Services


I have written a number of articles about the interaction between artificial intelligence (AI) and governance and risk management.  My latest piece highlighted the impact that AI tools can have on a firm’s risk profile (here).  I am in the process of gathering my thoughts on how to provide for the ongoing and timely monitoring of AI-based tools, including their integration in business governance and risk management in financial services.

While reading on the subject (and keeping up with the pace of the Oxford FinTech Programme), I came across a succinct and interesting paper from the Monetary Authority of Singapore (here) that articulates how the principles of fairness, ethics, accountability and transparency can - and should - be applied to promote the use of AI in the financial sector. 

The paper highlights four useful points for senior management and Boards considering the strategic use of AI-based tools.

1.  It is important to invest time and effort in understanding and monitoring the decision making of AI-based tools – i.e. do not regard AI-based tools as black boxes. 

2.  AI decisions are subject to the same ethical standards that apply to decisions made by staff – i.e. if AI-based tools purport to make decisions as humans would, but without their ‘feelings’, they should be scrutinised like humans.

3.  Governance has an important internal element, not just an external one – i.e. firms should not rely on regulatory initiatives to support them in delivering appropriate outcomes.

4.  The use of AI-based tools should be transparent to consumers in terms of their use and outcomes – i.e. while there are limits to the explanations that can be provided, transparency cannot be dismissed altogether.

The paper also includes a number of practical illustrations that highlight the principles identified.

All in all, this paper represents a useful starting point for firms to consider how they can equip themselves to benefit from the opportunities that AI technology offers for their businesses.

If you found this post of interest, you may want to subscribe and receive further posts by email. See the box on the right-hand side of the screen or click here

Sunday, 16 September 2018

Monitoring the Risk and Business Impact of AI-Based Solutions



AI-based solutions can shape how financial services businesses make money, whether or not the business model itself changes. For an existing financial services business, the motivations may vary, ranging from efficiency gains to expanding the business. There would be project risk, as with any development, but leaving that important consideration aside, it is worth bearing in mind that AI-based solutions would also impact the risk profile of the business. This may not be the original intention, but it is a likely outcome. The key implication is that implementing an AI-based solution would require a radically different risk oversight approach by the business.

Standard computer algorithms which are not AI-based can – and do – solve complex problems. The defining feature of such algorithms is that the problem is specified in advance and an algorithm is developed to solve it, producing the same answer as long as the same inputs are provided. So a credit-scoring mechanism calibrated to capture a certain type of client gives you just that.

The answers offered by an AI-based system may change over time. New data is used to reassess the underlying relationships and recalibrate the link between the target variable and the potential explanatory variables. This “learning” can also happen in a standard programme when there is a process of recalibration. The difference is that in the case of AI, learning would happen on a real-time basis – that’s the essence of AI.

Alternatively, with AI a target variable may not have been defined. That’s not as unusual as it might sound. For example, algorithms supporting loan or credit card underwriting may fall in this category because there is no single rule to predict a borrower’s likelihood of repayment. New data can lead to recalibration or can be used to identify new relationships in the data. For example, over time an AI-based system might identify that outstanding debt is a better predictor of the likelihood of borrower repayment than repayment history and penalise someone with a relatively good track record of timely repayments.

The first type of AI-based solution is called “supervised machine learning” and the second “unsupervised machine learning”. The key difference is the extent of autonomy that goes with the learning.
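A minimal scikit-learn sketch of the distinction, using synthetic and purely illustrative data: supervised learning fits a model to a defined target variable, while unsupervised learning looks for structure without one.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))            # e.g. borrower features (synthetic)
y = (X[:, 0] - X[:, 2] > 0).astype(int)  # labelled outcome: repaid or not

# Supervised: a target variable is defined and the model is fit to it.
clf = LogisticRegression().fit(X, y)

# Unsupervised: no target; the algorithm looks for structure in the data.
clusters = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X)

print(clf.score(X, y))        # accuracy against the defined target
print(np.bincount(clusters))  # sizes of the discovered clusters
```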

Consider the potential impact of AI-based tools on conduct risk. One of the expectations from Treating Customers Fairly (TCF) with respect to product governance is that products are designed to meet the needs of identified consumer groups and are targeted accordingly. This requires a clear business strategy, including identification of the target market through a combination of qualitative and quantitative research, and oversight of the business to ensure that it is aligned with initial expectations of customers and business generated. Take the example of automated investment services covered in a recent FCA review. These providers would rely on some type of AI-based solution, whether supervised or unsupervised machine learning. The possibility of capturing different customers, or of the advice generated being different from what was envisaged, cannot be ruled out. The challenge is to put in place a monitoring approach which ensures that the outcomes and risks which arise are consistent with the expectations in the business plan.

Something similar applies from the perspective of credit risk, with an impact on the quality and performance of the portfolio. Suppose you have been targeting retail customers with a specific risk rating for a credit card business. If you roll out an AI-based solution to enhance the efficiency of underwriting, you would need mechanisms to ensure that the credit quality of the portfolio is consistent with your expectations – or else change those expectations. Both options are fine. You may want to keep your target credit rating constant and seek more volume, or perhaps you see AI-based solutions as a more robust tool to support decision making and, in a controlled manner, can relax your target rating. Regardless of your choice, you would need to put in place a credit risk monitoring approach suited to the new AI-based solutions, as well as ensure that the business understands the portfolio implications of the “learning” that is at the core of an AI-based solution.
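One hedged sketch of such monitoring: compare the credit-rating mix of new business against the mix assumed in the business plan using a population stability index (PSI), a drift measure commonly used in credit scoring. The rating bands, mixes and trigger level are illustrative assumptions.

```python
import numpy as np

def population_stability_index(expected, actual):
    """PSI between two distributions over the same rating bands.
    Illustrative rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate."""
    expected = np.clip(np.asarray(expected, float), 1e-6, None)
    actual = np.clip(np.asarray(actual, float), 1e-6, None)
    expected /= expected.sum()
    actual /= actual.sum()
    return float(np.sum((actual - expected) * np.log(actual / expected)))

# Hypothetical rating mix, bands A-E: business plan vs. post-AI underwriting.
planned  = [0.10, 0.30, 0.35, 0.20, 0.05]
observed = [0.08, 0.22, 0.34, 0.26, 0.10]
print(f"PSI = {population_stability_index(planned, observed):.3f}")
# A PSI above ~0.25 would trigger a review of the portfolio's drift
# against the expectations in the business plan.
```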

The salient point to take away is that the roll-out plan of AI-based tools may focus on the launch. However, the greatest challenge may well be the need to provide for the ongoing and timely monitoring of the AI-based tools and their integration in business governance and risk management, which I will cover in the next post.


Wednesday, 6 June 2018

Why should we consider artificial intelligence (AI) from a risk management perspective?


I have been reading about AI in financial services and thinking about it from a risk management perspective. What is there to be gained from this? 

AI, like computers and other innovations, is a general purpose technology. One of the insights from Tim Harford, economist and journalist, about the impact of this type of innovation is that it sometimes takes time for innovations to have an impact because people do not immediately change the mindset associated with the previous technology. The main example he provides of this time lag is electricity. Following his invention of the light bulb in the late 1870s, Edison built electric power stations in the US in 1881, and within a year electricity was available as a commodity. Yet as late as 1910, when electric motors had more to offer, manufacturing still relied on steam power. You can read about this here.

I found Tim’s explanation for this conundrum compelling. Steam-powered factories were arranged in a specific form to benefit from the single steam engine through a central drive shaft that ran through the factory.  Initially, owners simply changed the source of power to electricity, but to benefit fully from it, factories had to be rearranged according to a different logic. In addition, rearranging factories gave workers more autonomy and flexibility, and the way staff were recruited, trained and paid had to change as well. As a result, adopting electricity meant much more than substituting one source of power for another, and the pace of adoption was slow.

I think this analogy is relevant to applying AI in financial services. AI offers a new way of powering decision making in businesses. The example of replacing steam power with electricity suggests that to get the full value of AI, financial services need to think about AI as more than enhancing or substituting for existing tools. Risk management, which takes a broad perspective to support decision making and achieve business objectives, can help here.  I would hope that considering AI from this perspective will help financial services businesses to benefit fully from AI. 

You can also help by submitting questions about AI from the perspective of risk management and governance. Send your questions by email (isaacalfonblog@gmail.com) or leave a comment.  I am not sure I will have answers, but who knows?

If you found this post of interest, you may want to subscribe and receive further posts by email – see box on the right hand side of the screen or click here

Thursday, 24 May 2018

Artificial Intelligence (AI) and the Board Risk Committee


The purpose of risk management in financial services is usually defined as to ‘protect and enable’.  The ‘protect’ dimension can refer to the franchise value of the business but is mainly about protecting from regulatory intervention. ‘Enable’ has a perspective of value (however defined) and achievement of company objectives. (Click here to read more about ‘protect and enable’.)

AI-based solutions, leveraging on vast amounts of data, are already a reality in the world of financial services, and these solutions are only likely to become more prevalent in the next ten years. What are the implications of AI developments for a Board Risk Committee? 

The simple ‘protect and enable’ approach suggests a number of points for discussion:

  • How would your company evidence that AI systems comply with relevant legislation, e.g. non-discrimination laws?
  • How would the wider data needs of AI systems sit with data protection legislation? What about the so-called ‘right to explanation’? What would be the impact of these wider data needs on cyber-security?
  • What is the business purpose of introducing an AI system? Does the business seek to enhance operational efficiencies? Does it aim to enhance business performance? How would you ensure that this purpose is achieved?  
  • What would be the operational impact of deploying specific AI tools in the business? Would it also alter the overall risk profile of the business? The profile of certain risks?
  • What are the implications for risk governance, the risk management function and other oversight functions?

These are not simple questions that can be covered in a single meeting of the Risk Committee. In some cases, the answer to the questions may not be clear-cut.  For example, an AI-based underwriting system can be deployed to enhance business performance or to seek operational efficiencies. In other cases, addressing some of the issues would require the development of appropriate monitoring systems rather than a point-in-time consideration.

However, it is also worth bearing in mind that unless you operate a start-up, there is a fair amount of technology available which is not necessarily based on AI and which can be applied to improve existing business processes and reflect a (more) customer-centric perspective.  So perhaps the main question about AI systems is really whether there is an adequate understanding of technology in the business to ensure that AI is the appropriate technology.

So where should a Risk Committee start?  It may be useful to think about these as discussions outside the usual calendar of Risk Committee meetings and to develop a programme that considers them over time.

If you found this post of interest, you may want to subscribe and receive further posts by email – see box on the right hand side of the screen or click here


Monday, 5 March 2018

Risk Assurance: The Challenge Ahead


I wrote about risk assurance a while ago (here). More recently, I have had a chance to talk with a few people in banking and consulting about it, and to reflect further on the subject.

By way of background, my working definition of risk assurance is a structured activity undertaken by the risk function (second line) which is aimed at evidencing that risk management is embedded in the business. Feel free to comment on this definition.

Risk assurance matters because it contributes to shifting (or maintaining, if you wish) the appropriate risk culture in the business. What do I mean by this? I hope we can all agree that the appropriate risk culture in financial services is one that includes the following:
  • the business takes risks into account in decision making and can evidence that, including compliance with regulatory requirements; and
  • the risk function provides the parameters for taking risk into account in decision making (risk appetite framework, stress testing, etc.) and aggregates risks.
Truly achieving that is a challenging journey that takes time. Many insurers and banks started the risk management journey as a result of regulatory requirements—Solvency 2 or Basel. In practice, this has meant that sometimes risk functions have taken up activities like approvals that belong to business functions. Risk assurance will generate evidence about how risk management operates in practice. It will also help to shift the focus of the risk function—and, in turn, the business—in the appropriate direction.

I have worked with a number of clients to implement programmes of risk assurance. Interestingly, these engagements have turned out to be rather different because they must reflect the starting point for the business. In one case, the risk function was well resourced, and the focus was planning. In another case, the focus was a combination of up-skilling and evidencing through pilot risk reviews that the activity can add value.

Leaving aside the considerations associated with implementation, it is important that there be a shared perspective about the overall aim of risk assurance, i.e. ‘integrated assurance’. This reflects two simple observations:
  • internal audit functions already provide assurance about the overall control environment;
  • from a Board perspective, assurance is assurance, regardless of which team/line of defence provides it.
In other words, the aim would be to develop a risk-based assurance plan which covers deliverables by the second and third lines of defence (2LOD and 3LOD) in such a way that the Board can understand where independent assurance has been provided.

I would be interested to hear your thoughts.


If you found this post useful, you may want to subscribe and receive further posts by email – see box on the right hand side of the screen or click here

Tuesday, 5 April 2016

Five Risk Management Lessons From Pixar


I read an interview in McKinsey Quarterly with Ed Catmull, one of the co-founders of Pixar, about his management approach for keeping the business innovative (here; registration may be required).  I hoped the article would provide an interesting window into a different sector.  By the time I finished it, I had found something rather different: a number of useful lessons about the design and implementation of risk management:   

1.  Clear business objectives inform risk taking.  Are there clear business objectives?  How do they relate to risk management?

2.  The impossibility of delivering absolute clarity.  Is risk management striking a balance between providing clarity and enabling staff at all levels to respond to challenges as they arise?   

3.  The importance of running experiments.  How do/can we experiment with risk management?  Is this about testing risk metrics?  Product features and claims?  Changes to underwriting criteria? 

4.  Articulating business culture to make it less dependent on key individuals and ensure it resonates beyond senior management.  How do we ensure that the ‘tone from the top’ is echoed by middle management?  

5.  The important distinction between assuming and spreading risks, and Pixar’s focus on the former.  How close is risk management oversight to product development and risk taking? 

So the next time you watch a Pixar movie, remember that there is a fair amount of risk management behind the scenes. 

This post is part of the series "Aspects of Risk Management".  Other articles are available here.  

If you found this post useful, you may want to subscribe and receive further posts by email – see box on the right hand side of the screen or click here.  My target is to post on a regular basis (no more than weekly) and I will not be flooding your inbox. 

Saturday, 19 March 2016

‘Nudging’ Meets Enterprise Risk Management?


It is no exaggeration to say that behavioural economics has become mainstream.  With hindsight, this is not really surprising because the assumptions underpinning economic theory have always been regarded as just that: assumptions. 

The key innovations of behavioural economics are the identification of specific circumstances in which there are systematic departures from rational decision making, and the development of context-specific predictions of behaviour.  Broadly speaking, departures from rational decision making are referred to as ‘biases’ because outcomes are poorer than the optimal outcomes under rational conditions.  These biases may affect preferences, beliefs or decision making.   Box 1 below shows some common types of biases.

Box 1: Sample of Common Types of Biases Affecting Decision Making

• Preferences – Reference dependence: Assessments are influenced by the reference point for the assessment ― typically the status quo ― or by a fear of losses.  Depending on the context, this can encourage either too much or too little risk taking.  Example in consumer decision making: purchase decisions are driven by alternatives or product features which are irrelevant to the consumer.

• Beliefs – Over-extrapolation: Predictions are made on the basis of a few observations believed to be representative, from which a real pattern or trend is inferred; as a result, uncertainty is over- or under-estimated.  Example: the quality of financial advice is assessed on the basis of a few successful investments, even if these could reflect pure luck.

• Decision making – Rules of thumb: Decision making is simplified by adopting specific rules of thumb, such as choosing the most familiar option and avoiding the most ambiguous.  Example: products at the top of a list or offered by large companies are selected.


Another innovation of behavioural economics is the notion that it is sometimes possible to address those biases, and thereby enhance outcomes, by making small changes to the environment ― hence the number of books about behavioural economics with the word ‘nudging’ in the title.  I have come across nudging considerations in terms of sales (e.g. how the default option affects customers’ choices) and in terms of public policy (e.g. the introduction of cooling-off periods in financial services). 

One of the key concerns of enterprise risk management is its effectiveness.  This is not just a challenge concerning an outcome at a particular point in time.  The main aspect of the challenge is putting in place a process that drives enhanced effectiveness.  This has not escaped the EU supervisors framing risk and capital requirements for banks and insurers, which require assessments of risk management effectiveness. 

So how could these two meet?  An assessment of risk management effectiveness could seek to identify behavioural biases that affect the management of risk across the business: for example, in terms of underwriting and investments.  Consider again the biases set out in Box 1: which ones could be relevant to risk management?  If we identify the biases that shape risk management, we can also assess their materiality and consider whether there are ways of addressing them through changes in the operating environment.  If you have any thoughts about how these biases, or others, could affect risk management, I would be very interested to hear them.

This post is part of the series "Aspects of Risk Management".  Other articles are available here.  

If you found this post useful, you may want to subscribe and receive further posts by email – see box on the right hand side of the screen or click here.  My target is to post on a regular basis (no more than weekly) and I will not be flooding your inbox.