Thursday 28 August 2014

Big-Data – a Small Risk Transformation?


These days big-data seems to be so ‘big’ that it is everywhere.  I have read with some interest ‘Big Data’ by Mayer-Schönberger and Cukier, looking to form some views about it from my perspective of risk management in financial services and ‘value enhancement’.  These are three of the key points I took from the book.

1.    Big-data is not about size but about the ability to work with full data sets. 

This means that the constraints that might arise from sampling are avoided.  Interestingly, there will be cases where adopting a big-data approach means handling a relatively small data set. 

2.    The shift from causation (small data) to correlation (big-data). 

The ability to create additional data at low cost and to join up data sets means that we are likely to increase our ability to spot correlations.  This would help us understand the ‘what’ even if we don’t fully understand the ‘why’, the causation.
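
As a minimal sketch of this idea (hypothetical data and names, not from the book), joining two data sets on a shared key and scanning the correlation matrix is a cheap first pass at the ‘what’:

    import pandas as pd

    # Hypothetical illustration: two small data sets sharing a policy_id key.
    claims = pd.DataFrame({
        "policy_id": [1, 2, 3, 4, 5],
        "claim_count": [0, 2, 1, 3, 0],
    })
    telematics = pd.DataFrame({
        "policy_id": [1, 2, 3, 4, 5],
        "miles_per_week": [80, 310, 150, 420, 60],
    })

    # Join the data sets and scan the correlation matrix: a cheap first
    # pass at what moves together, without explaining why.
    joined = claims.merge(telematics, on="policy_id")
    print(joined[["claim_count", "miles_per_week"]].corr())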

3.    All data has value, and a company’s ability to extract that value depends on its business model and skills.

The value of data often arises from secondary uses, which are difficult to predict at the point the data is collected.  Companies can extract that value by hoarding the data, analysing it and identifying big-data opportunities.

This led me to three observations about big-data and risk management:

1.    Risk managers need to identify the aspects of risk management that can be enhanced by understanding correlations (‘what’) and the aspects that can be enhanced by causation (‘why’). 

While the message of big-data is that correlation is becoming cheaper to identify and offers more value in a shorter period of time, there is no one-size-fits-all!  For example, insurers’ ability to spot financial crime and fraud, and to price insurance risks, would be enhanced by identifying correlations between key variables.  On the other hand, correlations between risk drivers may need plausible stories behind them to be actionable.

2.    Existing risk management and regulatory concepts would need to be revisited.

One of the features of big-data is that when different data sets are combined, the resulting data is ‘messy’, with many empty cells.  How do you apply existing criteria for data quality governance, in particular ‘completeness’?
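
As a rough sketch (hypothetical data, purely for illustration), a per-field completeness measure shows how an outer join of two clean data sets immediately degrades the traditional metric:

    import pandas as pd

    # Hypothetical illustration: combining two data sets with an outer join
    # produces 'messy' data with empty cells in both directions.
    a = pd.DataFrame({"customer_id": [1, 2, 3], "credit_score": [700, 650, 720]})
    b = pd.DataFrame({"customer_id": [2, 3, 4, 5], "web_visits": [12, 3, 8, 1]})
    combined = a.merge(b, on="customer_id", how="outer")

    # 'Completeness' in the traditional data-governance sense:
    # the share of non-empty cells per field.
    print(combined.notna().mean())  # credit_score 0.6, web_visits 0.8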

How do you validate models?  The authors give an interesting example in which a simple model performs more effectively than any of the alternatives once a significant amount of data is fed into it.
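
One pragmatic answer is to validate out of sample and track how performance moves with data volume.  A minimal sketch on synthetic data (my illustration, not the authors’ example), using scikit-learn:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic data purely for illustration: a noisy binary outcome
    # driven by two variables.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50_000, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=50_000)) > 0

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Validate the same simple model out of sample at increasing
    # training sizes, tracking accuracy against data volume.
    for n in (500, 5_000, len(X_train)):
        model = LogisticRegression().fit(X_train[:n], y_train[:n])
        print(n, round(accuracy_score(y_test, model.predict(X_test)), 3))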

3.    Extracting value from data would need careful thinking.

One of the fundamental technological changes is that data is generated in many unsuspected places and situations, e.g. internet searches.  Spotting those opportunities requires a big-data mindset.  Capitalising on them requires the ability to capture the data and/or use it.  One implication is that the value of data would need to be factored into commercial outsourcing arrangements with third parties.

Overall, this could lead to a significant transformation of how risk is managed and become a new ‘normal’.  However, between now and then, companies would need to tread carefully to avoid chasing ‘big-data’ opportunities of limited value.

If you work in financial services, I would be keen to hear your thoughts about big-data and risk management.  If you don’t, I would be keen to know if these lessons resonate with your experience. 

You can subscribe to future posts at http://crescendo-erm.blogspot.co.uk and receive them by email - no more than once a week.  You will need to provide an email address and then confirm the subscription.  Your email address will not be shared.
