Eleven years after the financial crisis, US financial institutions are operating in a landscape of heightened regulatory and compliance controls. At the same time, costs have exploded because of the investment needed in infrastructure, data management, and resources to keep pace with changing regulatory requirements.
In mid-May, I attended the Risk Americas 2019 conference, organized by the Centre for Financial Professionals, in NYC. Below, I highlight two defining features of the current risk management arena that stood out during the conference, along with their associated pain points.
1. The increasing importance of data lineage across risk modeling: Based on a few interactions during the conference, I gathered that, going forward, the review of data sources, assessment for pattern changes, and monitoring of nodal transformation processes will be enforced more strictly. This is especially likely in view of CECL (current expected credit losses), a new accounting standard that changes the way financial institutions account for expected credit losses and places heavy demands on data lineage.
One of the main concerns of industry leaders was that regulators could raise the bar on the data front, and demand to trace the flow of data across, for lack of an apt description, the data ‘assembly line’ or ‘supply chain.’
Were that to happen, absolute data integrity and a well-documented flow through the entire cycle would be critical to the success of a good risk model. It would also compel financial institutions to lay down accurate processing and imputation methodologies. Given this possibility, one would expect growing importance for a hybrid area of big data engineering and big data science, entwined with risk management expertise – a field Goldman Sachs has dubbed ‘Risk Informatics.’
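To make the ‘supply chain’ idea concrete, here is a minimal sketch of how a dataset could carry its own lineage record through a pipeline, so that every transformation and its row counts are documented for a regulator or validator. The class names, the loan-level fields, and the imputation default are all illustrative assumptions, not any institution’s actual methodology.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class LineageStep:
    name: str       # transformation applied at this node
    rows_in: int    # records consumed
    rows_out: int   # records produced

@dataclass
class TrackedData:
    """A dataset that carries its documented 'supply chain' with it."""
    records: List[dict]
    lineage: List[LineageStep] = field(default_factory=list)

    def transform(self, name: str, fn: Callable) -> "TrackedData":
        out = fn(self.records)
        step = LineageStep(name, len(self.records), len(out))
        return TrackedData(out, self.lineage + [step])

# Hypothetical loan-level feed: drop records with a missing balance,
# then impute a missing score with a documented default value.
raw = TrackedData([{"bal": 100, "score": 700},
                   {"bal": None, "score": 640},
                   {"bal": 250, "score": None}])

clean = (raw
         .transform("drop_missing_balance",
                    lambda rs: [r for r in rs if r["bal"] is not None])
         .transform("impute_score_default_650",
                    lambda rs: [{**r, "score": r["score"] or 650} for r in rs]))

for step in clean.lineage:
    print(f"{step.name}: {step.rows_in} -> {step.rows_out}")
```

Because each step records what it consumed and produced, a pattern change or an unexpected row-count drop is traceable to a specific node rather than discovered downstream in the model output.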
2. The advent and increasing prominence of ML in risk management: Listening to the speakers in some sessions, I could have easily concluded that ML has become the panacea for all risk problems. However, my conversations with some of the participants revealed that the term ML has different connotations for different users, ranging from simple task automation and robotic process automation to unsupervised model decomposition and recomposition. Moreover, not all businesses are at the same point of the ML life cycle.
Risk heads from a few institutions pointed out that they have navigated the various pitfalls associated with ML models, including bias (mostly algorithmic) and the difficulty of correctly interpreting output. They tuned hyperparameters and made their ML models adaptive to cater to regime shifts and pattern anomalies in data.
However, most institutions continue to run their traditional models, which they expect to decommission only after their ML models are tested and statistically proven to perform better. That day may not be far away for a handful of institutions, but for the vast majority the road ahead is long.
Impediments to ML
My discussions on ML models made two aspects very clear: 1) their rollout involves steep costs, mainly from running traditional and ML models in parallel; and 2) ML models can yield over-trained neural networks, so a set of strict guardrails – a robust protective framework – is a prerequisite.
Assuming a utopian scenario in which concerns about over-trained neural networks and model fairness are taken care of, the looming question is – will regulators accept ML models, and to what extent? I believe we can safely assume that ML and traditional models will co-exist for the next few years at least, and the need to optimize the Model Risk Management (MRM) life cycle will continue.
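The parallel-run question – when is a challenger ML model “statistically proven” to beat the traditional champion? – can be sketched with a simple paired test on a common holdout set. The sketch below uses a one-sided McNemar-style test on cases where the two models disagree; the function name, the 5% critical value, and the decision rule are illustrative assumptions, not a regulatory standard.

```python
import math

def challenger_beats_champion(champ_correct, chall_correct, z_crit=1.645):
    """One-sided McNemar-style test on paired holdout predictions.

    champ_correct / chall_correct are parallel lists of booleans saying
    whether each model classified that holdout case correctly. Only the
    disagreements carry evidence: b = challenger right where champion is
    wrong, c = the reverse. z = (b - c) / sqrt(b + c).
    """
    b = sum(1 for ch, cl in zip(champ_correct, chall_correct) if cl and not ch)
    c = sum(1 for ch, cl in zip(champ_correct, chall_correct) if ch and not cl)
    if b + c == 0:
        return False  # the models never disagree; no evidence either way
    z = (b - c) / math.sqrt(b + c)
    return z > z_crit  # decommission the champion only past this bar
```

A rule like this makes the decommissioning decision auditable: the traditional model keeps running until the challenger’s advantage on disagreements clears a pre-agreed significance threshold.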
A quasi-solution for the pain points
I expect teething problems to continue while the transition to fully fledged ML models is underway. However, organizations can dull the pain somewhat by proactively taking these steps:
Bringing in data experts at the beginning of the modeling process: Financial entities should undertake data and process tracking through sound data analytics / data science and adopt a ‘techno-functional’ approach to risk management. Bringing in data experts at the initial stages of model development and validation helps in understanding data flow and transformation, thus reducing the time required to validate and monitor models.
As a model is built, or when the validation process begins, data sets should also be instrumented to raise early warnings / alerts whenever a pattern shift occurs or the data flow records an anomaly, on both the input and output sides.
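Such an early-warning hook can be as simple as a rolling z-score check on an incoming feed. Below is a minimal sketch; the class name, window length, and three-sigma threshold are illustrative assumptions rather than a recommended calibration.

```python
from collections import deque
import statistics

class DriftAlert:
    """Flag a pattern shift when a new value lies more than `threshold`
    standard deviations from the mean of a trailing window."""

    def __init__(self, window=30, threshold=3.0):
        self.window = deque(maxlen=window)  # trailing baseline
        self.threshold = threshold

    def observe(self, value):
        alert = False
        if len(self.window) >= 10:  # wait for a minimal baseline first
            mu = statistics.fmean(self.window)
            sd = statistics.pstdev(self.window)
            if sd > 0 and abs(value - mu) / sd > self.threshold:
                alert = True
        self.window.append(value)
        return alert

# A stable input feed, then a sudden level shift.
monitor = DriftAlert()
stable = [99, 101] * 10
alerts = [monitor.observe(v) for v in stable + [200]]
print(alerts[-1])  # -> True: the spike to 200 trips the alert
```

The same pattern applies on the output side: scoring a model’s predictions through an identical monitor flags regime shifts before they silently degrade performance.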
Involving entities with a strong background in risk and data practice: Companies have the option of engaging experts with a well-established data science program (DSP) and domain expertise in risk and compliance. Our Risk and Compliance practice has over 150 quants and data scientists, who partner with G-SIBs (global systemically important banks) and other financial institutions to strengthen their MRM function. We use our dynamic DSP and risk training programs to cross-train associates, enabling them to scale up significantly faster to provide data, regulatory reporting, or MRM support to clients.
Optimizing risk management models: My experience with multiple clients has underlined several practices as key to building a seamless, coherent platform spanning model development, validation, monitoring, and review.
For a detailed read, please refer to my colleagues’ blog “Four steps to make your MRM 50% more efficient”.