Model Risk Management in a Post-COVID World:
Highlights From “Advanced Model Risk 2021”

The year 2021 has brought changes to every sector and area of life. Having lived with the pandemic for well over a year now, industries worldwide are finding ways to improve and grow from the difficulties they faced over the previous year. In the financial services sector, the focus is on how model risk management (MRM) practices and models can be developed, through technology and improved processes, to handle unprecedented events.

Near the end of March, we attended the Advanced Model Risk 2021 global virtual event organized by the Center for Financial Professionals. The event offered an eye-opening view of how MRM frameworks are being enhanced through technology capabilities and improved practices. We have highlighted five subjects that stood out from the conference below:

  • Climate & Environmental Risks—Factoring in climate-related and environmental risks: a new challenge in model risk management 
  • Model Volatility—Reviewing model reaction to COVID-19 and assessing the impact of volatility in 2021
  • AI/ML—Banks’ use of AI/ML and its relation to existing MRM frameworks
  • Inventory—Ensuring completeness and accuracy of model inventory to monitor uses across all areas
  • Automation—Path to automation in model validation and monitoring  

Climate-Related and Environmental Risks as a New Challenge in Model Risk Management 

Climate risks are likely to keep MRM teams busy for the next ten years. Since the Paris Agreement, governments worldwide have been developing low-carbon and more circular economies, with the financial sector being leveraged to reach these goals. Andreas Peter, Managing Partner of Fintegral, introduced four risk concepts related to climate change: financial risks, sustainability risks, climate risks, and physical risks. His focus, however, was on climate-related and environmental risks (CER), which impact banks' risk profiles both directly and indirectly. CER drivers act on a bank's risk profile via specific transmission channels: the direct channel looks at the impact of CER on the bank itself, whereas the indirect channel covers the risks a bank is exposed to via the impact of CER on its counterparties. Dealing with the complexities of CER requires a forward-looking, comprehensive, and strategic approach, and Andreas stressed that risk models and risk management concepts should be further developed to address it.

There are quite a few blind spots in risk models for CER, with the biggest in categories such as credit risk and NFR/OpRisk. Within credit risk, CER impact is considered only to a limited extent and is not systematically reflected in rating models; Andreas suggested that banks develop specific scorecard models to address CER in relevant sub-portfolios (see the sketch below). The NFR/OpRisk area (including legal, reputational, and outsourcing risk) also has a high level of blind spots, even though this modeling often includes a forward-looking scenario component; to close the gap, the scenario database should be updated to include CER-driven scenarios. Risk areas such as business risk and real estate risk have a moderate level of blind spots. Business risk quantification (ICAAP) is typically based on a volatility analysis of a time series of earnings components; to account for possible blind spots, banks should combine this with a scenario approach (typically adopted in the business strategy process). In the real estate risk area, quantification is often based on regular valuations or a time series of total returns, so the analysis of physical risks related to tenants and energy efficiency should be expanded.
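To make the scorecard idea concrete, here is a minimal sketch of what such a CER scorecard could look like. The factors, weights, and rating-notch mapping are our own illustrative assumptions, not taken from the talk:

```python
# Hypothetical CER scorecard: the factors, weights, and notch mapping below are
# illustrative assumptions, not Fintegral's method.
CER_FACTORS = {
    "flood_exposure":      0.30,  # physical risk: share of collateral in flood-prone areas
    "transition_exposure": 0.40,  # transition risk: revenue share from carbon-intensive activities
    "energy_inefficiency": 0.30,  # physical/transition risk: poor energy ratings of real estate
}

def cer_score(factor_values: dict[str, float]) -> float:
    """Weighted average of factor values, each normalized to [0, 1]."""
    return sum(CER_FACTORS[name] * value for name, value in factor_values.items())

def rating_notch_adjustment(score: float) -> int:
    """Map the CER score to a downward notch adjustment on the base rating."""
    if score < 0.25:
        return 0
    if score < 0.50:
        return -1
    return -2

# Example: a counterparty in a carbon-intensive sector with modest physical exposure.
counterparty = {"flood_exposure": 0.10, "transition_exposure": 0.80, "energy_inefficiency": 0.40}
score = cer_score(counterparty)
print(f"CER score: {score:.2f}, notch adjustment: {rating_notch_adjustment(score)}")
```

A scorecard like this would feed the existing rating model rather than replace it, which matches the idea of addressing CER within relevant sub-portfolios.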

CER should be high on regulators’ and banks’ agendas since MRM governance will have to engage in ongoing CER projects to promptly consider new requirements and developments. MRM processes, tools, and methods will have to be updated and developed to keep up with these rising needs, which will require cooperation between various roles such as model owners, developers, users, and validators.

Model Volatility and the Future of Model Risk Management 

These past 12 months have been challenging, to say the least, and have pushed many models to their limits and beyond. How can firms deal with unexpected volatility? This panel explored and broke down that question. First, Mohit Dhillon, Managing Director of Quantitative Analytics at Barclays, highlighted the difficulties firms face when making sense of volatility. Overreliance on data from the past 12 years has been one of the core weaknesses of models, especially when firms try to reconcile historical data with the effects of government stimulus. Mohit explained that the key to provisioning and capital stress testing is not to over-rely on known history. Understanding the source of volatility, and which individual drivers contribute to it, is critical for senior management. Alexey Smurov, SVP of Balance Sheet Analytics and Modeling at PNC Bank, added that firms often find models overly complex: teams need to think in an agile way and develop quick signals and tools to track these changing elements.

Another issue is the disconnection between regulation processes and data science teams who are tasked with being ambitious. David Bloch, Director of Data Science Strategy at Domino Data Lab, stressed that this disconnection needs to be patched up through shared environments and data sets. It is now easier than ever to take a piece of code and move it around, but with this ease comes risk, since current management principles lag behind. David emphasized that firms need better management principles during the design and development stage to avoid issues later. When testing a model's sensitivity, we should look at multiple cycles. Moez Hababou, Director of Model Risk Management at BNP Paribas, said that models should be subjected to different shocks and sensitivity tests during development to understand how they will behave in unprecedented environments. Over-reliance on historical data from the last observed credit cycle can be detrimental, so firms should step away from those examples and take a proactive approach to testing.
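As a minimal illustration of the kind of shock and sensitivity testing Moez described, the sketch below perturbs the drivers of a toy probability-of-default model. The model form, drivers, and shock sizes are our own illustrative assumptions:

```python
import numpy as np

# Toy probability-of-default (PD) model: a logistic function of two macro drivers.
# The coefficients and shock grid are illustrative assumptions, not a production model.
def pd_model(unemployment: float, gdp_growth: float) -> float:
    z = -3.0 + 0.4 * unemployment - 0.6 * gdp_growth
    return 1.0 / (1.0 + np.exp(-z))

baseline = {"unemployment": 5.0, "gdp_growth": 2.0}
shocks = {"unemployment": [+2.0, +5.0, +10.0],   # e.g. a pandemic-scale spike
          "gdp_growth":   [-3.0, -6.0, -10.0]}   # e.g. a severe contraction

base_pd = pd_model(**baseline)
print(f"baseline PD: {base_pd:.4f}")
for driver, deltas in shocks.items():
    for delta in deltas:
        scenario = dict(baseline)
        scenario[driver] += delta
        shocked_pd = pd_model(**scenario)
        # Report the change in PD so reviewers can see which driver dominates.
        print(f"{driver} {delta:+.1f}: PD {shocked_pd:.4f} (change {shocked_pd - base_pd:+.4f})")
```

Running shocks well outside the observed credit cycle, as here, is one way to probe how a model might behave in an unprecedented environment before it is deployed.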

As we progress into 2021, the role of MRM in a post-pandemic world will change based on the way we think about performance monitoring and the recalibration of our models. We must re-code our models to use them at scale, with a lot of trial and error on the way. However, this can be made easier through new technologies.

Banks’ Use of AI and ML Within Existing MRM Frameworks

Artificial intelligence (AI) and machine learning (ML) have become synonymous with being forward-thinking and innovative. However, does this view skew AI and ML's actual capabilities, along with the critical controls needed when they are used in MRM frameworks? David Palmer, Senior Supervisory Financial Analyst at the Federal Reserve Board, explored these two questions from the perspective of soundness and performance. AI and ML have become buzzwords that many companies tout as wonder-tech, but David highlighted the need for careful consideration and planning before implementing them. Traditional models have produced nonsensical results as assumed relationships and correlations broke down, so AI and ML appeared to be the next logical step in developing MRM frameworks. Despite their abilities, however, these technologies still suffer from limitations. To combat them, banks should establish sound governance, risk management, and controls for AI and ML that align with the use and sophistication of the methods. Banks should ask themselves how they will establish internal standards for explainability and transparency.

Data plays a key role within AI and ML, since the data itself drives the parameters and features these technologies use (instead of human beings specifying them). With the increased velocity of data (real-time or close to it), data is rapidly processed through AI and ML. David argued, however, that banks should now focus on data quality and suitability: some data may be error-free, but is it appropriate for what is being measured or recorded? Banks should also be mindful of the many risks and biases inherent in data, and many are using additional and more comprehensive data sources with AI and ML to enhance performance and combat these issues. If a bank's data is conceptually sound but untested, or opaque but tested, the bank needs to check whether more controls are needed; if the data is opaque and untested, banks should consider not using it. Traditional stress testing models that do not use AI and ML tend to be conceptually sound but are not guaranteed to perform well.
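A minimal sketch of what such quality and suitability checks could look like in practice follows. The column names, the negative-balance rule, and the crude drift measure are our own illustrative assumptions:

```python
import pandas as pd

def quality_report(current: pd.DataFrame, development: pd.DataFrame) -> dict:
    """Basic quality and suitability checks on a model's input data."""
    report = {}
    # Completeness: share of missing values per column.
    report["missing_share"] = current.isna().mean().to_dict()
    # Validity: values outside a plausible range (negative balances here).
    report["negative_balances"] = int((current["balance"] < 0).sum())
    # Suitability: crude drift check, distance of the current mean from the
    # development-sample mean in units of the development standard deviation.
    drift = (current["balance"].mean() - development["balance"].mean()) / development["balance"].std()
    report["balance_drift_sigma"] = round(float(drift), 2)
    return report

# Example with toy data: the production sample has a missing value, a negative
# balance, and a mean far from the development sample.
development = pd.DataFrame({"balance": [100.0, 250.0, 80.0, 300.0]})
production = pd.DataFrame({"balance": [120.0, -5.0, 900.0, None]})
print(quality_report(production, development))
```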

The main advantage of AI and ML in credit risk modeling is their ability to process large data sets and identify relationships and patterns. Manually processing large data sets is usually error-prone and time-consuming, and some patterns are simply not apparent to humans. Relying solely on AI and ML is still not feasible for credit risk models; however, many institutions have found success in pairing AI and ML with traditional models. Some institutions use a traditional model as the primary model with AI and ML as the benchmark, while others use AI and ML as the primary model with a traditional model as the benchmark.
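As a minimal sketch of this pairing, the example below fits a traditional logistic regression as the primary credit model and a gradient-boosted model as the ML benchmark. The synthetic data and the idea of comparing AUCs are our own illustrative assumptions, not a method described at the conference:

```python
# Primary/benchmark pairing on synthetic data: a traditional logistic regression
# as the primary model, a gradient-boosted ML model as the benchmark.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

primary = LogisticRegression(max_iter=1000).fit(X_train, y_train)             # traditional model
benchmark = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)  # ML challenger

auc_primary = roc_auc_score(y_test, primary.predict_proba(X_test)[:, 1])
auc_benchmark = roc_auc_score(y_test, benchmark.predict_proba(X_test)[:, 1])

# A large gap between the two suggests either an underperforming primary model
# or patterns the traditional model misses, prompting a validation review.
print(f"primary AUC: {auc_primary:.3f}, benchmark AUC: {auc_benchmark:.3f}")
```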

Ensuring the Accuracy of Model Inventory across All Areas

Despite the effort required to develop one, organizations need a complete model inventory to avoid risk. Nikolai Kukharkin, MD and Head of Model Risk Management at TIAA, explained that there is no single way to ensure an inventory's completeness, but the 1st and 2nd lines need to drive it forward. There needs to be a cultural shift toward model inventories becoming mandatory for organizations of all shapes and sizes. To do this, however, models need to be classified. Suman Datta, Head of Portfolio Quantitative Research for Lloyds Bank, Group Corporate Treasury, suggested that models be classified as serving either regulatory purposes or other purposes. This helps create an overall nested structure that companies can establish, which takes into account how models are being used (see the sketch below). Finally, when going through regular audit processes, there needs to be a golden source of truth. Jens Jakob Rasmussen, Head of Model Risk & Validation at Nordea, saw risk arising from questions of completeness: there needs to be a threshold a model has to cross to be considered complete, which then needs to be verified against the model itself. Whether a model is conceptually sound will depend on its purpose.
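A minimal sketch of an inventory record with this kind of classification, plus a simple completeness check, follows. The field names, purposes, and completeness rule are our own illustrative assumptions:

```python
from dataclasses import dataclass, field
from enum import Enum

# Illustrative classification: regulatory purposes vs. other purposes.
class Purpose(Enum):
    REGULATORY = "regulatory"
    OTHER = "other"

@dataclass
class ModelRecord:
    model_id: str
    name: str
    purpose: Purpose
    owner: str                                      # 1st-line senior owner
    validator: str                                  # 2nd-line validator
    uses: list[str] = field(default_factory=list)   # nested structure: every recorded use

inventory = [
    ModelRecord("M-001", "IFRS 9 PD model", Purpose.REGULATORY,
                owner="credit-risk", validator="model-validation",
                uses=["provisioning", "stress testing"]),
    ModelRecord("M-002", "Marketing response model", Purpose.OTHER,
                owner="marketing", validator="model-validation",
                uses=["campaign targeting"]),
]

# Completeness check: a record counts as complete only if it has an owner,
# a validator, and at least one documented use.
incomplete = [m.model_id for m in inventory if not (m.owner and m.validator and m.uses)]
print(f"incomplete records: {incomplete or 'none'}")
```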

When it comes to accountability, the 1st and 2nd lines are both accountable. Jens sees core accountability for identifying models and maintaining frameworks as lying with both lines: the 2nd line's responsibility will most likely be auditing, while responsibility in the 1st line will sit with senior ownership, and both will need a solid governance structure. Suman suggested that models have a continuous monitoring scheme, since how models evolve is primarily contingent on market changes. For solid coordination, Jens suggested establishing designated coordinators between the 1st and 2nd lines; successful coordination will hinge on two elements, an agreed process and transparency. The main challenge with managing a model inventory will be getting the different stakeholders to accept a process. Jens recommended that companies build or purchase an advanced database that has reporting capabilities and can capture validation results.

Ensuring a complete and accurate model inventory will also rely on input and cooperation from other departments. Suman found that holding a monthly forum for business model users and the 2nd line to discuss existing and evolving model issues is a great way to promote cooperation. Aside from these monthly meetings, day-to-day monitoring will also need to become a regular occurrence, since issues can arise at unexpected times.

The Path to Automation in Model Validation and Monitoring

Despite the benefits that automation brings to model validation and monitoring, firms still resist adopting it. Rick Boesch, Head of MRM Automation at Evalueserve, found that a large part of this resistance comes from institutions thinking of automation as a regulatory-imposed cost. Additionally, there are still some pieces of MRM that institutions may not want automated. Beyond these issues, one of the biggest slow-downs to implementing automation is deploying resources. Steve Lindo, Course Designer & Instructor in Enterprise Risk Management at Columbia University, illustrated how automation projects can get stalled or deprioritized, which creates issues with demand. MRM's workflow can also be confusing to institutions that do not know where to begin.

Because of these challenges, many begin their automation journeys late and need to play catch-up. However, the benefits of automation far outweigh the challenges it can bring. The most significant pay-off, according to Steve, is that talent is freed from repetitive tasks. Too often, institutions implement automation in a piecemeal fashion. Rick suggested that companies find a way to drive the process so that tasks can be specified and the appropriate tests and digested data can be used to create context and documentation. Anna Slodka-Turner, Head of the Risk and Compliance Practice at Evalueserve, highlighted that many companies do not have the option to work full time on automation. To get started, Rick suggested that companies focus on keeping their code agnostic so that it scales with the number of document types (see the sketch below).
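One minimal way to read that advice is to keep validation logic decoupled from any particular document type, for example via a registry that dispatches tests by type. The document types, test names, and pass/fail rules below are our own illustrative assumptions:

```python
# Sketch of document-type-agnostic validation automation: tests are registered
# per type, and adding a new type requires no changes to the core runner.
from typing import Callable

TESTS: dict[str, list[Callable[[dict], str]]] = {}

def register(doc_type: str):
    """Decorator that registers a validation test for a given document type."""
    def wrapper(fn):
        TESTS.setdefault(doc_type, []).append(fn)
        return fn
    return wrapper

@register("model_development_doc")
def check_assumptions_section(doc: dict) -> str:
    return "pass" if "assumptions" in doc else "fail: missing assumptions section"

@register("monitoring_report")
def check_backtest_results(doc: dict) -> str:
    return "pass" if doc.get("backtest_auc", 0) > 0.6 else "fail: weak backtest"

def run_tests(doc_type: str, doc: dict) -> list[str]:
    """Run every test registered for this document type."""
    return [fn(doc) for fn in TESTS.get(doc_type, [])]

print(run_tests("model_development_doc", {"assumptions": "..."}))
print(run_tests("monitoring_report", {"backtest_auc": 0.72}))
```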

In the future, Steve sees these skills in high demand and expects them to be absorbed into the workforce. More experienced MRM teams will be key to automating MRM functions, since managing the process will require a balance of incoming skills and the judgment of individuals who see the bigger picture.

Dicksey Mathew
Managing Director, Financial Services
Gaurav Goyal