Model Validation

In the Model Risk Management process, model validation is key to assessing the reliability of a model and identifying errors and corrective actions. However, ensuring high-quality validation is challenging given tight regulatory requirements and the growing complexity and usage of models.


We can look at model validation as the second step in the model risk management process. Once development is complete, models need to be independently reviewed by a team of experts to ensure that they are performing as intended.

According to the Fed’s Supervisory Guidance on Model Risk Management (SR 11-7), all model components, including input, processing, and reporting, must be validated. Model validation should include:

  • Conceptual Review:
    • Evaluate the quality of model construction through a review of documentation and empirical evidence supporting the methods and variables used.
    • Assess key assumptions and variables as well as their impact and limitations.
  • System Validation:
    • Review the technology used to support models and implement controls as needed to address gaps or limitations.
  • Data Validation:
    • Ensure the relevance of data used to build the model.
    • Assess the quality and accuracy of data.
  • Testing:
    • Tests vary based on model type and context, and multiple tests should be performed on each model; effective methods include backtesting, benchmarking, sensitivity analysis, and stress testing.
  • Documentation:
    • Interpret test results and complete detailed reports that outline which aspects of a model were reviewed, highlight potential flaws, and establish whether adjustments or controls are needed.
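
Parts of the data-validation step above lend themselves to automation. The sketch below is a minimal illustration in Python; the column names, value ranges, and the 5% missing-data threshold are hypothetical assumptions, not rules from SR 11-7:

```python
# Hypothetical data-quality rules for a credit-risk modeling dataset.
# Field names, bounds, and thresholds are illustrative assumptions.
RULES = {
    "loan_amount": {"min": 0, "max": 10_000_000},
    "fico_score": {"min": 300, "max": 850},
    "ltv_ratio": {"min": 0.0, "max": 1.5},
}
MAX_MISSING_FRACTION = 0.05  # flag fields with >5% missing values

def validate_records(records):
    """Return a list of data-quality findings for a list of dict records."""
    findings = []
    n = len(records)
    for field, bounds in RULES.items():
        values = [r.get(field) for r in records]
        missing = sum(v is None for v in values)
        if missing / n > MAX_MISSING_FRACTION:
            findings.append(f"{field}: {missing}/{n} values missing")
        out_of_range = sum(
            v is not None and not (bounds["min"] <= v <= bounds["max"])
            for v in values
        )
        if out_of_range:
            findings.append(f"{field}: {out_of_range} values outside "
                            f"[{bounds['min']}, {bounds['max']}]")
    return findings

sample = [
    {"loan_amount": 250_000, "fico_score": 720, "ltv_ratio": 0.8},
    {"loan_amount": 400_000, "fico_score": 910, "ltv_ratio": 0.7},  # bad FICO
    {"loan_amount": None,    "fico_score": 680, "ltv_ratio": 0.9},
]
print(validate_records(sample))
```

In practice, checks like these would run against the full development dataset, and any findings would feed into the validation report rather than a print statement.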

However, the validation process does not stop once a model has been implemented. Models continue to be monitored on an ongoing basis and reviewed periodically.

Ongoing Monitoring and Periodic Review

While initial review of models prior to implementation is often the most rigorous part of validation, models need to be continuously monitored after they are put into use. Monitoring how a model is performing allows you to track known limitations and identify any new ones that can occur from changes in markets, products, exposures, activities, clients, or business practices.
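
One common way to detect the kind of shift described above is the Population Stability Index (PSI), which compares the distribution of a model input or score at development time to its distribution in production. The following is a minimal, dependency-free sketch; the bin count, the 1e-4 floor for empty bins, and the sample data are illustrative assumptions:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a recent sample."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = hi + 1e-9  # include the max value in the last bin

    def fractions(values):
        counts = [0] * bins
        for v in values:
            for i in range(bins):
                if edges[i] <= v < edges[i + 1]:
                    counts[i] += 1
                    break
        total = len(values)
        # small floor avoids log(0) for empty bins
        return [max(c / total, 1e-4) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1 * i for i in range(100)]        # scores at development time
recent   = [0.1 * i + 3.0 for i in range(100)]  # shifted scores in production
print(round(psi(baseline, recent), 3))
```

A common rule of thumb treats PSI above roughly 0.25 as a significant shift warranting investigation, though thresholds should be set per model and documented.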

SR 11-7 states that banks should design an ongoing testing and performance evaluation program that includes process verification and benchmarking. Process verification checks that all components are functioning as designed, while benchmarking compares the model’s inputs and outputs to estimates from alternative data or models.
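
The benchmarking comparison can be sketched simply: run the production model and an alternative (challenger) model on the same inputs, summarize the deviation, and flag cases that disagree beyond a tolerance. The tolerance and the sample predictions below are hypothetical:

```python
def benchmark(model_preds, benchmark_preds, tolerance=0.10):
    """Compare a model's outputs to a benchmark model's on the same inputs.

    Returns the mean absolute deviation and the indices of cases where
    the two models disagree by more than `tolerance`.
    """
    deviations = [abs(m - b) for m, b in zip(model_preds, benchmark_preds)]
    mad = sum(deviations) / len(deviations)
    outliers = [i for i, d in enumerate(deviations) if d > tolerance]
    return mad, outliers

# Hypothetical probability-of-default estimates from the production model
# and from a simpler challenger model run on the same portfolio.
prod       = [0.02, 0.05, 0.30, 0.11]
challenger = [0.03, 0.04, 0.12, 0.10]

mad, flagged = benchmark(prod, challenger)
print(mad, flagged)  # flagged cases warrant analyst review
```

Flagged cases do not necessarily mean the production model is wrong; they identify where an analyst should investigate which model better reflects reality.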

In addition to ongoing monitoring, periodic reviews should be conducted at least annually. These reviews should verify that models are working as intended and that existing validation activities remain sufficient.

Challenges of Today

It’s clear that model validation is critical to mitigating the risks associated with model usage. With the spike in model malfunctions during the pandemic, validation has become more important than ever. However, given the widespread use of Artificial Intelligence (AI) and Machine Learning (ML) in model development, validation has also become more difficult.

These factors, paired with the overall rise in model usage, have led to backlogs and delays in the validation process as teams simply cannot keep up with the demand. Banks have been trying to address this issue by expanding headcount, but due to the high level of technical expertise required, filling these positions is extremely costly.

Many firms are now looking toward automation to scale their model validation process. While adding team members increases capacity only incrementally, automating lengthy manual processes can scale operations significantly.

Documentation is one of the most time-consuming pieces of model validation, as the detailed reports for each model often amount to hundreds of pages. We built our platform, MRMraptor, to automate test interpretation and documentation and reduce overall review time from months to weeks. By significantly cutting the manual effort involved in documentation, your high-value quant experts can spend their time on more critical work.


Connect with one of our experts to learn more about how MRMraptor can quickly scale your model validation capacity.

Allison Cornett
