This is a transcription of our interview with David Asermely, Global Lead - Model Risk Management at SAS.
You can watch the full video interview here. Make sure to subscribe to our Risk Management Show via iTunes, Spotify or other major podcast apps. Just search for "Risk Management Show" in your favorite app so that when new interviews start rolling in, you receive a notification or the episode downloads straight to your phone.
Boris: Welcome to our interview with David Asermely. David is the Global Model Risk Management Lead at SAS, driving strategic conversations with global institutions and influencing the SAS model risk management solution roadmap. He is passionate about translating data into actionable intelligence, and he focuses on combining the best technologies and design principles to improve modelling efficiency and quality.
David, thank you for coming to our interview today.
David: Boris thank you for having me. I'm very excited to have this conversation.
Boris: Absolutely. SAS is an analytics leader. It has been named one of the top risk and compliance technology providers by Chartis for 15 consecutive years. It ranked fifth overall in the global Chartis RiskTech100® 2021 report and won three industry solution categories, Model Risk Management being one of the three.
Based on that, we invited David Asermely to talk about the importance of the MRM discipline, as well as some of the current challenges and threats.
David, to get us started: when introducing model risk management to a business-level audience, can you describe the role MRM plays in managing and governing a firm's model life cycle?
David: Absolutely. When you really think of the role of model risk management, to summarize it, it's to identify models that should be replaced: models that are hurting your business, that are not providing the value that they should. We've just been working on a white paper about the importance of model risk management and what it brings to an organization.
And one of the analogies we developed in the paper is that your models are your digital workforce, and model risk management makes sure that your workforce is of the highest quality and is performing tasks as appropriate. We really think of the model risk manager's job as independently reviewing the models and identifying the ones that should not be used.
Boris: Do you see robust model risk management as a competitive advantage, given the current situation and the current development of analytics tools?
David: Absolutely. To me that's like asking, can analytics be a competitive advantage? If you have one organization that has better overall model quality and is removing models that are performing poorly, it is going to have a competitive advantage over an organization that does not have models that are performing.
So from a competitive advantage perspective, model risk management brings an organization an additional level of certainty in the modeling process, and that drives value both from a model operations perspective and from a business perspective.
Boris: It seems that most of the model risk focus has been on the larger banking and insurance firms, mainly in North America and Europe. Why hasn't it been more prevalent in other industries?
David: There have been a number of regulations in the financial markets, partly because it has been very clear over the last couple of decades that models can create risk not only to a firm, but to the global financial system. So there are regulations that continue to be enforced at these organizations around understanding where your models are, how they're performing, and having that independent review of each model to bring additional rigor and perspective to the modeling process.
It's interesting: as I've attended a number of these conversations over the last several years, you can see the level of sophistication continues to rise in the world of model risk management, especially as machine learning models become more prevalent. Things need to be done more efficiently yet still provide that critical information to these organizations. We have had a number of conversations outside of the financial markets, with organizations that do use models.
We've actually been talking to a number of them, and have in fact sold our product to one of the postal services globally. That may seem strange, but think of how many models they're currently using. So the financial markets, under those regulations, have been working on model risk management for a number of years, and some best practices have been developed. Now we're seeing other industries wanting to take a look at what those best practices are and how they can be implemented, so they increase the quality of the analytics they're using within their organization.
Boris: We know about the subprime crisis and the reliance on models at that time. Can you give us some examples of what might happen if MRM goes wrong? I saw in your latest, not yet published white paper some nice images of a Swedish ship that went down just a few minutes after she was launched from the shipyard. It was very interesting to learn that this incident also relates to model risk management.
David: Absolutely. We had some fun with that white paper, and one of the things we focused on is historic examples of where proper model risk management would have helped avoid a disaster. It's an interesting read, and I welcome you all to take a look at it; we will be publishing it very shortly. One of the tenets of model risk management is understanding what could go wrong with a model, and what the absolute risk associated with poor model risk management could be.
One of the things that is required in the domain of model risk management is to stress a model in various ways: to look at different data and extreme conditions on the different features of a model. By doing this type of stress test, you often flush out the potential damage that a model can cause within an organization.
The subprime crisis is a very good example. There is an analogy I heard and loved about subprime: investors were picking up pennies in front of a steamroller, and it was free money, right? Of course, at some point that steamroller hit the gas pedal and hurt a number of investors. So from a model risk management perspective, one of the most important aspects of the whole process of governing a model is understanding the potential damage that can be done.
Is it financial, is it reputational, is it a combination of the two? Is there a possibility that a model can put you on the front page of the Wall Street Journal because your model has a gender bias? These risks are only exacerbated with machine learning, which is capable of taking in a lot of other data that historically may not have been brought in.
So it's understanding, again, the total risk being taken by a firm using that model. But the other aspect is knowing the conditions in which a model will work properly. For example, if a model's performance starts to degrade quickly as interest rates get closer to zero, that's something that should be understood. And as market conditions move that way, the model should be flagged, highlighted, and possibly decommissioned and replaced.
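The kind of condition-based flag David describes can be sketched in a few lines. This is a minimal illustration, not a SAS implementation; the feature names, thresholds, and helper function are hypothetical.

```python
# Hypothetical guardrail: flag a model for review when market conditions
# leave the range the model was validated for.
from dataclasses import dataclass

@dataclass
class ValidatedRange:
    feature: str   # e.g. "interest_rate"
    low: float
    high: float

def check_operating_conditions(conditions: dict, validated: list) -> list:
    """Return the names of features currently outside the validated range."""
    breaches = []
    for r in validated:
        value = conditions.get(r.feature)
        if value is not None and not (r.low <= value <= r.high):
            breaches.append(r.feature)
    return breaches

# A model validated for interest rates between 1% and 8%:
validated = [ValidatedRange("interest_rate", 0.01, 0.08)]
breaches = check_operating_conditions({"interest_rate": 0.002}, validated)
if breaches:
    print(f"Flag model for review: {breaches} outside validated range")
```

In practice the validated ranges would come from the validation report, and a breach would trigger the flag/decommission workflow rather than a print statement.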
Boris: Looking at the new technologies and new buzzwords, including alternative data, open-source languages, AI, machine learning and cloud: what does MRM offer to enable firms to manage and orchestrate these, introducing a structured approach to model governance?
David: Great question. As we within SAS have developed our product, I believe one of the reasons we have been ranked the top vendor in this space is that we've developed a solution that is able to work within your existing ecosystem. It works with all types of models: SAS models, of course, but also your open-source and Python models, and spreadsheets.
So the capability to be a single repository that can govern your entire set of models in one location is, I think, extremely important. The other thing we have focused on is the ability for the entire model life cycle to be seamlessly connected to those tools. One example, and I get asked this question often, is: how can you ensure that models that are active and being used in production are actually approved in the model risk system? The answer is by connecting that system to your ModelOps in a way that requires MRM sign-off before a model is moved into production and used.
One of the other areas we've focused on is automating as much of the model risk system as possible, for example collecting data on when the model is used in a systematic, automated way. When you look at how this was done in the past, it was manual and done very infrequently. Now, with machine learning, your performance monitoring has to be done more frequently, and there are more and more of those models.
So unless you plan on hiring more analysts, you had better find a way to automate your system in the world of model risk management. We have very advanced APIs that allow you to connect across your system, across the various platforms. Those are some of the capabilities. But as machine learning grows and more data sources become available, it's extremely important to understand where the data is coming from and whether there are any problems with that data that will affect the model. If that is the case, have a way in which the MRM system can react and then, if guided or coded that way, have that model shut down for additional review. So try to have as much automation as possible across your ModelOps system in a way that provides usable data for your model risk management team.
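The sign-off gate described above, where a model cannot be promoted to production without an approved MRM validation for that exact version, can be sketched as follows. This is an illustrative toy, assuming a hypothetical in-memory registry; a real deployment would call the MRM system's API instead.

```python
# Hypothetical MRM sign-off gate between the model inventory and ModelOps:
# promotion to production is refused unless the MRM registry records an
# approved validation for that exact model version.

class MRMRegistry:
    def __init__(self):
        self._approvals = {}  # (model_id, version) -> validation status

    def record_validation(self, model_id, version, status):
        self._approvals[(model_id, version)] = status

    def is_approved(self, model_id, version):
        return self._approvals.get((model_id, version)) == "approved"

def promote_to_production(registry, model_id, version):
    """Deploy only if MRM sign-off exists; otherwise refuse."""
    if not registry.is_approved(model_id, version):
        raise PermissionError(f"{model_id} v{version} lacks MRM sign-off")
    return f"{model_id} v{version} deployed"

registry = MRMRegistry()
registry.record_validation("pd_model", "2.1", "approved")
print(promote_to_production(registry, "pd_model", "2.1"))
```

The point of the design is that the check lives in the deployment path itself, so no model can reach production by bypassing the MRM record.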
Boris: What does a future MRM offer firms, and can it adapt across a range of firms and industries, as you have already started describing? Maybe give some examples, without dropping names, of what you have really achieved recently in practical terms and what you are proud of.
David: One of the areas we are really focused on is this: if you ask most firms where their model risk data is located, they are going to tell you it's in some database somewhere, maybe even in some spreadsheets. And there's a team that works with that data and, if there are regulations, tries to meet those regulations and the requirements of the regulator.
But in many cases that data is not accessible across an organization. So what we have done is develop a model summary card, and what this card does is pull in the most important information, from a model risk perspective, on a given model. For example, simple things like the model name, the version of the model, the associated data sources, who the stakeholders are, what the last validation rating was, where the model has been used, whether the model is being overlaid, whether there are overrides associated with the model, and whether there are critical findings open on it. This card is accessible via API calls across an organization. So if you have a model user in another country and they're running the model, they can simply hover over an icon in that application.
That can be any application with API capability, and this model card will pop up, really extracting the most important data. So if I'm a model user and I pull up that card and see that there is a critical finding on a model, that may result in not using the model, making some inquiries, or at least some hesitancy about the model's output. Is it something that model user should rely on to make a big bet, for example?
So we are really working hard on creating a common MRM language that can be understood across the organization, that's focused on the wellness of the model, and then making that information available via APIs.
We believe this will help bring MRM to the forefront within the whole modeling community in a way that's constructive, that will improve the overall usage and quality of a given model, and that will alert the users of the model in real time to potential problems.
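The model summary card idea can be sketched as a single function that assembles the MRM facts listed above into one payload that any application can display. This is a hypothetical illustration; the field names and inventory structure are invented for the example, not SAS's actual schema.

```python
# Hypothetical "model summary card": one lookup that gathers the most
# important MRM facts about a model for display in any application.

def model_summary_card(inventory: dict, model_id: str) -> dict:
    m = inventory[model_id]
    return {
        "name": m["name"],
        "version": m["version"],
        "data_sources": m["data_sources"],
        "stakeholders": m["stakeholders"],
        "last_validation_rating": m["last_validation_rating"],
        # Surface only findings that are both critical and still open:
        "open_critical_findings": [
            f for f in m["findings"] if f["severity"] == "critical" and f["open"]
        ],
    }

inventory = {
    "pricing_v3": {
        "name": "Ticket Pricing",
        "version": "3.0",
        "data_sources": ["bookings", "capacity"],
        "stakeholders": ["pricing-team"],
        "last_validation_rating": "satisfactory",
        "findings": [{"id": "F-12", "severity": "critical", "open": True}],
    }
}
card = model_summary_card(inventory, "pricing_v3")
if card["open_critical_findings"]:
    print("Warning: critical findings open on this model")
```

In the scenario David describes, this payload would sit behind an API endpoint and render as the pop-up card when a user hovers over the model's icon.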
And in my mind, the real benefit of investing in model risk management is providing that type of information as you go forward, and a lot of this will be automated as well. If you have models making automatic decisions for you, loan applications for example, you want the capability in real time to adjust a model, turn a model off if needed, or require human review, in a way that prevents losses from racking up.
There was a recent example with COVID, where airlines have models that give you your pricing when you look to purchase a flight. Those models look at a lot of different information, and they were dramatically lowering prices because fewer orders were coming in to buy airline tickets.
The models just had a history where, if seats were available, prices would continue to go down, not knowing that it didn't matter how low the prices went when people were not going to be flying as COVID broke out. So again, it's about having your MRM system in a place where it can make a difference to your bottom line and prevent reputational damage.
Boris: I’d like to hear your personal opinion: what is a commonly held belief, as it relates to your area of expertise in model risk management, that you strongly disagree with?
David: Sure, I have one. Boris, I do talk to a number of organizations that use models, and there are some organizations that look at model risk purely as a cost. And I would argue that it's a cost of doing business.
Models bring automation and consistency to an organization, and they provide information that allows it to compete better. Model risk management helps in that process and provides data to utilize those models more effectively. The other area I’d like to discuss is that automated MRM actually allows you to save across the entire model life cycle.
For example, if I am a model developer, there are things I have to document; if I am a validator, there are things I have to document. If you ask a developer or a validator which part of the job they hate the most, it's documentation. There are other areas where these highly skilled, expensive members of your team are required to do a lot of boring, repeatable, mundane steps. Having a model risk management system allows you to look at those, reduce the manual effort, and automate some of the documentation associated with them.
You're always going to need an expert in this process, but model risk management allows the expert to focus on providing expertise to the process, not on some of the mundane components they are typically asked to handle. Some of my recent conversations have been with the audit teams at a number of the large banks. Imagine being the model auditor: your job is to make sure that the policy was followed for a given model.
This auditor may be internal or a highly priced consultant. They start to work, and often the first thing that has to happen is identifying the model's history: I want all of the documents, I want to know who's using this model, I want to know where the data is coming from. Typically this requires emailing several individuals, and someone's out on vacation, so there's a lot of wasted time collecting all the information.
Imagine having an MRM system where the model's complete history has been recorded and is available to the auditor. All the documentation is available; we can see the data inputs of the model, look at the downstream components of the model, and view the risk profile of the modeling system, not just that individual model.
So for organizations that do this well, there is value brought across the organization. Another example is having a report that shows you how well you're doing model risk. This is an area where many organizations have senior people spending a number of hours trying to create a report; if you could have that automated and available at any time, it would provide tremendous value.
I guess the last example: one of the questions that has come up over the last year is, will the tool allow me to identify models that use Libor, or that have assumptions based on Libor? Imagine the project required to answer that question within an organization. With a proper MRM system, that is something you can get within hours if the system is up to date.
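With model metadata recorded centrally, the Libor question becomes a filter over the inventory rather than a project. A minimal sketch, assuming hypothetical metadata fields (`data_inputs`, `assumptions`) rather than any real MRM schema:

```python
# Hypothetical inventory query: which models reference a given term
# (e.g. "Libor") in their data inputs or documented assumptions?

def models_referencing(inventory: dict, term: str) -> list:
    term = term.lower()
    hits = []
    for model_id, m in inventory.items():
        fields = m.get("data_inputs", []) + m.get("assumptions", [])
        if any(term in f.lower() for f in fields):
            hits.append(model_id)
    return hits

inventory = {
    "ftp_model": {"data_inputs": ["3M LIBOR curve"], "assumptions": []},
    "pd_model":  {"data_inputs": ["balances"], "assumptions": ["LIBOR floor at 0"]},
    "lgd_model": {"data_inputs": ["collateral values"], "assumptions": []},
}
print(models_referencing(inventory, "libor"))  # -> ['ftp_model', 'pd_model']
```

The same query pattern answers other "which models depend on X?" questions, which is the value of keeping the metadata current.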
Boris: Okay, fantastic. Thank you, David, for taking the time today to participate in our Risk Management Show, and I wish you and your team at SAS great success in growing your company and collecting more awards. Perhaps we can come back in a few months and see what the developments are and what’s new on your side.
David: Boris, thank you so much. I really appreciate the opportunity to talk to you and your audience, and I look forward to having the conversation again.