An Interview with Lian Van Oudheusden, Head of Model Risk Management at FirstRand Bank
Ahead of the 3rd Edition Model Risk Management Conference, we spoke with Lian Van Oudheusden, Head of Model Risk Management at FirstRand Bank. Lian is responsible for model risk strategy and framework development, model risk oversight and reporting, and oversight of the independent validation function. Lian also established and chairs FirstRand’s Advanced Analytics Working Group, a multi-disciplinary community that develops frameworks, processes and guidance aimed at managing the risks associated with AI and ML while enabling optimal generation of business value.
What would you highlight as the biggest impact of machine learning and AI on MRM?
Machine learning and AI provide exciting opportunities for organisations to incorporate complex interrelationships in their data-driven decision-making processes, to retrain models as these interrelationships shift, and to enhance the speed of decision-making through automation. These opportunities greatly enhance the analytics value proposition, but also present higher levels of risk, particularly given ML and AI’s higher inherent complexity, commensurate reduced transparency, and the tendency of algorithms to exaggerate biases within data.
While traditional MRM has focused heavily on the risk of model misspecification, effective MRM in the context of AI and ML requires robust risk management practices across the entire analytics lifecycle, with particular focus on data and implementation governance, which in turn requires effective partnership between MRM teams, information governance teams and deployment platform owners.
As the scale of adoption and the level of automation in retraining of AI and ML solutions increases, MRM practices and processes designed to deal with model changes that occur at discrete intervals will no longer be appropriate. MRM will need to adapt to be sufficiently dynamic to match the speed of change in models while ensuring that model risk remains adequately managed.
How can ethical bias in AI and ML models be overcome?
The existence of ethical bias within models implies models that result in unfair outcomes. Multiple possible definitions of fairness and multiple ways of measuring fairness in outcomes exist, and overcoming ethical bias in models requires a clear understanding of the definition of fairness that is to be applied, how fairness is to be measured, and the thresholds that fairness will be measured against.
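The point about choosing a fairness definition, a measurement, and a threshold can be made concrete with a minimal sketch. The example below uses demographic parity (equal approval rates across groups) as one of many possible fairness definitions; the group labels, decisions and the 0.05 tolerance are purely illustrative assumptions, not a recommended standard.

```python
def approval_rate(outcomes):
    """Fraction of positive (approved) decisions in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in approval rates between any two groups."""
    rates = [approval_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model decisions (1 = approved, 0 = declined) per group
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # approval rate 0.750
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # approval rate 0.375
}

gap = demographic_parity_gap(decisions)
threshold = 0.05  # assumed tolerance; in practice set by policy
print(f"parity gap: {gap:.3f}, within threshold: {gap <= threshold}")
```

Other definitions, such as equalised odds, measure different quantities and can conflict with demographic parity on the same data, which is why the definition must be settled before measurement begins.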
Ethical bias can also arise at any point in the analytics lifecycle, from the point of problem specification and conceptualisation through to eventual model use. This should be recognised within relevant frameworks, which should include explicit requirements for consideration and avoidance of ethical bias at each lifecycle stage.
Processes for challenge and approval of models should include explicit requirements and mandates for ethical challenge, and any committees involved should be sufficiently diverse to limit the potential impact of unconscious bias.
Lastly, accountability for ethical outcomes should be clearly assigned and communicated.
What else would you say are the biggest challenges of ML models?
Transparency of algorithms and explainability of ML model outputs present challenges, particularly where model owners, who are accountable for decisions made on the basis of model outputs, do not have a full understanding of the key drivers in the decision-making process.
Robust implementation of ML models, particularly where automated retraining and decisioning are involved, also presents challenges, given that implementation of algorithms needs to be accompanied by implementation of adequate monitoring mechanisms and safety protocols.
What areas of technological investment are you focussing on? Where, for example, are you planning to invest within the next 3 to 6 months?
From an MRM perspective, our current focus is on investment in technological capabilities that will enable MRM to keep pace with the speed and scale of ML adoption in the organisation. This means investment in tools that support, for example, automation of routine validation activities. We are also continuing to invest in technological solutions that support governance lifecycle management, which help to ensure that only models for which the required governance processes have been completed are used in decisioning.
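The governance lifecycle point, that only models which have completed the required governance processes are used in decisioning, can be sketched as a simple gate in front of model loading. The inventory structure, status values and model names below are illustrative assumptions, not a description of FirstRand's actual tooling.

```python
# Hypothetical model inventory tracking governance status per model.
MODEL_INVENTORY = {
    "credit_scorecard_v3": {"status": "approved"},
    "churn_model_v2": {"status": "in_validation"},
}

class GovernanceError(Exception):
    """Raised when a model has not completed required governance."""

def load_for_decisioning(model_id):
    """Permit decisioning use only for governance-approved models."""
    record = MODEL_INVENTORY.get(model_id)
    if record is None or record["status"] != "approved":
        raise GovernanceError(f"{model_id} has not completed governance")
    return model_id  # stand-in for actual model loading

print(load_for_decisioning("credit_scorecard_v3"))  # allowed
```

In a real platform this check would sit in the deployment pipeline rather than application code, so that an unapproved model cannot reach production at all.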
In terms of our platform (our conferences are informal and intimate, peer-led meetings where all speakers and delegates are senior executives from top financial institutions), how do you see it assisting you with overcoming the challenges you currently face?
Engagement with peers through specialised conferences provides an excellent platform for staying abreast of leading practice and of common (and uncommon) challenges faced by similar organisations. The insights into which challenges should be prioritised, and the sharing of knowledge and experience regarding addressing these challenges, are immensely valuable.
Lian will be presenting during Day Two!
Bad robot: Combating bias in machine learning and AI
- Understanding ethical bias in AI and machine learning models
- Testing tools against bias: for example, Amazon’s CV screening challenges
- How to ensure the ethical treatment of customers (particularly in retail decision making): Having the right guidelines, monitoring, and understanding of the models
Panel Discussion: The state of MRM in 2021: How wide open are the gates of progress?
- Is this a period of consolidation for MRM, with a focus on embedding and refining principles?
- Alternatively, are we still in a more pioneering phase?
- Is SR11-7 still the definitive reference point for banks globally, or has the time come to go beyond SR11-7?
For more information about the event, please visit https://bit.ly/3D13fxU or contact:
Ria Kiayia, Digital Media and PR Marketing Executive
T: +357 22849 404