Debunking the Myths of AI for TPRM

It’s no wonder many third-party risk management (TPRM) professionals are skeptical about the hype surrounding AI. Their very careers require them to be risk averse, and the technology has produced spectacular failures alongside its successes. Consider Tay, the chatbot that adopted racial slurs and inappropriate language from its users.

Tay’s early failure underscores the most important success factor in deploying AI technologies such as machine learning and natural language processing. Tay began responding inappropriately because people taught it those responses. Designed to reflect human cognitive processes (but with far greater processing power), AI systems have no innate understanding. Rather, they learn from human actions, from how the systems are configured to the data fed into them. So it’s important for humans to make good decisions before they deploy AI-assisted TPRM solutions. Driverless cars are a case in point: widely touted to be mainstream by 2020, they have instead been held back by the limitations of their AI training scenarios, which has put the brakes on general market release and adoption.


The hype around AI makes it difficult for TPRM professionals to feel confident about deploying this technology, despite the promise that it will increase efficiency and consistency. Here are some of the myths that underlie these uncertainties and result in failed implementations:

Myth #1

AI can define the best process

Vendors often position AI as a panacea, making vague promises that your system will somehow be better simply because it leverages the technology. But that will only happen if you already have a well-developed process in place. Again, the strength of AI is that it can learn, and to take advantage of that you have to “teach” it. Typically, this means using training data to show it what good decisions look like.

For example, you may want to use machine learning to help with decisions about onboarding third parties. Records of those decisions exist in your TPRM system in the form of assessments and other supporting data (risk intelligence data, external screenings, etc.), along with the decisions TPRM professionals made to approve, deny, or conditionally approve those third parties. Technologies like machine learning and natural language processing can identify the patterns in those decisions (i.e., your underlying process) and make consistent decisions when new onboarding requests come in, without you having to spell out the rules explicitly.
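To make this concrete, here is a minimal sketch (in Python with scikit-learn) of what that training step might look like. The file name and column names (inherent_risk_score, screening_hits, assessment_score, decision) are hypothetical stand-ins for whatever your TPRM system actually exports; this is an illustration, not a production implementation:

```python
# A minimal sketch of learning onboarding decisions from historical records.
# File and column names are hypothetical placeholders for your TPRM export.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

history = pd.read_csv("onboarding_history.csv")  # past assessments + outcomes
features = history[["inherent_risk_score", "screening_hits", "assessment_score"]]
labels = history["decision"]  # "approve", "deny", or "conditional"

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Held-out accuracy gives a first read on how learnable past decisions are.
print(f"Accuracy on unseen requests: {model.score(X_test, y_test):.2%}")
```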

Why not just create rules to automate this process? Writing and testing the vast number of rules that go into a complex decision-making process involving huge amounts of data would take a long time and many resources. It would also be prone to human error: rules could be missed, or nuances of the decision-making process overlooked.

Myth #2

AI will take over control of decisions in my system

AI shouldn’t be a black box. Not only does it make only the decisions you train it to make, it controls only the decisions you give it control over. In the scenario above, in which a machine learning engine has been trained to make onboarding decisions, you would likely give it the ability to approve only low-risk third parties. This would expedite the process, making your business owners happy, while giving your team more time to focus on the due diligence required for higher-risk third parties, making your risk and compliance leadership happy.

That doesn’t mean machine learning is useless for high-risk decisions just because you don’t automate them. For instance, the machine learning engine could present risk experts with recommendations on higher-risk decisions, including confidence levels, to support better decision making. The engine would then learn from the resulting decisions, continuously increasing the accuracy and confidence of future recommendations.
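To illustrate that division of labor, the sketch below reuses the hypothetical model from the earlier example: it auto-approves only low-risk requests the model is highly confident about, and routes everything else to a human reviewer along with a recommendation and its confidence level. The threshold, risk-tier check, and label values are illustrative assumptions:

```python
# A sketch of confidence-gated automation: the engine decides only what
# you let it decide. Threshold, tiers, and labels are illustrative.
def route_request(model, request_features, risk_tier,
                  auto_approve_threshold=0.95):
    probabilities = model.predict_proba([request_features])[0]
    label = model.classes_[probabilities.argmax()]
    confidence = float(probabilities.max())

    if (risk_tier == "low" and label == "approve"
            and confidence >= auto_approve_threshold):
        # Low-risk, high-confidence requests are expedited automatically.
        return {"action": "auto-approve", "confidence": confidence}

    # Everything else stays with a human; the engine only advises.
    return {"action": "human-review", "recommendation": label,
            "confidence": confidence}
```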

To ensure that you maintain control of decisions, it’s important to assess how transparent the use of machine learning is within any AI-enabled TPRM software you’re considering. If the system isn’t trained on your data and can’t be applied to the actions you choose, it may be difficult to maintain and troubleshoot. Some solutions train the machine learning engine on data aggregated from multiple organizations, which may not share your risk appetite and priorities.

Myth #3

AI will fix bad decisions made in the past

Just as it won’t create a process, AI can’t fix one either. You have to identify areas of good practice within your program and use them to train the machine learning engine to make good decisions. In other words, the past decisions you use to train the system have to make sense.

If you present a machine learning engine with a data set that has little or no consistency, the confidence level of the model it creates will be low, because the patterns are not as clear as they would be in consistent data. As a result, the model will be substantially less useful. For example, if you want AI to automate a process that initiates an issues-and-corrective-actions workflow when a third party’s risk score changes, you have to train it on data in which users took the same action under the same circumstances at least most of the time. Otherwise, the machine learning engine may not generate results with a high enough confidence level for you to rely on it to perform that task.
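One way to check this before automating anything, sketched below under the same hypothetical setup as the earlier onboarding example, is to cross-validate the model on your historical data. If users took different actions in near-identical circumstances, accuracy stays low and the model’s own confidence hovers near chance, a signal that the process needs attention before the AI does:

```python
# A quick consistency check: inconsistent past decisions show up as low
# cross-validated accuracy and weak average confidence. `model`, `features`,
# and `labels` follow the earlier hypothetical onboarding sketch.
from sklearn.model_selection import cross_val_score

scores = cross_val_score(model, features, labels, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2%} (+/- {scores.std():.2%})")

avg_confidence = model.predict_proba(features).max(axis=1).mean()
print(f"Average prediction confidence: {avg_confidence:.2%}")
```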

However, once your machine learning system is trained, it may be able to help you identify bad decisions. For instance, a machine learning engine that understands how third parties are classified can analyze an existing portfolio of third parties and flag any outliers in their classifications. This is helpful in uncovering flaws in the process or areas for staff re-training.
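Sketched below, assuming a classifier has been trained (in the same way as the earlier example) on third-party features with the recorded risk tier as the label: records where the model disagrees with the recorded tier at high confidence become candidates for review or re-training. The column names and threshold are hypothetical:

```python
# A sketch of outlier detection on existing classifications: flag third
# parties whose recorded risk tier the trained model strongly disputes.
# `classifier`, `features`, and `history` follow the earlier sketch, with
# the hypothetical "risk_tier" column used as the training label.
predicted = classifier.predict(features)
confidence = classifier.predict_proba(features).max(axis=1)

disagreements = predicted != history["risk_tier"].to_numpy()
outliers = history[disagreements & (confidence > 0.9)]

# Each flagged row merits a human look: a process flaw, a data-entry
# mistake, or a decision that genuinely broke the usual pattern.
print(outliers[["vendor_name", "risk_tier"]])
```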

Myth #4

AI is a workaround for bad data

The hype around AI has led some people to believe that you can implement it to sort out data problems that have made it difficult to centralize, automate, and standardize processes. In fact, nothing could be further from the truth. One of the dangers of implementing a machine learning solution is rushing in before addressing data issues such as poor data quality. Normalizing the data is critical to the success of any machine learning project: the engine can only find meaningful patterns and make sensible decisions if the data is consistent. So it’s important not to undertake an AI project until data issues have been understood and addressed.
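As an illustration of the kind of normalization meant here, the sketch below runs a minimal data-hygiene pass in pandas. The file, columns, and cleanup rules are illustrative assumptions, not a complete data-quality program:

```python
# A minimal data-hygiene pass before any training run. File, columns,
# and mapping rules are hypothetical examples of normalization steps.
import pandas as pd

raw = pd.read_csv("third_parties.csv")

# Normalize free-text names so "Acme Corp" and " ACME CORP. " match.
raw["vendor_name"] = (raw["vendor_name"]
                      .str.strip()
                      .str.upper()
                      .str.rstrip("."))
raw = raw.drop_duplicates(subset=["vendor_name"])

# Collapse inconsistent labels into one controlled vocabulary, then drop
# records missing the fields a model would have to learn from.
raw["decision"] = raw["decision"].str.lower().replace(
    {"approved": "approve", "denied": "deny",
     "conditional approval": "conditional"})
clean = raw.dropna(subset=["decision", "inherent_risk_score"])
```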

However, if the bulk of your data is fairly reliable, an AI-enabled system can help to identify outliers and determine how they are entering the system.

Myth #5

AI is superior to humans

The danger of putting blind faith in AI systems cannot be overstated. They don’t have the inherent understanding that humans do. Rather, they complement human intelligence and can accelerate what a smart team of TPRM professionals is able to accomplish. AI systems learn from the processes and decisions of valued employees who know and support your business, and that expertise can’t simply be replaced by any vendor’s machine learning solution.

You also need human oversight to make sure that the outcomes from an AI system continue to make sense. Just as Tay was decommissioned when its leaders realized it had been corrupted with the wrong data, you want to make sure that skilled people are teaching and reinforcing the right “lessons” for your machine learning engine.

If you follow a practical approach and debunk these myths, AI can add tremendous value to your organization by automating processes, delivering intelligent decision support, and predicting risk exposure. You can learn more about AI technology and how to incorporate it into your TPRM program in the white paper AI for Third-party Risk Management: Going beyond the hype to build a practical approach to machine learning.


Download Whitepaper
