
In this week’s blog post, we are excited to share insights from our latest interview with Kareem Saleh, Founder and CEO of FairPlay, the first Fairness-as-a-Service solution for financial institutions. FairPlay's AI-powered tools assess organizations' automated decisioning models in minutes to increase both fairness and profits.

As technology advances and Artificial Intelligence (AI) becomes increasingly embedded in our everyday lives, it is important to ensure that algorithms are not only efficient and effective but also fair and unbiased. AI now makes decisions in areas ranging from healthcare to criminal justice, where bias can cause real harm. In this blog, we will discuss the importance of fairness in AI and de-biasing algorithms, and the practices that can help ensure AI is used responsibly.

It is surprising that the credit underwriting and risk modeling practices of some of the world’s most prestigious financial institutions are often quite simple, relying on as few as 20 to 50 variables to build linear models in Excel. The state of the art has since progressed to more advanced mathematics, such as machine learning and artificial intelligence. Unfortunately, these systems are often limited by data from the past, which can be tainted by our history of financial exclusion, especially in America, through the discriminatory practice of redlining: denying credit and capital to residents of predominantly Black neighborhoods, which disproportionately harmed people of color, women, and other historically disadvantaged populations. To make matters worse, these systems continue to reflect the lack of diversity in our past rather than the diversity of our current society.
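To make the contrast concrete, here is a minimal sketch of the kind of linear scorecard described above: a logistic regression over a handful of applicant variables. The variable names and synthetic data are hypothetical illustrations, not any institution's actual model.

```python
# A minimal sketch of the traditional approach: a linear (logistic
# regression) credit model built on a small, fixed set of variables.
# All feature names and data here are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

X = np.column_stack([
    rng.normal(650, 80, n),    # credit score
    rng.normal(0.35, 0.1, n),  # debt-to-income ratio
    rng.integers(0, 30, n),    # years of credit history
])
y = rng.integers(0, 2, n)      # 1 = defaulted, 0 = repaid (synthetic labels)

model = LogisticRegression(max_iter=1000).fit(X, y)
print("coefficients:", model.coef_)                       # one weight per variable
print("default probability:", model.predict_proba(X[:1])[0, 1])
```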

To move forward, we must strive to build models that reflect the diversity of our current society. With the rapid advancement of technology, AI is increasingly being used across many aspects of our lives, from healthcare to education to finance. As AI is applied in more areas, the potential for misuse and discrimination becomes a greater concern. Fairness in AI is a growing priority as organizations and individuals strive to ensure that AI is used ethically and responsibly. By taking ethical considerations such as fairness and non-discrimination into account, AI can be used to create fairer and more equitable systems.

The implementation of a fair algorithm can bring numerous benefits, such as improved customer satisfaction, increased trustworthiness, and better decisions. Companies that understand that fairness is the future can also gain a competitive advantage. However, there are risks to consider: discriminatory or unfair decisions may lead to legal and financial repercussions. To mitigate these risks, companies should ensure that their algorithms are properly tested and reviewed for fairness and accuracy.

From a social standpoint, we have seen massive interest in algorithmic credit scoring and AI/ML solutions as a way to solve issues with credit and fairness in the market. Yet algorithmic systems can cause harm in many domains. Consider the Facebook algorithm, whose objective is to keep the user engaged. An algorithm with only one objective can lead to all sorts of unanticipated outcomes: it will do whatever it takes to keep the user engaged, even if that means displaying content that is harmful to their mental health or to society.

There are similar issues in other industries, particularly with self-driving cars. A self-driving car whose sole objective is to get a passenger from point A to point B could do so while breaking the rules of the road, like driving the wrong way down a one-way street or on the sidewalk, endangering pedestrians. To avoid this kind of situation, companies like Tesla have designed their self-driving cars with two objectives: get the passenger from point A to point B, while also respecting the rules of the road.

Companies should be asking themselves certain questions to prevent bias: Is my algorithm fair and equitable? If not, why not? Could it be fairer, and what would be the economic implications of making it so? What are the potential risks of not addressing any biases that may be present? How can we ensure our algorithms are properly validated and audited? What steps can we take to ensure our algorithms are free from bias and discrimination?
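One concrete place to start with "Is my algorithm fair?" is a simple disparity measurement. The sketch below computes the adverse impact ratio (the "four-fifths rule" used in US fair-lending analysis); the approval decisions and group flags are hypothetical inputs.

```python
# A minimal sketch of one common fairness check: the adverse impact ratio.
# Inputs are hypothetical: a binary approval decision per applicant and a
# flag for membership in a protected group.
import numpy as np

def adverse_impact_ratio(approved: np.ndarray, protected: np.ndarray) -> float:
    """Ratio of the protected group's approval rate to the reference group's."""
    rate_protected = approved[protected].mean()
    rate_reference = approved[~protected].mean()
    return rate_protected / rate_reference

approved  = np.array([1, 0, 0, 1, 1, 1, 0, 1, 1, 1], dtype=bool)
protected = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0], dtype=bool)

air = adverse_impact_ratio(approved, protected)
print(f"adverse impact ratio: {air:.2f}")  # values below ~0.8 typically warrant review
```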

There are a few steps that can be taken to ensure algorithms are free from bias and discrimination. First, ensure that all data used in the algorithm is fair and unbiased: data should be gathered from a variety of sources, not just from one group of people. Second, regularly audit and revise the algorithm to make sure that it remains free from bias and discrimination.

In general, when designing fairness for credit scoring, or anything that may involve discrimination, it is important to strive for a balance between predicting who is likely to default on a loan and minimizing disparities for historically disadvantaged groups. Fortunately, this is possible, but it requires an understanding that, left to their own devices, algorithmic systems pursuing only a single objective can tend towards detrimental outcomes.
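One common way to sketch that balance is a two-term training objective: a standard prediction loss plus a weighted disparity penalty. The functions and the lambda_fair weight below are illustrative assumptions, not FairPlay's method.

```python
# A sketch of the two-objective idea: optimize predictive loss and a
# disparity penalty together rather than accuracy alone.
import numpy as np

def log_loss(y, p):
    """Standard binary cross-entropy between labels y and predictions p."""
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def disparity(p, protected):
    """Gap in mean predicted score between the two groups."""
    return abs(p[protected].mean() - p[~protected].mean())

def fair_objective(y, p, protected, lambda_fair=0.5):
    # Single scalar to minimize: prediction error plus weighted unfairness.
    # lambda_fair is a hypothetical tuning knob trading accuracy for parity.
    return log_loss(y, p) + lambda_fair * disparity(p, protected)
```

In practice, sweeping lambda_fair traces out the trade-off curve between default prediction and disparity, which is exactly the balance described above.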

The current state of machine learning algorithms is fraught with bias; algorithms are often deployed before they have been properly evaluated. Companies should make a greater effort to reduce bias before deploying models into production. This means evaluating the data more thoroughly for biases, as well as utilizing new modeling techniques that correct for biases in the data or risk models. De-biasing algorithms will be a critical stage in the future of model governance. Just as Google built search infrastructure for the internet and Stripe built payments infrastructure for the internet, in financial services we need to build fairness infrastructure for credit decisioning, enabling fair digital decisions to be made in real time.
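One well-established example of such a de-biasing technique is reweighing (Kamiran and Calders, 2012), a pre-processing step that weights training examples so that group membership and outcome become statistically independent. The binary group/label setup below is an assumption made for brevity.

```python
# A sketch of reweighing: each training example gets a weight
# w(g, y) = P(G=g) * P(Y=y) / P(G=g, Y=y), so the weighted data
# shows no association between group and label.
import numpy as np

def reweighing_weights(group: np.ndarray, label: np.ndarray) -> np.ndarray:
    weights = np.ones(len(label), dtype=float)
    for g in (0, 1):
        for y in (0, 1):
            mask = (group == g) & (label == y)
            if mask.any():
                expected = (group == g).mean() * (label == y).mean()
                observed = mask.mean()
                weights[mask] = expected / observed  # >1 where under-represented
    return weights

# The resulting weights can be passed to most scikit-learn estimators
# via the sample_weight argument of .fit().
```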

For many years in financial services, we have tried to achieve fairness through blindness, believing that variables can be neutral and objective predictors of credit risk, which is in essence a fallacy. For example, a variable often used in credit underwriting is consistency of employment. On the surface, it may seem like a reasonable way to assess a person’s creditworthiness. But, all other things being equal, this variable will have a disparate effect on women between the ages of 18 and 45 who take time out of the workforce to start a family. This illustrates how variables can appear fair on a univariate basis, yet when combined with other variables can encode information that no human could discern simply by looking at them.

To demonstrate this further, let’s imagine we are constructing a model to predict the sex of an individual. If we include height as an input variable, it is somewhat predictive, because men tend to be taller than women. Even at the same height, men tend to be heavier than women due to factors such as bone and muscle density, so including weight adds to the model's predictive power. But a model built on height and weight alone will classify every child as a woman, since children are short and light; only by also including birth date can the model separate children from adults. This highlights how combinations of seemingly neutral variables encode information that no human could discern by inspection. Thus, the biggest misconception in credit risk modeling is the idea of neutrality.
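The toy example above is easy to reproduce. The sketch below trains the same kind of classifier on synthetic height, weight, and age data; all numbers are invented for illustration, and the exact accuracies will vary.

```python
# Synthetic reproduction of the toy example: height alone is weakly
# predictive of sex, height plus weight is stronger, and adding age lets
# the model stop confusing children with adult women.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 2_000
is_male = rng.integers(0, 2, n)
age = rng.uniform(5, 60, n)
grown = np.clip(age / 18, 0, 1)                 # children are smaller and lighter
height = 110 + 60 * grown + 8 * is_male * grown + rng.normal(0, 6, n)
weight = 20 + 50 * grown + 10 * is_male * grown + rng.normal(0, 5, n)

for name, X in [
    ("height only",           np.column_stack([height])),
    ("height + weight",       np.column_stack([height, weight])),
    ("height + weight + age", np.column_stack([height, weight, age])),
]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, is_male, cv=5).mean()
    print(f"{name:24s} accuracy: {acc:.2f}")
```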

Misconceptions about AI fairness arise from a lack of understanding of the complexities of AI systems and the potential for bias to be embedded in them. AI fairness must be addressed proactively so that AI systems are designed, implemented, and evaluated in a manner that is equitable and just. To do this, we must understand the potential sources of bias and how to mitigate them. We must also be aware of the ethical considerations associated with AI fairness, such as privacy and data protection, and ensure that these are taken into account in the design and implementation of AI systems.

Incumbent credit modeling methods like linear and logistic regression have traditionally worked well on datasets with complete and accurate information. The future of credit underwriting, however, lies in machine learning: complex ensemble algorithms, like those Google uses in search, that are resilient to data that is messy, missing, or incorrect. Furthermore, the shift from traditional credit bureau data, which is limited and often 30 days old, to cash flow underwriting, which provides real-time visibility into a borrower’s balance sheet, will revolutionize the way we assess an individual’s ability and willingness to repay a loan. This move towards more advanced analytics and greater diversity in data sources will prove invaluable in the coming age.
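As a sketch of that resilience, modern gradient-boosted tree ensembles accept missing values natively. The example below uses scikit-learn's HistGradientBoostingClassifier on synthetic cash-flow features with 20% of values missing; the feature names and data are hypothetical.

```python
# Gradient-boosted tree ensembles handle NaNs natively, where a plain
# logistic regression would fail without imputation. Features below are
# synthetic illustrations of cash-flow underwriting inputs.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

rng = np.random.default_rng(2)
n = 1_000
X = np.column_stack([
    rng.normal(3_000, 1_200, n),  # average monthly inflow
    rng.normal(0.6, 0.2, n),      # spend-to-income ratio
    rng.normal(15, 10, n),        # days of cash buffer
])
X[rng.random(X.shape) < 0.2] = np.nan   # 20% of values missing, as in messy feeds
y = rng.integers(0, 2, n)               # synthetic repay/default labels

model = HistGradientBoostingClassifier().fit(X, y)  # no imputation step needed
print("trained with", round(np.isnan(X).mean(), 2), "fraction of values missing")
```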

AI/ML technologies are being used increasingly in the financial sector, and credit modeling is no exception. They are automating credit scoring and risk assessment, producing more accurate and efficient credit models. As these technologies grow more sophisticated, their applications in credit modeling will become even more powerful, allowing more accurate predictions and insights into customer behavior and enabling companies to make better decisions about their customers' creditworthiness. Ultimately, AI/ML will make credit modeling more cost-effective and reliable, allowing financial institutions to offer better credit products and services to their customers.

The first takeaway is that fairness is not only good for profits, people, and progress, but essential to them. Algorithmic systems and machine learning can be incredibly powerful tools, but without intentional oversight they can learn the wrong things and become a threat to consumers, businesses, and financial institutions. It is therefore essential to ensure these systems are used for good and not for harm.

Secondly, people should strive to learn and embrace the latest innovations in the finance industry in order to stay at the cutting edge of risk modeling and algorithmic decision systems. To do this, they should spend time reviewing the latest developments from academia and then work to commercialize and productize those innovations.

In conclusion, fairness in AI and de-biasing algorithms are critical to the successful use of AI in our society. Fairness work helps ensure that AI technologies treat all members of society equitably and accounts for any bias built into the algorithms, while de-biasing corrects those biases, allowing for more equitable outcomes.

As the Global Risk Community team, we thank Kareem Saleh for his expertise and insight into fairness in AI. More information about this topic is available in our original interview, which is accessible here.
