Algorithmic Bias in Credit Lending
- Nadim Khairy Douma
- May 8
- 6 min read
Author:
Nadim Khairy Douma - Senior Data Analyst at PwC Belgium
Introduction
Credit lending bias is not a new phenomenon, but a longstanding and pervasive issue that has affected millions of people and contributed to social and economic inequalities and injustices.
Credit lending bias refers to the unfair or unequal treatment of borrowers based on characteristics such as race, gender, age, or location, rather than on their actual creditworthiness or risk profile. It can take various forms: denying or limiting credit, charging higher interest rates or fees to certain groups of borrowers, or steering them towards subprime or predatory loans. For borrowers, the consequences can be serious: reduced access to credit, a heavier debt burden, lower credit scores, or exposure to foreclosure or bankruptcy.
Credit lending bias can also harm the credit market, the financial system, and the economy more broadly, by creating inefficiencies, distortions, or instabilities, and by undermining trust, fairness, and inclusion.
One of the most notorious examples of credit lending bias was the subprime mortgage crisis that triggered the 2008 global financial crisis. The crisis involved the widespread origination and securitization of mortgages to borrowers with low credit scores or incomes, often on predatory or fraudulent terms such as adjustable interest rates, balloon payments, or prepayment penalties. Borrowers from minority and low-income communities were disproportionately steered towards subprime loans, even when they qualified for prime loans.
As the housing market collapsed and interest rates rose, many of these borrowers defaulted on their loans, leading to massive losses for lenders, investors, and taxpayers, and devastating effects for homeowners, communities, and the economy.
The subprime mortgage crisis exposed the extent and severity of credit lending bias, and prompted legal, regulatory, and policy reforms to address it.
However, credit lending bias remains a persistent and complex challenge, especially in the era of fintech and big tech credit lending, which relies on algorithms and data analytics to assess and manage credit risk. This article discusses algorithmic bias in credit lending, highlighting how algorithms can introduce bias, the implications of biased algorithms for access to credit, and strategies for mitigating such biases.
Focus: Algorithmic Bias
In recent years, there has been a rise of fintech and big tech companies that offer credit lending services, often using algorithms and data analytics to assess and manage credit risk, automate decision-making, and optimize pricing and terms.
While fintech and big tech credit lending can offer some benefits, such as convenience, speed, lower costs, and wider access, they also pose some challenges and risks, such as algorithmic bias.
Algorithmic bias can affect the credit lending market in several ways, such as:
Discrimination: Algorithms can discriminate against certain groups of borrowers, either intentionally or unintentionally, by using variables or proxies that are correlated with protected characteristics, such as race, gender, age, or location. For example, an algorithm that uses zip codes as a factor to determine credit scores or interest rates could result in higher costs or lower access for borrowers from low-income or minority neighbourhoods, even if they have similar credit histories and incomes as borrowers from other areas (a short sketch of this proxy effect follows this list);
Exclusion: Algorithms can exclude certain segments of borrowers, either deliberately or inadvertently, by relying on data sources or criteria that are not representative, inclusive, or relevant for them. For example, an algorithm that uses social media data or online behaviour as a factor to assess creditworthiness could disadvantage borrowers who have limited or no digital footprint, such as the unbanked, the elderly or the rural population, or who have different cultural or privacy preferences, such as opting out of sharing or tracking their online activities;
Inaccuracy: Algorithms can produce inaccurate or unreliable outcomes, either due to errors or limitations in the design, implementation, or validation of the system, or due to changes or uncertainties in the environment, data, or behaviour of the borrowers. For example, an algorithm that uses historical data or patterns to predict future credit performance could fail to account for new or emerging factors, such as the adoption of new technologies, sudden shifts in a country’s political landscape or the evolution of consumer preferences, that could affect the credit risk or demand of the borrowers.
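To make the proxy effect concrete, here is a minimal Python sketch on entirely hypothetical data. The group labels, weights, and thresholds below are invented for illustration; the point is only that a scorer that never sees a protected attribute can still reproduce it through a correlated zip-code feature:

```python
import random

random.seed(42)

def make_applicant():
    # Hypothetical assumption: group membership correlates with living in a
    # low-income zip code (e.g. as a legacy of housing segregation).
    group = random.choice(["g1", "g2"])
    low_income_zip = random.random() < (0.7 if group == "g2" else 0.2)
    credit_history = random.gauss(70, 10)  # same distribution for both groups
    return group, low_income_zip, credit_history

def score(low_income_zip, credit_history):
    # The scorer never sees `group`, only the zip-code feature.
    zip_score = 20 if low_income_zip else 100
    return 0.8 * credit_history + 0.2 * zip_score

applicants = [make_applicant() for _ in range(10_000)]
for g in ("g1", "g2"):
    approved = [score(z, c) >= 70 for grp, z, c in applicants if grp == g]
    print(g, "approval rate:", round(sum(approved) / len(approved), 2))
# Despite identical credit-history distributions, g2's approval rate is much
# lower: the zip-code feature has reintroduced the protected attribute by proxy.
```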
Numbers Don’t Lie
Suppose there are two borrowers, A and B, who apply for a personal loan from a fintech company that uses an algorithm to determine the loan amount and interest rate. The algorithm uses the following factors and weights to calculate the credit score of the borrowers: credit history (40%); income (30%); education (10%); zip code (10%); and social media activity (10%).
The algorithm assigns a credit score between 0 and 100, and uses the following rules to determine the loan amount and interest rate:
If the credit score is above 80, the borrower can get a loan of up to $10,000 at an interest rate of 5%;
If the credit score is between 60 and 80, the borrower can get a loan of up to $5,000 at an interest rate of 10%;
If the credit score is below 60, the borrower is rejected.
The data for the two borrowers are as follows:
Borrower A has a good credit history, a high income, a college degree, lives in a zip code with a high median income and a low default rate, and has high social media activity;
Borrower B has a fair credit history, a medium income, a high school diploma, lives in a zip code with a low median income and a high default rate, and has low social media activity.
The algorithm calculates the credit scores and loan offers for the two borrowers as follows:
Borrower A
Credit score = 0.4 * 100 + 0.3 * 100 + 0.1 * 100 + 0.1 * 100 + 0.1 * 100 = 100. Loan offer = $10,000 at 5% interest rate.
Borrower B
Credit score = 0.4 * 80 + 0.3 * 40 + 0.1 * 60 + 0.1 * 20 + 0.1 * 20 = 54. Loan offer = Rejected.
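For concreteness, the scoring rules and the two borrowers' illustrative 0-100 factor ratings from the example above can be transcribed directly into a short Python script:

```python
WEIGHTS = {"credit_history": 0.4, "income": 0.3, "education": 0.1,
           "zip_code": 0.1, "social_media": 0.1}

def credit_score(ratings):
    # Weighted sum of the 0-100 factor ratings, as in the example above.
    return sum(WEIGHTS[factor] * value for factor, value in ratings.items())

def loan_offer(score):
    # Decision rules from the example above.
    if score > 80:
        return "$10,000 at 5% interest"
    if score >= 60:
        return "$5,000 at 10% interest"
    return "Rejected"

borrower_a = {"credit_history": 100, "income": 100, "education": 100,
              "zip_code": 100, "social_media": 100}
borrower_b = {"credit_history": 80, "income": 40, "education": 60,
              "zip_code": 20, "social_media": 20}

for name, ratings in [("A", borrower_a), ("B", borrower_b)]:
    s = credit_score(ratings)
    print(f"Borrower {name}: score = {s:.0f}, offer = {loan_offer(s)}")
# Borrower A: score = 100, offer = $10,000 at 5% interest
# Borrower B: score = 54, offer = Rejected
```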
The algorithm shows bias against borrower B, who is denied access to credit even though they may have a repayment capacity and credit risk similar to borrower A, who is offered a large loan at a low interest rate. The algorithm discriminates against borrower B on the basis of their zip code and social media activity, and penalizes them for income and education, factors that are weak or unrepresentative signals of their actual creditworthiness.
The algorithm also risks producing inaccurate outcomes, as it does not account for current or future factors that could affect the borrowers' credit performance, the availability or affordability of alternative sources of financing, or the borrowers' personal or financial goals and needs.
All of the issues identified in this analysis apply equally, of course, to evaluations made by humans on the basis of biased formulas. That is an unfortunate reality of credit evaluation today, even when the processing is done by humans.
However, removing the human element entirely means that no human judgment tempers the application of the formula, and, worse, that its biased outputs come to be treated as mathematical evidence, a problem routinely present in economic models (which are often treated as factual rather than as the social science models they are). Put simply, automating the use of a biased formula has no chance of being less biased than its use by flesh-and-blood humans.
Mitigating the Bias
To mitigate algorithmic bias in credit lending, several strategies can be employed:
Transparency and Explainability: Ensure that the algorithms used are transparent and their decision-making processes can be explained. This helps in understanding how decisions are made and identifying any potential biases;
Diverse and Representative Data: Use diverse and representative data sets to train algorithms. This reduces the risk of biases that arise from unrepresentative data;
Regular Audits and Monitoring: Conduct regular audits and monitoring of algorithms to detect and correct biases. This includes testing algorithms for disparate impacts on different groups (see the sketch after this list);
Human Oversight: Incorporate human oversight in the decision-making process to catch and address biases that algorithms might miss.
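As one concrete example of a disparate impact test, here is a minimal Python sketch of the "four-fifths" (80%) rule used in US fair lending and employment contexts, which flags any group whose selection rate falls below 80% of the highest group's rate. The group labels and decisions below are hypothetical audit data:

```python
from collections import defaultdict

def adverse_impact_ratios(decisions):
    """decisions: iterable of (group, approved) pairs from an audit sample."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += bool(ok)
    rates = {g: approved[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical audit sample: model approvals tagged with applicant group.
sample = ([("g1", True)] * 80 + [("g1", False)] * 20
          + [("g2", True)] * 55 + [("g2", False)] * 45)
for group, ratio in adverse_impact_ratios(sample).items():
    flag = "FLAG for review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
# g2's selection rate is 0.55 against g1's 0.80, an impact ratio of ~0.69,
# below the 0.80 threshold, so the outcome would be flagged for review.
```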
While AI and algorithms can greatly enhance efficiency and decision-making, the human aspect remains crucial and should not be neglected, especially in a field such as credit lending, which has the potential to greatly alter people's lives.
Biography of the Guest Expert
Nadim holds a Bachelor's in Business Administration and a Master's in Business Information Management, both from Katholieke Universiteit (KU) Leuven (Belgium). His academic background reflects his interest in business systems and efficiency: his Bachelor's thesis explored production process optimization, and his Master's thesis examined the global expansion of FinTech firms in the credit lending market.
Nadim is a Senior Data Analyst with a strong foundation in business intelligence, financial analysis, and data visualization. Based in Brussels, he currently works at PwC Belgium, where he develops data solutions and AI models that support organizational performance and strategic decision-making. His professional experience also includes roles at EY Belgium and EFG Hermes, where he contributed to market research and valuation projects in the real estate and industrial sectors, analyzing regional trends in the UAE and Saudi Arabia. He is also actively involved in delivering training sessions.