How to ensure fairness in AI in financial services

Modelling for the “average” customer is exclusionary: it creates biased models that deliver unfair decisions for many, an approach that is increasingly untenable for financial services organisations

November 18, 2021


AI has been part of the fintech ecosystem in some form for decades. Linear regression, early neural networks, and other foundational algorithms date back to the 1950s and 1960s, and methods such as support vector machines have been delivering value across industries in the decades since. It was not until the mid-2000s, however, when datasets and computational power grew large enough to reveal AI's true potential, that consumers began to benefit from it directly.

Fast forward to today and we are awash with services delivering advanced analytics and predictions through the power of AI. Neural network-based chatbots now enable users to gain a deeper understanding of their personal finances, democratising financial advisory services. Now, users can understand their balance and spending behaviours better than ever and get better insights into transaction details without the need for manual human intervention.

These are all positive trends for the average banking customer. But if the financial services landscape is to be equal, financial firms must consider the diversity of the populations they serve. In short, just modelling for the “average” customer, i.e. a majority group, is exclusionary and creates biased models that deliver unfair decisions for many, which is an increasingly untenable operational model for financial services organisations.

The majority rule

With most projects, we seek the highest performance for the defined task. For example, let’s say we are building a classifier and our deciding performance metric is accuracy. We build the model and see 99% accuracy. This looks fantastic, and we might immediately want to deploy and operationalise the model. However, performance metrics can be misleading if we do not analyse and balance our data correctly. It could be that the data is heavily skewed towards a majority group and the model has simply learned to predict the majority target every time.
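
A minimal sketch of that failure mode, using entirely synthetic numbers: a “model” that always predicts the majority class scores roughly 99% accuracy on a 99:1 split while learning nothing useful.

```python
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score

rng = np.random.default_rng(0)
y_true = rng.choice([0, 1], size=10_000, p=[0.99, 0.01])  # 1 = rare positive class; heavily skewed labels
y_pred = np.zeros_like(y_true)                            # "model" that always predicts the majority class

print(accuracy_score(y_true, y_pred))           # ~0.99: looks excellent
print(balanced_accuracy_score(y_true, y_pred))  # 0.50: the model has learned nothing useful
```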

Take the example of a model that predicts loan defaults. If the model has been trained on data dominated by a majority group and scores 95% accuracy, it too looks high-performing; yet it clearly favours the majority group. What happens when a customer who does not fall into that group applies for a loan and is denied? Though this customer may have the means to fulfil their repayment obligations, the fact that they do not fit the majority profile leaves them without access to an essential financial service. We now have a model that is high-performing on paper but ultimately unfair.
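
One way to surface this, sketched below with illustrative figures and hypothetical column names, is to slice the model’s decisions by group rather than relying on a single aggregate metric.

```python
import pandas as pd

# Illustrative decisions: approved = model's decision, correct = whether the decision was right
decisions = pd.DataFrame({
    "group":    ["majority"] * 8 + ["minority"] * 4,
    "approved": [1, 1, 1, 1, 1, 1, 1, 0,   1, 0, 0, 0],
    "correct":  [1, 1, 1, 1, 1, 1, 0, 1,   1, 1, 0, 0],
})

# Per-group approval rate and accuracy expose disparities the aggregate number hides
print(decisions.groupby("group")[["approved", "correct"]].mean())
```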

It is this challenge that financial services organisations must actively seek to overcome, as advanced models are increasingly being adopted to improve processes and provide a more accurate assessment of financial risk. 

Building in fairness

As the lending example shows, unbalanced data between two classes can produce predictions that favour the majority group. To counter this problem, fairness must be built into machine learning projects from the outset, by defining data and algorithmic objectives that ensure all groups are treated equally.
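
One simple way to encode such an objective at the data level is to reweight training examples so the under-represented class carries equal influence. The sketch below uses scikit-learn’s balanced sample weights on placeholder features and labels; it is one option among many, not a prescription.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_sample_weight

rng = np.random.default_rng(42)
X = rng.random((1_000, 5))                    # placeholder applicant features
y = (rng.random(1_000) < 0.05).astype(int)    # rare positive class, e.g. default

weights = compute_sample_weight(class_weight="balanced", y=y)
model = LogisticRegression(max_iter=1_000)
model.fit(X, y, sample_weight=weights)        # minority examples now carry equal weight in training
```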

Of course, interpretations of fairness will always vary, but if a model is built in line with well-defined principles of fairness—so that its outcomes can be explained—financial organisations will be able to stand by those decisions.

Putting this into practice requires that, instead of optimising only for performance metrics like accuracy or F1 score, financial organisations also track and optimise for fairness metrics, such as demographic parity or equalised odds, that safeguard against biased outcomes.
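
As a hedged illustration, the sketch below computes one widely used fairness metric, the demographic parity difference: the gap in approval rates between two groups. The data is synthetic; values close to zero indicate similar treatment.

```python
import numpy as np

y_pred = np.array([1, 1, 0, 1, 1, 0, 1, 0, 0, 0])                    # model approvals
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])  # group membership

rate_a = y_pred[group == "a"].mean()
rate_b = y_pred[group == "b"].mean()
print(abs(rate_a - rate_b))  # demographic parity difference; optimise towards 0
```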

A key part of this process involves safeguarding against the ways in which bias can infect models, either directly or indirectly. Sensitive attributes that pinpoint specific groups are the direct route; in many industries their use is not permissible and the data must be anonymised. While this protects against the direct use of sensitive data, links to it can still find their way into models, because related fields can serve as proxies.

Gender, ethnicity, and religion, for example, cannot be used in a lending model designed to predict mortgage approval. However, a postal code or a name can allow these attributes to be inferred indirectly, bringing bias back in. When creating models that can profoundly affect people’s lives, financial organisations must ensure bias plays no part in the decision-making process. Many groups are already disenfranchised when it comes to accessing financial services; allowing bias to creep in through the back door only adds another means of maintaining inequality.
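
One hedged way to screen for such proxies, sketched below on synthetic data with hypothetical field names, is to test how well a candidate feature predicts a protected attribute: accuracy well above the base rate flags the field as a likely proxy.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
postcode = rng.integers(0, 50, size=2_000)        # illustrative candidate feature
protected = (postcode % 5 == 0).astype(int)       # synthetic protected attribute leaking through it

clf = RandomForestClassifier(n_estimators=50, random_state=0)
score = cross_val_score(clf, postcode.reshape(-1, 1), protected, cv=5).mean()
print(score)  # accuracy far above the base rate suggests the field acts as a proxy
```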

The rise of alternative data

When we think of a person who has succeeded in securing a loan, we might make a number of assumptions. They likely have a stable job, consistent spending habits, and a set number of outgoings each month. Crucially, they will have built up a long credit history that can be used to assess their eligibility. The FICO score is a classic example of a credit score used to judge loan eligibility, but reliance on such scoring systems stands in the way of financial inclusion for millions.

Lending is a key line of revenue for banks and loan providers. It’s a big deal for customers, too, as securing credit can provide a lifeline for businesses and individuals alike—particularly so over the course of the pandemic. But those who lack extensive financial histories are at a significant disadvantage when it comes to securing credit.

Increasingly, the combination of alternative data and AI is forging a path for the “credit invisible” to gain access to the financial services many of us take for granted. Through analysis of multiple alternative data sources, such as rental payments, asset ownership, utility bills, and monthly subscriptions, banks and loan providers are able to establish a picture of an individual’s financial responsibility. While it may take some time for this model to be widely adopted, the research and development in this field is promising.
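
As a purely illustrative sketch of the idea, alternative-data records such as rent, utility, and subscription payments can be aggregated into applicant-level features; all field names and figures below are hypothetical.

```python
import pandas as pd

# Illustrative payment history for two "credit invisible" applicants
payments = pd.DataFrame({
    "applicant_id": [101, 101, 101, 102, 102],
    "source":       ["rent", "utility", "subscription", "rent", "utility"],
    "on_time":      [1, 1, 0, 1, 1],
    "amount":       [950.0, 80.0, 12.0, 700.0, 65.0],
})

features = payments.groupby("applicant_id").agg(
    on_time_rate=("on_time", "mean"),
    monthly_commitments=("amount", "sum"),
    distinct_sources=("source", "nunique"),
)
print(features)  # signals a lender could feed into a credit-risk model
```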

These are just some of the key areas where financial services firms must take steps to create a more equal landscape for all. As AI adoption continues to accelerate across industries, we should expect increased scrutiny from regulators and society as a whole on the ethical foundations of models.

Adam Lieberman is head of Artificial Intelligence and Machine Learning at Finastra, a financial software company headquartered in London
