But can computers become better judges of financial risk than human loan officers? Some computer scientists and data analysts certainly think so.
How banking is changing
On the face of it, bank lending is rather simple.
People with excess money deposit it in a bank, expecting to earn interest. People who need cash borrow funds from the bank, promising to pay the amount borrowed plus interest. The bank makes money by charging a higher interest rate to the borrower than it pays the depositor.
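The spread described above can be made concrete with a small worked example. The deposit amount and both interest rates here are hypothetical numbers chosen for illustration, not figures from the article:

```python
# Illustrative interest-rate spread (all figures hypothetical).
deposits = 1_000_000          # funds taken in from depositors
deposit_rate = 0.02           # 2% interest paid to depositors
lending_rate = 0.05           # 5% interest charged to borrowers

interest_paid = deposits * deposit_rate      # owed to depositors
interest_earned = deposits * lending_rate    # collected from borrowers
net_interest_income = interest_earned - interest_paid

print(net_interest_income)   # 30000.0
```

So long as every borrower repays, the bank pockets the difference between the two rates; the next paragraphs explain what happens when some do not.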
Where it gets a bit trickier is in managing risk. If the borrower defaults on payments, the bank not only loses the interest income, it also loses the amount loaned (unless collateral, such as a house or car, was attached).
A borrower who is deemed less creditworthy is charged a higher interest rate, thereby compensating the bank for additional risk.
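The logic of charging riskier borrowers more can be sketched as a break-even calculation. This is a simplified model with hypothetical numbers: it assumes a fraction `default_prob` of borrowers repay nothing, and asks what rate makes the bank's expected return on each loan equal to a target:

```python
# Sketch of risk-based pricing under simplified assumptions:
# a borrower defaults (repaying nothing) with probability default_prob,
# and the bank wants the same expected return from every loan.
def break_even_rate(default_prob: float, target_return: float) -> float:
    """Rate at which expected repayment matches the target return.

    Expected repayment per dollar lent is (1 - p) * (1 + rate).
    Setting this equal to 1 + target_return and solving for rate gives:
    """
    return (1 + target_return) / (1 - default_prob) - 1

safe_rate = break_even_rate(0.01, 0.03)    # low-risk borrower: ~4.0%
risky_rate = break_even_rate(0.10, 0.03)   # higher-risk borrower: ~14.4%
```

The higher rate on the risky loan is exactly the compensation for additional risk that the article describes: it covers the expected losses from the borrowers who will default.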
Consequently, banks face a delicate balancing act – they always want more borrowers to increase their income, but they need to screen out those who aren’t creditworthy.
Traditionally this role was fulfilled by an experienced credit manager — a judge of human character — who could distinguish between responsible borrowers and those who would be unlikely to meet their repayment schedules.
Are humans any good at judging risk?
When you look at the research, it doesn’t seem that humans are that great at judging financial risk.
Two psychologists conducted an experimental study to assess the kind of information that loan officers rely upon. They found that in addition to “hard” financial data, loan officers rely on “soft” gut instincts. These instincts were even regarded as a more valid indicator of creditworthiness than the financial data.
Additional studies of loan officers in controlled experiments showed that the longer the bank’s relationship with the customer, the larger the requested loan, and the more exciting the customer’s industry, the more likely loan officers are to underrate loan risks.
Other researchers have found that the more applications loan officers have to process, the more likely they are to fall back on non-compensatory (shortcut) decision strategies, in which one strong attribute is allowed to outweigh everything else. For example, a customer’s high income does not rule out a bad credit history.
Loan officers have also been found to reach decisions early in the lending process, tending to ignore information that is inconsistent with their early impressions. Lastly, loan officers often fail to properly weigh the credibility of financial information when evaluating commercial loans.
Enter algorithmic lending
Compared with human bank managers, a computer algorithm is like a devoted apprentice who painstakingly observes each person’s credit history over many years.
Banks already have troves of data on historical loan applications paired with outcomes – whether the loan was repaid or defaulted. Armed with this information, an algorithm can screen each new credit application to determine its creditworthiness.
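The screening process described above can be sketched in miniature. The data, variable names and model below are all invented for illustration: a handful of historical applications (an income score, a count of prior defaults, and whether the loan was repaid) are used to fit a simple logistic model, which then scores a new application. Real credit-scoring systems use far more data and more sophisticated models:

```python
import math

# Toy credit-scoring sketch (invented data): each historical application
# is (income_score, prior_defaults, repaid?). A logistic model is fitted
# with plain gradient descent, then used to screen a new application.
history = [
    (0.9, 0, 1), (0.8, 0, 1), (0.7, 1, 0), (0.4, 2, 0),
    (0.6, 0, 1), (0.3, 1, 0), (0.85, 0, 1), (0.2, 2, 0),
]

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Fit weights w = (bias, w_income, w_defaults) by gradient descent.
w = [0.0, 0.0, 0.0]
learning_rate = 0.5
for _ in range(2000):
    grad = [0.0, 0.0, 0.0]
    for income, defaults, repaid in history:
        x = (1.0, income, defaults)
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        for i in range(3):
            grad[i] += (p - repaid) * x[i]     # gradient of log-loss
    for i in range(3):
        w[i] -= learning_rate * grad[i] / len(history)

def repayment_prob(income: float, defaults: int) -> float:
    """Predicted probability that a new applicant repays."""
    return sigmoid(w[0] + w[1] * income + w[2] * defaults)

# Screen a new applicant: approve if predicted repayment probability is high.
approve = repayment_prob(0.75, 0) > 0.5
```

The point of the sketch is the workflow, not the model: historical applications paired with outcomes are enough to train a scorer that ranks new applicants by estimated risk.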
There are various methods, but each draws on the specific data in an applicant’s profile, from which the algorithm identifies the most relevant and distinctive attributes.
For example, if the application is filled in by hand and scanned into the computer, the algorithm may consider whether the application was written in block capitals or in cursive handwriting.
The algorithm may have detected a pattern that applicants who write in all-caps without punctuation tend to be less educated, with lower earning potential, and are therefore inherently riskier. Who knew that how you write your name and address could result in denial of a credit application?
On the other hand, a degree from Harvard University could be viewed favorably by algorithms.
On balance, computers come out ahead
A large part of human decision making is based on first impressions formed within a few seconds, and on how much the lender likes the applicant. A well-dressed, well-groomed young applicant has a better chance of obtaining a loan from a human credit checker than an unshaven, dishevelled bloke. But an algorithm is unlikely to make that kind of judgement.
Some critics contend that algorithmic lending will shut disadvantaged people out of the financial system because of its reliance on pattern-matching and financial histories. Others assume that, because machines are by definition neutral, the usual banking rules will not apply to them. Both views rest on a misconception.
The computer program is constrained by the same regulations as a human underwriter. For example, it cannot deny applications from a particular postal code, because postal codes are often segregated by income level and ethnicity.
Moreover, such overt or covert discrimination can be prevented by requiring lending agencies (and their algorithms) to provide reasons why a particular application was denied, as Australia has done.
In conclusion, computers make lending decisions based on objective data and avoid the biases exhibited by people, while complying with regulations that govern fair lending practices.
Author: Saurav Dutta, Head of School at the School of Accounting, Curtin University