
How banks should use responsible AI to tackle financial crime

03/01/2024

Fraud is nothing new in the financial services industry, but it has grown at a pace that demands serious attention. As technology advances, perpetrators keep finding new ways to slip past regulatory controls, sparking a technological arms race between those seeking to protect consumers and those seeking to harm them. Fraudsters are combining new technologies with emotional manipulation to scam people out of huge sums of money, leaving banks with the task of strengthening their defenses to counter the growing threat.

In response to this rising wave of fraud, banks themselves are beginning to adopt innovative technologies. Banks hold vast amounts of data that have never been used to their full potential, and artificial intelligence can analyze those large data sets to help identify illegal conduct, in some cases before it even occurs.


Increasing fraud risks

It is encouraging to see governments around the world taking a proactive stance on AI, especially in the United States and across Europe. In April, the Biden administration announced a $140 million investment in artificial intelligence research and development, undoubtedly a significant step. However, we cannot underestimate the fraud epidemic or the role this new technology plays in facilitating criminal behavior, a fact I believe governments should keep firmly in mind.

In 2022, fraud cost consumers $8.8 billion, a 44% increase over 2021. Much of that rise can be attributed to technology, including artificial intelligence, which more and more scammers are learning to exploit.

The Federal Trade Commission (FTC) reports that the most frequently reported type of fraud is the imposter scam, which accounted for $2.6 billion in losses last year. Imposter scams take various forms, from criminals posing as government entities such as the IRS to criminals posing as relatives in trouble; both strategies trick vulnerable consumers into voluntarily handing over money or goods.

In March of this year, the FTC issued a new alert about criminals using pre-existing audio clips and AI to clone the voices of family members. The warning states “Don't trust the voice,” a clear reminder to help consumers avoid inadvertently sending money to scammers.

Criminals' fraud methods are becoming more diverse and advanced, and romance scams remain a major problem. The recent Feedzai report, The Human Impact of Fraud and Financial Crime on Customer Trust in Banks, found that 42% of people in the United States have fallen victim to a romance scam.

Generative AI, which can produce text, images and other media in response to prompts, has allowed criminals to operate at a larger scale and discover new ways to trick consumers into handing over their money. Scammers have already exploited ChatGPT to create highly realistic messages that convince victims they are talking to someone else, and that is just the tip of the iceberg.

As generative AI becomes more sophisticated, it will be even harder for people to distinguish between what is authentic and what is not. As a result, it is vitally important that banks act quickly to strengthen their defenses and protect their customers.

Using AI as a defense

Although AI can be used as a criminal instrument, it can also be a powerful defense for consumers. It can work at high speed, analyzing large volumes of data to make smart decisions quickly. At a time when compliance teams are overloaded, AI is making it possible to determine which transactions are fraudulent and which are not.

By adopting AI, some banks are building a complete picture of each customer, allowing them to quickly identify any unusual activity. Behavioral data, such as transaction patterns or when people typically log into their online banking services, can help establish a baseline of a person's typical “good” behavior.
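To make the idea concrete, here is a minimal sketch of how a behavioral baseline might flag an out-of-pattern event. It uses scikit-learn's IsolationForest; the features (transaction amount, login hour), the synthetic data and the thresholds are illustrative assumptions for this article, not details of any real bank's model.

```python
# Minimal sketch of behavioral baselining with an anomaly detector.
# Features, data and thresholds are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Historical "good" behavior: modest amounts, logins usually around 7pm.
history = np.column_stack([
    rng.normal(80, 25, size=1000),   # typical transaction amount (USD)
    rng.normal(19, 1.5, size=1000),  # typical login hour of day
])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# Two new events: one routine, one large transfer initiated at 3am.
new_events = np.array([
    [75.0, 19.5],
    [4800.0, 3.0],
])
scores = model.decision_function(new_events)  # lower score = more anomalous
for (amount, hour), score in zip(new_events, scores):
    flag = "REVIEW" if score < 0 else "ok"
    print(f"amount={amount:7.2f} hour={hour:4.1f} -> {flag} (score={score:.3f})")
```

Run as-is, the routine purchase passes while the 3am transfer is marked for review; in practice a bank would feed far richer behavioral features into such a model.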

This is particularly useful for detecting account takeover fraud, in which criminals impersonate legitimate customers and gain control of an account in order to make unauthorized payments. If the criminal is in a different time zone or starts trying to access the account erratically, the behavior is flagged as suspicious and a suspicious activity report (SAR) is generated. AI can speed up this process by automatically generating and completing these reports, saving compliance teams cost and time.
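As a rough illustration of that automation, the sketch below shows how a simple rule layer might flag takeover signals and pre-fill a draft SAR for a compliance analyst to review. The event fields, thresholds and report structure are assumptions made for the example; real SAR workflows are far more involved.

```python
# Sketch of a rule layer that flags possible account-takeover signals and
# pre-fills a draft suspicious activity report (SAR) for a human analyst.
# Field names and thresholds are illustrative assumptions only.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class LoginEvent:
    customer_id: str
    timestamp: datetime
    country: str           # geolocated country of the session
    failed_attempts: int   # failed login attempts in the past hour

def draft_sar(event: LoginEvent, home_country: str, max_failed: int = 3) -> Optional[dict]:
    """Return a draft SAR payload if the login looks suspicious, otherwise None."""
    indicators = []
    if event.country != home_country:
        indicators.append(f"login from {event.country}; profile country is {home_country}")
    if event.failed_attempts > max_failed:
        indicators.append(f"{event.failed_attempts} failed attempts in the past hour")
    if not indicators:
        return None
    return {
        "customer_id": event.customer_id,
        "event_time": event.timestamp.isoformat(),
        "indicators": indicators,
        "status": "draft",  # a compliance analyst still reviews before filing
    }

event = LoginEvent("C-1042", datetime.now(timezone.utc), country="FR", failed_attempts=7)
print(draft_sar(event, home_country="US"))
```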

Properly trained AI can also help reduce false positives, a major nuisance for financial institutions. False positives occur when legitimate transactions are flagged as suspicious and could result in a customer's transaction (or, in the worst case, their account) being blocked.

Incorrectly identifying a customer as a fraudster is one of the biggest problems banks face. Feedzai research found that half of consumers would abandon their bank if it stopped a legitimate transaction, even if the situation were resolved quickly. AI can help reduce this burden by creating a single, more complete view of the customer that can be used to quickly determine whether a transaction is legitimate.
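A toy example of that "single, more complete view": an amount-only rule flags a large but routine purchase, while a rule that also sees the customer's known devices and frequent merchants lets it through. The profile fields and the $1,000 limit below are assumptions for illustration, not a real bank's policy.

```python
# Toy comparison: amount-only rule vs. a rule with customer context.
# All field names and thresholds are illustrative assumptions.

def amount_only_rule(txn: dict, limit: float = 1000.0) -> bool:
    """Naive rule: flag anything above the limit."""
    return txn["amount"] > limit

def contextual_rule(txn: dict, profile: dict, limit: float = 1000.0) -> bool:
    """Flag only if the amount is high AND the context does not match the customer."""
    if txn["amount"] <= limit:
        return False
    familiar = (txn["device_id"] in profile["known_devices"]
                and txn["merchant"] in profile["frequent_merchants"])
    return not familiar

profile = {
    "known_devices": {"dev-77"},
    "frequent_merchants": {"Acme Travel"},
}
txn = {"amount": 2400.0, "device_id": "dev-77", "merchant": "Acme Travel"}

print("amount-only rule flags:", amount_only_rule(txn))          # True  -> false positive
print("contextual rule flags:", contextual_rule(txn, profile))   # False -> legitimate
```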

However, it is essential that financial institutions adopt responsible, bias-free AI. Because the technology is still relatively new and learns from existing behavior, it can pick up biases from that data and make incorrect decisions, which can also harm banks and financial institutions if it is not implemented properly.

Banks have an obligation to learn more about ethical and responsible AI and to align with technology partners to monitor and mitigate AI bias while protecting consumers from fraud.
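One simple, hedged example of what "monitoring AI bias" can mean in practice is tracking whether false positive rates diverge across customer segments. The toy labels, segment names and tolerance below are assumptions for illustration only; real fairness reviews use far larger samples and multiple metrics.

```python
# Sketch of one bias-monitoring check: compare false positive rates across
# customer segments and alert when the gap exceeds a tolerance.
# Data, segments and tolerance are illustrative assumptions only.
import numpy as np

def false_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    legit = y_true == 0
    return float(np.mean(y_pred[legit] == 1)) if legit.any() else 0.0

# Toy labels: y_true = actual fraud (1) / legitimate (0); y_pred = model flag.
y_true = np.array([0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0])
y_pred = np.array([0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0])
segment = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

rates = {s: false_positive_rate(y_true[segment == s], y_pred[segment == s])
         for s in np.unique(segment)}
print("false positive rate by segment:", rates)

TOLERANCE = 0.10  # maximum acceptable gap between segments (assumption)
if max(rates.values()) - min(rates.values()) > TOLERANCE:
    print("Bias alert: review features and retrain before deployment.")
```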

Trust is the greatest asset a bank has and customers want to be assured that their bank is doing everything it can to protect them. By acting quickly and responsibly, financial institutions can use AI to build barriers against fraudsters and be in the best position to protect their customers from ever-evolving criminal threats.

