Tech

AI is supercharging banking scams—Here’s how fraudsters are using it

Vernon Pillay


As artificial intelligence continues its rapid advance, fraudsters have found a new frontier: AI-powered spoofing scams that mimic legitimate bank communications with alarming realism.

Standard Bank has recently warned customers about the rise of these threats. Scammers are now using voice cloning, deepfake video, AI chatbots, and AI-generated phishing e-mails to impersonate bank officials.

“With the rapid development of artificial intelligence, we have seen an alarming enhancement in spoofing techniques,” Athaly Khan, head of fraud risk management at Standard Bank, said.

“Current scams look and sound more real than ever before.”

The Tech Behind the Deception

AI voice cloning tools can now reproduce a person's voice from just a few seconds of audio, capturing tone, cadence, and even emotional nuance.

These cloned voices, paired with compelling deepfake visuals or chat interfaces, are used to build highly convincing “vishing” attacks, where fraudsters pose as trusted bank officials using seemingly genuine phone numbers.

AI-generated phishing e-mails replicate banking branding, tone, and urgency with eerie accuracy.

Malware or phishing links embedded in QR codes, attachments, or icons can also install harmful software or redirect users to spoofed sites.

The AI Fraud Toolkit

At the core of this evolution is spoofing, the practice of manipulating phone numbers or e-mail addresses to make interactions appear as though they’re coming from a trusted bank.

When combined with “vishing” (voice phishing), scammers are able to replicate not only the caller ID but also the sound of a real bank employee.

Voice cloning now allows criminals to mimic tone, cadence, and even accents, building a false sense of trust. Some fraudsters go further by layering in deepfake video calls or deploying AI chatbots programmed to follow the familiar flow of a bank’s customer service script.

“These calls often include standard security questions and disclaimers, making them almost indistinguishable from the real thing,” Khan explained.

Scammers even weave in fragments of personal data, like addresses or birth dates, to boost credibility, she added. 

The goal of these fraudsters is to push the customer into panic mode and manipulate them into transferring money, clicking on malicious links, or sharing sensitive details such as one-time PINs (OTPs) and card information.

Phishing 2.0

Phishing e-mails are also being supercharged by AI. Using generative tools, fraudsters can instantly create convincing e-mails with correct branding, tone, and layout, sometimes indistinguishable from genuine bank communication. 

These e-mails frequently contain malware hidden in attachments, links, icons, or QR codes. Once clicked, the malware installs silently on a device or redirects victims to legitimate-looking but fake websites designed to steal login credentials. To maximise fear, the messages often carry urgent warnings about compliance issues, with short deadlines to act.

How to Stay Ahead

The frightening truth is that AI has dramatically lowered the barrier to entry for sophisticated cybercrime. What once required technical skill and resources is now available through off-the-shelf AI tools.

To defend against this, Standard Bank is urging customers to follow a set of “digital hygiene” rules:

  • Never transfer money to so-called “safe” accounts.

  • Don’t create instant money vouchers at someone else’s request.

  • Avoid clicking links, downloading attachments, or scanning QR codes from unsolicited messages.

  • Never share login details, CVVs, OTPs, or ATM PINs—even if the request looks and sounds authentic.

“Fraud is widespread and constantly changing,” the bank said.

“Scammers use fear and urgency to manipulate victims. Stay calm, think critically, and always remember what not to share,” Standard Bank added. 

FAST COMPANY