Identity scam warning in South Africa

Identity fraud in South Africa has increased by 337% over the past year, and artificial intelligence (AI) has been identified as a significant driver behind these impersonation crimes.
This is according to the South African Banking Risk Information Centre’s (Sabric) AI-Driven Fraud report.
Sabric found that fraudsters are increasingly using the technology to open or take over accounts using stolen identities.
One way this is being done is by using AI to impersonate people in what is known as an injection attack, where threat actors insert fraudulent biometric data into a security system to bypass it.
Sabric said these types of attacks are likely to become more prevalent, given the rate at which AI content is generated.
Mitek has also reported that 90% of online content will be synthetically generated by next year.
Scammers are also increasingly using deepfakes — AI-generated images, videos or audio depicting a person that may or may not exist.
In the case of fraud, deepfakes are typically used to impersonate a real person.
These attacks can be perpetrated by using tools such as Clony AI, Voice Celebrity, and Voice AI to clone voices.
According to Sabric, these tools need only one to two hours of voice recordings as training data.
The resulting clones can then be used in social engineering attacks to send voice messages, make phone calls, or mimic a target’s personal communication style.
Sabric said that several organisations use voice authentication to verify users.
However, these systems can be defeated with voice cloning, as demonstrated by a journalist who gained access to their own account using a clone of their voice.
The report also noted that social engineering often used visual deepfakes to convince victims to send fraudsters money or divulge sensitive information.
Fraudsters often use deepfakes of famous people to lure in their victims.
This recently happened in South Africa when deepfakes of SABC News anchor Francis Herd made the rounds, purporting to advertise a project by Elon Musk.
It happened again at the beginning of 2025 when deepfakes of Patrice Motsepe appeared online promoting fake investments in a firm called Gold Earnings and Africa Gold.

Deepfakes can also be used to bypass selfie verification mechanisms and open bank accounts.
The report said this is becoming a popular method for expanding money mule networks.
Generative AI is also increasingly used to forge documents, with ID documents being the most frequently targeted by this type of forgery.
If attackers gain access to an ID document, they can apply for credit cards on someone’s behalf and open new accounts to borrow or launder money.
These synthetic identities typically combine real personal details with fabricated information.
However, while AI enables increasingly sophisticated attacks, it also offers businesses a means to defend their customers against such threats.
Sabric said that AI could significantly enhance a system’s ability to detect fraud and anomalies in data.
Unsupervised learning can use autoencoders, which compress data and then attempt to reconstruct it. If the reconstruction error is significantly high, this may indicate an anomaly in the data, as sketched below.
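The following Python sketch illustrates the idea on made-up transaction features; the model, features and flagging threshold are assumptions for illustration and are not drawn from Sabric's report.

```python
# Illustrative autoencoder-style anomaly detection on synthetic data.
# The features, network size and 99th-percentile threshold are assumptions,
# not details from Sabric's AI-Driven Fraud report.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Mostly "normal" transactions plus a handful of outliers standing in for fraud.
normal = rng.normal(loc=0.0, scale=1.0, size=(1000, 6))
outliers = rng.normal(loc=6.0, scale=1.0, size=(10, 6))
X = StandardScaler().fit_transform(np.vstack([normal, outliers]))

# An MLP trained to reproduce its own input acts as a simple autoencoder:
# the narrow middle layer forces a compressed representation of each record.
autoencoder = MLPRegressor(hidden_layer_sizes=(8, 2, 8),
                           max_iter=2000, random_state=0)
autoencoder.fit(X, X)

# Reconstruction error per transaction; unusually high errors are flagged.
errors = np.mean((X - autoencoder.predict(X)) ** 2, axis=1)
threshold = np.percentile(errors, 99)
flagged = np.where(errors > threshold)[0]
print(f"Flagged {len(flagged)} of {len(X)} transactions as anomalous")
```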
Supervised learning, on the other hand, involves training models on labelled data. In this case, the data would include both fraudulent and non-fraudulent transactions.
One such technique uses logistic regression to predict the probability that a transaction is fraudulent, assigning a probability score to each transaction.
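As a rough sketch of how such scoring works, the example below trains a logistic regression on synthetic labelled transactions; the features and labelling rule are invented for illustration and do not come from the report.

```python
# Illustrative logistic regression fraud scoring on synthetic labelled data.
# The features and the rule used to generate labels are assumptions,
# not details from Sabric's AI-Driven Fraud report.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Made-up features: amount, hour of day and merchant risk, all standardised.
X = rng.normal(size=(2000, 3))
# Fabricated labelling rule: large amounts at risky merchants count as fraud.
y = ((X[:, 0] > 1.0) & (X[:, 2] > 0.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# predict_proba assigns a probability score to each transaction; the second
# column is the estimated probability that the transaction is fraudulent.
scores = model.predict_proba(X_test)[:, 1]
for score in scores[:5]:
    print(f"fraud probability: {score:.3f}")
```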