ENSight

 

09 Jun 2025
BY Shaaista Tayob AND Aobakwe Motebe

AI impersonation: The rise of deception by design

As artificial intelligence (“AI”) continues to advance, the line between reality and simulation is becoming increasingly blurred. The rise of AI-driven impersonation technology is one of the most troubling developments, as it is designed not just to assist or automate, but to mimic human beings convincingly. This ranges from chatbots imitating people to deepfake videos and cloned voices, with AI now capable of crafting convincing imitations that can deceive even the most discerning among us.

As nations grapple with the implications and victims begin to speak out, a pressing question emerges: how do we regulate something that can so convincingly imitate reality?

AI impersonation refers to the use of AI to mimic the identities of humans, typically without their consent. It includes:

  • Chatbots impersonating real people or entities,
  • Deepfake videos portraying someone saying or doing things they never actually did,
  • Voice cloning to simulate a person’s voice in an audio call.

In 2024, the CEO of one of the largest UK-based advertising firms was targeted by scammers and became the victim of a deepfake scam. The fraudsters created a WhatsApp profile using the CEO’s publicly available image and set up a Microsoft Teams meeting with another executive from the organisation. During the meeting, they deployed an AI-generated video of the CEO, also known as a deepfake, and attempted to elicit money and other personal details from the executive. Fortunately, the attempt was unsuccessful due to the staff’s vigilance.

In 2023, a series of deepfake videos featuring a prominent business figure and several South African news anchors was also released. The videos showed the business figure announcing and endorsing an investment platform that promised implausible returns. The platform was later exposed as a scam that caused significant financial losses, triggering investigations by the Financial Sector Conduct Authority (“FSCA”).

The reality is that most legal systems are not yet fully equipped to address AI impersonation. However, progress is being made in jurisdictions such as the United States of America (“USA”) and the European Union (“EU”). The EU’s Artificial Intelligence Act (“AI Act”) mandates transparency for AI-generated content, including the labelling of content such as AI-generated videos or images. This assists in preventing misinformation and protecting fundamental rights. In the USA, Congress introduced the “No Fakes Act” in 2023 and 2024 in an effort to establish rules around the use of AI copies or replicas of people. The Act has been publicly supported by organisations such as YouTube.

In South Africa, however, there is no legislation that deals specifically with deepfakes or AI impersonation. Existing laws such as the Cybercrimes Act, 2020 (the “Cybercrimes Act”) and the Protection of Personal Information Act, 2013 (“POPIA”) offer only partial remedies, applying only in cases of fraud, defamation, or data misuse; they are not designed to address the unique risks posed by AI-enabled impersonation. This raises the question of whether a victim of such a deepfake cyberattack, which may not strictly fall within the ambit of South Africa’s existing legislation, has any legal rights or remedies that are enforceable against deepfake criminals.

With legislation still playing catch-up, both individuals and organisations must take proactive steps to reduce their risk exposure, including the following measures:

  • Implement multiple layers of security in addition to biometric authentication.
  • Stay informed about cyber threats and learn how to recognise potential deepfake cyberattacks before responding to any requests.

For organisations, it is important to provide employees with training on advanced cyber threats as part of information security programmes. This will help raise awareness of the risks and ensure staff are familiar with company-wide protocols to follow if they suspect that an unauthorised third party is attempting to access confidential, personal, or proprietary information, or is trying to deceive them into transferring company funds.

For assistance in safeguarding yourself and your business against cyberattacks, contact ENS’ TMT experts below.

*Reviewed by Isaivan Naidoo, an Executive in ENS’ TMT Department

Shaaista Tayob

Associate | Technology, Media and Telecommunications

stayob@ENSafrica.com

Aobakwe Motebe

Candidate Legal Practitioner | Technology, Media and Telecommunications 

amotebe@ensafrica.com