Breaking News
March 12, 2026 6:27 am

Synthetic Faces, Real Consequences: UAE Security Chief Warns of Escalating Deepfake Fraud Risks

Artificial intelligence has entered a new phase: one where digital identities can convincingly mimic real people in real time. From celebrity images circulating online to executive impersonations during live video calls, AI-generated avatars are rapidly blurring the boundary between the authentic and the artificial.

In the UAE, security professionals are warning that this technology is advancing faster than the systems designed to detect it. Rafal Hyps, Chief Executive Officer of Dubai-based risk management firm Sicuro Group, says organisations must urgently reassess how they verify identity in an era of synthetic media.

When “Seeing Is Believing” Fails

AI avatars can now replicate facial expressions, voice patterns, and real-time movements with striking realism. According to Hyps, this poses a significant risk to organisations that still rely on visual confirmation during video meetings or financial approvals.

He notes that deepfake-enabled fraud has already been used globally to impersonate senior executives and authorise payments. Many identity verification systems, originally designed to confirm physical presence, were not built to detect digitally generated replicas. That gap, he says, is increasingly being exploited.

Cybersecurity researchers and law enforcement agencies worldwide have also reported a rise in AI-driven impersonation scams, particularly in financial and recruitment settings. (Sources: Interpol Cybercrime Report 2023; Europol IOCTA 2023; FBI Internet Crime Report 2023).

How the Fraud Works

Today’s AI tools allow attackers to generate realistic avatars using publicly available images and voice samples. These avatars can be streamed through virtual cameras during video calls, potentially passing traditional “liveness” checks such as blinking or head movement.

Hyps explains that AI systems can also fabricate identity documents paired with matching selfies or video clips. These methods are no longer experimental; they are commercially accessible and increasingly sophisticated.

Recruitment has emerged as an unexpected vulnerability. With remote hiring now common practice, some organisations have unknowingly interviewed synthetic candidates during initial screening rounds.

The Biometric Data Risk

Beyond impersonation, the collection of biometric data introduces long-term concerns. Many avatar platforms require users to upload facial images and may capture facial geometry to generate outputs.

Hyps highlights that, unlike passwords, biometric data cannot be reset. If facial recognition data is compromised, the consequences can be permanent. Global breaches involving biometric databases have already exposed millions of records in recent years. (Source: World Economic Forum Global Cybersecurity Outlook 2024).

He also warns that publicly available professional headshots and social media images can be scraped and used to generate synthetic identities without consent. When combined with job titles, company affiliations, and location data, these digital replicas can become highly convincing.

How to Spot a Deepfake in 2026

Although AI rendering continues to improve, experts suggest three practical real-time checks:

Side-Profile Test: Ask the individual to turn their head sideways. Deepfake overlays may distort around the jawline or ears.

Hand Occlusion Test: Request the person move their hand across their face. AI systems often struggle to correctly render overlapping objects.

Lighting Consistency Check: Observe whether shadows shift naturally when the environment changes. Synthetic images frequently maintain fixed lighting patterns.
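For organisations that fold these checks into a formal call-verification procedure, the underlying logic is simple: every check must be performed and passed before a request is approved, and a skipped check counts as a failure. The sketch below is illustrative only; the check names and the `run_verification` helper are hypothetical, and real deepfake detection requires specialised tooling beyond a manual checklist.

```python
# Illustrative sketch: a manual-verification checklist for live video calls.
# The check names mirror the three tests described above; they are not part
# of any standard tool or API.

LIVE_CHECKS = [
    ("side_profile", "Ask the caller to turn their head; watch the jawline and ears."),
    ("hand_occlusion", "Ask the caller to pass a hand across their face; watch for glitches."),
    ("lighting_consistency", "Change the lighting; watch whether shadows shift naturally."),
]

def run_verification(observations: dict) -> bool:
    """Return True only if every check was performed and looked natural.

    `observations` maps a check name to a bool (True = passed).
    A missing check counts as a failure, so partial verification never passes.
    """
    return all(observations.get(name, False) for name, _ in LIVE_CHECKS)

# Example: two checks passed, the lighting check was skipped -> do not approve.
print(run_verification({"side_profile": True, "hand_occlusion": True}))  # False
```

The fail-closed design is the point: an approval process that proceeds on incomplete evidence is exactly the gap that live deepfake fraud exploits.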

Hyps stresses that AI avatars are not inherently malicious. However, without updated safeguards and greater awareness, the risk of financial and reputational damage will increase.

As AI-generated identities become more seamless, one long-standing assumption is rapidly fading: visual confirmation alone is no longer proof of authenticity.
