Towergate Issues Warning on Deepfakes and AI Cybercrime

Towergate Insurance warns businesses of a growing cyber risk from increasingly realistic deepfake content and AI-driven fraud, urging immediate protective measures to guard against financial and reputational damage.

The call comes in response to a surge in AI-generated deepfakes: highly realistic synthetic videos, images and audio files that are increasingly being weaponised by cybercriminals to impersonate individuals and infiltrate organisations.

Recent incidents have highlighted the alarming potential of these technologies. In one case, a finance worker was conned into transferring $25 million after cybercriminals used AI to mimic the company's CFO during a video call. In another, a deepfake voice of an IT staff member convinced an employee to share a multi-factor authentication code, compromising company data.

“As artificial intelligence continues to evolve at pace, the line between real and fake is becoming dangerously blurred,” explained Marc Rocker, Head of Cyber at Towergate Insurance. “This isn’t just science fiction; it’s happening now, and businesses must be vigilant. We’re seeing AI used to both attack and defend, and the ability to distinguish between genuine and synthetic content is fast becoming a critical part of cyber risk management.”

The insurance broker is encouraging companies to implement robust cyber safety protocols, including regular employee training, the use of AI detection tools, and strong authentication procedures. In addition, Towergate is stressing the importance of comprehensive cyber insurance to mitigate the financial and reputational fallout from such attacks.

“Cyber insurance isn’t just about covering losses,” Rocker added. “It’s about helping businesses recover and continue operations after an incident—and as the methods used by cybercriminals grow more sophisticated, having that safety net has never been more important.”

Rocker also points to the increasing use of AI in more creative ways, such as O2’s “AI Grandma”, which stalls scam callers, but warns that these innovations should not distract from the very real risks.

Towergate’s Head of Cyber also points out some of the key signs of a deepfake: “Certain elements to look out for are unnatural blinking, overly smooth skin, inconsistent lighting, and pixelation around facial features. Businesses should educate employees on these red flags and encourage them to treat unexpected video or audio communications with caution.”
