Oct 9, 2024
General
AI, Deepfakes, and the New Age of Business Risks
How AI-generated deepfakes are targeting organisations worldwide—and how Australian businesses can prepare
Imagine receiving a video call from your Managing Director. They’re speaking in their usual tone, asking for an urgent fund transfer. Everything seems above board—until you discover moments later that the caller was actually a computer-generated impersonation. This scenario, once confined to science fiction, is now a real concern for organisations in Australia and around the world.
Deepfakes use artificial intelligence (AI) to produce or alter video and audio content in ways that are unsettlingly realistic. While the technology can be used for creative projects—such as re-creating historical figures in documentaries—cybercriminals are harnessing it to impersonate executives, forge convincing “evidence,” and commit financial fraud. As AI tools become more accessible, businesses increasingly need to guard against these sophisticated deceptions.
Recent Deepfake Scams Making Headlines
A Costly CEO Impersonation in Europe
In 2019, a UK-based energy firm fell victim to a deepfake voice scam. According to a report in The Wall Street Journal, the company’s finance chief believed he was speaking with his CEO and authorised a transfer of more than €200,000 to a supposed supplier in Hungary. By the time the ruse was uncovered, the funds had disappeared. (https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402)
US$35 Million Lost in a Phone Scam
In another high-profile incident, reported by Forbes, a bank in the United Arab Emirates was swindled out of US$35 million in 2020. Cybercriminals used a deepfake voice to pose as a corporate director requesting an urgent payment. Hearing a voice that matched their client’s, the bank manager authorised the transfer, only to realise later that it was a sophisticated scam. (https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/)
A Ferrari Manager Unmasks a Deepfake
Deepfake attempts aren’t limited to financial scams. In one notable case reported by it-daily.net, a Ferrari manager became suspicious of a virtual meeting request that appeared to come from a high-ranking official at the company. After contacting the official’s office to confirm, it emerged that the request was a clever deepfake ploy designed to gain access to sensitive information. (https://www.it-daily.net/en/shortnews-en/how-a-clever-ferrari-manager-exposed-a-deepfake)
Growing Threats in Australia
Here in Australia, businesses are also encountering more deepfake attacks. Insurance Business Australia highlights rising incidents involving impersonated voices and falsified videos used to trick finance teams into transferring funds or revealing sensitive information. These scams can damage corporate reputations, as any leaked deepfake material can spread quickly online, undermining trust among customers, partners, and staff. (https://www.insurancebusinessmag.com/au/news/cyber/australian-businesses-hit-hard-by-rising-deepfake-threats-495922.aspx)
Identity Fraud via Deepfakes
The US Department of Homeland Security warns that criminals are using deepfake images and videos to bypass identity verification processes. This allows fraudsters to gain access to systems and data they would otherwise be unable to reach. For an Australian organisation, even a single breach can create legal, financial, and reputational fallout. (https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf)
Why Deepfakes Are So Effective
Deepfake attacks exploit the trust we place in familiar faces and voices. In many cases, staff believe they’re carrying out legitimate instructions from an executive they’ve worked with for years. By the time suspicion arises, damage may already be done. These schemes threaten immediate finances, erode confidence in internal controls, and lead to scepticism among clients and partners.
What Organisations Can Do
Educate and Inform
Provide regular training so employees understand what deepfakes are and how these scams unfold. Encourage them to question any urgent or unusual requests, even if they appear to come from a known executive.
Establish Verification Protocols
Implement multi-step checks for high-value transactions, such as requiring approvals from multiple team members. Even a quick text or phone call to a verified contact number can help confirm the authenticity of a request.
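To make the idea concrete, here is a minimal Python sketch of a dual-approval rule for high-value transfers. The threshold, role names, and the out-of-band callback flag are illustrative assumptions only, not a description of any particular finance system or product.

```python
# Illustrative sketch only: a simple dual-approval rule for high-value transfers.
# The threshold, approver roles, and callback step are assumptions for this example.

HIGH_VALUE_THRESHOLD_AUD = 10_000  # assumed cut-off; set according to your own risk appetite

def transfer_is_authorised(amount_aud: float,
                           approvers: set[str],
                           callback_confirmed: bool) -> bool:
    """Return True only when the request passes the multi-step checks."""
    if amount_aud < HIGH_VALUE_THRESHOLD_AUD:
        return len(approvers) >= 1
    # High-value: require two distinct approvers AND an out-of-band callback
    # to a verified contact number, never the number supplied in the request itself.
    return len(approvers) >= 2 and callback_confirmed

# A deepfake "CEO" call on its own cannot satisfy the rule.
print(transfer_is_authorised(250_000, {"finance.manager"}, callback_confirmed=False))            # False
print(transfer_is_authorised(250_000, {"finance.manager", "cfo"}, callback_confirmed=True))      # True
```

The point of the design is that no single voice or video call, however convincing, is ever sufficient on its own to move a large sum.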
Consider Technology Solutions
AI-based detection tools are emerging to spot anomalies in voice and video content. While no system is perfect, using updated security solutions can help flag suspicious activity before it causes irreparable harm.
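As a rough illustration, the sketch below shows how a score from a detection tool might be used to hold suspicious media for human review rather than trusting it automatically. The MediaItem structure, the synthetic_score field, and the 0.7 threshold are hypothetical placeholders for whichever commercial or open-source tool you adopt.

```python
# Illustrative sketch only: routing media through a detection step before anyone acts on it.
# "synthetic_score" stands in for the output of whatever detection tool you choose;
# the review threshold is an arbitrary assumption for this example.

from dataclasses import dataclass

@dataclass
class MediaItem:
    name: str
    synthetic_score: float  # assumed scale: 0.0 = likely genuine, 1.0 = likely synthetic

REVIEW_THRESHOLD = 0.7  # assumed; tune against your tool's false-positive rate

def triage(item: MediaItem) -> str:
    """Flag suspicious audio or video for human review instead of auto-trusting it."""
    if item.synthetic_score >= REVIEW_THRESHOLD:
        return f"HOLD for security review: {item.name}"
    return f"No automated flag raised: {item.name}"

print(triage(MediaItem("urgent_ceo_request.mp4", synthetic_score=0.83)))
print(triage(MediaItem("weekly_town_hall.mp4", synthetic_score=0.12)))
```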
Revise Policies and Incident Response Plans
Deepfake-specific scenarios should feature in your organisation’s crisis management playbooks. Outline who must be notified, how to freeze suspicious transactions, and the steps for promptly informing regulators and stakeholders.
Review Insurance Coverage
Cyber insurance policies do not always include protection against deepfake fraud. Insurance Business Australia recommends checking whether your current coverage extends to these new threats. If not, consider upgrading your policy.
Where Nimbus Fits In
Staying one step ahead of deepfake and AI-driven fraud requires more than just awareness—it involves hands-on training, realistic testing, and continuous education. This is where Nimbus can help. We focus on equipping organisations to detect and respond to deepfake attempts by:
Providing targeted staff training on how to identify suspicious audio and video cues.
Simulating real-world deepfake attacks to test and refine your existing processes.
Offering ongoing guidance to help you adapt to new AI threats as they emerge.
While no single strategy can eliminate the deepfake threat, partnering with specialists like Nimbus can give your organisation practical tools to reduce risks and build a culture of vigilance.
Looking Ahead
As AI continues to evolve, the line between fact and fabrication will blur further. For Australian businesses, deepfakes represent a challenge that blends social engineering with cutting-edge technology. However, by combining staff awareness, stricter verification procedures, emerging detection tools, and additional support from groups like Nimbus, organisations can mount a more effective defence.
Preparation remains key. A culture that encourages employees to question suspicious instructions, even from familiar faces, will go a long way toward minimising the risks of deepfake fraud. And, as with any cyber threat, continued vigilance and adaptation are essential to staying one step ahead of potential attackers.