February 23, 2026
Deepfake Deception: A Guide for Professional Liability Practitioners
As artificial intelligence (AI) tools continue to excel in performance and output, deepfakes—videos, photos, or audio recordings that appear real but have been manipulated or generated using AI—are fueling a new generation of cyberattacks. These sophisticated forgeries are being deployed for extortion, impersonation, market manipulation, and reputational sabotage, affecting everyone from individuals to multinational corporations.
According to the Federal Bureau of Investigation (FBI), more than 4.2 million fraud reports have been filed since 2020, resulting in $50.5 billion in losses, with deepfakes representing a growing share of these scams. The challenge is clear: as the technology advances and generates more realistic output, deepfakes become harder to detect. Individuals must pair heightened awareness of these risks with training in how to identify them, while professional liability practitioners need to monitor the capabilities, uses, and consequences of deepfakes to calibrate their risk tolerance and mitigation strategies accordingly.
Current Threat Landscape
Cyber incidents can take many forms: ransomware, extortion, business email compromise, wire fraud, social engineering, and domain name system attacks, to name a few. AI does not necessarily create new threats; rather, the widespread accessibility of AI tools amplifies and perfects existing threats at scale. This has lowered the barrier to entry for threat actors and increased their productivity, producing more sophisticated cyberattacks at higher volume. Phishing scams make this plain: in the past, a tell-tale sign of fraud or spam was poor grammar and spelling mistakes, but with AI those red flags are becoming obsolete, scams are becoming harder to detect, and the detection burden on IT teams and employees is growing.
In December 2024, the FBI issued a public warning on the criminal use of generative artificial intelligence to facilitate financial fraud. These threats come in a variety of forms, but regardless of form, they share common patterns: exploitation of trust relationships, financial theft under various pretexts, identity theft and impersonation, and manipulation of victims' emotions. Common threat categories include:
- Extortion & Blackmail: Video deepfakes can depict key employees or executives in compromising situations in order to manipulate or blackmail an organization. Criminals leverage the fear of reputational damage to force organizational compliance with specific demands, such as ransom payments or unauthorized access to sensitive information.
- Social Engineering & Impersonation: In 2024, a company experienced the unbelievable. A finance worker at a multinational company was tricked into paying out $25 million to criminals who used deepfake videos of real employees. The fraudsters impersonated company employees on a real-time conference call to persuade the worker to release the funds under false pretenses. In today's world, where professionals are encouraged to maintain a strong online presence through speaking engagements, interviews, and conferences, the threat of deepfakes is stronger than ever. The internet provides copious source material that criminals can feed into AI tools to mimic the voice, likeness, and even mannerisms of any person, producing increasingly lifelike synthetic renditions of real people.
- Financial Fraud & Market Manipulation: Fabricated announcements, doctored earnings reports, or altered speeches from top executives can cause upheaval within an organization, eroding investor confidence and potentially resulting in significant financial losses. The impact can be far-reaching, as synthetic information can mix with real information, causing public confusion and distrust. For example, on April 7, 2025, a misleading tweet on X regarding President Donald Trump's tariff policy demonstrated how quickly false information can trigger turmoil in the U.S. stock market.
In a world where cybersecurity threats have become prevalent and commonplace, reputational damage resulting from such incidents has, to a certain extent, become more understandable to the average consumer. The danger with deepfakes, given the advances in AI, is their believability. Synthetic fraudulent content and authentic content can coexist, creating confusion and misinformation that can linger even after the content is proven fake.
Threat Landscape Predictions
Regulators are taking a stance against synthetic content, with several states implementing deepfake regulation. Regulation alone, however, does not safeguard against the continued development of deepfakes as a criminal service. In the last several years, we have seen an explosion in "Ransomware-as-a-Service" ("RaaS"), wherein threat actors develop ransomware tools and then "sell" or "license" those tools to other threat actor groups to facilitate ransomware attacks against organizations. Successful cyber gangs will either take data from an organization's systems, encrypt the organization's systems so that the victim can no longer access its files, or do both: take the victim organization's information assets and encrypt its systems (double extortion). With the increased use of AI, we can expect similar as-a-service crime models to emerge, such as "Deepfake-as-a-Service" ("DaaS") or "Impersonation-as-a-Service" ("IaaS"), in which criminals provide turnkey deepfake creation tools to other threat actors, further lowering the barrier to entry and amplifying the scale and sophistication of these attacks. Where traditional RaaS attack vectors typically target an insured's systems, with IaaS or DaaS the attack vector is human trust.
Underwriting & Insurance Considerations
Traditional cyber insurance generally covers the financial losses an insured incurs from a cybersecurity incident, whether business interruption, crisis management costs, reputational harm, or damages arising out of third-party liability claims or regulatory investigations. Policies often require a failure in the insured's network security that results in unauthorized access or data exfiltration. With services such as IaaS and DaaS, losses may fall into crime/fraud or media/reputation coverage territory rather than cyber. For example, a deepfake impersonation could resemble phishing, but a policy's phishing definition may require an email compromise. Because the harm stems from fabricated authenticity rather than a system failure, insurers may contend that no covered "security failure," "privacy event," or "network incident" occurred, pushing the loss into narrower crime, fraud, or media liability coverages, or outside coverage altogether.
This shift reframes cyber risk from infrastructure resilience to human resilience. Traditional controls such as firewalls, EDR tools, and encryption do little to mitigate voice cloning, synthetic video misuse, or AI-driven impersonation. Underwriters evaluating this exposure may increasingly look beyond technical safeguards to governance and operational controls: executive identity verification protocols, multi-factor transaction approvals, deepfake detection capabilities, media monitoring, incident response playbooks for authenticity attacks, employee training against AI-enhanced social engineering, and crisis communications readiness.
Practical Takeaways
Education around AI-enabled deception should not sit in the “annual compliance module” bucket. From a professional liability perspective, employee susceptibility to deepfake or impersonation schemes is a loss driver, not a policy checkbox. Training should be framed as:
- A financial controls safeguard
- A fraud loss prevention tool
- A governance measure demonstrating reasonable care
Governance frameworks should incorporate methodologies to reduce employee vulnerability to DaaS and IaaS schemes. These may include:
- Verification protocols
- Multi-party authorization for sensitive and large transactions
- Voice/video skepticism training
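To make the first two controls concrete, the sketch below models a dual-authorization gate for large transfers in Python. All names (`Transaction`, `APPROVAL_THRESHOLD`, the `callback_verified` flag) are illustrative assumptions, not a real system or API; the point is that funds above a threshold cannot move on one person's say-so, and that each approval must rest on out-of-band verification rather than on a video or voice call that could itself be deepfaked.

```python
# Hypothetical sketch of a dual-authorization control for large transfers.
# Thresholds, class names, and the verification flag are illustrative
# assumptions; real controls live in treasury/ERP systems, not a script.
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 10_000   # above this amount, two approvers are required
REQUIRED_APPROVERS = 2

@dataclass
class Transaction:
    amount: float
    payee: str
    approvals: set = field(default_factory=set)  # IDs of verified approvers

    def approve(self, approver_id: str, callback_verified: bool) -> None:
        # Count an approval only after out-of-band verification (e.g., a
        # callback to a number on file). A live video call alone is not
        # sufficient, since the caller's face and voice can be synthetic.
        if not callback_verified:
            raise ValueError("approval requires out-of-band verification")
        self.approvals.add(approver_id)

    def may_execute(self) -> bool:
        # Small transactions need one verified approver; large ones need two.
        if self.amount <= APPROVAL_THRESHOLD:
            return len(self.approvals) >= 1
        return len(self.approvals) >= REQUIRED_APPROVERS

tx = Transaction(amount=25_000_000, payee="ACME Vendor Ltd")
tx.approve("cfo", callback_verified=True)
print(tx.may_execute())        # False: one approver cannot release the funds
tx.approve("controller", callback_verified=True)
print(tx.may_execute())        # True: second independent approval recorded
```

Had a control like this been in place in the $25 million conference-call fraud described above, the single deceived employee could not have released the funds alone, and the mandatory callback step would have forced verification outside the deepfaked channel.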
Organizations should also document these controls as part of their defensibility. Post-incident scrutiny focuses on what procedures were in place to prevent the incident from occurring.
Organizations that treat training, governance, and policy review as core resilience strategy — not compliance overhead — will be in a stronger position legally and operationally.
Meet the Authors

Joshua A. Mooney is the head of U.S. cyber and data privacy for Kennedys Law, based in the Philadelphia office. He can be reached at Joshua.Mooney@kennedyslaw.com.
Alecsandra Dragus is an associate at Kennedys Law, based in New York. She can be reached at Alecsandra.Dragus@kennedyslaw.com.
Ashley Pusey is an associate at Kennedys Law, based in New York. She can be reached at Ashley.Pusey@kennedyslaw.com.
News Type
PLUS Blog
Business Line
Cyber Liability, Professional Liability