AI has become a fixture of the zeitgeist: algorithms matching your search history, ChatGPT, deepfakes, and a rapid, widely covered increase in cybercrime around the world. If you're in insurance, and cyber insurance in particular, you're becoming increasingly aware of this exposure. AI falls broadly into two categories: machine learning, the predictive algorithms behind things like search and recommendations, and generative AI, which is built on deep learning and was popularized by tools such as ChatGPT. As cyber insurance underwriters and vendors, we are on the front lines of AI's evolution in the cyber market. Some want to call this rising exposure an emerging risk, but I believe AI is an evolution of the cyber insurance product and market. This evolution will affect our policies, the way we underwrite, the vendors we use, and the risk mitigation process.

Two key exposures we consider in cyber underwriting are data and privacy, and AI affects both critically: it changes the way we underwrite and it changes the landscape of those inherent exposures. The overwhelming demand for and adoption of AI is forcing our data and cyber governance, as well as privacy regulation, to evolve; it is not emerging as a new exposure or risk.

Companies using AI to conduct business are going to be held responsible for their use of this technology. All companies are held responsible for the care and custody of the data they hold, and the use and interpretation of AI applications will be no exception. We will most likely begin to see disclaimers and opt-out requirements for AI, much like those under the Illinois Biometric Information Privacy Act (BIPA) and related privacy regulations, with regulatory bodies requiring companies that use AI to notify users of its intended uses and provide the option to opt out. A current challenge in the privacy regulation space is that there is no legal definition of AI.

AI is a monumental tool propelling all industries forward, but underwriters need to properly understand the exposures, and companies need to verify they are protecting their data properly. Data has taken on increasing importance, both as an asset and as a liability. Ensuring that proper controls are in place, and that care is taken at all times, is critical to responsible usage and reduced risk.

How AI applies to, or triggers, a cyber policy is also not a novel question. Claims in which AI is utilized are an evolution of the claims we have been seeing in this space throughout the maturation of the product. Similarly, policy language as it stands applies coverage to claims where AI is deployed by threat actors. AI has added complexity and sophistication to the threat landscape and appears more often; this will surely crescendo and evolve further across every loss type, as we are already seeing.

At a high level, one of the core impacts of advancements in generative AI on business security is that, as in other areas of business, AI now allows people with no experience to quickly learn and do things that once required extensive prior knowledge, experience, and expertise. Generative AI allows much faster acquisition and application of knowledge, which in turn allows much faster iteration and learning.

For enterprise security, this means the knowledge barriers have been lowered, both for becoming an attacker and for existing attackers to learn, apply, and iterate on new and more sophisticated attack types. For example, with some simple prompt engineering, an attacker can use ChatGPT to help identify potential vulnerabilities in an organization, draft plans to exploit them, and produce the actual materials needed to do so. While the output likely won't be perfect, it will help an attacker learn and improve far faster than they ever could before. This is especially true for social engineering and phishing attacks across both text and voice communication channels. The end result is that over time we are likely to see increases in the total volume of attacks targeting organizations, in the sophistication of those attacks, and in their effectiveness.

When it comes to social engineering attacks, AI is going to dramatically increase the scale and personalization available to attackers. AI can be used to identify targets, gather personal information, and hyper-personalize call scripts, email copy, and other materials used to manipulate employees. Rapid advancements in voice cloning technology, combined with the accessibility of voices via social media, public presentations, and voicemail boxes, mean that employees across organizations will increasingly be targeted with deepfakes of their leadership across voice and video communication channels.

While voice phishing, fraud, and social engineering are not new, the inability to trust a familiar voice is a dangerous new phenomenon that will likely increase the effectiveness of these attacks. These voice-based attacks will also impact business email compromise (BEC), as they are increasingly used in conjunction with traditional email phishing, either to prime targets ahead of a phishing email or to reinforce the perceived legitimacy of an existing campaign. An example that grabbed headlines this spring was the attack on engineering firm Arup, which lost $25 million after live video deepfakes were used to reinforce a phishing email. While fundamentally still a case of traditional social engineering and phishing, the tools available via generative AI greatly increased the attack's effectiveness.

DDoS attacks are another threat type where this sophistication is playing out in real time, and it will only develop further. A consistent theme is AI embedded within botnet controllers, continuously analyzing incoming traffic patterns and the effectiveness of the mitigation measures deployed by targeted systems and networks. These AI-powered botnets can dynamically adapt their attack strategies: machine learning lets them mimic legitimate network behavior, automate vulnerability exploitation, and, through continuous feedback loops, respond to mitigation measures in order to evade detection and prolong service disruptions. Unlike human operators, they exhibit a level of automation, adaptability, and scalability that allows them to learn, evolve, and innovate their attack vectors in real time. This makes DDoS attacks more complex and more difficult to mitigate, and it underscores the need for equally advanced, AI-driven defenses.
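To make the defensive side of this concrete, below is a minimal Python sketch of the statistical traffic baseline that simple DDoS detection starts from: it learns a rolling view of recent request rates and flags samples that deviate sharply. This is an illustration only, not any vendor's product; the class name, window size, and threshold are assumptions chosen for readability.

```python
from collections import deque
from statistics import mean, stdev

class TrafficBaseline:
    """Rolling baseline of requests-per-second that flags sharp spikes.

    Illustrative only: real DDoS mitigation weighs many more signals
    (source diversity, protocol mix, behavioral fingerprints).
    """

    def __init__(self, window: int = 60, z_threshold: float = 4.0):
        self.samples = deque(maxlen=window)  # last `window` per-second counts
        self.z_threshold = z_threshold

    def observe(self, requests_per_second: float) -> bool:
        """Record one sample; return True if it looks anomalous."""
        anomalous = False
        if len(self.samples) >= 10:  # need some history before judging
            mu = mean(self.samples)
            sigma = stdev(self.samples) or 1.0  # guard against zero variance
            anomalous = (requests_per_second - mu) / sigma > self.z_threshold
        self.samples.append(requests_per_second)
        return anomalous

# Example: steady traffic around 100 req/s, then a sudden flood.
detector = TrafficBaseline()
normal = [100, 102, 98, 101, 99, 103, 97, 100, 101, 99, 100, 98]
for rps in normal + [450]:
    if detector.observe(rps):
        print(f"Possible attack traffic: {rps} req/s")
```

The limitation is exactly the point of the paragraph above: an adaptive botnet that ramps up slowly and mimics legitimate traffic patterns can stay inside a static baseline like this one, which is why defenders are turning to ML-driven detection that looks well beyond raw volume.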

While the attacks listed above are not the only ways AI will impact the threat landscape, they exemplify how AI is increasing the sophistication and effectiveness of the attacks we see. When it comes to coverage, however, cyber policy language is relatively standard across the market in terms of what triggers a loss. All of the attack types listed above would be categorized as Network Security or Cyber Crime incidents. Language on most forms is quite broad and does not include any exclusionary language for AI; these claims are coming in and being handled daily without claim denials due to AI. The cyber insurance product has evolved over the years as the main exposures have changed: new coverages have been added and policy language has been adjusted. I do not see this as a time when policy language needs to change. The forms as they stand seem to trigger coverage appropriately; whether or not AI is used in a cyber incident is not material to the cause of loss. If this were truly an emerging risk, I believe we would see changes in policy language and possibly a new coverage coming to market.

While AI is changing the world around us, being adopted in every industry, and affecting every job market, with insurance no exception, the way we underwrite, even the way we underwrite cyber, will not change (though AI can assist us in our underwriting). We are still underwriting exposures and controls, and while AI has evolved both, it is just the most recent evolution of this ever-changing product. With this prodigious technology, we are not seeing a change in the types of losses but an increased sophistication and complexity in how they are executed. This requires an equivalent or stronger reaction, along with a proactive approach on the risk mitigation front. The good news is that the evolution of generative AI has also led to the creation of security products and vendors that help businesses combat many of the threats it poses. For example, vendors like DeepTrust can help businesses defend voice communication channels against deepfakes and AI-powered social engineering, fraud, and voice phishing. Insurers can partner with vendors like these to help ensure that their customers have quality options available to reduce their risk.

This is also a time for insurers and vendors to prioritize using AI to enhance their underwriting, claims, and risk mitigation efforts. This is at the core of our philosophy at Cowbell. With our Closed Loop Framework, we feed our loss, underwriting, and Cowbell Factor data back through our data science, underwriting, and claims departments. This continuously improves each department's datasets, making them stronger and increasing the strength of our organization as a whole. Just as AI is trained via continuous learning and improvement, we too can leverage better data to help our teams continuously improve, write better, more profitable risks, and strengthen our mitigation efforts.
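As a loose illustration of the closed-loop idea, and emphatically not Cowbell's actual models or Cowbell Factor methodology, the hypothetical Python sketch below shows how claim outcomes can feed back into an underwriting model's control weights. Every feature name, weight, and learning rate here is an assumption made for illustration.

```python
from dataclasses import dataclass, field
import math

@dataclass
class RiskModel:
    # Per-control weights: a positive weight means the control
    # is credited with reducing predicted loss probability.
    weights: dict = field(default_factory=lambda: {"mfa": 0.5, "backups": 0.3, "edr": 0.4})
    bias: float = -1.0
    learning_rate: float = 0.05

    def predict_loss_probability(self, controls: dict) -> float:
        """Logistic estimate of loss probability given which controls are in place."""
        score = self.bias - sum(w for name, w in self.weights.items() if controls.get(name))
        return 1 / (1 + math.exp(-score))

    def feed_back_claim(self, controls: dict, had_loss: bool) -> None:
        """Closed loop: one claim outcome nudges the weights (online gradient step)."""
        error = (1.0 if had_loss else 0.0) - self.predict_loss_probability(controls)
        self.bias += self.learning_rate * error
        for name in self.weights:
            if controls.get(name):  # a present control absorbs part of the signal
                self.weights[name] -= self.learning_rate * error

model = RiskModel()
insured = {"mfa": True, "backups": True}
print(round(model.predict_loss_probability(insured), 3))  # quote-time estimate
model.feed_back_claim(insured, had_loss=True)             # loss despite controls
print(round(model.predict_loss_probability(insured), 3))  # estimate shifts upward
```

Each claim outcome nudges the model, so the next quote reflects what the book has actually experienced; run at portfolio scale across departments, that is the same continuous-learning pattern described above.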

Meet the Authors

Emma Fekkas

Emma is currently the RVP of Underwriting for the East region at Cowbell, handling Cyber and Tech E&O risks up to $250M in revenue. She is an active APIW and PLUS member, holds an MBA in Finance and Marketing, and has over 10 years of industry experience handling Cyber and Professional Lines at both traditional carriers and MGAs, on risks ranging from SMB up to multi-billion-dollar revenue.

 

Noah Kjos

Noah is Co-Founder and COO at DeepTrust, where he is passionate about helping security teams defend voice communication channels from AI-powered social engineering, fraud, and phishing. Alerted to the threat of deepfakes after a scam call targeted a family member, Noah has spent extensive time exploring the security implications of generative AI and voice cloning. Prior to DeepTrust, Noah served as Head of Operations at HomeFlow.

 
