Two jurisdictions have been out front in passing laws that seek to address the potential risks of artificial intelligence (“AI”) – California and the European Union (“EU”). In a globalized economy, insurers should be aware of the trend lines emerging from these laws and how they may impact risks associated with insurance operations and potential claims. A number of new California laws regarding the use of AI go into effect in 2025 or later. The EU’s Artificial Intelligence Act became effective on August 1, 2024.

Both California and the EU have sought to formally define AI. According to California’s AB-2885, AI means “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.” The EU’s AI Act defines an “AI system” as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

The California and EU legal definitions of AI similarly refer to “input” and “output.” “Input” speaks to the datasets used in the development of AI tools, whereas “output” refers to the results generated by the AI system’s model. The California legislature and the EU Parliament have, to different degrees, addressed both input and output.

California

For a number of reasons, it is not surprising that California has been proactive in legislating the use of AI. AI technology has largely emerged from California’s technology sector. The potential impact of AI has also been a hot topic in the entertainment industry and creator economy. Governor Gavin Newsom signed a number of bills into law aimed at addressing “deep fakes” (audio and visual media manipulated by AI to appear real), explicit material, and misinformation.

Governor Newsom also signed laws supported by artists and entertainment industry unions. Insurers in the professional liability space may be particularly interested in the following:

  • Training Data Transparency (AB-2013, eff. Jan. 1, 2026): Developers of publicly available generative AI systems will be required to share “[a] high-level summary of the datasets used in the development of the [AI] system or service.” Notably, such disclosures would include whether the developer used copyrighted or trademarked material, whether the developer purchased or licensed the datasets, and whether the datasets include personal information. Developers must also disclose other descriptions of the datasets that would help users assess the reliability of the outputs generated.
  • AI Transparency Act (SB-942, eff. Jan. 1, 2026): AI providers will be required to provide users with AI detection tools at no cost. Additionally, this bill contains disclosure requirements pertaining to AI-generated content, including latent metadata disclosures. Violations can result in civil penalties of $5,000 per violation, although the bill does not provide for any private right of action.
  • Health Care Services (AB-3030, eff. Jan. 1, 2025): Health care providers must disclose the use of AI in generating communications made to patients pertaining to clinical information. In addition to making a disclaimer, providers must inform patients how they can contact a human.
  • Privacy (AB-1008, eff. Jan. 1, 2025): This amendment to the California Consumer Privacy Act (“CCPA”) seeks to extend existing privacy protections regarding data processing and use to AI systems.
  • Employment and Intellectual Property (AB-2602 and AB-1836, eff. Jan. 1, 2025): These bills were of particular interest in the entertainment industry and were supported by SAG-AFTRA. Under AB-2602, labor agreements pertaining to the use of a person’s digital replica are unenforceable unless certain requirements are met. Most notably, the requirements include both legal and union representation in the negotiation of such an agreement. Under AB-1836, protections pertaining to the use of a person’s digital replica extend after the person’s death.

The suspense in the business and legal communities was particularly palpable regarding the fate of SB-1047, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The bill targeted very large AI models with extraordinary computing power and carried the potential for civil penalties. Ultimately, Governor Newsom vetoed SB-1047. In his comments, Governor Newsom appeared to favor an approach based on the risks posed by the use of AI in “critical decision-making,” rather than on model size or cost.

The EU

The EU is a highly regulated environment for technology companies compared to the US. The AI Act is no exception, applying broadly to those involved in any aspect of the AI ecosystem with a link to the EU market.

The AI Act classifies AI systems according to four risk levels: (1) unacceptable; (2) high; (3) limited; and (4) minimal. The unacceptable risk category, which is prohibited outright, covers AI systems considered to pose a threat to people through cognitive behavioral manipulation, social scoring, biometric categorization, or facial recognition. The high risk category is relevant to the insurance industry, as it includes pricing and risk assessment by life and health insurers. Limited risk AI systems remain subject to transparency and disclosure requirements, such as informing users that they are interacting with AI or that content is AI-generated, while minimal risk systems face few, if any, obligations.

Conclusion

The risks of AI that the California legislature and EU Parliament have sought to address indicate potential areas of risk for insurers leveraging AI tools in their own operations or assessing claims pertaining to the use of AI. Policies often contain exclusions for intentional acts that include violations of law. As the legal landscape regarding AI continues to evolve, insurers should stay up to date on what does and does not constitute a legal use of AI.

Lawmakers are faced with the challenge of addressing potential AI risks without stifling technological innovation. In the U.S., California appears to be at the forefront, establishing the floor for AI regulation among state legislatures. California’s recent slate of AI-related laws has a fair amount in common with the EU’s AI Act, indicating to businesses and insurers where AI regulation, including at the U.S. federal level, may be headed.

Meet the Authors

Natalie Limber

Natalie Limber is counsel in Dentons’ Los Angeles office. She has a broad range of combined law firm and in-house experience with a focus on serving the legal and regulatory needs of the insurance industry. Prior to joining Dentons, she held roles in claims and corporate legal with a large national P&C carrier, including leading the corporate internal investigation team and serving as general counsel to the company’s Insurtech business unit. She was previously a partner at a law firm in Chicago representing insurers in matters around the country with a focus on excess, professional liability, management liability and commercial lines. Her current practice is focused on defending insurers in bad faith litigation across policy lines.

Justine Margolis

Justine Margolis is a member of Dentons’ Litigation and Dispute Resolution practice. Justine represents companies facing class actions and complex commercial litigation, and has a strong track record of success in trial and appellate courts. She has experience defending insurers and financial services companies in complex cases in both state and federal courts. Her practice covers a wide variety of substantive areas, including securities fraud, consumer fraud and deceptive trade practices, coverage disputes, and breach of contract. Justine also has specialized experience defending companies against allegations regarding actuarial negligence and/or fraud.

Frédérique de La Chapelle

Frédérique de La Chapelle is a Partner in the Dentons Paris office and Head of the Europe Insurance group. She is renowned in regulatory and dispute resolution matters in the insurance and reinsurance sectors. Frédérique assists domestic and international clients with all questions relating to, inter alia, insurance regulation, transactions, partnership agreements, transactional insurance, the structuring of insurance products (including affinity insurance), the review and adaptation of insurance policies, the review and negotiation of reinsurance agreements, licensing, and portfolio transfers. Frédérique’s clients include insurers, reinsurers, CAC40 companies and brokers.

News Type

PLUS Blog
