Generative AI: How Existing Regulation May Apply to AI-Generated Harmful Content
Among the many open questions about large language models (LLMs) and generative artificial intelligence (AI) are the legal risks that may result from AI-generated content. While AI-specific regulation remains pending and continues to develop in jurisdictions around the world, the following article provides a high-level summary of illegal and harmful content risks under existing law, as well as mitigations that companies may wish to consider when developing baseline models and consumer-facing generative AI tools. (For copyright and intellectual property (IP)-related issues, see Perkins Coie Updates.)

Samuel Klein
Samuel Klein is a graduate of the American University Washington College of Law, where he was a member of the American University Law Review.
The Latest on the EU’s Proposed Artificial Intelligence Act
The fast-developing innovations brought by generative artificial intelligence (AI) are hastening calls from industry and government to consider new regulatory frameworks. The EU was in the process of implementing its AI Act, first proposed on April 21, 2021 (as we previously summarized), before generative AI chatbots were widely released. While the EU’s AI Act…
FCC Proposes To Strengthen Data Breach Notification Rules for Telecom Operators
In response to the increased frequency and severity of data breaches in the telecommunications industry, the Federal Communications Commission recently published a Notice of Proposed Rulemaking that seeks to strengthen and broaden its notification rules for breaches arising from the unauthorized disclosure of customer proprietary network information (CPNI).