Just a few years ago, the legal landscape governing health-related personal information was relatively simple: Protected Health Information was regulated under the Health Insurance Portability and Accountability Act (HIPAA), a discrete set of rules that applies to a specified set of health plans, healthcare clearinghouses, and healthcare providers. While narrowly targeted statutes governed particular types of health data and the Federal Trade Commission (FTC) maintained broad oversight over personal information, data outside these frameworks that could reveal or suggest a health condition or treatment was largely free of regulatory scrutiny and litigation risk.

Today, by contrast, the privacy of health-related personal information is under close scrutiny by the FTC, the U.S. Department of Health and Human Services’ Office for Civil Rights, and state regulators.

Read the full Update here.

The White House recently issued its most extensive policy directive yet concerning the development and use of artificial intelligence: a 100-plus-page document titled “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (EO), along with an accompanying “Fact Sheet” summary.

Following in the footsteps of last year’s Blueprint for an AI Bill of Rights and updates to the National Artificial Intelligence Research and Development Strategic Plan published earlier this year, the EO represents the most significant step yet from the Biden administration regarding AI. Like these previous efforts, the EO acknowledges both the potential and the challenges associated with AI while setting a policy framework aimed at the safe and responsible use of the technology, with implications for a wide variety of companies.

Read the full Update here.

Last week, the UK’s Online Safety Bill received royal assent and became law. With this development, Ofcom, the regulator for the new Online Safety Act, has published a roadmap to explain how the act will be implemented over the next two years.

Ofcom has made it clear that it will move quickly to implement the act and develop codes of practice in three phases: (1) duties regarding illegal content; (2) duties regarding the protection of children; and (3) additional duties for certain designated services to provide transparency and empower users. This Update offers an overview of the scope, key requirements, and expected timeline for each of the three phases.

Read the full Update here.

Under an amendment to the Gramm-Leach-Bliley Act Safeguards Rule announced on October 27, 2023, the Federal Trade Commission will require a broad range of nonbank financial institutions to notify the FTC of the unauthorized acquisition of unencrypted, personally identifiable, nonpublic financial information involving 500 or more consumers.

The new notification obligation will be a significant change for financial institutions covered by the FTC’s Safeguards Rule: the universe of reportable incidents is vastly broader than that covered by existing state or federal requirements, notification must be made quickly, and such reports will generally be made public by the FTC.

Read the full Update here.

Overview

California Governor Gavin Newsom recently signed AB 1394, a law that imposes new obligations on social media platforms to prevent and combat child sexual abuse and exploitation. The law is scheduled to take effect on January 1, 2025, and imposes two primary requirements on social media platforms: (1) a notice-and-staydown requirement for child sexual abuse material (CSAM) and (2) a prohibition against “knowingly facilitat[ing], aid[ing], or abet[ting] commercial sexual exploitation,” as defined by the statute. A platform that violates the law may be liable to the reporting user for actual damages sustained and statutory damages of up to $250,000 per violation.

The law allows a civil action to be brought by, or on behalf of, a minor who is a victim of commercial sexual exploitation, with potential damages of up to $4 million per violation. It also includes a safe harbor provision for platforms that conduct safety audits.

Continue Reading California Law Requires Platforms To Take More Action Against Child Sexual Exploitation

Among the many open questions about large language models (LLMs) and generative artificial intelligence (AI) are the legal risks that may result from AI-generated content. While AI-specific regulation remains pending and continues to develop in jurisdictions around the world, this article provides a high-level summary of illegal and harmful content risks under existing law, as well as mitigations that companies may wish to consider when developing baseline models and consumer-facing generative AI tools. (For copyright and intellectual property (IP)-related issues, see Perkins Coie Updates.)

Continue Reading Generative AI: How Existing Regulation May Apply to AI-Generated Harmful Content

After a flurry of legislative activity across the United States related to kids’ privacy and safety online, federal courts in Arkansas and California have in recent weeks enjoined two notable state laws. A federal court in Arkansas preliminarily enjoined the Arkansas Social Media Safety Act (AR SMSA) on August 31, the day before the statute was scheduled to take effect for in-scope social media platforms. The U.S. District Court for the Western District of Arkansas found that the plaintiff, NetChoice, LLC, is likely to succeed on the merits of its constitutional challenges.

Less than three weeks later, on September 18, the U.S. District Court for the Northern District of California also preliminarily enjoined California’s Age-Appropriate Design Code (CA AADC), holding that NetChoice is likely to succeed in showing that 10 CA AADC requirements violate the First Amendment.

Continue Reading Federal Courts Preliminarily Enjoin Arkansas Social Media Safety Act and California Age-Appropriate Design Code

The U.S. Department of Homeland Security announced new policies on September 14, 2023, regarding its use and acquisition of artificial intelligence technologies, including facial recognition and face capture technologies. DHS also appointed Eric Hysen as the department’s first chief AI officer.

Highlighting the potential “privacy, civil rights, and civil liberties” issues associated with the use of AI technologies by the department, Secretary of Homeland Security Alejandro N. Mayorkas explained that DHS must harness AI “effectively and responsibly.” To that end, the new policies, which were developed by the DHS Artificial Intelligence Task Force, focus on two areas: (1) acquisition and use of AI technologies and (2) use of facial recognition and face capture technologies.

Read the full Update here.

The Supreme Court of New Jersey unanimously held that a wiretap order, rather than a search warrant, is required to seek “prospective electronically stored information” from Meta Platforms, Inc., the provider of the Facebook and Instagram services. Facebook, Inc. v. State, 254 N.J. 329, 341 (2023). The court reasoned that “the nearly contemporaneous acquisition of electronic communications … is the functional equivalent of wiretap surveillance and is therefore entitled to greater constitutional protection.” Because wiretap orders are subject to heightened requirements, they provide greater privacy protections for users.

Continue Reading NJ Supreme Court: Wiretap Order Required for Prospective Online Communications

The UK Online Safety Bill was passed by Parliament earlier this week and is expected to become law soon through royal assent. The Online Safety Act (UK OSA) will impose a series of sweeping obligations, including risk assessment, content moderation, and age assurance requirements, on a variety of online services that enable user-generated content, including but not limited to social media and search providers.

Among the most notable aspects of the UK OSA are its “duties of care.” The law will impose a series of affirmative obligations to assess and mitigate safety risks.

Continue Reading UK Parliament Passes a Sweeping and Controversial Online Safety Bill