On February 1, 2024, the Federal Trade Commission announced a complaint and proposed consent order against Blackbaud, Inc. concerning a 2020 data security incident that included a ransomware demand and payment. According to the FTC’s complaint, Blackbaud’s allegedly unfair and misleading conduct included not only deficient data security practices but also delayed and inaccurate notice to its business customers about the breach, including deceptive statements about the scope and severity of the breach in its initial notice to those customers.

The FTC highlighted that this case is the first time it has brought standalone Section 5 unfairness claims arising out of the alleged failure to (1) implement and enforce reasonable data retention practices and (2) accurately communicate the severity and scope of the breach.

Read the full Update here.

Safety risk assessments are becoming a preferred regulatory tool around the world. Online safety laws in Australia, Ireland, the United Kingdom, and the United States will require a range of providers to evaluate the safety and user-generated content risks associated with their online services.

While the specific assessment requirements vary across jurisdictions, the common thread is that providers will need to establish routine processes to determine, document, and mitigate safety risks resulting from user-generated content and product design. This Update offers practical steps for providers looking to develop a consolidated assessment process that can be easily adapted to meet the needs of laws around the world.

Read the full Update here.

California Attorney General Rob Bonta announced an investigatory sweep into popular streaming apps and devices, timed to coincide with Data Privacy Day on January 28. The California Attorney General’s Office explained that it is sending letters to such streaming services alleging a failure to comply with the requirement to offer an easy mechanism to opt out of the sale or sharing of personal information under the California Consumer Privacy Act (CCPA).

Continue Reading California Announces Sweep on Streaming Services and More Enforcement To Come

Less than 10 days after announcing its complaint and proposed settlement against location data broker X-Mode, the Federal Trade Commission (FTC) continued its recent spate of enforcement in the location and sensitive data space by announcing another enforcement action and proposed settlement with InMarket Media, Inc. (InMarket).

Continue Reading The FTC Continues its Focus on Location and Sensitive Data

On January 9, 2024, the Federal Trade Commission (FTC) announced its complaint and proposed settlement with location data broker X-Mode Social, Inc. and its successor Outlogic, LLC (collectively X‑Mode). Under the order, X-Mode will be prohibited from sharing or selling any “sensitive location data”—location data that identifies visits to sensitive locations such as medical facilities, religious organizations, and other locations that allow potentially sensitive inferences. The action reflects the FTC’s continued focus on location data, particularly data that reflects potentially sensitive information, and is similar to the case the FTC is currently litigating against Kochava regarding its sales of precise geolocation data.

Continue Reading FTC Cracks Down on Collection and Sharing of Sensitive Location Data With Proposed X-Mode Settlement

On December 19, 2023, the Federal Trade Commission announced its first enforcement action alleging that discriminatory use of artificial intelligence is an unfair practice under Section 5 of the FTC Act.

The enforcement action signals that the FTC is using and will continue to use its Section 5 unfairness authority to require reasonable safeguards on the use of automated tools, including those relying on facial recognition and other biometric technology, to ensure their accuracy and absence of bias. What is more, the case provides the most concrete guidance from the FTC to date regarding the measures that the FTC would like to see companies take to help ensure that AI systems operate accurately and without bias.

Read the full Update here.

The Federal Trade Commission gave privacy lawyers a long-awaited Christmas gift on December 20, 2023: its notice of proposed rulemaking (NPRM) to amend the Children’s Online Privacy Protection Act (COPPA) Rule. The NPRM follows a review of the COPPA Rule initiated by the FTC four years ago and the submission of over 175,000 public comments. The FTC last modified the COPPA Rule in 2013.

While the FTC declined to propose a number of changes to the COPPA Rule in response to public comments, the NPRM nevertheless sets forth a host of new requirements that, if adopted, would impose substantial new obligations on operators subject to the COPPA Rule.

Read the full Update here.

Just a few years ago, the legal landscape governing health-related personal information was relatively simple: Protected Health Information was regulated under the Health Insurance Portability and Accountability Act (HIPAA), a discrete set of rules that applies to a specified set of healthcare plans, clearinghouses, and providers. While narrowly targeted statutes governed particular types of health data and the Federal Trade Commission maintained broad oversight over personal information, data outside those regimes that could reveal or suggest a health condition or treatment was largely free of regulatory scrutiny and litigation risk.

Today, by contrast, the privacy of health-related personal information is under close scrutiny by the FTC, the U.S. Department of Health and Human Services’ Office for Civil Rights, and state regulators.

Read the full Update here.

The White House recently issued its most extensive policy directive yet concerning the development and use of artificial intelligence through a 100-plus-page Executive Order (EO) titled “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” and an accompanying “Fact Sheet” summary.

Following in the footsteps of last year’s Blueprint for an AI Bill of Rights and updates to the National Artificial Intelligence Research and Development Strategic Plan published earlier this year, the EO represents the most significant step yet from the Biden administration regarding AI. Like these previous efforts, the EO acknowledges both the potential and the challenges associated with AI while setting a policy framework aimed at the safe and responsible use of the technology, with implications for a wide variety of companies.

Read the full Update here.

Last week, the UK’s Online Safety Bill received royal assent and became law. With this development, Ofcom, the regulator for the new Online Safety Act, has published a roadmap to explain how the act will be implemented over the next two years.

Ofcom has made it clear that it will move quickly to implement the act and develop codes of practice in three phases: (1) duties regarding illegal content; (2) duties regarding the protection of children; and (3) additional duties for certain designated services to provide transparency and empower users. This Update offers an overview of the scope, key requirements, and expected timeline for each of the three phases.

Read the full Update here.