Last week, the comment period closed on the California Privacy Protection Agency’s (CPPA) latest version of the draft regulations implementing the California Privacy Rights Act (CPRA) amendments to the California Consumer Privacy Act (CCPA) (Revised Regs). The Revised Regs were first released, with modifications and an Explanation of Modified Text of Proposed Regulations, at the end of October. Shortly thereafter, the CPPA released the current version of the Revised Regs, which, compared to the initial draft regulations (Initial Draft Regs), include many substantive modifications to key compliance areas.

Continue Reading One Step Closer: California Privacy Protection Agency Reviews Comments for CCPA Regulations

This is the second in a series of updates addressing the bilateral data access agreement (Data Access Agreement or agreement) between the United States and the United Kingdom under the Clarifying Lawful Overseas Use of Data Act (CLOUD Act). The agreement, which entered into force on October 3, 2022, is designed to facilitate cross-border criminal investigations involving communications data. This Update focuses on what the agreement says and the processes it establishes for service of process.

Click here to read the full update.

California and New York recently passed laws that seek to change how social media platforms and networks design and report their content moderation practices. The New York law will require a hateful conduct policy and reporting mechanism starting in December 2022. The California laws will impose content policy and transparency requirements starting in 2023 and 2024.

Service providers that may be subject to these new state laws can find an overview of their requirements and some practical steps to consider in this Update.

Click here to read the full update.

The Office of Science and Technology Policy (OSTP), a part of the Executive Office of the President, recently published a white paper titled “The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” (Blueprint). This Blueprint offers a nonbinding framework for the responsible development of policies and practices around automated systems, including artificial intelligence (AI) and machine learning (ML) technologies.

Background

The Blueprint comes on the heels of bipartisan executive orders seeking to balance the benefits and potential risks of AI and ML technologies and to direct executive agencies and departments to develop policies around the responsible use of AI and ML. In one of his final executive orders, former President Trump required federal agencies to adhere to a set of principles when deploying AI technologies, with the intention of fostering public trust in AI. And in one of his first executive orders, President Biden directed executive departments and agencies to address systemic inequities and embed fairness in decision-making processes. Some U.S. government agencies, including the Government Accountability Office (GAO) and the U.S. Department of Energy (DOE), have already developed frameworks for identifying and mitigating risks in the use of AI technology.

Scope and Purpose

The Blueprint is a nonbinding framework designed “to support the development of policies and practices that protect civil rights and promote democratic values” with automated systems. The OSTP emphasizes that the Blueprint is not a binding regulation. Rather, it outlines five guiding principles that the OSTP asserts should be applied to any automated system that could meaningfully affect civil rights, equal opportunity, or access to critical resources and services. The Blueprint also includes a handbook that provides detailed guidance on how to implement these principles in practice.

The Blueprint does not specifically define or limit itself to “artificial intelligence.” Instead, it places any “automated system” under its scope, which is broadly defined as “any system, software, or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities.” The OSTP notes the definition explicitly includes, but is not limited to, AI, ML, and other data processing techniques. By its terms, the definition likely includes a much broader set of technologies that are not traditionally considered AI or ML.

The Five Principles

The OSTP Blueprint identifies five principles that should guide the “design, use, and deployment” of automated systems, which are summarized as follows:

  1. Safe and Effective Systems. Automated systems should be developed in consultation with diverse stakeholders to help identify risks. They should be designed to protect from foreseeable harms and undergo both pre-deployment testing and ongoing monitoring to mitigate potential harmful outcomes.
  2. Algorithmic Discrimination Protections. Developers and deployers should take steps to protect the public from algorithmic discrimination by their automated systems. These efforts should include proactive equity assessments, use of representative data, and ensuring accessibility in the design process. After the system is deployed, there should be ongoing disparity testing and mitigation.
  3. Data Privacy. Automated systems should have built-in data privacy protections that give users agency over how their data is used. Only data that is strictly necessary for the specific context should be collected, and any data collection should conform to users’ reasonable expectations. Developers and deployers should seek consent regarding collection and use of data using consent requests that are brief and easily understandable. Sensitive data, including health, work, education, criminal, and children’s data, should receive enhanced protections. Surveillance technologies should be subject to heightened oversight, and continuous surveillance and monitoring should not be used in education, work, housing, or in other contexts where their use is likely to limit rights, opportunities, or access.
  4. Notice and Explanation. The public should be informed of where and how an automated system is being used and how it affects outcomes. Automated systems should come with publicly accessible, plain language documentation that provides notice that an automated system is being used and describes the function of the system, the purpose of the automation, the entity responsible for the system, and the outcomes.
  5. Human Alternatives, Consideration, and Fallback. Where appropriate, users should be able to opt out from automated systems in favor of a human alternative. Automated systems should be connected to a fallback and escalation process with human consideration in the event that an automated system produces an error or fails, or an affected party wants to contest an automated decision.

For all five principles, the Blueprint emphasizes the use of independent evaluations and public reporting wherever possible to confirm adherence to the principles.

The OSTP has also published guidance on how to apply the Blueprint, as well as a 41-page “Technical Companion,” which explains, for each of the five principles, (1) why the principle is important, (2) what should be expected of automated systems, and (3) how these principles can move into practice.

Foreshadowing the Future of AI Regulation

The Blueprint is the latest in a series of guidelines and frameworks recently published by various government entities and international organizations concerning the safe use of AI technologies, including the European Union’s Ethics Guidelines for Trustworthy AI, the Organisation for Economic Co-operation and Development’s AI Principles, and the GAO’s AI Accountability Framework.

While these efforts have not yet yielded binding regulations or obligations, the growing focus on mitigating the potential harms of AI by both government entities and nongovernmental organizations suggests that future regulation is likely, particularly if an industry is unable to self-regulate against the potential harms. These guidelines and frameworks also portend the types of regulations that the future may bring. For example, they suggest that new laws, agency guidance, and industry policies could all be used to effectuate the goals of the Blueprint. The Artificial Intelligence, Machine Learning & Robotics industry group at Perkins Coie will continue to monitor changes to the AI regulatory landscape to better help clients navigate potential legal and regulatory issues during the development, testing, and launch of AI and ML products and services.

Introduction

While candy sales skyrocketed and trick-or-treaters donned costumes this past Halloween weekend, the California Privacy Protection Agency (Agency) Board was busy holding its first public meeting since September. Over the course of the two-day meeting on Friday and Saturday, October 28 and 29, the Agency welcomed new board member Alastair Mactaggart and discussed and debated numerous provisions of the Modified Draft Proposed California Consumer Privacy Act Regulations (Draft CCPA Regulations). Most importantly, it unanimously passed a motion directing the Agency staff to take all steps necessary to prepare and notice modifications to the text of the proposed regulatory amendments for an additional 15-day comment period.

Continue Reading This is Not a Drill: CPPA Gets Closer to Finalizing Certain Privacy Regulations

Amazon and Microsoft won summary judgment in two class action lawsuits filed in the U.S. District Court for the Western District of Washington asserting violations of the Illinois Biometric Information Privacy Act (BIPA). While the facts of the cases are unique—the defendants received a data set that was developed by a third party to help reduce bias in facial recognition technology—the decisions should be persuasive benchmarks for future courts considering the geographic reach of BIPA.

Click here to read the full update.

After a five-day trial and only an hour of deliberation, the nation’s first trial under the Illinois Biometric Information Privacy Act (BIPA) ended with a bang. The jury found that the defendant, BNSF Railway Company, recklessly or intentionally violated BIPA 45,600 times (once per class member), resulting in a $228 million judgment.

Click here to read the full update.

The Colorado attorney general’s office sent shockwaves throughout the privacy world on September 30, 2022, when it published its proposed Colorado Privacy Act (CPA) draft rules (Draft Rules). The Draft Rules are complex and comprehensive; at 38 pages of single-spaced text, they are longer than the CPA itself. The Draft Rules are accompanied by a proposed timeline for stakeholder meetings and a public hearing.

Coming on the heels of this announcement, on October 10, California announced that it will hold meetings on October 21 and October 22 to discuss “possible adoption or modification of the text [of the draft California Privacy Rights Act (CPRA) regulations].”

Below we outline and analyze some of the key provisions of the Draft Rules and call out certain differences between the Colorado Draft Rules and the CPRA draft regulations released in May.

Click here to read the full update.

President Biden issued an executive order (EO) increasing protections and safeguards for personal data subject to signals intelligence activities. The EO also establishes a redress mechanism for residents of qualifying states who allege they were harmed by U.S. signals intelligence activity conducted in violation of U.S. law. The EO is intended to address perceived deficiencies in U.S. surveillance law identified by the Court of Justice of the European Union (CJEU) in its July 16, 2020 judgment (Schrems II) and to establish protections under U.S. law for personal data equivalent to those provided by the European General Data Protection Regulation (GDPR). The EO has been expected since the United States and the European Commission entered into an agreement on a Trans-Atlantic Data Privacy Framework in March 2022. It places EU-to-U.S. data transfers on more solid footing under European Union (EU) law and is expected to support a new finding by the European Commission that the United States is among the handful of jurisdictions globally that provide adequate protection to personal data transferred from the EU.

Click here to read the full update.

Following the European Council’s approval last week, the Digital Services Act (DSA) has been officially adopted, starting the countdown to the law’s entry into force later this year. The DSA builds on the Electronic Commerce Directive 2000 (e-Commerce Directive) and regulates the obligations of digital services that act as intermediaries in connecting consumers with third-party goods, services, or content.

The DSA is a paradigm-shifting law that features due diligence and transparency obligations for all online intermediaries. In addition to the more commonly seen notice-and-takedown and transparency requirements for “illegal content,” the DSA contains novel and extensive obligations related to global content moderation, advertising, data access, and product design practices.

Click here to read the full update.