Recent weeks have seen action from various European regulators regarding artificial intelligence/machine learning (AI/ML) and algorithms.
Opening of the European Centre for Algorithmic Transparency
The European Centre for Algorithmic Transparency (ECAT) was officially inaugurated on April 17, 2023, by the European Commission’s Joint Research Centre in Seville, Spain. The ECAT plans to leverage an interdisciplinary team of data scientists, AI experts, social scientists, and legal experts to perform technical analyses and evaluations of algorithms used by Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) governed by the Digital Services Act (DSA). The ECAT believes that doing so will help encourage transparency and risk mitigation, particularly for systemic issues identified by the DSA, including possible amplification of illegal content and disinformation, impacts on freedom of expression or media freedom, gender-based violence, and the protection of minors online and their mental health. Researchers at the ECAT will also study the long-term societal impact of algorithms.
UK Governmental White Paper Proposing a “Pro-Innovation” Regulatory Approach for AI
The UK government published a “pro-innovation” white paper on March 29, 2023, setting out plans for the regulation of artificial intelligence in order to “drive responsible innovation and maintain public trust in [the] revolutionary technology.” The white paper considers how to create a framework that makes it easier for businesses to harness the benefits of AI for innovation, growth, and job creation. This step follows the creation of a new expert task force designed to build the UK’s capacity in core technology, including generative AI language models, and £2 million in funding to create a sandbox that helps businesses test AI rules before going to market. The white paper articulates a plan that gives businesses leeway to innovate and empowers existing regulators, such as the Health and Safety Executive, the Equality and Human Rights Commission, and the Competition and Markets Authority, to design sector-specific guidance. Additionally, the white paper suggests that regulators and industry should follow these five general principles:
- Safety, security, and robustness: Applications of AI should function in a secure, safe, and robust way where risks are carefully managed.
- Transparency and explainability: Organizations developing and deploying AI should be able to communicate when and how it is used and explain a system’s decision-making process in an appropriate level of detail that matches the risks posed by the use of AI.
- Fairness: AI should be used in a way that complies with the UK’s existing laws, for example the Equality Act 2010 or the UK General Data Protection Regulation (UK GDPR), and must not discriminate against individuals or create unfair commercial outcomes.
- Accountability and governance: Measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes.
- Contestability and redress: People need to be provided with clear routes to dispute harmful outcomes or decisions generated by AI.
Interested parties can submit feedback on the white paper through June 21, 2023.