EU Artificial Intelligence Act: The European Approach to AI

Stanford - Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford University, Issue No. 2/2021

New Stanford tech policy research: “EU Artificial Intelligence Act: The European Approach to AI”.

EU regulatory framework for AI

On 21 April 2021, the European Commission presented the Artificial Intelligence Act. This Stanford Law School contribution lists the main points of the proposed regulatory framework for AI.

The Act seeks to codify the high standards of the EU trustworthy AI paradigm, which requires AI to be legally, ethically and technically robust, while respecting democratic values, human rights and the rule of law. The draft regulation sets out core horizontal rules for the development, commodification and use of AI-driven products, services and systems within the territory of the EU, and applies to all industries.

Legal sandboxes fostering innovation

The EC aims to prevent the rules from stifling innovation and hindering the creation of a flourishing AI ecosystem in Europe. This is ensured by introducing various flexibilities, including the application of legal sandboxes that afford breathing room to AI developers.

Sophisticated ‘product safety regime’

The EU AI Act introduces a sophisticated ‘product safety framework’ constructed around a set of 4 risk categories. It imposes requirements for market entrance and certification of High-Risk AI Systems through a mandatory CE-marking procedure. To ensure equitable outcomes, this pre-market conformity regime also applies to machine learning training, testing and validation datasets.
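To make the dataset requirement more concrete, the sketch below shows - purely as an illustration, not anything the Act itself prescribes - the kind of elementary checks (record counts, completeness, disjoint training and test splits) a provider might document when preparing training, testing and validation data for a conformity assessment. The function and variable names are hypothetical.

```python
import numpy as np

def basic_dataset_checks(train, val, test):
    """Illustrative, non-normative checks a provider might document for a
    conformity assessment: record counts, missing values, and a crude
    train/test leakage check. The AI Act prescribes no such code."""
    report = {}
    for name, split in {"train": train, "val": val, "test": test}.items():
        arr = np.asarray(split, dtype=float)
        report[f"{name}_rows"] = len(arr)
        report[f"{name}_missing_values"] = int(np.isnan(arr).sum())
    # Identical records appearing in both train and test suggest leakage between splits.
    overlap = set(map(tuple, train)) & set(map(tuple, test))
    report["train_test_overlap"] = len(overlap)
    return report

if __name__ == "__main__":
    train = [(1.0, 2.0), (3.0, 4.0)]
    val = [(5.0, 6.0)]
    test = [(7.0, 8.0)]
    print(basic_dataset_checks(train, val, test))
```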

Pyramid of criticality

The draft AI Act combines a risk-based approach, structured as a pyramid of criticality, with a modern, layered enforcement mechanism. This means, among other things, that a lighter legal regime applies to AI applications with negligible risk, and that applications posing an unacceptable risk are banned. Stricter rules apply as risk increases.
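As a reading aid only - the tier labels and the mapping below are my paraphrase of the draft, not text taken from the Act - the pyramid can be sketched as a simple lookup from risk tier to regulatory consequence:

```python
from enum import Enum

class RiskTier(Enum):
    # Paraphrased tiers of the pyramid of criticality (illustrative labels).
    UNACCEPTABLE = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal or negligible risk"

# Rough mapping of each tier to its broad regulatory consequence under the draft.
CONSEQUENCES = {
    RiskTier.UNACCEPTABLE: "banned from the EU market",
    RiskTier.HIGH: "pre-market conformity assessment and CE marking required",
    RiskTier.LIMITED: "transparency obligations",
    RiskTier.MINIMAL: "lighter legal regime; no additional obligations",
}

for tier in RiskTier:
    print(f"{tier.value:>28}: {CONSEQUENCES[tier]}")
```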

Enforcement at both Union and Member State level

The draft regulation provides for the establishment of a new enforcement body at Union level: the European Artificial Intelligence Board (EAIB). At Member State level, the EAIB will be flanked by national supervisors, similar to the GDPR's oversight mechanism. Fines for violation of the rules can reach 30 million euros or 6% of global annual turnover, whichever is higher, for private entities.
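For illustration only, the ceiling for the most serious infringements is whichever is higher of the two amounts; a minimal sketch of that calculation follows, with a hypothetical turnover figure:

```python
def fine_ceiling_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine for the most serious violations
    under the draft AI Act: EUR 30 million or 6% of total worldwide annual
    turnover, whichever is higher."""
    return max(30_000_000.0, 0.06 * global_annual_turnover_eur)

# Hypothetical example: a provider with EUR 2 billion in global annual turnover.
print(f"Fine ceiling: EUR {fine_ceiling_eur(2_000_000_000):,.0f}")  # EUR 120,000,000
```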

CE-marking for High-Risk AI Systems

In line with my recommendations, Article 49 of the Act requires high-risk AI and data-driven systems, products and services to comply with EU benchmarks, including safety and compliance assessments. This is crucial because it requires AI-infused products and services to meet the high technical, legal and ethical standards that reflect the core values of trustworthy AI. Only then will they receive a CE marking that allows them to enter the European markets. This pre-market conformity mechanism works in the same manner as the existing CE marking: as safety certification for products traded in the European Economic Area (EEA).

Trustworthy AI by Design: ex ante and life-cycle auditing

Responsible, trustworthy AI by design requires awareness from all parties involved, from the first line of code. Indispensable tools to facilitate this awareness process are AI impact and conformity assessments, best practices, technology roadmaps and codes of conduct. These tools are executed by inclusive, multidisciplinary teams that use them to monitor, validate and benchmark AI systems. It will all come down to ex ante and life-cycle auditing.

The new European rules will forever change the way AI is formed. Pursuing trustworthy AI by design seems like a sensible strategy, wherever you are in the world.

Safeguards for accelerated market authorization of vaccines in Europe

by Suzan Slijpen & Mauritz Kop

This article has been published by the Stanford Law School ‘Center for Law and the Biosciences’, Stanford University, 15 March 2021. Link to the full text: https://law.stanford.edu/2021/03/15/safeguards-for-accelerated-market-authorization-of-vaccines-in-europe/

The first COVID-19 vaccines have been approved

People around the globe are concerned about safety issues surrounding the accelerated introduction of COVID-19 vaccines. In this article, we discuss the regulatory safeguards for fast-track market authorization of vaccines in Europe. In addition, we explain how the transposition of European Union law into national Member State legislation works. We then clarify what happens before a drug can be introduced into the European market. We conclude that governments should build bridges of mutual understanding between communities and increase trust in the safety of authorized vaccines across all population groups, using the right messengers.

Drug development normally takes several years

Drug development normally takes several years, so a timeline of only a few months seems ridiculously short. How are the quality and integrity of the vaccines ensured? That people - on both sides of the Atlantic - are concerned about this is entirely understandable. How does one prevent citizens from being harmed by vaccines and medicines that do not work for everyone because the authorization procedures have been simplified too much?

The purpose of this article is to shed a little light upon the accelerated market authorization procedures on the European continent, with a focus on the situation in the Netherlands.

How a vaccine is introduced into the market

In June 2020, the Dutch government, in close cooperation with Germany, France and Italy, formed a Joint Negotiation Team which, under the watchful eye of the European Commission, has been negotiating with vaccine developers. Its objective: to conclude agreements with drug manufacturers at an early stage about the availability of vaccines for European countries. Should these manufacturers succeed in developing a successful vaccine for which the so-called Market Authorization (MA) is granted by the EMA or the CBG (the Dutch Medicines Evaluation Board), this could lead to the availability of about 50 million vaccines (for the Netherlands alone).

Who is allowed to produce these vaccines?

Who is allowed to produce these vaccines? The Dutch Medicines Act is very clear about this. Only license holders are allowed to manufacture medicines, including vaccines. These are parties that have gone through an extensive application procedure, demonstrably have a solid pharmaceutical quality management system in place, and have obtained a pharmaceutical manufacturing license (the MIA, short for Manufacturing and Importation Authorisation). This license is granted by Farmatec, after assessment by the Health and Youth Care Inspectorate (IGJ) of the Ministry of Health, Welfare and Sport. Farmatec is part of the CIBG, an implementing body of the Ministry of Health, Welfare and Sport (VWS). This manufacturing license is mandatory for parties who prepare or import medicines.

Read more at the Stanford Center for Law and the Biosciences!

Read more on manufacturing licenses, fast track procedures and market authorization by the European Medicines Agency (EMA) and the EC, harmonisation and unification of EU law, CE-markings, antigenic testing kits, mutations, reinfection, multivalent vaccines, mucosal immunity, Good Manufacturing Practices (GMP), pharmacovigilance, the HERA Incubator, clinical trials, compulsory vaccination regimes and continuous quality control at Stanford!