EU Artificial Intelligence Act: The European Approach to AI

Stanford - Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford University, Issue No. 2/2021

New Stanford tech policy research: “EU Artificial Intelligence Act: The European Approach to AI”.

EU regulatory framework for AI

On 21 April 2021, the European Commission presented its proposal for the Artificial Intelligence Act. This Stanford Law School contribution lists the main points of the proposed regulatory framework for AI.

The Act seeks to codify the high standards of the EU trustworthy AI paradigm, which requires AI to be legally, ethically and technically robust while respecting democratic values, human rights and the rule of law. The draft regulation sets out core horizontal rules for the development, commodification and use of AI-driven products, services and systems within the territory of the EU, applicable across all industries.

Legal sandboxes fostering innovation

The EC aims to prevent the rules from stifling innovation and hindering the creation of a flourishing AI ecosystem in Europe. To this end, the draft introduces various flexibilities, including legal sandboxes that give AI developers breathing room.

Sophisticated ‘product safety regime’

The EU AI Act introduces a sophisticated ‘product safety framework’ constructed around four risk categories. It imposes requirements for market entrance and certification of High-Risk AI Systems through a mandatory CE-marking procedure. To ensure equitable outcomes, this pre-market conformity regime also applies to machine learning training, testing and validation datasets.

Pyramid of criticality

The AI Act draft combines a risk-based approach, structured as a pyramid of criticality, with a modern, layered enforcement mechanism. This means, among other things, that a lighter legal regime applies to AI applications with negligible risk, and that applications posing an unacceptable risk are banned. Stricter regulations apply as risk increases.
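Read this way, the pyramid is essentially a mapping from risk tier to regulatory treatment. The Python sketch below is a minimal, illustrative model of that idea, assuming the four tiers of the Commission proposal (unacceptable, high, limited, minimal); the tier names and obligation lists are simplified paraphrases for illustration, not text from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers of the AI Act's 'pyramid of criticality'."""
    UNACCEPTABLE = "unacceptable"   # banned outright
    HIGH = "high"                   # pre-market conformity assessment + CE marking
    LIMITED = "limited"             # lighter obligations (e.g. transparency)
    MINIMAL = "minimal"             # negligible risk: lightest legal regime

# Hypothetical mapping from tier to the regulatory treatment sketched in the text.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the EU market"],
    RiskTier.HIGH: [
        "pre-market conformity assessment",
        "CE marking before market entry",
        "requirements on training, testing and validation data",
    ],
    RiskTier.LIMITED: ["light-touch obligations such as transparency notices"],
    RiskTier.MINIMAL: ["lightest legal regime; voluntary codes of conduct"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the (illustrative) obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for tier in RiskTier:
        print(tier.value, "->", "; ".join(obligations_for(tier)))
```

The point of the structure is simply that obligations grow monotonically with risk; where a given system lands in the pyramid is a legal question, not a programming one.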

Enforcement at both Union and Member State level

The draft regulation provides for the establishment of a new enforcement body at Union level: the European Artificial Intelligence Board (EAIB). At Member State level, the EAIB will be flanked by national supervisors, similar to the GDPR’s oversight mechanism. Fines for violating the rules can run up to 30 million euros or, for companies, up to 6% of global annual turnover.
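To make the penalty ceiling concrete, here is a quick, hypothetical calculation, assuming the cap works as in comparable EU regimes where the maximum is the higher of the fixed amount and the turnover-based amount; the turnover figure is invented for illustration.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of an administrative fine under the draft rules:
    the higher of EUR 30 million or 6% of global annual turnover (assumption)."""
    return max(30_000_000.0, 0.06 * global_annual_turnover_eur)

# Hypothetical company with EUR 2 billion in global annual turnover.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # -> 120,000,000
```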

CE-marking for High-Risk AI Systems

In line with my recommendations, Article 49 of the Act requires high-risk AI and data-driven systems, products and services to comply with EU benchmarks, including safety and compliance assessments. This is crucial because it requires AI-infused products and services to meet the high technical, legal and ethical standards that reflect the core values of trustworthy AI. Only then will they receive a CE marking that allows them to enter the European markets. This pre-market conformity mechanism works in the same manner as the existing CE marking: as safety certification for products traded in the European Economic Area (EEA).

Trustworthy AI by Design: ex ante and life-cycle auditing

Responsible, trustworthy AI by design requires awareness from all parties involved, from the first line of code onwards. Indispensable tools to facilitate this awareness process are AI impact and conformity assessments, best practices, technology roadmaps and codes of conduct. These tools should be applied by inclusive, multidisciplinary teams that use them to monitor, validate and benchmark AI systems. It will all come down to ex ante and life-cycle auditing.

The new European rules will forever change the way AI is developed. Pursuing trustworthy AI by design seems like a sensible strategy, wherever you are in the world.


Democratic Countries Should Form a Strategic Tech Alliance

Stanford - Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford University, Issue No. 1/2021

New Stanford innovation policy research: “Democratic Countries Should Form a Strategic Tech Alliance”.

Exporting values into society through technology

China’s relentless advance in Artificial Intelligence (AI) and quantum computing has engendered significant anxiety about the future of America’s technological supremacy. The resulting debate centres on the impact of China’s digital rise on the economy, security, employment and the profitability of American companies. Absent from these predominantly economic disquiets is what should be a deeper, existential concern: what are the effects of authoritarian regimes exporting their values into our society through their technology? This essay addresses that question by examining how democratic countries can and should respond, and what can be done to influence the outcome.

Towards a global responsible technology governance framework

The essay argues that democratic countries should form a global, broadly scoped Strategic Tech Alliance, built on mutual economic interests and common moral, social and legal norms, technological interoperability standards, legal principles and constitutional values. An Alliance committed to safeguarding democratic norms, as enshrined in the Universal Declaration of Human Rights (UDHR) and the International Covenant on Civil and Political Rights (ICCPR). The US, the EU and their democratic allies should join forces with countries that share our digital DNA, institute fair reciprocal trading conditions, and establish a global responsible technology governance framework that actively pursues democratic freedoms, human rights and the rule of law.

Two dominant tech blocs with incompatible political systems

Currently, two dominant tech blocs with incompatible political systems exist: the US and China. The competition for AI and quantum ascendancy is a battle between ideologies: liberal democracy mixed with free market capitalism versus authoritarianism blended with surveillance capitalism. Europe stands in the middle, championing a legal-ethical approach to tech governance.

Democratic, value-based Strategic Tech Alliance

The essay discusses the political feasibility of cooperation along transatlantic lines and examines arguments against the formation of a democratic, value-based Strategic Tech Alliance that would set global technology standards. It then weighs the advantages of establishing an Alliance that aims to win the race for democratic technological supremacy against its disadvantages, unintended consequences and the harms of doing nothing.

Democracy versus authoritarianism: sociocritical perspectives

Further, the essay approaches the identified challenges in the ‘democracy versus authoritarianism’ debate from other, sociocritical perspectives, and asks whether we are democratic enough ourselves.

How Fourth Industrial Revolution (4IR) technology is shaping our lives

The essay maintains that technology shapes our everyday lives, and that the way we design and use our technology influences nearly every aspect of the society we live in. Technology is never neutral. The essay describes how regulating emerging technology is an unending endeavour that follows the lifespan of the technology and its implementation. In addition, it discusses how democratic countries should construct regulatory solutions tailored to the exponential pace of sustainable innovation in the Fourth Industrial Revolution (4IR).

Preventing authoritarianism from gaining ground

The essay concludes that to prevent authoritarianism from gaining ground, governments should do three things: (1) inaugurate a Strategic Tech Alliance, (2) set worldwide core rules, interoperability and conformity standards for key 4IR technologies such as AI, quantum and Virtual Reality (VR), and (3) actively embed our common democratic norms, principles and values into the architecture and infrastructure of our technology.


A Legal-Ethical Framework for Quantum Technology

An edited version of this contribution was published on the VerderDenken.nl platform of the Centrum voor Postacademisch Juridisch Onderwijs (CPO) at Radboud Universiteit Nijmegen. https://www.ru.nl/cpo/verderdenken/columns/we-nederland-voorbereiden-kwantumtoekomst/

The Netherlands must prepare for the application of quantum technology, says lawyer and Stanford Law School Fellow Mauritz Kop. In the areas of regulation, intellectual property and ethics, there is still much work to be done.

The Quantum Age raises many legal questions

The behaviour of nature at the smallest scale can be strange and counter-intuitive. How can policymakers regulate the application areas of quantum technology, such as quantum computing, quantum sensing and the quantum internet, in a socially responsible manner? Should ethical issues play a role in regulation? The Quantum Age raises many legal questions.

How can we regulate quantum technology?

Regulating transformative technology is a dynamic, cyclical process that follows the lifespan of the technology and its application. It calls for a flexible legislative system that can adapt quickly to changing circumstances and societal needs.

The first regulatory step towards a workable legal-ethical framework is to link the Trustworthy AI principles to quantum technology. These are then supplemented with horizontal, overarching rules that do justice to the unique physical properties of quantum. Finally, the legislator adds vertical, industry- or sector-specific rules to these horizontal core rules. These vertical rules and codes of conduct are risk-based and take into account the diverging needs of economic sectors where sustainable innovation incentives are concerned. The result is a differentiated, sector-specific approach to incentives and risks.
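The layering described here, Trustworthy AI principles plus horizontal core rules plus vertical sector rules, can be pictured as a simple data structure. The sketch below is a minimal, hypothetical Python model of that layering; the class name, fields and example entries are invented for illustration and carry no legal weight.

```python
from dataclasses import dataclass, field

@dataclass
class QuantumGovernanceFramework:
    """Illustrative layering of the proposed legal-ethical framework."""
    trustworthy_ai_principles: list[str]                 # base layer, borrowed from AI governance
    horizontal_rules: list[str]                          # overarching rules reflecting quantum physics
    vertical_rules: dict[str, list[str]] = field(default_factory=dict)  # risk-based, per sector

    def add_sector_rules(self, sector: str, rules: list[str]) -> None:
        """Attach risk-based, sector-specific rules on top of the horizontal core."""
        self.vertical_rules.setdefault(sector, []).extend(rules)

# Hypothetical example values; the actual content of each layer is a policy question.
framework = QuantumGovernanceFramework(
    trustworthy_ai_principles=["human agency and oversight", "transparency"],
    horizontal_rules=["core rules doing justice to unique quantum-physical properties"],
)
framework.add_sector_rules("healthcare", ["sector-specific, risk-based code of conduct"])
```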

Awareness of ethical, legal and social aspects

An important part of synchronising our norms, values, standards and principles with quantum technology is creating awareness of its ethical, legal and social aspects. The architecture of systems equipped with quantum technology must embody the values we consider important as a society.

In anticipation of spectacular breakthroughs in the application of quantum technology, the time is now ripe for governments, research institutions and the market to prepare regulatory and intellectual property strategies that match the power of the technology.

The Netherlands must prepare for a quantum future, because it is coming.
