The Artificial Intelligence Act

An edited version of this contribution was published on the VerderDenken.nl platform of the Centrum voor Postacademisch Juridisch Onderwijs (CPO) of Radboud University Nijmegen. https://www.ru.nl/cpo/verderdenken/columns/wet-artificiele-intelligentie-belangrijkste-punten/

New rules for AI-driven products, services and systems

On 21 April 2021, the European Commission presented its long-awaited Artificial Intelligence (AI) Act. This draft Regulation sets out rules for the development, commodification and use of AI-driven products, services and systems within the territory of the European Union. It was encouraging to see that the team of President Ursula von der Leyen adopted a significant number of our strategic recommendations on the regulation of AI, or independently reached the same conclusions.

Objectives of the legal framework for AI

The draft Regulation provides horizontal, overarching core rules for artificial intelligence that apply across all industries (verticals). The Act aims to codify the high standards of the EU Trustworthy AI paradigm, which requires that AI be lawful, ethical and technically robust, and sets out seven requirements to that end.

The Artificial Intelligence Act has the following four objectives:

“1. ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;

2. ensure legal certainty to facilitate investment and innovation in AI;

3. enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;

4. facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.”

Risk-based approach to artificially intelligent applications

To achieve these objectives, the draft Artificial Intelligence Act combines a risk-based approach, built on the pyramid of criticality, with a modern, layered enforcement mechanism. Among other things, this means that a light legal regime applies to AI applications with negligible risk, while applications posing unacceptable risk are banned. Between these two extremes, requirements become stricter as the risk increases. They range from non-binding, self-regulatory soft-law impact assessments with codes of conduct to heavy, multidisciplinary, externally audited compliance requirements on quality, safety and transparency, including risk management, monitoring, certification, benchmarking, validation, documentation obligations and market surveillance throughout the application's life cycle.

Enforcement and governance

The definition of high-risk AI applications within the various industrial sectors is not yet set in stone. An unambiguous risk taxonomy will contribute to legal certainty and give stakeholders adequate answers to questions about liability and insurance. To safeguard room for innovation by SMEs, including tech start-ups, flexible AI regulatory sandboxes are being introduced, and an IP Action Plan for intellectual property has been drawn up. Finally, the draft Regulation provides for the establishment of a new enforcement body at Union level: the European Artificial Intelligence Board (EAIB). The EAIB will be flanked at Member State level by national supervisory authorities.


Democratic Countries Should Form a Strategic Tech Alliance

Stanford - Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford University, Issue No. 1/2021

New Stanford innovation policy research: “Democratic Countries Should Form a Strategic Tech Alliance”.

Exporting values into society through technology

China’s relentless advance in Artificial Intelligence (AI) and quantum computing has engendered significant anxiety about the future of America’s technological supremacy. The resulting debate centres on the impact of China’s digital rise on the economy, security, employment and the profitability of American companies. Absent from these predominantly economic disquiets is what should be a deeper, existential concern: what are the effects of authoritarian regimes exporting their values into our society through their technology? This essay addresses this question by examining how democratic countries can, or should, respond, and what can be done to influence the outcome.

Towards a global responsible technology governance framework

The essay argues that democratic countries should form a global, broadly scoped Strategic Tech Alliance, built on mutual economic interests and common moral, social and legal norms, technological interoperability standards, legal principles and constitutional values. An Alliance committed to safeguarding democratic norms, as enshrined in the Universal Declaration of Human Rights (UDHR) and the International Covenant on Civil and Political Rights (ICCPR). The US, the EU and its democratic allies should join forces with countries that share our digital DNA, institute fair reciprocal trading conditions, and establish a global responsible technology governance framework that actively pursues democratic freedoms, human rights and the rule of law.

Two dominant tech blocs with incompatible political systems

Currently, two dominant tech blocs with incompatible political systems exist: the US and China. The competition for AI and quantum ascendancy is a battle between ideologies: liberal democracy mixed with free-market capitalism versus authoritarianism blended with surveillance capitalism. Europe stands in the middle, championing a legal-ethical approach to tech governance.

Democratic, value-based Strategic Tech Alliance

The essay discusses the political feasibility of cooperation along transatlantic lines and examines arguments against the formation of a democratic, value-based Strategic Tech Alliance that would set global technology standards. It then weighs the advantages of establishing an Alliance that aims to win the race for democratic technological supremacy against the disadvantages, the unintended consequences and the harms of doing nothing.

Democracy versus authoritarianism: sociocritical perspectives

Further, the essay approaches the identified challenges in the ‘democracy versus authoritarianism’ debate from other, sociocritical perspectives, and inquires whether we ourselves are democratic enough.

How Fourth Industrial Revolution (4IR) technology is shaping our lives

The essay maintains that technology shapes our everyday lives, and that the way we design and utilize our technology influences nearly every aspect of the society we live in. Technology is never neutral. The essay describes regulating emerging technology as an unending endeavour that follows the lifespan of the technology and its implementation. In addition, it discusses how democratic countries should construct regulatory solutions tailored to the exponential pace of sustainable innovation in the Fourth Industrial Revolution (4IR).

Preventing authoritarianism from gaining ground

The essay concludes that to prevent authoritarianism from gaining ground, governments should do three things: (1) inaugurate a Strategic Tech Alliance, (2) set worldwide core rules, interoperability & conformity standards for key 4IR technologies such as AI, quantum and Virtual Reality (VR), and (3) actively embed our common democratic norms, principles and values into the architecture and infrastructure of our technology.


Shaping the Law of AI: Transatlantic Perspectives

Stanford-Vienna Transatlantic Technology Law Forum, TTLF Working Papers No. 65, Stanford University (2020).

New Stanford innovation policy research: “Shaping the Law of AI: Transatlantic Perspectives”.

The race for AI dominance

The race for AI dominance is a competition in values as much as a competition in technology. In light of global power shifts and changing geopolitical relations, it is indispensable for the EU and the U.S. to build a transatlantic sustainable innovation ecosystem together, based on strategic autonomy, mutual economic interests and shared democratic and constitutional values. Discussing the available informed policy variations to achieve this ecosystem will contribute to the establishment of an underlying, unified, innovation-friendly regulatory framework for AI and data. In such a unified framework, the rights and freedoms we cherish play a central role. Designing joint, flexible governance solutions that can deal with rapidly changing exponential innovation challenges can help bring harmony, confidence, competitiveness and resilience back to the various areas of the transatlantic markets.

25 AI & data regulatory recommendations

Currently, the European Commission (EC) is drafting its Law of AI. This article gives 25 AI & data regulatory recommendations to the EC, in response to its Inception Impact Assessment on the “Artificial intelligence – ethical and legal requirements” legislative proposal. In addition to a set of fundamental, overarching core AI rules, the article suggests a differentiated industry-specific approach regarding incentives and risks.

European AI legal-ethical framework

Lastly, the article explores how the norms, standards, principles and values of the upcoming European AI legal-ethical framework can be connected to the United States, from a transatlantic, comparative law perspective. When shaping the Law of AI, we should have a clear vision of the type of society we want, and of the things we care so deeply about in the Information Age, on both sides of the ocean.


Machine Learning & EU Data Sharing Practices

Stanford - Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford University, Issue No. 1/2020

New multidisciplinary research article: ‘Machine Learning & EU Data Sharing Practices’.

In short, the article connects the dots between intellectual property (IP) rights in data, data ownership and data protection (GDPR and FFD) in an easy-to-understand manner. It also provides AI and data policy and regulatory recommendations to the EU legislature.

Machine learning and data science can help accelerate many aspects of the development of drugs, antibody prophylaxis, serology tests and vaccines.

Supervised machine learning needs annotated training datasets

Data sharing is a prerequisite for a successful Transatlantic AI ecosystem. Hand-labelled, annotated training datasets (corpora) are a sine qua non for supervised machine learning. But what about intellectual property (IP) and data protection?

Data that represent IP subject matter are protected by IP rights. Unlicensed (or uncleared) use of machine learning input data potentially results in an avalanche of copyright (reproduction right) and database right (extraction right) infringements. The article offers three solutions that address the input (training) data copyright clearance problem and create breathing room for AI developers.

The article contends that introducing an absolute data property right or a (neighbouring) data producer right for augmented machine learning training corpora or other classes of data is not opportune.

Legal reform and data-driven economy

In an era of exponential innovation, it is urgent and opportune that the TSD, the CDSM and the DD be reformed by the EU Commission with the data-driven economy in mind.

Freedom of expression and information, public domain, competition law

Implementing a sui generis system of protection for AI-generated creations and inventions is, in most industrial sectors, not necessary, since machines do not need incentives to create or invent. Where incentives are needed, IP alternatives exist. Autonomously generated non-personal data should fall into the public domain. The article argues that strengthening and articulating competition law is more opportune than extending IP rights.

Data protection and privacy

More and more datasets consist of both personal and non-personal machine generated data. Both the General Data Protection Regulation (GDPR) and the Regulation on the free flow of non-personal data (FFD) apply to these ‘mixed datasets’.

Besides the legal dimensions, the article describes the technical dimensions of data in machine learning and federated learning.

Modalities of future AI-regulation

Society should actively shape technology for good. The alternative is that other societies, with different social norms and democratic standards, impose their values on us through the design of their technology. With built-in public values, including Privacy by Design that safeguards data protection, data security and data access rights, the federated learning model is consistent with Human-Centered AI and the European Trustworthy AI paradigm.
