Netherlands
Artificial Intelligence
Introduction
To ensure that artificial intelligence (AI) is used ethically and responsibly, new legislation and guidelines have recently been introduced. The long-awaited European Union (EU) AI Act was approved by the Council of the European Union on 21 May 2024, after the European Parliament voted to adopt the legislation on 13 March 2024.
The EU AI Act will enter into force later in 2024 (20 days after its publication in the official journal), and will generally be effective after 24 months, with some provisions, such as those on prohibited AI, coming into effect earlier.
The Netherlands continues to play a pivotal role in the technology sector. Earlier this year, the Netherlands became one of the first EU Member States to present a government vision on GenAI, highlighting both the opportunities and challenges of this disruptive technology. Additionally, the Dutch Data Protection Authority (DPA) announced plans to increase its oversight of the use of AI and related algorithms. In a proactive move, the DPA has requested that OpenAI provide more clarity on the use of personal data during the training of ChatGPT.
1. Constitutional law and fundamental human rights
While the implementation of AI brings massive benefits and opportunities to its users, it also poses significant risks to fundamental human rights such as privacy, autonomy, health, freedom of expression, and the right to a fair trial.
In the Netherlands, these rights are safeguarded by domestic laws, particularly the Dutch Constitution (Grondwet), as well as by international treaties like the European Convention on Human Rights (ECHR) and the Charter of Fundamental Rights of the European Union (Charter).
1.1. Domestic constitutional provisions
Considering the overarching provisions in international treaties, Dutch domestic constitutional protections will be discussed in the context of specific human rights concerns in Section 1.2 below.
1.2. Human rights decisions and conventions
Right to privacy and data protection
The right to privacy is enshrined in several legal provisions, including Article 10 of the Dutch Constitution, Article 8 of the ECHR, and Article 7 of the Charter. This right covers various aspects, such as personal autonomy, respect for private and family life, and the protection of home and correspondence. Interference with this right is allowed under specific circumstances, such as national security or public safety, provided it is legally justified and necessary in a democratic society.
AI’s extensive data processing capabilities can infringe on privacy rights, especially through smart devices and Internet-of-Things (IoT) applications. Government data collection in smart cities further exacerbates these concerns.
The Court of Justice of the European Union (CJEU) has ruled that government surveillance can violate privacy rights. This principle extends to employer surveillance of electronic communications using AI, as highlighted in the European Court of Human Rights (ECtHR) ruling in Bărbulescu v. Romania (2017).
In the Netherlands, the District Court of The Hague ruled that System Risk Indication (SyRI), a profiling system used to detect social benefit and tax fraud, infringed Article 8 of the ECHR due to its significant interference with privacy and lack of transparency.
Autonomy and health
AI can also threaten individual autonomy, particularly through emotional AI, which recognises and measures human emotions via behaviour, facial expressions, body language, and voice. While offering potential benefits, emotional AI risks surveillance and manipulation, potentially leading to self-censorship and chilling effects on freedoms of expression, association, and thought.
Freedom of expression
Freedom of expression, including the right to access information, is protected by Article 7 of the Dutch Constitution, Article 10 of the ECHR, and Article 11 of the Charter. Restrictions can only be imposed through formal legislation, with a strong prohibition against preventive censorship. Governments must ensure access to diverse and unbiased information. However, AI used by search engines and social media can create “filter bubbles”, limiting information diversity. While AI can combat hate speech, it risks censoring legitimate, controversial expressions. Balancing legitimate goals with preserving freedom of expression is crucial.
Right to a fair trial
The right to a fair trial is safeguarded by Articles 6 and 13 of the ECHR and Article 47 of the Charter. This includes fair proceedings, impartial judges, and well-reasoned judgments. AI's use in judicial decisions can jeopardise these principles, especially if the AI system lacks transparency or introduces biases. In addition, AI in judicial contexts is considered high-risk under the EU AI Act.
2. Intellectual property
Intellectual property rights are extremely relevant to the development and use of AI solutions.
2.1. Patents
In the Netherlands, patents are governed by the Dutch Patents Act (Rijksoctrooiwet) and the European Patent Convention (EPC). To obtain patent protection, an invention must be novel, inventive, and capable of industrial application. Various aspects of AI systems, such as inference models, network architectures, and training methods, may fall within the scope of patentability.
According to the European Patent Office’s (EPO) Guidelines for Examination, algorithms and models are generally considered abstract mathematical concepts, which are excluded from patentability when claimed as such. However, this exclusion does not apply when they are included in a computer program or implemented in a computer. It is also notable that the EPO has rejected patent applications where an AI system is indicated as the inventor, as the EPC requires the inventor to be a natural person (EPO Legal Board of Appeal, cases J 8/20 and J 9/20, oral proceedings on 21 December 2021).
2.2. Copyright
According to the Dutch Copyright Act, a work is protected by copyright if it is original and personal, meaning it is the author’s own intellectual creation.
With the rise of AI applications like ChatGPT, questions about copyright ownership of AI-generated works have emerged. In the United States, the US Copyright Office has declined copyright protection for works created with AI, citing a lack of human authorship, even when elaborate human instructions are given. In contrast, courts in China and South Korea have granted copyright protections, recognising human instructions and pre-selection as sufficient creativity.
In the Netherlands and the EU, no similar cases have yet arisen. However, the US position aligns with the general EU consensus on copyright protection. In 2011, the CJEU ruled that creative input in various phases (e.g., preparation, selection, and editing) is necessary for copyright protection, which may be applied to AI-generated content. The extent of human creativity required for sufficient authorship remains an open question, awaiting future court rulings for clarification.
The output of GenAI applications may also result in copyright infringements if users fail to prevent the creation of works similar to existing ones, as GenAI bases its output on existing data. Additionally, there is a concern about whether machine learning based on copyright-protected works constitutes infringement. The EU AI Act addresses this issue by establishing certain provisions. GenAI models are classified as “General purpose AI models” with accompanying copyright provisions. Machine learning qualifies as “text and data mining” under the EU Directive on copyright and related rights in the Digital Single Market (Directive). Article 53(1)(c) of the EU AI Act mandates that providers of general purpose AI models must implement policies to respect EU copyright law, particularly regarding the reservations of rights expressed under Article 4(3) of the Directive.
2.3. Trade secrets and confidentiality
Trade secrets are protected by the Dutch Trade Secrets Protection Act, implementing Directive (EU) 2016/943. Information is considered a trade secret if:
- it is not generally known to the public or known by or easily accessible to persons who normally deal with that type of information;
- it has commercial value because it is a secret; and
- it is subject to reasonable measures to keep it secret.
The use or implementation of AI tools, such as ChatGPT, poses substantial risks with regard to the protection of trade secrets. In many cases, by accepting the terms of use of such providers, individuals accept that the provider may use the input to further develop and improve its services. As a result, once information (some of which may constitute trade secrets) has been shared, it is no longer considered a secret and consequently loses its protected status under the Dutch Trade Secrets Protection Act. It is, therefore, advisable to implement strict internal guidelines and reasonable measures, and to provide obligatory training for employees to raise awareness of the potential risks.
2.4. Notable cases
Currently, there are no ongoing EU proceedings or judgments concerning the implications of AI regarding intellectual property.
3. Data
Data is an essential building block for AI. AI systems rely on data inputs from the initial development stage, through the training phase, to the phase of actual use. Given the broad definition of personal data under European data protection laws, AI systems’ development and use will frequently result in the processing of personal data.
3.1. Domestic data law treatment
In the Netherlands, a “database” is protected under the Dutch Database Act (Databankenwet). A database is defined as a collection of data that is systematically arranged and whose creation required a substantial investment. The Database Act may therefore also apply to training datasets or large language models to the extent the datasets concerned constitute structured data.
The Database Act aims to protect companies that put significant investments and effort in creating a database or dataset, and prevents third parties from using the same database or dataset without having to make such investments. Database rights are infringed if a person extracts (which includes permanent or temporary transfer) or reutilises (which means making the contents available to the public) all or a substantial part of the contents of a protected database without the rightsholder’s consent.
3.2. General Data Protection Regulation
Any processing of personal data is governed by the General Data Protection Regulation (GDPR) and, in the Netherlands, the Dutch GDPR Implementation Act (Implementatiewet Algemene Verordening Gegevensbescherming).
The term “processing” is defined broadly under the GDPR and encompasses virtually all handling of personal data, including storage. As a result, AI systems will often handle personal data at some point in their lifecycle.
AI and GDPR
Although the GDPR does not specifically mention AI, it does indirectly regulate AI, in particular through provisions governing automated decision-making (ADM, Article 22 of the GDPR). As AI systems are often used to take such automated decisions, the ADM requirements under the GDPR can be seen as an additional layer of protection, on top of the protection afforded under the EU AI Act.
In some respects, there is tension between the GDPR and AI. For example, the GDPR requires that personal data are collected for specified, explicit, and legitimate purposes and not further processed in a manner that is incompatible with those purposes. AI systems, however, typically require large datasets, especially during their training phase, and their capacity to perform a wide variety of tasks can make it hard to pin down the exact purposes for which data is being processed. Compliance with the GDPR’s transparency requirements may also be challenging given that most AI systems will have complex algorithms and continuous learning capabilities.
Interplay with the EU AI Act
Insofar as the design, development or use of AI systems involves the processing of personal data, the EU AI Act will apply in addition to the GDPR. The EU AI Act does not affect the obligations of providers and deployers of AI systems under the GDPR, and data subjects continue to enjoy all the rights and guarantees awarded to them under the GDPR. Moreover, there is a clear overlap between many of the data protection principles and the principles and requirements established by the EU AI Act for the safe development and use of AI systems.
Dutch DPA
The DPA developed rules of thumb for organisations developing and using AI (see www.autoriteitpersoonsgegevens.nl/themas/algoritmes-ai/algoritmes-ai-en-de-avg/regels-bij-gebruik-van-ai-algoritmes). The DPA states that organisations should ensure compliance with the GDPR’s general processing principles and, in the development phase of an AI system, also the principles of privacy by design and default. It also urges organisations to conduct a data protection impact assessment and a human rights impact assessment prior to using or developing a new AI system.
In terms of enforcement, the Dutch DPA has been rather active in recent years:
- In 2021, the Dutch DPA fined the Dutch Tax Authority (Belastingdienst) for discriminatory and unlawful behaviour in relation to the use of a risk-classification model (see www.autoriteitpersoonsgegevens.nl/documenten/boete-belastingdienst-kinderopvangtoeslag). For years, the Tax Authority used this model to detect incorrect applications for childcare benefits and potentially fraudulent use of the benefits system. The risk-classification model included a self-learning algorithm, in which the fact that a person had a non-Dutch nationality was incorporated as a risk indicator. This led to algorithmic bias that caused many parents and caregivers to be wrongfully considered fraudulent. The DPA held that the use of this model was in violation of Articles 5 and 6 of the GDPR and imposed a fine of EUR 2.75 million on the Dutch Tax Authority. The Dutch government resigned over this scandal, known as the ‘benefits affair’.
- In 2023, the Dutch DPA intensified its supervision on the Dutch Institute for Employee Benefit Schemes (UWV) in relation to the UWV’s use of an algorithm, without a sufficient legal basis under the GDPR (see www.autoriteitpersoonsgegevens.nl/actueel/ap-ziet-toe-op-hersteloperatie-uwv-na-illegale-inzet-algoritme). The algorithm was used to find out if people were wrongly receiving benefits from the Dutch government while they were living abroad.
- In 2024, the Dutch Authority for Digital Infrastructure imposed a fine of EUR 175,000 on Odido for privacy violations in relation to the training of an algorithm with location data from mobile phones. The Dutch DPA is now closely watching the remedial actions Odido is taking as a result of the fine (see www.cbs.nl/nl-nl/corporate/2022/42/cbs-in-gesprek-met-autoriteit-persoonsgegevens-over-privacybeleid).
The Dutch DPA was appointed as the competent regulator for algorithms in the Netherlands on 1 January 2023. In this capacity, the Dutch DPA cannot (yet) exercise specific investigative powers but this is expected to change in the near future. For further details see Section 7.3 below.
3.3. Open data and data sharing
Synergy between open data and AI
Open data and AI have the potential to support and enhance each other’s capabilities. On the one hand, open data can improve AI systems. In general, open data serves as a valuable resource for AI systems, providing them with a wealth of diverse data. The more varied and extensive the data an AI system has access to, the better its predictions and functionality. For instance, an AI designed to forecast consumption trends in Europe would be more accurate if it is trained on a wide-ranging dataset that includes transactions from various countries, cities, and socio-economic backgrounds. In this way, the availability of open data contributes to better performing AI.
On the other hand, AI can unlock additional value from open data. By applying AI to extensive datasets, it is possible to uncover patterns and insights that might not have been revealed through more traditional methods of analysis. As such, AI can be a powerful analysis tool, leveraging the value of open data.
An example of open data being combined with AI is the European Commission’s European Cancer Imaging Initiative (see digital-strategy.ec.europa.eu/en/policies/cancer-imaging). The initiative aims to create an open dataset that links existing resources and databases across Europe, working towards a more open, available, and user-friendly infrastructure for cancer imaging. As part of the initiative, cancer images will be made available to an AI testing and experimentation facility.
Data sharing regulations
Increasing access to high-quality open data is key to unlocking the synergy between open data and AI. In the Netherlands, several local laws are in place that may contribute to this aim, including:
- The Government Information Reuse Act (Wet hergebruik van overheidsinformatie), which sets the rules under which citizens and companies can request specific information from governmental bodies for reuse. In general, the requested information must be provided, unless there are exemptions or restrictions, such as GDPR compliance.
- The Open Government Act (Wet open overheid), which establishes provisions for actively making governmental information public and easily accessible. The act aims to enhance government transparency.
On a European level, new regulations and directives have been made available, as part of the European strategy for data:
- The Data Governance Act: this regulation entered into force on 23 June 2022 and, following a 15-month grace period, has been directly applicable in all Member States since September 2023. The Data Governance Act aims to increase data availability and promote data sharing between sectors and EU countries and governs the processes and structures facilitating voluntary data sharing.
- The Data Act: this regulation entered into force on 11 January 2024 and will apply from September 2025 in all Member States. The Data Act complements the Data Governance Act by providing clarity on who has the ability to create value from data and the conditions under which they can do so. The act aims to promote a fair distribution of the value derived from data by establishing clear and equitable rules for accessing and utilising data within the European data economy.
3.4. Biometric data: voice data and facial recognition data
Under the EU AI Act, the use of biometric categorisation systems that rely on sensitive characteristics is prohibited, as well as the untargeted collection of facial images from the internet or CCTV footage for the purpose of creating facial recognition databases. Additionally, the EU AI Act forbids the use of AI for emotion recognition in workplaces and schools, social scoring, predictive policing solely based on profiling or assessing individual characteristics, and any AI systems that manipulate human behaviour or exploit vulnerabilities.
The GDPR contains a general prohibition on processing biometric data (which includes voice data and facial recognition data) for the purpose of uniquely identifying a natural person. Specific exemptions to this prohibition apply and are set forth in the Dutch GDPR Implementation Act, including when prior explicit consent has been obtained, or where processing biometric data is necessary for authentication and security purposes. When biometric data are processed for purposes other than uniquely identifying a natural person, such as for the purpose of training algorithms, the general processing prohibition does not apply.
The Dutch DPA states in recent guidance that facial recognition is prohibited in most cases (see www.autoriteitpersoonsgegevens.nl/documenten/juridisch-kader-gezichtsherkenning). The Dutch DPA also confirms in this guidance that they consider the processing of biometric data for the purpose of confirming someone’s identity to equally fall under the processing prohibition of Article 9 of the GDPR.
4. Bias and discrimination
The use of AI may pose risks related to bias and the prohibition of discrimination.
4.1. Domestic anti-discrimination and equality legislation treatment
The right to non-discrimination is enshrined in Article 1 of the Dutch Constitution, Articles 21 and 23 of the Charter, and Article 14 of the ECHR. Discrimination based on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or other opinion, membership of a national minority, property, birth, disability, age, or sexual orientation is strictly prohibited. Dutch national law also includes specific legislation that implements various EU principles, such as the General Equal Treatment Act, as well as various specific acts addressing the aforementioned grounds.
AI systems may threaten the right to equal treatment by producing biased outcomes, a phenomenon known as algorithmic bias. This bias can arise from several factors:
- Replication and exaggeration of societal biases: AI systems can replicate and even exaggerate existing biases or discrimination present in the training data.
- Design bias: Bias or discrimination can stem from the instructions provided by the AI system’s designers.
- Contextual fairness issues: AI systems may fail to replicate contextual notions of fairness without human involvement, especially when deployed in social contexts that already undermine the rights of certain groups.
The opacity of AI decision-making processes, often referred to as the black box problem, can further exacerbate these issues. For example, the Dutch DPA ruling on the childcare benefits system highlighted how lack of transparency can lead to biased outcomes. Systems that were meant to detect misuse of the benefits scheme mistakenly labelled over 20,000 parents as fraudsters, a disproportionate number of whom had an immigration background.
The Netherlands Institute for Human Rights
In 2022, the Netherlands Institute for Human Rights ruled in favour of a university student who presented sufficient evidence for a presumption of algorithmic discrimination. The student alleged that the university discriminated against her by using anti-cheating software during the COVID-19 crisis. The software employed facial recognition algorithms that failed to detect her when logging in for online exams, which the student argued was due to her darker skin colour.
This case underscores the importance of addressing algorithmic bias in AI systems to ensure compliance with anti-discrimination laws and uphold the principles of equality and fairness.
While AI technology offers significant advantages, its deployment must be carefully managed to prevent and mitigate biases that could lead to discrimination. Ensuring transparency, accountability, and fairness in AI systems is crucial to protecting fundamental human rights.
5. Cybersecurity and resilience
Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts by malicious third parties to alter their use, behaviour or performance, or to compromise their security properties by exploiting the system’s vulnerabilities. As also described in the EU AI Act’s recitals, cyberattacks against AI systems can leverage AI-specific assets, such as training datasets (e.g. data poisoning) or trained models (e.g. adversarial attacks or membership inference), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, providers of high-risk AI systems should therefore take suitable measures, such as security controls, also taking into account as appropriate the underlying ICT infrastructure.
In addition, organisations using, developing or providing AI systems should ensure compliance with applicable cybersecurity laws and requirements. On a European level, a range of cybersecurity-related directives and regulations have been proposed and adopted in recent years, including the following:
- Cyber Resilience Act: imposes new requirements for products with digital elements, i.e. hardware and software products. When high-risk AI systems fulfil the essential requirements of the Cyber Resilience Act, they should be deemed compliant with the cybersecurity requirements set out in the EU AI Act as well.
- NIS-2 Directive: requires essential and important entities to meet new requirements concerning, among others, the implementation of appropriate and proportionate technical, operational and organisational measures to manage cybersecurity risks, and reporting obligations of significant incidents. Entities are encouraged to use cybersecurity-enhancing technologies, such as AI or machine learning systems, to enhance their capabilities and overall security of network and information systems.
- Digital Operational Resilience Act: often referred to as DORA, aims to strengthen the IT security of financial entities to enhance their resilience against cybersecurity incidents.
- Cyber Solidarity Act: still subject to formal approval by the European Parliament and Council, the Act aims to strengthen the capability to detect, prepare for and respond to cyberthreats and incidents, and includes the setting up of a European Cybersecurity Alert System, which will use technologies such as AI to enable authorities to detect threats and alert other authorities across the EU.
Partly driven by the European Commission’s EU Cybersecurity Strategy, the legal framework governing cybersecurity is rapidly evolving. It is imperative for organisations to monitor relevant developments closely to guarantee the prompt adoption and implementation of relevant requirements.
5.1. Domestic technology infrastructure requirements
In the Netherlands, cybersecurity-related requirements are laid down in the Network and Information Systems Security Act (NISSA, Wet beveiliging netwerk- en informatiesystemen), implementing the NIS-1 Directive (EU) 2016/1148. A draft act implementing the NIS-2 Directive (the Cybersecurity Act; Cyberbeveiligingswet) was published in May 2024 and was open for consultation until 1 July 2024.
6. Trade, anti-trust and competition
The use of AI has several consequences for trade, anti-trust and competition law as well. Notable examples are algorithmic pricing and deceptive design patterns.
6.1. AI related anti-competitive behaviour
Algorithmic collusion
AI is anticipated to transform market competition, potentially resulting in new types of anti-competitive practices such as algorithmic collusion. Unlike traditional collusion, which involves direct communication between competitors, the Commission defines “algorithmic collusion” as any form of anti-competitive agreement or coordination among competing firms facilitated or implemented through automated systems.
AI can enable price adjustments and interactions with competitors on platforms, which may foster anti-competitive behaviour. Additionally, dominant companies might use AI to exclude competitors from the market by giving preferential treatment to their own products and services, and manufacturers might use AI to monitor, track, and control resale prices.
These actions could potentially violate Article 6 of the Dutch Competition Act (Mededingingswet) and Article 101 of the Treaty on the Functioning of the European Union (TFEU).
6.2. Domestic regulation
Competition is regulated by the Dutch Competition Act and the TFEU. The Netherlands Authority for Consumers and Markets (Autoriteit Consument en Markt; ACM) is the supervisory authority for competition and published a position paper on the oversight of algorithms in 2020, which companies may consult.
7. Domestic legislative developments
7.1. Proposed and/or enacted AI legislation
Although certain guidelines at a domestic level have been published, currently no local laws regulating AI systems have been implemented in the Netherlands, nor are there any legislative initiatives ongoing or announced. At the European level, the EU AI Act is expected to enter into force later in 2024. As mentioned previously, the EU AI Act introduces comprehensive regulations for AI applications across the EU.
7.2. Proposed and/or implemented Government strategy
The Dutch government is continuously taking steps to ensure that the Netherlands remains at the forefront of developing innovative and responsible AI. Some key initiatives include:
- Focusing on Responsible AI Applications: The Dutch AI Coalition (NL-AIC) collaborates with government, businesses, educational institutions, research organisations, and societal groups to develop socially responsible AI applications.
- AiNed Program: This public–private multi-year program, part of the National Growth Fund, aims to position the Netherlands among leading AI nations. It contributes to economic recovery and growth, strengthens the economic base, and promotes human-centred and responsible AI use.
- Government Support for AI in Education: The Ministries of Economic Affairs and Climate Policy (EZK) and Education, Culture, and Science (OCW) are significantly investing in the National Education Lab AI (NOLAI). This initiative fosters collaboration among teachers, scientists, and companies to advance responsible AI innovations in education.
- Safe AI Investments: Significant investments have been made in safe AI through the Innovation Centre for Artificial Intelligence (ICAI), which conducts extensive experimentation and research with a total budget of EUR 87 million.
- Supervision: The Dutch DPA has established the Algorithms Coordination Directorate (DCA) to enhance the coordination of algorithm supervision.
Earlier this year, as one of the first EU Member States, the Dutch government outlined its comprehensive vision on GenAI. This vision is rooted in a values-driven approach, aligning with the Value-Driven Digitalisation Work Agenda, the ‘Coalities voor de Digitale Samenleving’ Agenda, the Digital Economy Strategy, and the EU’s coordinated plan on AI.
Domestic oversight
The Dutch DPA is currently the competent regulator for algorithms in the Netherlands. As algorithm regulator, the DPA focuses on: (i) identifying and analysing cross-sectoral and overarching risks and effects of algorithms and sharing knowledge about them; (ii) optimising (existing) collaborations with other supervisory bodies, market supervisors and state inspectorates, and mapping overarching supervision in the field of algorithms and AI; and (iii) arriving at joint standard implementation and creating an overview of legal and other frameworks (by means of guidance). The algorithm regulator cannot (yet) exercise specific investigative powers.
Supervision and oversight of the EU AI Act is still to be decided. The Dutch DPA and the Dutch Authority for Digital Infrastructure (RDI) proposed a model for AI oversight in the Netherlands to the Dutch government in June 2024, placing the Dutch DPA at the centre while maintaining a role for sectoral authorities. Adequate cooperation between supervisory authorities is considered paramount in the supervision of the EU AI Act. The proposal was made based on discussions with the more than 20 supervisory authorities in the Netherlands. Key recommendations include:
- AI supervision in various sectors should be aligned as much as possible with existing product supervision. The supervision of high-risk AI products that already require CE marking can remain the same. For example, the Netherlands Food and Consumer Product Safety Authority (NVWA) will continue to inspect toys, even if they contain AI, and the Health and Youth Care Inspectorate (IGJ) will supervise AI in medical devices.
- The supervision of high-risk AI applications for which no CE marking is currently required should largely lie with the DPA, as the so-called market surveillance authority, to ensure that there is sufficient specialist knowledge and expertise, and because companies developing such high-risk AI often do not do so for only one sector. The market surveillance authority should ensure that AI placed on the market actually meets requirements in areas such as training AI, transparency and human control.
- The supervisory authorities propose two exceptions for the market surveillance authority approach:
  - Financial sector: the Dutch Authority for the Financial Markets (AFM) and De Nederlandsche Bank (DNB) will handle market surveillance; and
  - Critical infrastructure: this will be subject to supervision of the Human Environment and Transport Inspectorate (ILT) and the RDI.
- Additionally, the market supervision of AI systems used for judicial purposes must be set up in such a way that the independence of judicial authorities is ensured.
- The supervisory authorities propose that the DPA be responsible for supervising prohibited AI within the meaning of Article 5 of the EU AI Act.
- The supervisory authorities have urged the Dutch government to quickly appoint the relevant supervisory authorities concerned, so that guidance, enforcement and practical preparation for these new tasks can begin in time.
8. Frequently asked questions
8.1. How do we know which, if any, provisions of the EU AI Act will apply to my organisation, and how can we best prepare?
The EU AI Act has a risk-based approach. Your organisation should determine which category of risk applies to the AI system concerned. Based on that assessment, it will become clear which specific regulations may apply to the use of the AI system and what related actions should be taken.
8.2. Which supervisory authorities will oversee compliance with the EU AI Act in the Netherlands?
This is still to be formally decided. It is expected that the Dutch DPA will continue to be tasked with a coordinating role and with more general oversight of the responsible use of algorithms, acting in close cooperation with other, sector-specific supervisory authorities. As a result, companies will likely be confronted with several supervisory authorities, depending on the sectors in which they are active. See Section 7.2 for further details.
8.3. When will the EU AI Act be fully applicable?
The EU AI Act enters into force on the twentieth day after its publication in the Official Journal, expected later in 2024. Its provisions then apply in phases:
- 6 months: Ban on AI systems with unacceptable risk.
- 12 months: Governance rules and obligations for GenAI become applicable.
- 24 months: The majority of rules in the EU AI Act take effect, including those for the high-risk use cases listed in Annex III.
- 36 months: Obligations for systems classified as high-risk because they are subject to specified EU product safety legislation apply.