Ireland
Artificial Intelligence
Introduction
At both a European and international level, Ireland continues to showcase its rapidly growing and evolving artificial intelligence (AI) market. Ireland’s low corporation tax rate, its position as the sole English-speaking common-law jurisdiction in the European Union (EU), and its regulatory environment have fuelled AI-driven investment.
Ireland is fast becoming a key operator in the emerging AI space and significant investments have been made in this regard:
- AI is being deployed at scale in Ireland, with many organisations implementing AI solutions. Giants such as OpenAI (the maker of ChatGPT) and Anthropic (the maker of Claude) have established their EU offices in Dublin, Ireland.
- The Irish Government’s (Government) national AI strategy, “AI – Here for Good: National Artificial Intelligence Strategy for Ireland” (National AI Strategy), published in August 2023, sets out how Ireland can be an international leader in using AI to benefit the Irish economy and society.
- While the governance of AI still falls within Ireland’s existing domestic laws, the EU Artificial Intelligence Act (AI Act) will soon be applicable in Ireland. Like other EU Member States, Ireland will soon adopt national implementing legislation and decide which regulator will be tasked with oversight of this significant new regulatory area.
- Beyond implementing other EU legislation (such as the Digital Services Act, the AI Act, and pieces of legislation contained in the EU Digital Reforms Package), Ireland is considering domestic legislation to permit the use of facial recognition technologies by the Irish police force (an Garda Síochána), with the relevant bill to be reviewed by the Irish Parliament (Oireachtas).
- Ireland has established a national centre for applied AI, known as CeADAR. CeADAR acts as a bridge between applied AI research and data analytics, facilitating their commercial deployment.
1. Constitutional law and fundamental human rights
The main sources of law in Ireland are the 1937 Constitution of Ireland (Constitution), EU law (including the Charter of Fundamental Rights), domestic legislation, and decisions of the Irish courts.
1.1. Domestic constitutional provisions
Although currently untested, existing constitutional protections apply to the use of AI. As in the case of other emerging technologies, constitutionally protected rights such as privacy and other fundamental rights will impact on AI.
Right to privacy
Although the Constitution does not explicitly specify a right to privacy, there exists an unenumerated right to privacy under Article 40.3.1 of the Constitution. In McGee v. Attorney General [1974] IR 284, the court recognised a right to privacy as being inherent to every person due to their human personality. Kennedy & Ors v. Ireland [1984] 1 IR 587 confirmed that the right to damages for breaches of the right to privacy may not be limited to claims against the State alone, allowing for claims against others.
The right to privacy is not absolute. Irish case law has established that the right is subject to the constitutional rights of others, to the requirements of public order, public morality, and the common good (for example Ryan v. The Attorney General [1965] 1 IR 294 and Norris v. The Attorney General [1984] 1 IR 36).
Freedom of expression, assembly, and association
The rights to freedom of expression, assembly, and association are protected by Article 40.6.1 of the Constitution. People have the right to express their convictions and opinions, to assemble peaceably and without arms, and to form associations and unions. These freedoms are subject to limitation based on public order and public morality.
The Irish courts, in The Irish Times v. Ireland [1998] 1 IR 359, held that this extends to the dissemination of information, as well as the expression of convictions and opinions, and is primarily concerned with public activities. The courts have been reluctant to limit freedom of expression in Ireland (Ryan v. The Attorney General and Norris v. The Attorney General are examples of this).
Article 40.1 provides explicit constitutional protection that all citizens shall, as human persons, be held equal before the law, which operates as a safeguard against discriminatory treatment.
These rights should be considered when deploying AI, which could inadvertently create or reinforce forms of bias and discrimination.
1.2. Human rights decisions and conventions
The European Convention on Human Rights Act 2003 gave full effect in Irish law to the European Convention on Human Rights (ECHR), including Article 8 (right to respect for private and family life); Article 9 (freedom of thought, conscience, and religion); Article 10 (freedom of expression); Article 11 (freedom of assembly and association); and Article 14 (prohibition of discrimination).
The Charter of Fundamental Rights of the European Union also applies in Ireland and is relevant whenever the State is implementing EU law. While the Charter is not explicitly referred to in the Constitution, Article 29 of the Constitution recognises the primacy of EU law.
2. Intellectual property
Ireland’s robust Intellectual Property (IP) regime, like those of all other EU Member States, is facing novel issues spawned by AI in the areas of patents, copyright, and trade secrets.
2.1. Patents
Patents in Irish law are governed by the Patents Act 1992, as amended (the Patents Act). Under section 9(1), an invention is patentable if “it is susceptible of industrial application, is new and involves an inventive step”. However, computer programs are not considered to be inventions under section 9(2), meaning much of the scope for patentability relating to AI is untested under Irish law.
There is no decision that a machine can be classified as an inventor under the Patents Act. “Inventor” is defined as “the actual deviser of an invention”, which appears to leave the question open; however, section 80, relating to co-ownership of patents, refers to co-owners as “two or more persons”. This aligns with the decisions of the European Patent Office (EPO) in J8/20 and J9/20, which concerned an application to designate the AI system DABUS as an inventor, and in which the Legal Appeal Board of the EPO confirmed the position under the European Patent Convention (EPC) that an inventor must be a person with legal capacity. Since these decisions, the UK Supreme Court unanimously came to the same conclusion in Thaler (Appellant) v. Comptroller-General of Patents, Designs and Trademarks (Respondent) [2023] UKSC 49. In the absence of an Irish judgment on this issue, the UK Supreme Court decision has persuasive value in Ireland, not least because the definition of “inventor” in the UK Patents Act 1977 is the same as the Irish definition contained in the Patents Act.
2.2. Copyright
Authorship of AI-generated works has become increasingly relevant due to the rise in popularity of technologies such as ChatGPT.
The Copyright and Related Rights Act 2000 (CRRA) governs copyright in Ireland. It protects copyright in a “computer program”, defined as “a program which is original in that it is the author’s own intellectual creation and includes any design materials used for the preparation of the program”.
Under section 2 of the CRRA, a “computer-generated” work is a work that is generated by a computer in circumstances “where the author of the work is not an individual”. The author of this type of work is the person by whom the arrangements necessary for the creation of the work are undertaken. Section 21(f) states: “In this Act, “author” means the person who creates a work and includes: (f) in the case of a work which is computer-generated, the person by whom the arrangements necessary for the creation of the work are undertaken”. This legislation was clearly not designed to deal with the AI-generated works which we have today.
The CRRA appears to be premised on a legal entity model, i.e., one that implies the existence of natural persons instructing the entity behind the work, although it may also have been intended to capture the concept of a machine authoring a work as opposed to a human. This approach of attributing authorship to the person responsible for facilitating the creation of a work could serve as a means of copyrighting AI-generated works. However, as AI becomes more advanced, issues may arise where systems act autonomously rather than on the instructions of humans, which could make section 21(f) difficult to reconcile with the concept of machine learning. An absence of case law means that the legislation has yet to be tested. Notably, this Irish provision is seen as lying outside the EU’s copyright acquis, which generally takes an anthropocentric approach to authorship, requiring human involvement for copyright to vest in a work.
Text and data mining
Text and data mining (TDM) is an automated process that selects and analyses large amounts of data for purposes such as extraction, pattern recognition, and semantic analysis. It is vital to the sourcing, compiling and use of the enormous datasets used to train AI models. Article 4 of Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market (CDSM Directive) deals with commercial TDM. It was transposed into Irish law by Regulation 4 of S.I. No. 567/2021 — European Union (Copyright and Related Rights in the Digital Single Market) Regulations 2021 (Irish CDSMD Regulations). The CDSM Directive provides that the reproduction of copyright works for TDM, even for commercial purposes, does not require the authorisation of the rights holder unless the rights in those works have been expressly reserved “in an appropriate manner” against this use. For works made available online, an “appropriate manner” includes machine-readable means such as metadata and the terms and conditions of a website or service; for other works, the reservation must be communicated to those who have lawful access to the work.
The text of the TDM exception diverges between the Irish CDSMD Regulations and the CDSM Directive. Regulation 4 of the Irish CDSMD Regulations provides for “authors” to expressly reserve the use of their work for TDM purposes, whereas the CDSM Directive provides for “rights holders” to do so. This could lead to legal ambiguity, since “rights holders” is the more encompassing term: where, for example, a company owns a copyrighted work created by a contractor, the company is the rights holder and the contractor is the author. Under Regulation 4 of the Irish CDSMD Regulations, only the author can reserve their rights against TDM, not downstream rights holders.
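In practice, whether the TDM exception is available often turns on whether the miner checks for, and honours, a machine-readable reservation before reproducing a work. The short Python sketch below is purely illustrative: it assumes that a site signals its reservation through robots.txt (one possible “appropriate manner”; the CDSM Directive does not prescribe any particular protocol), and the bot name and URL are hypothetical.

```python
# Illustrative sketch only: check a site's robots.txt before crawling a page
# for TDM purposes. robots.txt is assumed here as one machine-readable way a
# rights holder might reserve rights; it is not mandated by the Directive.
from urllib import robotparser
from urllib.parse import urlparse

def may_mine(url: str, user_agent: str = "example-tdm-bot") -> bool:
    """Return True if the site's robots.txt does not disallow fetching `url`."""
    parsed = urlparse(url)
    robots_url = f"{parsed.scheme}://{parsed.netloc}/robots.txt"
    rp = robotparser.RobotFileParser()
    rp.set_url(robots_url)
    rp.read()  # fetch and parse the site's robots.txt
    return rp.can_fetch(user_agent, url)

if __name__ == "__main__":
    # Hypothetical page; only mine works to which you have lawful access.
    print(may_mine("https://example.com/articles/sample-page"))
```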
2.3. Trade secrets and confidentiality
Due to the difficulty of patenting abstract ideas, acquiring meaningful patents on AI systems is not straightforward, and trade secrets have become increasingly important in protecting the IP of AI systems and AI-generated works. Some companies are using trade secret protection to protect their AI-related IP. Trade secrets are governed by common law and the European Union (Protection of Trade Secrets) Regulations 2018, whose provisions mirror the definition of “trade secret” contained in the equivalent EU Directive (2016/943). Under Irish law, for an algorithm to be classified as a trade secret, three essential criteria must be met:
- it must be actually secret;
- it must have actual or potential commercial value; and
- there must be reasonable efforts made to keep it a secret.
Trade secrets may become increasingly important, due to the commonly held position that AI-generated works do not attract copyright protection. If AI-generated works cannot be protected by copyright, trade secrets could be utilised to prevent disclosure of elements, such as source code, that were created by AI.
2.4. Notable cases
No seminal cases related to AI have yet been adjudicated in the Irish courts. As Ireland operates within a common law framework, legal commentators are keenly monitoring developments and judgments in prominent AI-related cases from other common law jurisdictions.
3. Data
3.1. Domestic data law treatment
The core piece of Irish domestic legislation relevant to the use and application of AI is the Data Protection Act 2018 (DPA 2018), which implemented certain operational and discretionary national matters as required by the General Data Protection Regulation (GDPR). In addition, the European Communities (Electronic Communications Network and Services) (Privacy and Electronic Communications) Regulations 2011 provide for data privacy in electronic communications.
3.2. General Data Protection Regulation
Several provisions of the GDPR apply to the governance of AI in Ireland. Article 35 requires those processing personal data “using new technologies” to carry out a data protection impact assessment where that processing is “likely to result in a high risk to the rights and freedoms of natural persons”. Those developing AI to process personal data are therefore likely to need to conduct data protection impact assessments.
While the CDSM Directive and the Irish CDSMD Regulations allow for exceptions for reproduction for TDM purposes, the legislation specifically disallows the processing of personal data unless it complies with the GDPR.
There is a prohibition on individuals being subject to a decision based solely on automated processing (Article 22), which is relevant to “profiling”. Similarly, the overarching principle of transparency (Article 5) obliges controllers to be clear about the processing of personal data undertaken, and controllers must build data protection by design and by default into new technologies (Article 25).
Irish context
The GDPR allows for derogations to be made by EU Member States. Irish law may, in certain circumstances, restrict the scope of data subjects’ rights and controllers’ related obligations under Articles 12 to 22 and 34 of the GDPR, and under Article 5 insofar as it relates to those rights and obligations. The DPA 2018 provides that this can be done when processing personal data for archiving in the public interest, scientific or historical research, or statistical purposes (section 61), or when processing for purely journalistic purposes or for academic, artistic, or literary expression (section 43).
Further, Article 8 of the GDPR requires Member States to set a minimum age at which online service providers can rely on a child’s own consent to process their personal data. The DPA 2018 sets this age of digital consent at 16. This means that companies deploying AI algorithms may need to obtain the consent of a child’s parent or guardian in order to rely on consent as the legal basis for processing the personal data of a child under the age of 16. The protection of children’s data has become a priority for data protection regulators: in 2023, the Irish Data Protection Commission (DPC) imposed a significant fine of EUR 345 million on TikTok for the misuse of children’s data, and the European Data Protection Board (EDPB) is expected to issue guidelines in 2024 on the processing of children’s data. As such, deployers of AI systems will be under increased regulatory scrutiny regarding parental consent and parental controls, age assurance and age verification, children’s privacy, and content regulation for children (including under Ireland’s Online Safety and Media Regulation Act 2022).
3.3. Open data and data sharing
The EU Digital Reforms Package is intended to make machine learning, AI, and the Internet of Things (IoT) more accessible, address emerging barriers to accessing publicly funded information, and stimulate digital innovation, especially in relation to AI.
Ireland has implemented a suite of legislation based on the EU Digital Reforms Package which will facilitate data sharing and will impact public bodies and private companies alike. The Government has implemented the European Union (Open Data and Re-use of Public Sector Information) Regulations 2021 to give effect to the EU Open Data Directive 2019/1024.
These regulations give legislative effect to the obligation on Member States to make high-value datasets available for re-use free of charge, in machine-readable formats, via APIs and, where relevant, as a bulk download. High-value datasets offer significant benefits for society, the environment, and the economy, as they are suitable for developing applications and value-added services.
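As an illustration of what “machine-readable formats and APIs” looks like in practice, the following Python sketch queries an open data portal for dataset metadata. It assumes the portal exposes the widely used CKAN package_search endpoint (Ireland’s portal, data.gov.ie, is understood to be CKAN-based); the search term and field names are examples only.

```python
# Illustrative sketch only: query an open data portal for dataset metadata,
# assuming it exposes the standard CKAN `package_search` API endpoint.
import json
import urllib.parse
import urllib.request

PORTAL = "https://data.gov.ie"  # assumed CKAN-based open data portal

def search_datasets(query: str, rows: int = 5) -> list[dict]:
    """Return basic metadata (name and title) for datasets matching `query`."""
    params = urllib.parse.urlencode({"q": query, "rows": rows})
    url = f"{PORTAL}/api/3/action/package_search?{params}"
    with urllib.request.urlopen(url) as resp:
        body = json.load(resp)
    return [
        {"name": d.get("name"), "title": d.get("title")}
        for d in body.get("result", {}).get("results", [])
    ]

if __name__ == "__main__":
    for dataset in search_datasets("transport"):
        print(dataset["name"], "-", dataset["title"])
```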
The EU’s Data Act came into force on 11 January 2024, although most of its provisions will apply in Ireland from 12 September 2025. The Data Act will facilitate more extensive data sharing across the EU, which could be particularly beneficial for Irish small and medium-sized enterprises (SMEs), as they will have improved access to the data necessary for business growth and innovation. The requirements of the Data Act will also apply where data derived from IoT devices is used to train AI systems.
Finally, the European Health Data Space Regulation (EHDS) will become effective in Ireland from 2026. Irish citizens will have more control over their health data with its enactment. Part of its aim is to foster the development of new AI-based healthcare products and services, which significantly improve patient safety and wellbeing, while preserving privacy and security.
3.4. Biometric data: voice data and facial recognition data
The GDPR lists biometric data processed for the purpose of uniquely identifying an individual as a special category of personal data. As with all special category personal data, the GDPR is more restrictive about the processing of biometric data. The DPA 2018 does not impose additional restrictions on the processing of biometric data, including voice data and facial recognition data, beyond those of the GDPR.
The Garda Síochána (Recording Devices) (Amendment) Bill 2023 provides for smart body-cameras to be worn by an Garda Síochána members in certain circumstances. This may facilitate automatic facial recognition, automatic profiling, and tracking of individuals. The power to utilise biometric identification has been specifically set out in this bill as there was previously no legislative basis within the DPA 2018 for an Garda Síochána to process biometric data.
Under the EU AI Act, the use of real-time remote biometric identification in public spaces for law enforcement is prohibited, except where strictly necessary for specific objectives. These include targeted searches for victims of abduction, trafficking or sexual exploitation, or for missing persons; the prevention of threats to life or safety, or of a genuine threat of a terrorist attack; and the identification of suspects of serious crimes. Outside of these circumstances, AI systems that employ biometric data are generally considered high-risk AI systems under the AI Act.
4. Bias and discrimination
4.1. Domestic anti-discrimination and equality legislation treatment
Discrimination in AI can stem from biased training data, skewed algorithms, a lack of diversity in data, and the unconscious biases of those implementing and deploying the AI itself. Ireland’s anti-discrimination regime is well established and implements the Equal Treatment Directive 76/207. The Equal Status Acts 2000–2018 prohibit discrimination in the provision of goods and services, accommodation and education. These Acts can protect individuals from discrimination where, for example, AI is used to select tenants for residential properties or applicants for schools and colleges.
In the employment sphere, the Employment Equality Act 1998 was enacted to affirm the European principles of non-discrimination. In Ireland this applies to discrimination on the basis of nine equality grounds which, by law, must be respected in employment and recruitment processes. AI systems are increasingly being adopted by employers in the recruitment process to increase efficiency and reduce costs. However, there is the potential for employees and job applicants to bring a range of claims against employers, where AI has made or influenced a biased or discriminatory decision affecting their employment. Such claims will also be made easier with the introduction of the proposed EU AI Liability Directive.
Employee monitoring through the use of AI systems is subject to GDPR transparency requirements. Businesses that have increased their use of monitoring software allowing employers to view employees’ messages and emails, webcam footage, microphone input and keystrokes must ensure that they are fully transparent in deploying such AI monitoring software, in order to avoid this level of monitoring being considered a violation of an employee’s right to privacy under the GDPR.
5. Cybersecurity and resilience
5.1. Domestic technology infrastructure requirements
NIS2, DORA and Cyber Resilience Act 2022
AI and cybersecurity must go hand-in-hand, as AI is a new frontier for bad actors to exploit. Ireland’s cybersecurity framework is driven primarily by EU legislation, which includes the NIS2 Directive, the EU Digital Operational Resilience Act (DORA) and the Cyber Resilience Act 2022 (CRA).
Ireland transposed the original NIS Directive in 2018, and the NIS2 Directive, which broadens the range of regulated entities, is due to be transposed into Irish law. The existing regime establishes specific requirements for Operators of Essential Services (OESs) and Digital Service Providers (DSPs), who need to ensure that their business operations have robust cybersecurity systems in place. Organisations are encouraged to make use of machine learning or AI systems to enhance their cybersecurity capabilities.
DORA focuses on addressing digital operational resilience for the financial sector. DORA introduces specific obligations for financial entities and ICT service providers including establishing a risk management framework and conducting resilience testing. AI could be leveraged in this respect by entities to enhance the detection of cyber threats, automate responses to incidents and improve their overall cyber resilience. This regulation is already in force and will apply fully from January 2025.
The CRA, a legal framework that sets out cybersecurity requirements for hardware and software products with digital elements placed on the EU market, is not yet applicable in Ireland. Once it applies, the CRA’s remit will extend to AI systems that are integrated into products that pose cybersecurity risks.
6. Trade, anti-trust and competition
Data is a raw material for deploying AI, and control over its supply could generate a market-distorting advantage if left unregulated.
6.1. AI related anti-competitive behaviour
Ireland’s regulatory regime is beginning to adapt to potential AI-related market abuses. The Competition and Consumer Protection Commission, in a consultation response on Ireland’s AI Strategy published in 2019, stated that, in the case of personalised pricing algorithms, there should be specific information requirements, mirroring the European Commission’s New Deal for Consumers, to inform the consumer “that the price of the goods, digital content, digital service or service was personalised on the basis of automated decision-making”. This is reflected in Schedule 3 of the Consumer Rights Act 2022.
6.2. Domestic regulation
The Competition Act 2002 (as amended) (2002 Act) prohibits anti-competitive behaviour by undertakings in Irish law. Section 4 of the 2002 Act is based on Article 101 of the Treaty on the Functioning of the European Union (TFEU) and applies where undertakings enter into anti-competitive agreements, engage in concerted practices or make anti-competitive decisions. Section 5 of the 2002 Act, which is based on Article 102 of the TFEU, applies where an undertaking, acting unilaterally, abuses its dominant position in trade for any goods or services. Whether the 2002 Act or the TFEU is invoked depends on the territorial impact of the arrangements.
“Algorithmic pricing”, i.e., the automated recalibration of prices based on internal and external factors such as market data or competitors’ prices, may constitute anti-competitive behaviour or a concerted practice. As it currently stands, if AI is used to conceal or give effect to anti-competitive agreements, this will fall foul of national legislation in the normal way (i.e., as an infringement of section 4).
7. Domestic legislative developments
7.1. Proposed and/or enacted AI legislation
AI in Ireland will mainly be governed by the AI Act. Ireland will enact legislation related to the AI Act, including nominating a regulator who will oversee this area. The regulation of AI will also be affected by multiple other pieces of legislation and elements of the EU Digital Reforms Package such as NIS2, DORA, the Cybersecurity Act and the Data Act, which will have an impact on AI systems and their component parts.
Another key piece of legislation that will govern AI, and in particular AI-driven recommender systems, is the Digital Services Act (DSA), Regulation (EU) 2022/2065. The DSA was given further effect in Ireland by the Irish Digital Services Act 2024, which was signed into law on 11 February 2024, and Coimisiún na Meán has been appointed as the Digital Services Coordinator and the lead competent authority for enforcing the DSA in Ireland. While the DSA primarily focuses on intermediary services and online platforms, it also has implications for AI: it requires Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) to conduct risk assessments to identify and mitigate risks associated with AI technologies used on their platforms. In particular, the regulation of AI recommender systems pursuant to the DSA will be a key component of Ireland’s AI regulation.
7.2. Proposed and/or implemented Government strategy
The Government has established an AI Advisory Council, an independent task force to provide expert advice to the Government, with a specific focus on building public trust and promoting the development of trustworthy, person-centred AI. The Advisory Council is made up of 14 members. Its role includes providing expert guidance, insights, and recommendations on AI in response to specific requests from the Government, as well as developing and delivering its own workplan of advice on issues in AI policy.
Ireland is committed to fostering the growth of its technology sector, and the National AI Strategy has set a target for 75% of Irish businesses to be using AI by 2030. Several goals outlined in the National AI Strategy have been achieved so far:
- Ireland’s “AI Ambassador” was appointed, and an Enterprise Digital Advisory Forum was established which focuses on industry adoption of AI;
- an AI Innovation Hub was established to offer support and services to SMEs, such as specialist AI training and project feasibility work;
- Ireland has joined the Global Partnership on AI, a multi-stakeholder initiative originating in the Organisation for Economic Co-operation and Development (OECD); and
- the National Standards Authority of Ireland (NSAI) has published the AI Standards and Assurance Roadmap.
In January 2024, the Government published interim guidelines on the use of AI by public sector bodies, which we discuss below.
8. Frequently asked questions
8.1. Is there anything from an ethical perspective that we should be aware of in our use of AI?
The rapid pace at which generative AI systems have become ubiquitous in the business strategies of many organisations raises questions about the potential deployment of these AI systems for unethical purposes by certain actors. AI has the potential to amplify existing biases in human decisions. As such, ethical safeguards are needed to prevent biases from persisting in AI systems.
A theme that runs through the entire National AI Strategy is the Government’s commitment to an ethical approach to AI in both the private and public sectors. Further, the Government’s interim guidelines on the use of AI by public sector bodies provide that ethical considerations should be built into AI systems and outlined in user manuals, and that an ethics subject-matter expert should be included in the design process for an AI system. In the interim guidelines, the Government sets out seven principles to which public sector bodies must be committed when employing AI systems:
- human agency and oversight;
- technical robustness and safety;
- privacy and data governance;
- transparency;
- diversity, non-discrimination and fairness;
- societal and environmental well-being; and
- accountability.
The guidelines recommend that a thorough examination of possible biases, through consultation with various stakeholders, is necessary before an AI system is introduced by a public sector body. They further recommend that, whether the AI system is developed in-house or procured from a third-party vendor, testing for accuracy and bias is essential, and that a dataset which reflects the diversity of potential end users in real-life scenarios should be employed when testing the AI system.
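By way of illustration only (the interim guidelines do not prescribe any particular testing methodology), the following Python sketch shows one simple bias check: comparing the rate of positive outcomes a model produces across demographic groups in a test dataset. The group labels, sample data and threshold are hypothetical.

```python
# Illustrative sketch of a demographic parity check, not an official method
# from the interim guidelines. It compares positive-outcome rates produced by
# a model across demographic groups; data and threshold below are assumptions.
from collections import defaultdict

def selection_rates(predictions: list[int], groups: list[str]) -> dict[str, float]:
    """Positive-outcome rate per demographic group (predictions are 0/1)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rates: dict[str, float]) -> float:
    """Largest difference in selection rate between any two groups."""
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    preds  = [1, 0, 1, 1, 0, 1, 0, 0]                    # model outputs on test data
    groups = ["A", "A", "A", "B", "B", "B", "B", "B"]    # demographic labels
    rates = selection_rates(preds, groups)
    print(rates, "gap:", round(parity_gap(rates), 2))
    # A gap above an agreed threshold (e.g. 0.2, an illustrative figure)
    # would prompt further investigation before deployment.
```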
8.2. What is the role of boards of directors in respect of AI?
As AI becomes increasingly rooted in the business operations of many companies, directors will need to be aware of their new obligations under the AI Act and their existing obligations under the Companies Act 2014, to ensure compliance with the law when deploying AI in their businesses.
A survey conducted by the Institute of Directors Ireland of its members in late 2023 found that over 50% of directors and senior executives did not have a board-approved AI and cybersecurity strategy in place. Over 75% were not aware of the extensive regulatory scope and effect of the AI Act, and over 60% did not use AI in their organisations.
To comply with the AI Act, directors need to be aware of the broad definition given to an AI system under the AI Act and whether their systems fall within its scope. Risk assessments will need to be carried out to ascertain whether an AI system is a prohibited or high-risk AI system, a general-purpose AI model, or an AI system with minimal or limited risk, as the AI Act prescribes differing rules in respect of each case. If an AI system is deemed high-risk, directors will need to consider the regulatory compliance regime, prepare formal risk assessments and data protection impact assessments, and ensure that the requirements of human oversight are implemented.
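As a purely illustrative sketch of how an organisation might record this first-pass triage internally (not a legal determination under the AI Act), the risk tiers below come from the Act itself, while the yes/no flags are deliberately simplified assumptions that would need to be substantiated by legal analysis of the system and its use case.

```python
# Simplified, illustrative triage record only -- not legal advice and not an
# application of the AI Act's actual tests. The tiers mirror the categories
# named in the Act; the boolean flags are reduced placeholders for analysis
# that would in practice be carried out with counsel.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited practice"
    HIGH_RISK = "high-risk AI system"
    GPAI = "general-purpose AI model"
    MINIMAL_OR_LIMITED = "minimal or limited risk"

@dataclass
class AISystemRecord:
    name: str
    uses_prohibited_practice: bool   # e.g. social scoring (simplified flag)
    in_high_risk_use_case: bool      # e.g. recruitment, credit scoring
    is_general_purpose_model: bool

def triage(record: AISystemRecord) -> RiskTier:
    """Record a first-pass internal classification for follow-up by counsel."""
    if record.uses_prohibited_practice:
        return RiskTier.PROHIBITED
    if record.in_high_risk_use_case:
        return RiskTier.HIGH_RISK
    if record.is_general_purpose_model:
        return RiskTier.GPAI
    return RiskTier.MINIMAL_OR_LIMITED

if __name__ == "__main__":
    cv_screener = AISystemRecord("CV screening tool", False, True, False)
    print(cv_screener.name, "->", triage(cv_screener).value)
```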
The Companies Act 2014 stipulates the fiduciary duties of directors, providing that company directors have a duty to act in good faith in the best interests of the company, to act honestly and responsibly in conducting the affairs of the company, and to exercise due care, skill and diligence. As AI systems can have far-reaching impacts on businesses, a board’s decision for (or against) deploying an AI system will be relevant in assessing compliance with these duties. Directors who choose to implement an AI system will need to rigorously mitigate its risks, including by considering bias and issues in the structure and quality of the data employed, to ensure that the decision is taken in the best interests of the company.
8.3. In the context of commercial contracts, how should legal liability relating to AI be allocated between parties?
There are a number of specific challenges that arise in contracting for AI technology, including the fact that it is difficult for AI vendors to offer a performance guarantee as the contents and performance of AI systems depend heavily on large volumes of training data. The development of AI systems is also inherently a process of trial and error, and the outputs from AI systems may not be 100% accurate.
Legal liability, and specifically the allocation of liability between the parties, is frequently identified as a sticking point between customers and AI vendors, and it needs to be addressed in its many guises in a contract. Many AI vendors are start-up companies which may not have sufficient capital to stand behind indemnity and liability clauses. While dependent on the circumstances, legal advice should be taken before entering into an agreement with an AI vendor. Customers should seek certain indemnities and warranties from vendors, especially relating to IP and data protection laws, but these will often be rejected or not available without significant conditions.
The most significant issue regarding warranties and liability in relation to AI is the black-box nature of AI systems, whereby, for example, the weightings used in the convolutional neural networks employed in machine learning are dynamic and in constant flux. It may be difficult to settle upon warranties and liabilities when both parties accept that there may be a lack of explainability in the use of AI systems or of certainty about their outputs.
One way in which AI liability could be addressed is in the technical documents and instructions accompanying AI systems; disputes would then centre on whether users of AI strayed outside the parameters of the system’s intended use. Contracts incorporating AI systems may increasingly concentrate on this element of AI as a product.
With the increased scrutiny on data protection, and with the volume of data used by AI systems, due diligence on adherence to data protection laws is crucial not least given the extensive fines possible under the GDPR. A warranty/indemnity should be sought by a customer that the vendor (and its AI system) will comply with all applicable data protection laws (though this might prove difficult to obtain from many vendors). There is increasingly an issue with the retention and return of data, especially where that data is personal data, as retention of datasets is a necessary part of the evolution of AI systems. If the data is to be retained, this increased risk may need to be reflected in a decreased purchase price.
Where legal liability could be at issue, then applicable insurance cover comes to the fore. While many vendors will have cyber insurance, this may only cover instances of a hack or a deliberate data breach, rather than damage caused by the AI system itself. It is important to look at the specificities of the cover and ensure it is adequate for the agreement involved, which would require legal advice.