Oct 2024

Canada

Law Over Borders Comparative Guide:

Artificial Intelligence

Introduction

Canada was the first country in the world to adopt a national artificial intelligence (AI) strategy. According to Stanford University’s most recently published Global AI Vibrancy Tool, Canada ranked fifth of 29 countries, and third among G7 nations, in AI research, development and economy. Government support for the AI sector has been strong and consistent, as seen with the Pan-Canadian Artificial Intelligence Strategy at the Canadian Institute for Advanced Research (CIFAR) and Canada’s Global Innovation Clusters program, both launched in 2017, as well as the government’s recent decision to invest CAD 2.4 billion to increase Canada’s AI computing capacity. Two of the three “godfathers” of modern AI research, Geoffrey Hinton and Yoshua Bengio, are Canadian. AI research in the academic sphere is concentrated at three National AI Institutes: Amii (Edmonton), Mila (Montréal), and the Vector Institute (Toronto). Private sector AI investment is concentrated in vibrant ecosystems based in Toronto and Montréal. According to Deloitte’s report “Impact and opportunities: Canada’s AI ecosystem — 2023”, Canada’s cohort of AI talent grew by an average of 38% annually over the preceding five years, outpacing the United States, United Kingdom, Germany, France and Sweden, while Canada ranked third among G7 countries in per capita venture capital (VC) investment in AI, trailing only the United States and United Kingdom.

1. Constitutional law and fundamental human rights

1.1. Domestic constitutional provisions

In Canada, the powers for the Federal Government and the provinces/territories are defined in the Constitution Acts, 1867 and 1982, which are part of the “Constitution of Canada” (Constitution). The Constitution is the supreme law of Canada, and any law that is inconsistent with its provisions is of no force or effect, to the extent of the inconsistency.

The formal structure of the Constitution suggests that the various legislative bodies are each confined to their own jurisdiction and act independently of each other. However, effective policies often require joint or coordinated action. This is particularly true where human rights call for nationwide minimum standards. In Canada, human rights are protected by both federal and provincial legislation. We discuss the Canadian Charter of Rights and Freedoms (Charter) in this section and the impact of provincial human rights legislation below in Section 4: Bias and discrimination.

The Charter is Canada’s most important piece of human rights legislation. It sets out the rights and freedoms that Canadians believe are necessary in a free and democratic society. Sections 7, 8, 9 and 15 of the Charter (referenced below) are likely to play a prominent role in ensuring that AI, and particularly government-controlled AI, is developed in a manner that respects fundamental human rights. 

1.2. Human rights decisions and conventions

Section 7 of the Charter asserts the right of everyone not to be deprived of “life, liberty and security of the person” except in accordance with the principles of fundamental justice. Liberty interests are at stake when a law imposes the penalty of imprisonment or where a prisoner has residual liberty restricted through transfer to more secure institutions (May v. Ferndale Institution 2005 SCC 82). Security of the person has been interpreted to mean health, safety, and personal autonomy (R. v. Morgentaler (No. 2) [1988] 1 S.C.R. 30; Carter v. Canada 2015 SCC 5). Applications of AI that could offend section 7 of the Charter include using AI to assess recidivism in prisoners, or to evaluate priority in access to life-saving care.

Section 8 of the Charter asserts the right to be “secure against unreasonable search or seizure”, which limits the techniques available to the police to look for and obtain evidence of wrongdoing. Privacy rights are behind the protection against unreasonable search or seizure (Hunter v. Southam [1984] 2 S.C.R. 145). For example, the Supreme Court of Canada has held that seizure of personal information, such as medical information, without a warrant by the authorities, is an unreasonable seizure (R. v. Dersch [1993] 3 S.C.R. 768). Personal information has also been held to include electronic information such as the names and addresses associated with ISP subscriber information (R. v. Spencer 2014 SCC 43). 

Section 9 of the Charter asserts the right not to be “arbitrarily detained or imprisoned”, including by police for the purpose of investigation. The right not to be arbitrarily detained places limits on investigative detention by the police. The police may briefly detain a person who may be implicated in a recent or ongoing criminal offence only if they have “reasonable suspicion” (R. v. Mann 2004 SCC 52). Police departments that use AI systems involving profiling or facial-recognition technologies, for example, will have to meet the burden of showing reasonable grounds to suspect an individual is connected to a particular crime. Moreover, biased data that leads to inadvertent racial profiling may result in unconstitutional detention, since the Supreme Court of Canada has held that using racial profiling in detaining an individual will be relevant in determining whether detention is arbitrary (R. v. Le 2019 SCC 34).

Finally, section 15 of the Charter asserts the right to equality before the law and to equal protection and equal benefit of the law without discrimination. In Andrews v. Law Society of B.C. [1989] 1 S.C.R. 143, the Supreme Court of Canada held that section 15 of the Charter requires substantive and not merely formal equality. Therefore, it applies to laws that are:

  • discriminatory on their face; 
  • discriminatory in their effect; and 
  • discriminatory in application. 

Substantive equality allows the court to consider and identify adverse effects on a class of persons distinguished by a listed or analogous personal characteristic in otherwise facially neutral laws (Fraser v. Canada 2020 SCC 28). 

Given the breadth of the above-referenced Charter protections, AI systems that involve profiling or discriminatory outcomes based on any of the Charter-protected attributes, or that infringe any of the other referenced fundamental rights, will be subject to particular scrutiny. 

2. Intellectual property

The development of intellectual property law relating to AI remains at a very early stage in Canada. Two key areas (patent and copyright) are premised on inventors or authors who have been understood to be natural persons, not an autonomous form of intelligence. However, legislators are considering whether and how to amend the copyright regime to deal with the challenges of AI technologies, and AI is drawing wide attention across the intellectual property field. For example, and although unrelated to the acquisition or enforcement of rights, the Federal Court of Canada, which decides most intellectual property cases heard in Canada, has already issued “Interim Principles and Guidelines on the Court’s Use of Artificial Intelligence” calling for accountability and transparency “for any potential use of AI in its decision-making function” in the litigation process (see www.fct-cf.gc.ca/en/pages/law-and-practice/artificial-intelligence).

2.1. Patents

As in other jurisdictions, Canada’s Patent Act only permits the patenting of certain subject matter. Most relevant to AI, a “mere scientific principle or abstract theorem” cannot be patented (see section 27(8) at www.canlii.ca/t/7vkn#sec27).

The Canadian Intellectual Property Office (CIPO) has developed specific practices to assess “computer-implemented inventions”, as set out in its Manual of Patent Office Practice (Manual) (see www.ic.gc.ca/eic/site/cipointernet-internetopic.nsf/eng/h_wr00720.html) and supplemented by a guidance document in November 2020 (Guidance Document) (see www.ic.gc.ca/eic/site/cipointernet-internetopic.nsf/eng/wr04860.html). In the Manual, CIPO states that computer programs, data structures and computer-generated signals cannot be claimed as patentable subject matter. In the Guidance Document, CIPO explains that, in order to be patentable, a scientific principle or abstract theorem — including an AI system — must form part of a combination of elements that “has physical existence or manifests a discernible physical effect or change and relates to the manual or productive arts”. The fact that a computer is necessary to put an invention into practice does not, on its own, mean there is patentable subject matter. By way of example, CIPO suggests that “if running an algorithm on the computer improves the functioning of the computer, then the computer and the algorithm would together form a single actual invention that solves a problem related to the manual or productive arts”. 

On the question of whether an autonomous AI system could be an inventor, the Patent Act does not define the term “inventor” and, unlike in the United States, does not refer to “individuals” in defining an “invention”. The Federal Court of Canada has interpreted an “inventor” as being a “natural person”, but the question of whether an autonomous AI system could be considered an inventor has not yet been squarely put before the court. CIPO is on the front lines. It has issued a compliance notice in respect of a patent application naming an AI system as the inventor, stating: “it does not appear possible for a machine to have rights under Canadian law or transfer those rights to a human”, but allowed that the application could potentially be remedied “by submitting a statement on behalf of the [AI] machine” and identifying the human applicant “as the legal representative of the machine”. 

One can imagine numerous issues with AI inventorship. A “patentee” is a necessary party to litigation over a patent under the current version of the Patent Act. How would an autonomous AI participate, or instruct its lawyers in litigation? Could an autonomous AI initiate an action for infringement, or seek to be self-represented in litigation over its patent? Normally, one has the right to examine an inventor for discovery in litigation. This raises obvious issues. Will the witness be the machine’s legal representative?

Other more practical issues spring to mind. Patents have been traditionally viewed in Canada, as in most jurisdictions, as being addressed to hypothetical persons having ordinary skill in the art of the patent who can bring to bear their common general knowledge (at the relevant date) when reading a patent and attempting to put its teachings into practice. In litigation, human experts are usually called to testify about the common general knowledge, the skilled person or team, and how they would read and understand the patent and its claims as of the relevant date. Are AI-generated patents to be judged by the same standard? Are AI-inventor patents now to be viewed as being addressed to autonomous AI systems directly? Will AI (somehow) testify as to the common general knowledge and how a patent would be read and understood? What about the inventor’s course of conduct? And since adaptive AI is constantly learning, how will prior inconsistent statements or incorrect answers be used (especially since AI is traditionally understood as not being able to lie or deceive)? 

Guidance may eventually come either through legislative reform or a test case.

2.2. Copyright

In Canada, the Copyright Act governs copyright and protects key components of AI technologies including software (source and object code) and databases. 

The Government of Canada has not yet updated the Copyright Act to address AI specifically, but it has launched consultations and published papers to shed light on how the challenges created by the intersection of copyright and AI might be addressed in future amendments to the act. The most recent consultation on “Copyright in the Age of Generative Artificial Intelligence” concluded in January 2024 (see https://ised-isde.canada.ca/site/strategic-policy-sector/en/marketplace-framework-policy/consultation-paper-consultation-copyright-age-generative-artificial-intelligence#s22).

Under current jurisprudence interpreting the Copyright Act, it is unlikely that an AI system would be considered an “author” of works it generates. Copyright case law suggests “authorship” must be attributed to a natural person, as reflected in the connection between the term of protection and the human author’s lifespan, and in the concept of “moral rights”, which presupposes a human author with certain inalienable interests connected to that person’s honour or reputation. Although CIPO has registered AI-generated works, this is far from conclusive — unlike other intellectual property offices, CIPO does not substantively assess the claims made in an application for copyright registration.

The Copyright in the Age of Generative Artificial Intelligence Consultation Paper raises three possible approaches to address authorship and ownership of AI-generated works:

  • make AI-generated works produced without a minimum creative contribution from a human author ineligible for copyright;
  • attribute authorship to the human(s) who arranged for the AI-generated work to be created, based on factors to distinguish AI-assisted works that meet the human authorship threshold from those that do not; and
  • create a new, unique set of rights for AI-generated “authorless” works that would grant economic rights to persons like the AI developer, deployer or user even where they did not provide any original contribution to the works but without deeming them an author.

The Consultation Paper also deals with:

  • Text and data mining of copyright-protected works, and the complexity of obtaining a vast number of authorizations for data used to train AI systems. There is at least the possibility of a targeted exception for copying for the purpose of informational analysis. 
  • Infringement and liability issues surrounding who might be liable for infringing AI technology. Culpability for copying becomes less clear as human involvement decreases.

(See www.ic.gc.ca/eic/site/693.nsf/eng/00316.html.)

2.3. Trade secrets and confidentiality

In Canada, trade secret law is based on the common law of confidential information (except in Québec, where civil law principles apply). Algorithms, including AI systems, can be protected as confidential information, as can formulae, compilations of information, techniques, processes and patterns, often through confidentiality agreements and control measures. No registration is required, and, unlike patent law, the law of confidential information does not require specific subject matter. Obtaining or communicating a trade secret by fraudulent means is a crime under section 391(1) of the Criminal Code. 

2.4. Notable cases

Patent law

The Federal Court of Appeal most recently referred to the patentability of AI in obiter in Canada (AG) v. Benjamin Moore & Co. 2023 FCA 168. The decision dealt with the patentability of computer-implemented inventions more generally, and the FCA cautioned that the analysis is “highly fact specific”, especially as technology is becoming more and more complex with “the advent of artificial intelligence”. AI patentability will be assessed case-by-case for now.

Copyright law

The Alberta Court of King’s Bench held in Geophysical Service Inc. v. Encana Corp. 2016 ABQB 230 that “a human author is required to create an original work for copyright purposes”, but accepted that human intelligence can be significantly assisted by computer systems. A modicum of “skill and judgment” in assembling the work with the help of technology is required. In Stross v. Trend Hunter Inc. 2020 FC 201, the Federal Court considered whether an AI technology generating consumer trend insights for its clients was eligible for the “fair dealing” defence to an infringement claim. The Court found that the technology was a computerized form of market research that passed the first “allowable purposes” stage of the fair dealing analysis. However, the technology failed the second “fairness” stage of the fair dealing analysis because it was used for commercial purposes rather than a broader public interest purpose, and because the defendant did not follow its own copyright policies screening out images with a copyright notice or a watermark.

General Litigation

The British Columbia Supreme Court was the first Canadian court to issue a decision on the use of generative AI in preparing legal submissions in Zhang v. Chen 2024 BCSC 285. Counsel used ChatGPT to conduct legal research and included the fictitious cases it provided in their legal briefing without confirming the cases existed. The Court concluded: “generative AI is still no substitute for the professional expertise that the justice system requires of lawyers” and competence in selecting and using technology remains critical.

3. Data

3.1. Domestic data law treatment

The Federal Government has in recent years put data governance at the forefront with the publication in 2019 of Canada’s Digital Charter. Its stated goal is to make Canada a “competitive, data-driven, digital economy”, notably to favour AI innovation. Following the adoption of this Charter, the federal legislature launched a project to replace the Personal Information Protection and Electronic Documents Act (PIPEDA) with a new Consumer Privacy Protection Act (CPPA), largely modelled on the European Union General Data Protection Regulation (EU GDPR), pursuant to Bill C-27. 

3.2. General Data Protection Regulation

The applicable data protection laws for the private sector are PIPEDA and the substantially similar Provincial statutes:

  • Alberta’s Personal Information Protection Act, SA 2003, c P-6.5.
  • British Columbia’s Personal Information Protection Act, SBC 2003, c 63.
  • Quebec’s Act Respecting the Protection of Personal Information in the Private Sector, CQLR c P-39.1 (Quebec Act).

There are also a variety of Provincial statutes that apply to Personal Health Information. 

In September 2022, the Province of Quebec enacted Law 25, which amends the Quebec Act to bring it closer to the EU GDPR model. The Quebec Act is the first Canadian law to regulate automated decision-making, imposing transparency and explainability requirements in relation to automated decision-making involving personal information (see section 12.1). 

3.3. Open data and data sharing

Canada has introduced a wide variety of Open Data Initiatives to streamline the secure sharing of personal information. At the Provincial level, for instance, Quebec introduced Bill 19, which allows for the sharing of a patient’s medical information for the purposes of research. At the Federal level, the government has been examining potential avenues to bring Open Banking to Canada. In its April 2021 final report, the Advisory Committee on Open Banking recommended that a system of Open Banking could be introduced that would allow Canadians to safely and conveniently share their banking information with accredited third-party service providers. 

PIPEDA governs the disclosure of personal information by private businesses and specifically carves out an exception allowing the disclosure and use of personal information without knowledge and consent if it is for the purpose of statistical or scholarly study or research (see subsections 7(2)(c) and 7(3)(f)). However, under the Accountability Principle of PIPEDA, private businesses that share personal information in their possession or custody remain responsible for that information even if it is transferred to a third party for processing. Therefore, private businesses should put in place contractual or other safeguards to ensure personal information is properly protected and handled once transferred. Similar requirements apply pursuant to provincial privacy legislation. In addition, the Quebec Act imposes a requirement to conduct privacy impact assessments in respect of cross-border data transfers.

3.4. Biometric data: voice data and facial recognition data

The direct regulation of AI in the context of facial recognition and voice data is limited in Canada. Quebec’s Act to Establish a Legal Framework for Information Technology requires organizations to disclose to the Commission d’accès à l’information (CAI) the existence of biometric databases and of biometric identification or verification technologies they intend to use, prior to their entry into service. Moreover, as expanded upon in Section 7 below (Domestic legislative developments), the current version of AIDA would regulate AI systems that process biometric data in certain circumstances.

Privacy laws also play a role in regulating such technology. PIPEDA and the provincial private sector privacy laws regulate the use of AI in facial recognition and voice data technologies through their consent requirements. The Quebec Act explicitly requires express consent for the collection, use and disclosure of biometric data.

Several decisions shed light on how the consent principle has been applied to regulate use cases of biometrics and AI. In October 2020, the Office of the Privacy Commissioner of Canada (OPC), along with the privacy commissioners of Alberta and British Columbia, jointly investigated Cadillac Fairview’s use of facial recognition technology at various malls across the country to analyze the age and gender of shoppers. The commissioners found that these practices were occurring without the individuals’ knowledge or consent. At the conclusion of the investigation, Cadillac Fairview agreed to delete the personal information that was not required for legal purposes, advised that it had ceased using the technology in July 2018, and confirmed that it had no current plans to resume using it (see PIPEDA Findings #2020-004 at www.priv.gc.ca/en/opc-actions-and-decisions/investigations/investigations-into-businesses/2020/pipeda-2020-004). In a subsequent decision (2021), the privacy commissioners found that Clearview AI was maintaining a database of billions of photos of individuals taken from the internet without consent to train its facial recognition technology. The commissioners found that Clearview AI had violated federal and provincial privacy laws even though it was no longer offering its technology in Canada (see PIPEDA Findings #2021-001 at https://www.priv.gc.ca/en/opc-actions-and-decisions/investigations/investigations-into-businesses/2021/pipeda-2021-001).

In March 2022, the OPC reported on its investigation into Rogers Communications Inc.’s use of a VoiceID program. The OPC’s report found that, while the use of voice biometrics for authenticating account holders is an effective solution for addressing the legitimate need for account authentication and security, Rogers’ VoiceID program was not compliant with the consent obligations under PIPEDA. Among other requirements, PIPEDA mandates obtaining an individual’s express consent before creating a voiceprint (see PIPEDA Findings #2022-003 at www.priv.gc.ca/en/opc-actions-and-decisions/investigations/investigations-into-businesses/2022/pipeda-2022-003).

Furthermore, in October 2023, the OPC launched a public consultation on its new Draft Guidance for processing biometrics — for organizations. This guidance outlines privacy obligations, considerations, and best practices for handling biometric information. Once adopted, the guidelines are likely to reinforce privacy obligations, including those related to obtaining consent and limiting the use, disclosure, and retention of biometric data.

4. Bias and discrimination

AI and machine learning systems in Canada are regulated, in part, by Canada’s various human rights statutes, which promote the equal treatment of Canadians and protect against discrimination. 

Unrepresentative data, flawed assumptions and restrictive algorithmic classifications can lead to discriminatory results. In order to stay on the right side of Canada’s domestic human rights regime, AI and machine learning systems operating in the Canadian context must mitigate the risk of propagating or exacerbating bias and potentially discriminatory patterns in their decision-making.

4.1. Domestic anti-discrimination and equality legislation treatment

As referenced in Section 1 above, the Charter affords Canadians certain protections against human rights violations committed by governments in relation to governmental activities. However, it does not provide any guarantee of equality or protection against discrimination by non-governmental entities. These concepts are instead enshrined in the federal Canadian Human Rights Act, R.S.C. 1985, c. H-6, as well as in the various provincial and territorial human rights acts and codes.

These laws flow in large part from section 15 of the Charter (as well as Canada’s international human rights obligations) and prohibit discrimination on specific, enumerated grounds, including gender, race, age, religion, disability and family status. While varying slightly in their language and scope, each promotes the equal treatment of Canadians in employment, commercial and other contexts. 

Canada’s human rights laws recognize that discrimination can take place in various forms, including direct, indirect, constructive and/or “adverse effect” discrimination. Some statutes, including Manitoba’s Human Rights Code, C.C.S.M., c. H175, recognize “systemic discrimination”. This concept has been interpreted by the Supreme Court of Canada as referring to “practices or attitudes that have, whether by design or impact, the effect of limiting an individual’s or a group’s right to the opportunities generally available because of attributed rather than actual characteristics” (Moore v. British Columbia (Education) [2012] 3 S.C.R. 360, referencing Canadian National Railway Co. v. Canada (Canadian Human Rights Commission) [1987] 1 S.C.R. 1114).

The Supreme Court of Canada has defined discriminatory distinctions as ones “based on grounds relating to personal characteristics of the individual or group” that create “burdens, obligations, or disadvantages on such individual or group not imposed upon others…” (Andrews v. Law Society of British Columbia [1989] S.C.J. No. 6). AI systems are often built around the use of categorized sets of data, and algorithms which classify individuals based on gender, race, age or any other enumerated human rights grounds pose a far greater risk of generating discriminatory outcomes under Canadian law (and triggering a corresponding human rights claim) than those which avoid such classifications.

The Law Society of Ontario has recently called for comprehensive AI regulatory reform, in Ontario and across the country, including to better ensure that AI systems comply with Canada’s human rights laws. 

5. Cybersecurity and resilience

5.1. Domestic technology infrastructure requirements

On June 14, 2022, Bill C-26: An Act respecting cybersecurity, amending the Telecommunications Act and making consequential amendments to other Acts, was introduced in the House of Commons. This bill includes two main parts:

  • an amendment to the Telecommunications Act to promote the security of the Canadian telecommunications system; and
  • the enactment of the Critical Cyber Systems Protection Act (CCSPA), designed to protect critical cyber services and systems that are vital to national security or public safety or are delivered or operated within the legislative authority of Parliament.

Further to the first part, Bill C-26 would amend the Telecommunications Act to enable the Federal Government to make certain orders concerning telecommunications service providers (TSPs), namely orders that would mandate the:

  • use of products and services of specific vendors and other TSPs in telecommunications networks (i.e. regulate the inputs into telecommunications networks); and
  • provision of specific telecommunications services in Canada (i.e. regulate the type of telecommunications services offered using telecommunications networks).

Additionally, the proposed amendments to the Telecommunications Act would grant the Minister of Industry the power to direct TSPs to do anything (or refrain from doing anything) that is, in the Minister’s opinion, necessary to secure the Canadian telecommunications system, including the following:

  • prohibiting TSPs from entering service agreements for any product or service;
  • requiring TSPs to terminate a service agreement;
  • prohibiting the upgrade of any specified product or service; and
  • subjecting the TSPs’ procurement plans to a review process.

Pursuant to the second part, the CCSPA is designed to enable the Federal Government to protect systems and services of national importance and establishes a broad regulatory framework enabling the Federal Government to:

  • define and strengthen baseline cybersecurity for systems and services of critical national importance (e.g. telecommunications services, interprovincial or international pipelines and power line systems, nuclear energy systems, transportation systems within the jurisdiction of the Federal Government, banking systems, clearing and settlement systems, and any other industry regulated by the Federal Government that may be added to the scope of this Act at a later date);
  • require designated operators to develop and implement certain cybersecurity programs;
  • ensure that cyber incidents impacting vital systems and services are reported;
  • issue binding Cybersecurity Directions; and
  • encourage compliance through the introduction of administrative monetary penalties (i.e. maximum penalties of CAD 15 million for designated operators and CAD 1 million for directors and officers).

Bill C-26 has passed the second reading stage in the House of Commons and, at the time of writing, continues to make its way through the legislative process.

6. Trade, anti-trust and competition

6.1. AI related anti-competitive behaviour

Thus far, there has been no legislation specifically addressing the intersection of AI and competition law in Canada, nor have there been any enforcement actions or litigation under the Competition Act. However, recent statements from the government have confirmed that the Competition Act is being reviewed to consider how best to tackle “today’s digital reality”. Three key issues are outlined in turn below.

Conspiracy 

Many have expressed concern about AI both facilitating collusion between competitors and making it harder for regulators to detect collusion. Types of potential algorithmic collusion include the messenger conspiracy (where companies use algorithms to implement a cartel) and the algorithm-enabled hub-and-spoke conspiracy (where firms use the same algorithm and centralize their decisions through the same company that offers algorithmic pricing). However, in its 2018 “Big Data” publication, the Competition Bureau (Bureau) stated that it does not believe computer algorithms require a rethinking of competition law enforcement. Irrespective of the medium used to facilitate the agreement, any agreement between competitors to fix prices, allocate markets, or lessen supply contravenes section 45 of the Competition Act and is a criminal offence. 

Algorithmic collusion may also be used to facilitate wage-fixing and no-poach agreements and bid rigging practices, which are criminalized under sections 45(1.1) and 47 of the Competition Act, respectively.

Abuse of dominance

Section 79 of the Competition Act regulates abuse of dominance.

AI raises two key issues in this context. First, AI can be instrumental in establishing dominance, particularly in digital markets. AI can be a differentiator among firms in terms of how quickly they are able to respond to market changes, how easily and accurately they can forecast and interpret data, how efficiently they are able to develop better and/or cheaper products that respond to consumer preferences, etc. This can all contribute to the accumulation of market power by a single firm. Second, AI may also be used for abusive purposes. For example, digital platforms may use AI to gate-keep or self-preference. In fact, the Bureau initiated an investigation into Amazon’s self-preferencing tactics in 2020, which is still in progress. 

Mergers 

Under the Competition Act, the Bureau reviews mergers to determine whether they are likely to harm competition in a market in Canada. With the advent of AI technology, the Bureau must now determine how to assess the value of data and AI in merger reviews, such as the impact on economies of scope and scale, how it can be leveraged in vertical and conglomerate mergers, and the added challenges of predicting competitive effects in a constantly evolving space. Thus far, unlike other jurisdictions, the Bureau has not actively reviewed or opposed investments by incumbents into AI start-ups, at least to public knowledge.

Based on past statements, the Bureau generally believes the merger provisions of the Competition Act are sufficiently flexible to account for mergers in technologically advanced markets. However, the Bureau is advocating for amendments that may affect how it assesses AI-related harms. 


6.2. Domestic regulation

Currently, there is no specific legislation or regulation addressing AI in the competition context in Canada. The Competition Act relies on general, flexible standards for assessing conduct, rather than industry- or subject-specific rules. However, the Bureau is alive to the unique nature of AI markets. In March 2024, the Bureau published a discussion paper on AI and competition for public consultation. The discussion paper explores several key considerations for how AI may affect competition in Canada, including which markets are likely to be involved in building AI infrastructure or in developing and deploying AI in their operations, and which areas of competition law are likely to be affected and how. 


7 . Domestic legislative developments


7.1. Proposed and/or enacted AI legislation

On June 16, 2022, the Federal Government introduced Bill C-27, titled the Digital Charter Implementation Act, 2022. One of the three pillars of the bill is the introduction of the Artificial Intelligence and Data Act (AIDA). If adopted, the AIDA would become Canada's first law specifically dedicated to regulating AI. A significant portion of the key obligations under AIDA would be set out in regulations that have yet to be drafted. As of May 2024, Bill C-27 is under review by the House of Commons Standing Committee on Industry, Science and Technology.

AIDA defines AI in a manner consistent with the Organisation for Economic Co-operation and Development (OECD) definition: "a technological system that, using a model, makes inferences in order to generate output, including predictions, recommendations or decisions". AIDA's framework would impose varying obligations depending on two variables: the type of AI system and the role(s) that an actor plays in making the system available to the market. 

Type of AI systems

Inspired by the EU AI Act, AIDA would regulate two types of AI systems: "high-impact systems" and "general-purpose AI systems". The most current draft of AIDA lists, in a schedule, seven initial categories of AI systems deemed high-impact, namely AI systems that: (1) make decisions concerning employment; (2) determine an individual's access to services; (3) relate to biometric identification; (4) moderate and prioritize content on digital platforms; (5) relate to health care and emergency services; (6) are used by a court or tribunal to make decisions about an individual who is party to proceedings; or (7) are used in policing. In addition, AIDA sets out specific obligations for general-purpose AI systems, defined as "an artificial intelligence system that is designed for use, or that is designed to be adapted for use, in many fields and for many purposes and activities, including fields, purposes and activities not contemplated during the system's development". Where integrated into a high-impact system, obligations would also attach to any machine learning model, defined as "a digital representation of patterns identified in data through the automated processing of the data using an algorithm designed to enable the recognition or replication of those patterns". Under AIDA, a single AI system could conceivably be both a "high-impact system" and a "general-purpose system".

Role in the AI value chain

The proposed structure of AIDA sets out tailored obligations based on the role(s) an actor plays in the supply chain of an AI system: organizations that make an AI system available for use and those that manage its operations once the system is available to users. Actors who make AI systems available for use for the first time would be required to perform risk assessments, implement and test risk mitigation measures, ensure that human oversight is possible, and meet transparency obligations. Actors who operationalize AI systems would be required to ensure that the foregoing requirements are monitored, tested and respected, and to implement further prescribed governance measures. Under AIDA, a single business could conceivably play both of these defined roles.

Before a high-impact system is made available, the person who makes it available must ensure that (in accordance with regulations): 

  • an assessment of the adverse impacts that could result from the intended use or from any other use of the system that is reasonably foreseeable has been carried out; 
  • measures have been taken to assess and mitigate any risks of harm or biased output; 
  • the effectiveness of the mitigation measures is tested; 
  • human oversight of the AI system is permitted; 
  • the system is performing reliably and as intended and is robust even in adverse or unusual circumstances; and
  • a manual is maintained on the system’s operations.

A person who manages the operations of a high-impact system must (in accordance with the regulations):

  • ensure that the requirements imposed on the person who makes the system available are met, if there are reasonable grounds to believe that they have not been; 
  • establish measures to identify, assess and mitigate the risks of harm or biased output that could result from the use of the system and carry out tests of the effectiveness of the mitigation measures; 
  • ensure that humans are overseeing the system’s operations; 
  • establish measures allowing users to provide feedback on the system’s performance; 
  • if there are reasonable grounds to suspect that the use of the system has resulted in serious harm or that the mitigation measures are not effective in mitigating risks of serious harm, assess whether the use of the system did actually result in serious harm or the measures are actually not effective in mitigating those risks and, if so, cease the system’s operations until additional or modified measures are put in place that will mitigate risks of serious harm and comply with notification obligations; and
  • keep records demonstrating compliance. 

Before a general-purpose system is made available, the person who makes it available for the first time must ensure (in addition to the requirements generally applicable to high-impact systems) that: 

  • measures respecting the data used in developing the system have been established in accordance with regulations; 
  • a plain-language description has been prepared of:
    • the system’s capabilities and limitations;
    • the risks of harm or biased output; and
    • any other information prescribed by regulation;
  • if the system generates digital output consisting of text, images or audio or video content:
    • best efforts have been made so that members of the public, unaided or with the assistance of software that is publicly available and free of charge, are able to identify the output as having been generated by an artificial intelligence system; and
    • all measures prescribed by regulation have been taken so that members of the public are able to identify the output as having been generated by an artificial intelligence system.

AIDA would also introduce significant penalties for violations, including (according to the most recently published amended version of the bill) fines of up to the greater of CAD 25,000,000 and 5% of gross global revenues.


7.2. Proposed and/or implemented Government strategy

Canada launched the first phase of its Pan-Canadian Artificial Intelligence Strategy in March 2017 with an initial budget allocation of CAD 125,000,000 and a focus on growing and retaining top AI research talent in the country. Management of this budgetary envelope was given to the Canadian Institute for Advanced Research (CIFAR), notably known for its early support of machine learning and deep learning research (see www.ised-isde.canada.ca/site/innovation-better-canada/en/canadas-digital-charter-trust-digital-world). Innovation, Science and Economic Development Canada (ISED) unveiled phase 2 of this strategy in June 2022, committing an additional CAD 443,000,000 over ten years to go beyond research and support the commercialization and adoption of AI in a responsible manner (see www.canada.ca/en/innovation-science-economic-development/news/2019/05/government-of-canada-creates-advisory-council-on-artificial-intelligence.html). Still under CIFAR's oversight, this second phase is notably focused on funding top AI institutes working on the commercialization of AI solutions (CAD 60,000,000), funding Canada's Global Innovation Clusters to develop made-in-Canada AI (CAD 125,000,000) and attracting top talent (CAD 160,000,000) (see www.newswire.ca/news-releases/canada-funds-125-million-pan-canadian-artificial-intelligence-strategy-616876434.html).

Although not framed as a third phase of the Pan-Canadian AI Strategy, the Federal Government's 2024 budget includes a host of additional measures to grow Canada's AI advantage, most significantly the announcement of CAD 2 billion to support the newly created AI Compute Access Fund and the Canadian AI Sovereign Compute Strategy. By comparison, phase 2 of the Pan-Canadian AI Strategy dedicated only CAD 40,000,000 to computing initiatives (the industrial backbone of most advanced AI systems), which underscores the evolution of Canada's AI strategy from an approach focused on attracting talent to one seeking to develop the infrastructure needed to sustain AI growth. 

As a parallel effort, in May 2019, Canada's Federal Government launched its Digital Charter, following several months of national consultations with Canadians (see www.canada.ca/en/innovation-science-economic-development/news/2022/06/government-of-canada-launches-second-phase-of-the-pan-canadian-artificial-intelligence-strategy.html). The goal of the Digital Charter is to build trust in digital technologies by reinforcing principles of privacy protection, human-centredness and innovation. This broad strategy is focused on ten main principles that encompass, but are not limited to, AI: (1) universal access to the digital world; (2) safety and security; (3) control and consent; (4) transparency, portability and interoperability; (5) open and modern digital government; (6) a level playing field for fair competition; (7) data and digital for good; (8) strong democracy; (9) free from hate and violent extremism; and (10) strong enforcement and real accountability. 

Since 2019, the Digital Charter has been the catalyst for Canada's responsible innovation strategy. In parallel with the launch of the Digital Charter, the Federal Government created a special Advisory Council on Artificial Intelligence in 2019 to advise the Canadian government on AI opportunities and risks (with Prof. Bengio as its current co-chair) (see www.canada.ca/en/innovation-science-economic-development/news/2022/06/government-of-canada-launches-second-phase-of-the-pan-canadian-artificial-intelligence-strategy.html). To date, the most important AI-related initiative to emerge from the Digital Charter is, as discussed in the previous section, the introduction of a dedicated AI bill (AIDA) in the federal parliament as part of Bill C-27. 

Consequently, Canada has a broad AI strategy supported by substantial funding and legislative initiatives. In addition to the federal measures, provincial governments have also proposed their own approaches, including the Quebec government's Research and Innovation Strategy, which committed CAD 217,200,000 in 2022 to support AI innovation.


8 . Frequently asked questions

8.1. Do individuals in Canada have a right not to be subject to automated decision-making?

Unlike Article 22 of the EU GDPR, Quebec's Law 25 does not contain a specific right not to be subject to an automated decision. Instead, Law 25 provides individuals with rights of transparency and explainability, the right to have errors corrected in the personal information used to make the automated decision, and the right to submit observations.

8.2. What is the likelihood that AIDA will be adopted?

As of May 2024, AIDA is undergoing a line-by-line review of proposed amendments. The Parliamentary process is moving so slowly that it is increasingly doubtful that AIDA will be adopted prior to the next election.

8.3. Is the use of biometric information subject to incremental regulatory obligations in Canada?

Yes. In July 2020, the CAI updated its guide on the use of biometrics titled "Biometrics: Principles and Legal Duties of Organizations" (www.dataguidance.com/sites/default/files/cai_g_biometrie_principes-application_eng.pdf). This guide specifically applies in cases where biometry is used for identification or authentication purposes. The CAI defines biometry as "the set of techniques used to analyze one or more of a person's unique physical, behavioral or biological characteristics in order to establish or prove his or her identity". Biometric data is personal information (PI) whether it is in the form of a static print (fingers, hand geometry, voice, keyboard strokes, etc.) or dynamic (animated images or print, or images or prints with a time dimension). It also includes biometric data in digital forms or translated into codes "that are derived from the images by means of an algorithm". The CAI also considers biometric data a highly sensitive form of PI that is subject to express consent requirements. Moreover, requirements to disclose biometric characteristics databases are found in Quebec's Act to establish a legal framework for information technology, at sections 44 and 45. We note that Law 25 modified those sections. Section 44 now provides that an organization cannot use biometric data to verify or confirm an individual's identity except: 

  • if such verification or confirmation was previously disclosed to the CAI; and 
  • with the express consent of the individual concerned. 

Section 45 now also provides that the creation of a database of biometric characteristics must be disclosed to the CAI “promptly and not later than 60 days before it is brought into service”. The CAI also has the power to make orders to determine how the database is set up, used, consulted, released and retained, and how the data is to be archived or destroyed. The CAI can suspend or prohibit the bringing into service or order the destruction of the database, if the database does not respect its orders or constitutes an invasion of privacy. 
